Compare commits

...

1077 Commits

Author SHA1 Message Date
Victor Vieux
51f6c4a737 Merge pull request #1227 from dotcloud/bump_0.5.0
Bump to 0.5.0
2013-07-18 11:51:29 -07:00
Andy Rothfusz
f4eaec3e1e Merge pull request #1226 from metalivedev/easydockerfile
Make dockerfile docs easier to find. Clean up formatting.
2013-07-18 10:14:22 -07:00
Victor Vieux
b083418257 change -b -> -v and add udp example 2013-07-18 16:25:14 +00:00
Victor Vieux
5794857f7a Merge pull request #1169 from crosbymichael/buildfile-tests
Add unit tests for buildfile config instructions
2013-07-18 08:35:23 -07:00
Michael Crosby
e7f3f6fa5a Add unit tests for buildfile config instructions
Add tests for instructions in the buildfile that
modify the config of the resulting image.
2013-07-18 05:37:28 -09:00
Andy Rothfusz
aa5671411b Make dockerfile docs easier to find. Clean up formatting. 2013-07-17 18:56:40 -07:00
Victor Vieux
f8dfd0aa5e Merge pull request #1225 from dotcloud/hotfix_docker_rmi
*Runtime: improve docker rmi via id
2013-07-17 14:31:56 -07:00
Andy Rothfusz
3dbf9c6560 Merge pull request #1219 from metalivedev/docs-repoupdate
Update docs with 0.5 repository information.
2013-07-17 13:53:12 -07:00
Guillaume J. Charmes
de563a3ea3 Merge pull request #1194 from crosbymichael/build-verbose
* Builder: Add verbose output to docker build
2013-07-17 12:53:06 -07:00
Victor Vieux
9cf2b41c05 change rm usage in docs 2013-07-17 19:24:54 +00:00
Victor Vieux
f310b875f8 Merge branch 'master' of https://github.com/kencochrane/docker into kencochrane-master 2013-07-17 19:23:06 +00:00
Solomon Hykes
ac14c463d5 Changed date on changelog 2013-07-17 11:51:26 -07:00
Guillaume J. Charmes
578e888915 Merge pull request #1212 from dotcloud/merge_v_b_options
* Runtime: Merge -b and -v options
2013-07-17 11:43:47 -07:00
Victor Vieux
5231bf3653 Merge pull request #1222 from lopter/master
Always stop the opposite goroutine in network_proxy.go (closes #1213)
2013-07-17 11:40:33 -07:00
Solomon Hykes
8af945f353 Small changes in changelog wording 2013-07-17 11:39:38 -07:00
Ken Cochrane
d0e8ca1257 updated with notes from @vieux 2013-07-17 13:46:11 -04:00
Victor Vieux
5a934fc923 fix docker rmi via id 2013-07-17 15:48:53 +00:00
Louis Opter
c766d064ac Always stop the opposite goroutine in network_proxy.go (closes #1213) 2013-07-17 01:05:11 -07:00
Andy Rothfusz
0356081c0a Update repository information. 2013-07-16 17:04:41 -07:00
Guillaume J. Charmes
18e91d5f85 Update docs 2013-07-16 10:14:21 -07:00
Guillaume J. Charmes
1004d57b85 Hotfix: make sure ./utils tests pass 2013-07-15 17:58:23 -07:00
Nick Stinemates
f9e4ef5eb0 Merge pull request #1210 from dotcloud/improve_configmerge
improve mergeconfig, ...
2013-07-15 18:04:12 -07:00
Guillaume J. Charmes
eefbadd230 Merge -b and -v options 2013-07-15 17:51:32 -07:00
Guillaume J. Charmes
bc21b3ebf0 Bump version to 0.5.0 2013-07-15 14:57:52 -07:00
Solomon Hykes
608fb2a21e Merge pull request #1184 from dotcloud/1176-packaging-release
Hack: document PPA release step
2013-07-15 13:59:55 -07:00
Solomon Hykes
45050d9887 Merge pull request #1188 from dotcloud/1174-packaging-binary
Packaging: add pure binary to docker release
2013-07-15 13:59:06 -07:00
Daniel Mizyrycki
75a0052e64 packaging, issue #1176: Document PPA release step 2013-07-15 12:13:51 -07:00
Guillaume J. Charmes
c8efd08384 Merge pull request #1208 from crosbymichael/1201-rw-volumes-from
- Volumes: Copy VolumesRW values when using --volumes-from
2013-07-15 10:59:51 -07:00
Guillaume J. Charmes
454cd147fb Merge pull request #1096 from dotcloud/remove_os_user
* Runtime: Remove the os.user dependency and manually lookup /etc/passwd instead
2013-07-15 10:19:09 -07:00
Guillaume J. Charmes
e41507bde2 Add unit test to check wrong uid case 2013-07-15 10:05:09 -07:00
Victor Vieux
193a7e1dc1 improve mergeconfig: if dns, portspec, env or volumes are specified in docker run, append rather than replace 2013-07-15 13:12:33 +00:00
Michael Crosby
5ae8c7a985 Copy VolumesRW values when using --volumes-from
Fixes #1201
2013-07-14 18:23:20 -09:00
Victor Vieux
9b57f9187b Merge pull request #1200 from ToothlessGear/fix-whitespaces_progessbar
Fix progressbar, without messing up other outputs
2013-07-13 08:50:50 -07:00
Victor Vieux
50e45b485f Merge pull request #1190 from dotcloud/1189-add_debug_error
* RemoteAPI: Improve debug
2013-07-13 08:15:59 -07:00
Victor Vieux
2051ebc0eb Merge pull request #1198 from dotcloud/fix_pull_tag
Fixed tag option for "docker pull" (the option was ignored)
2013-07-13 08:14:47 -07:00
Guillaume J. Charmes
933b9d44e1 Merge pull request #1054 from nickstenning/getimage-by-tag
* Runtime: Reverse priority of tag lookup in TagStore.GetImage
2013-07-12 16:15:04 -07:00
Nick Stenning
44b3e8d51b Reverse priority of tag lookup in TagStore.GetImage
Currently, if you have the following images:

    foo/bar      1       23b27d50fb49
    foo/bar      2       f2b86ec3fcc4

And you issue the following command:

    docker tag foo/bar:2 foo/bar latest

docker will tag the "wrong" image, because the image id for foo/bar:1 starts
with a "2". That is, you'll end up with the following:

    foo/bar      1       23b27d50fb49
    foo/bar      2       f2b86ec3fcc4
    foo/bar      latest  23b27d50fb49

This commit reverses the priority given to tags vs. image ids in the
construction `<user>/<repo>:<tagOrId>`, meaning that a tag that is an exact
match for the specified `tagOrId` will be used in preference to an image
with an id that happens to start with the same character sequence.
2013-07-12 23:56:36 +01:00
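A minimal Go sketch of the lookup order described above (hypothetical names, not the actual TagStore code): try an exact tag match first, and only then treat `tagOrId` as an image-ID prefix.

    package main

    import "strings"

    // getImage prefers an exact tag match over an image-ID prefix match.
    func getImage(tags map[string]string, ids []string, tagOrID string) string {
        if id, ok := tags[tagOrID]; ok {
            return id // an exact tag match always wins
        }
        for _, id := range ids {
            if strings.HasPrefix(id, tagOrID) {
                return id // otherwise fall back to ID-prefix lookup
            }
        }
        return ""
    }

    func main() {
        tags := map[string]string{"1": "23b27d50fb49", "2": "f2b86ec3fcc4"}
        ids := []string{"23b27d50fb49", "f2b86ec3fcc4"}
        _ = getImage(tags, ids, "2") // returns f2b86ec3fcc4, not 23b27d50fb49
    }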
Daniel Mizyrycki
9bf8ad741f Merge pull request #1083 from hukeli/debian
Keep debian package up-to-date
2013-07-12 15:24:37 -07:00
Daniel Mizyrycki
9913ebbe21 Merge pull request #1203 from dotcloud/1202-packaging-debian
Packaging, issue #1202: Upgrade vagrantfile go in debian packaging
2013-07-12 15:11:57 -07:00
Daniel Mizyrycki
c7a48e91d8 Packaging, issue #1202: Upgrade vagrantfile go in debian packaging 2013-07-12 15:06:12 -07:00
Solomon Hykes
2cbf2200ac Merge pull request #1195 from dotcloud/tests-cleanup
* Hack: tests cleanup
2013-07-12 14:51:59 -07:00
Marcus Farkas
bac5772312 *Client: Fix the progressbar, without manipulating other outputs
Prior to this commit, 'docker images' and other commands that used utils.HumanSize()
showed unnecessary whitespace.
Formatting of progress has been moved to FormatProgess(), justifying the string
directly in the template.
2013-07-12 20:15:25 +02:00
Marcus Farkas
a6e5a397bd Revert "Client: better progressbar output"
This reverts commit 3ac68f1966.
2013-07-12 20:08:45 +02:00
Ken Cochrane
364f48d6c7 updated the rmi command docs, they had typos 2013-07-12 14:05:26 -04:00
Ken Cochrane
4174e7aa7a updated the help commands on a few commands that were not correct 2013-07-12 13:55:26 -04:00
Guillaume J. Charmes
eb38750d99 Remove the os.user dependency and manually lookup /etc/passwd instead 2013-07-12 10:49:47 -07:00
Sam Alba
cd0fef633c Fixed tag option for "docker pull" (the option was ignored) 2013-07-12 10:42:54 -07:00
Michael Crosby
d0c73c28df Add param to api docs for verbose build output 2013-07-12 06:22:56 -09:00
Victor Vieux
8e6c249e48 Merge pull request #1197 from crosbymichael/buildfile-doc-ordering
Fix Docker Builder documentation section numbers
2013-07-12 05:27:47 -07:00
Victor Vieux
752f99e8a1 Merge pull request #977 from dotcloud/966-improve_docker_login_parameters-feature
* Client: Add options to docker login to be able to use it via script
2013-07-12 05:07:25 -07:00
Victor Vieux
a909223ee2 Merge pull request #1102 from dotcloud/1098-store_hostconfig_tmp
* Runtime: bind mounts are now preserved upon container restart
2013-07-12 05:04:10 -07:00
Victor Vieux
8ff271fc74 Merge pull request #1192 from dotcloud/docker_port-fix
hotfix: fix broken docker port
2013-07-12 04:57:53 -07:00
Victor Vieux
9dfac1dd65 Merge pull request #1055 from dotcloud/list_container_processes-feature
* RemoteApi: /top to list running processes in a container
* Client: docker top to list running processes in a container
2013-07-12 04:56:12 -07:00
Victor Vieux
a8a6848ce0 fix tests regarding the new test image 2013-07-12 11:54:53 +00:00
Victor Vieux
9232d1ef62 Merge branch 'master' into list_container_processes-feature 2013-07-12 11:47:27 +00:00
Michael Crosby
90483dc912 Fix Docker Builder documentation numbering 2013-07-11 16:41:19 -09:00
Solomon Hykes
6bdb6f226b Simplify unit tests code with mkRuntime() 2013-07-11 17:59:25 -07:00
Solomon Hykes
2ac1141980 Don't leave broken, commented out tests lying around. 2013-07-11 17:58:45 -07:00
Michael Crosby
1104d443cc Revert changes from PR 1030
With streaming output of the build,
the changes in 1030 are no longer required.
2013-07-11 15:52:08 -09:00
Michael Crosby
49044a9608 Fix buildfile tests after rebase 2013-07-11 15:37:26 -09:00
Guillaume J. Charmes
71d2ff4946 Hotfix: check the length of entrypoint before comparing. 2013-07-11 17:31:07 -07:00
Michael Crosby
474191dd7b Add verbose output to docker build
Verbose output is enabled by default and
the flag -q can be used to suppress the verbose output.
2013-07-11 15:27:33 -09:00
Guillaume J. Charmes
637eceb6a7 Merge pull request #1124 from crosbymichael/buildfile-volumes
+ Builder: Add VOLUME instruction to buildfile
2013-07-11 17:16:57 -07:00
Victor Vieux
976428f505 change output 2013-07-11 21:04:23 +02:00
Victor Vieux
affe7caf78 fix broken docker port 2013-07-11 19:28:15 +02:00
Victor Vieux
b7937e268f add debug for error in the server 2013-07-11 12:21:43 +00:00
Louis Opter
5a411fa38e Make the TestAllocate{UDP,TCP}PortLocalhost more reliable
- For the TCP test try again if socat wasn't listening yet;
- For the UDP test raise the timeout to a minute to work around what
  seems to be an issue with Linux.
2013-07-10 18:25:53 -07:00
Daniel Mizyrycki
bf26ae03cf Packaging, issue #1174: Add pure binary to docker release 2013-07-10 17:39:00 -07:00
Andy Rothfusz
3363cd5cd0 Merge pull request #1178 from dotcloud/fix-dev-environment
Fix outdated docs explaining how to setup a dev environment
2013-07-10 16:53:22 -07:00
Daniel Mizyrycki
5c49a61353 Merge pull request #1183 from dotcloud/960-packaging-PPA
Packaging, issue #960: Document PUBLISH_PPA for staging/production release
2013-07-10 16:16:31 -07:00
Daniel Mizyrycki
f83c31e188 Packaging, issue #960: Document PUBLISH_PPA for staging/production release 2013-07-10 16:06:49 -07:00
Louis Opter
8f36467107 Raise the timeouts for the TCP/UDP localhost proxy tests
Sometimes these tests fail, let's see if that improves the situation.
2013-07-10 16:05:14 -07:00
Louis Opter
8e49cb453f Merge pull request #1181 from dotcloud/export_portmapping
Export PortMapping in container.go
2013-07-10 14:24:20 -07:00
Michael Crosby
40f1e4edbe Rebased changes buildfile_test 2013-07-10 07:12:57 -09:00
Michael Crosby
1267e15b0f Add unittest for volume config verification 2013-07-10 06:59:16 -09:00
Michael Crosby
eb9fef2c42 Add VOLUME instruction to buildfile 2013-07-10 06:59:16 -09:00
Victor Vieux
43b346d93b Merge pull request #1151 from alex/patch-1
Replaced gendered language in the README
2013-07-10 07:52:30 -07:00
Victor Vieux
d918c7d9de export portmapping in network.go 2013-07-10 14:09:35 +00:00
Victor Vieux
e962e9edcf Merge pull request #1168 from dotcloud/standalone_registry
* Server: Allow push on standalone registry
2013-07-10 04:14:23 -07:00
Victor Vieux
b7a62f1f1b Merge pull request #1177 from lopter/udp-support-final
* Network: Add UDP support
2013-07-10 03:55:18 -07:00
Victor Vieux
2e5d1a2d48 Merge pull request #1164 from dotcloud/1162-import_hangs-fix
* Runtime: Untar is now faster
2013-07-10 03:37:24 -07:00
Louis Opter
fac0d87d00 Add support for UDP (closes #33)
API Changes
-----------

The port notation is extended to support "/udp" or "/tcp" at the *end*
of the specifier string (and defaults to tcp if "/tcp" or "/udp" are
missing)

`docker ps` now shows UDP ports as "frontend->backend/udp". Nothing
changes for TCP ports.

`docker inspect` now displays two sub-dictionaries: "Tcp" and "Udp",
under "PortMapping" in "NetworkSettings".

These changes hold true for the values returned by the HTTP API too.

This changeset will definitely break tools built upon the API (or upon
`docker inspect`). A less intrusive way to add UDP ports in `docker
inspect` would be to simply add "/udp" for UDP ports, but that would still
break existing applications which try to convert the whole field to an
integer. I believe that having two TCP/UDP sub-dictionaries is better
because it makes the whole thing clearer and easier to parse right
away (i.e. you don't have to check the format of the string, split it
and convert the right part to an integer).

Code Changes
------------

Significant changes in network.go:

- A second PortAllocator is instantiated for the UDP range;
- PortMapper maintains separate mappings for TCP and UDP;
- The extPorts array in NetworkInterface is now an array of Nat objects
  (so we can know on which protocol a given port was mapped when
  NetworkInterface.Release() is called);
- TCP proxying on localhost has been moved out into network_proxy.go.

localhost proxy code rewrite in network_proxy.go:

We have to proxy the traffic between localhost:frontend-port and
container:backend-port because Netfilter doesn't work properly on the
loopback interface and DNAT iptable rules aren't applied there.

- Goroutines in the TCP proxying code are now explicitly stopped when
  the proxy is stopped;
- UDP connection tracking using a map (more info in [1]);
- Support for IPv6 (to be more accurate, the code is transparent to the
  Go net package, so you can use tcp/tcp4/tcp6/udp/udp4/udp6);
- Single Proxy interface for both UDP and TCP proxying;
- Full test suite.

[1] https://github.com/dotcloud/docker/issues/33#issuecomment-20010400
2013-07-09 17:42:35 -07:00
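A rough Go sketch of the extended port notation described above (illustrative only, not Docker's actual parser; parsePortSpec is a hypothetical helper): the protocol suffix sits at the end of the specifier and defaults to tcp.

    package main

    import (
        "fmt"
        "strings"
    )

    // parsePortSpec splits "53/udp" into ("53", "udp"); a bare "8080"
    // defaults to ("8080", "tcp"), as the commit message describes.
    func parsePortSpec(spec string) (port, proto string) {
        parts := strings.SplitN(spec, "/", 2)
        port, proto = parts[0], "tcp"
        if len(parts) == 2 && parts[1] != "" {
            proto = parts[1]
        }
        return port, proto
    }

    func main() {
        fmt.Println(parsePortSpec("53/udp")) // 53 udp
        fmt.Println(parsePortSpec("8080"))   // 8080 tcp
    }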
Solomon Hykes
a839b36e55 Fix outdated docs explaining how to setup a dev environment. Building docker with docker ftw 2013-07-09 16:48:16 -07:00
Sam Alba
316c8328aa Hardened repos name validation 2013-07-09 16:46:55 -07:00
Sam Alba
e8db031112 Fixed tag parsing when the repos name contains both a port and a tag 2013-07-09 16:46:25 -07:00
Sam Alba
59b785a282 Fixing missing tag field when pulling containers which do not exist 2013-07-09 16:45:32 -07:00
Louis Opter
1a1daca621 Fix a typo in runtime_test.go: Availalble -> Available 2013-07-09 11:52:33 -07:00
Sam Alba
837be914ca Merge branch 'master' of github.com:dotcloud/docker into standalone_registry 2013-07-09 11:31:14 -07:00
Sam Alba
f44eac49fa Fixed potential security issue (never try http on official index when polling the endpoint). Also fixed local repos name when pulling index.docker.io/foo/bar 2013-07-09 11:30:12 -07:00
Guillaume J. Charmes
0acdef4549 Merge pull request #1166 from dotcloud/networkless_tests-2
* Tests: Remove all network dependencies from the test suite
2013-07-09 11:20:18 -07:00
Guillaume J. Charmes
7d8ef90ccb Merge pull request #1173 from dotcloud/1172-ghost_restart-fix
Make sure container is not marked as ghost when it starts
2013-07-09 10:49:17 -07:00
Guillaume J. Charmes
91520838fc Make sure container is not marked as ghost when it starts 2013-07-09 10:48:33 -07:00
Guillaume J. Charmes
ada0e1fb08 Merge pull request #1049 from dotcloud/1040_ignore_stderr_tests-fix
- Tests: Ignore stderr while doing tests
2013-07-09 10:32:24 -07:00
Sam Alba
33d97e81eb Removed DOCKER_INDEX_URL 2013-07-09 08:10:43 -07:00
Sam Alba
019324015b Moved parseRepositoryTag to the utils package 2013-07-09 08:06:10 -07:00
Victor Vieux
72d278fdac Merge pull request #1170 from dotcloud/fix_type_socket
Fix typo socket
2013-07-09 03:57:45 -07:00
Victor Vieux
05d7f85af9 fix typo 2013-07-09 10:55:28 +00:00
Solomon Hykes
7fba358ae2 Merge pull request #1013 from dotcloud/standardize-build
* Hack: standardized docker's build environment in a Dockerfile
2013-07-08 21:33:45 -07:00
Solomon Hykes
9f1fc40a64 * Hack: standardized docker's build environment in a Dockerfile 2013-07-08 21:30:29 -07:00
Sam Alba
3be7bc38e0 Fixed typo (thanks unit tests) 2013-07-08 17:42:18 -07:00
Sam Alba
31c66d5a00 Re-implemented a notion of local and private repos. This allows the fully qualified name of the repos to be used as the name for the local repository without breaking the calls to the Registry API. 2013-07-08 17:26:50 -07:00
Sam Alba
e7d36c9590 It is now possible to include a ":" in a local repository name (this is not the case for a remote name). This adds support for fully qualified repository names in order to support private registry servers 2013-07-08 17:22:41 -07:00
Sam Alba
3e8626c4a1 Changed the tag parsing so it will work even if there is a port in the repo's registry URL (fully qualified name for pushing to a standalone registry) 2013-07-08 17:20:41 -07:00
Guillaume J. Charmes
e14dd4d33e Merge pull request #1157 from kstaken/1156-entrypoint-builder
Builder: Fix #1156 entrypoint override from base image
2013-07-08 16:57:26 -07:00
Kimbro Staken
87a69e6753 Merge branch '1156-entrypoint-builder' of github.com:kstaken/docker into 1156-entrypoint-builder 2013-07-08 16:06:09 -07:00
Kimbro Staken
f64dbdbe3a Override Entrypoint picked up from the base image that breaks run commands in builder 2013-07-08 16:04:39 -07:00
Kimbro Staken
2b5553144a Removing the save to disk as it was not really necessary 2013-07-08 16:03:18 -07:00
Guillaume J. Charmes
e43ef364cb Remove all network dependencies from the test suite 2013-07-08 15:23:04 -07:00
Guillaume J. Charmes
08a87d4b3b Fix #1162 - Remove bufio from Untar 2013-07-08 13:42:17 -07:00
Victor Vieux
90f372af5c Merge pull request #1163 from dotcloud/1137-change_search_size-feature
* Client: uses the terminal size to display search output, add -notrunc
2013-07-08 11:47:25 -07:00
Victor Vieux
3ec29eb5da Merge pull request #1066 from mhennings/fix-broken-streaming-result
* Server: Fix streaming status to the docker client while pushing images
2013-07-08 11:21:29 -07:00
Victor Vieux
3a20e4e15d add if to prevent crash 2013-07-08 18:19:12 +00:00
Victor Vieux
fd97190ee7 uses the terminal size to display search output, add -notrunc and fix bug in resize 2013-07-08 17:20:13 +00:00
Victor Vieux
70480ce7bc Merge pull request #1030 from dotcloud/builder_display_err_log
* Builder: Display container logs in case of build failure
2013-07-08 07:26:46 -07:00
Victor Vieux
bf7d6cbb4a rebase master 2013-07-08 13:26:29 +00:00
Victor Vieux
c059785ffb Merge pull request #1161 from dotcloud/add_remote_addr_debug
Add remote addr in debug
2013-07-08 05:46:49 -07:00
Victor Vieux
a0f5fb7394 add remote addr in debug 2013-07-08 12:45:50 +00:00
Victor Vieux
ad33e9f388 Merge pull request #1138 from dotcloud/1123-rmi_conflict-fix
* Runtime: Fix error in rmi when conflict
2013-07-08 05:19:05 -07:00
Kimbro Staken
1d1d81b0bc Cleanup white space 2013-07-08 00:18:47 -07:00
Kimbro Staken
f3d2969560 Override Entrypoint picked up from the base image that breaks run commands in builder 2013-07-08 00:11:45 -07:00
Alex Gaynor
758ea61b77 Replaced gendered language in the README 2013-07-07 13:55:02 +10:00
Sam Alba
e2b8ee2723 Fixed runtime_test (ImagePull prototyped changed) 2013-07-05 16:03:22 -07:00
Guillaume J. Charmes
07dc0a5120 Merge pull request #1144 from dotcloud/standalone_registry
* Registry: Standalone registry
2013-07-05 15:56:48 -07:00
Sam Alba
d3125d8570 Code cleaning 2013-07-05 15:26:08 -07:00
Sam Alba
283ebf3ff9 fmt.Errorf instead of errors.New 2013-07-05 14:56:56 -07:00
Sam Alba
4c174e0bfb Fixed ping URL 2013-07-05 14:55:48 -07:00
Sam Alba
57a6c83547 Allowing namespaces in standalone registry 2013-07-05 14:30:43 -07:00
Sam Alba
cfc7684b7d Restoring old changeset lost by previous merge 2013-07-05 12:37:07 -07:00
Sam Alba
be49f0a118 Merging from master 2013-07-05 12:27:10 -07:00
Sam Alba
66a9d06d9f Adding support for nicer URLs to support standalone registry (+ some registry code cleaning) 2013-07-05 12:20:58 -07:00
Guillaume J. Charmes
6940cf1ecd Merge pull request #1127 from cespare/patch-1
Typo fix
2013-07-05 10:48:59 -07:00
Guillaume J. Charmes
4e0cdc016a Revert #1126. Remove mount shm 2013-07-05 10:47:00 -07:00
Guillaume J. Charmes
8a8109648a Merge pull request #1129 from cespare/style-fixes-2
Style fixes for fmt + err usage.
2013-07-05 10:31:53 -07:00
Guillaume J. Charmes
dc8b359319 Merge pull request #1126 from karanlyons/patch-1
* Runtime: Mount /dev/shm as a tmpfs.
2013-07-05 10:31:05 -07:00
Victor Vieux
dea29e7c99 Fix error in rmi when conflict 2013-07-05 16:58:39 +00:00
Daniel Mizyrycki
ab6379b3e0 Merge pull request #1133 from dotcloud/775-testing-notifications
testing, issue #775: Add automatic testing notifications to docker-ci
2013-07-04 21:48:00 -07:00
Daniel Mizyrycki
f7fed2ea5f testing, issue #775: Add automatic testing notifications to docker-ci 2013-07-04 21:43:46 -07:00
Daniel Mizyrycki
35e87ee571 Merge pull request #1132 from dotcloud/776-testing-commit
testing, issue #776: Ensure docker-ci test docker code as it was at commit time
2013-07-04 20:37:51 -07:00
Daniel Mizyrycki
ab3893ff4d testing, issue #776: Ensure docker-ci test docker code as it was at commit time 2013-07-04 20:28:54 -07:00
Caleb Spare
1277dca335 Style fixes for fmt + err usage.
fmt.Printf and friends will automatically format using the error
interface (.Error()) preferentially; no need to do err.Error().
2013-07-04 14:33:17 -07:00
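A small Go illustration of the style fix described above (not the actual diff): the fmt verbs already format error values via the error interface, so the explicit .Error() call is redundant.

    package main

    import (
        "errors"
        "fmt"
    )

    func main() {
        err := errors.New("connection refused")
        // Redundant: %s already formats err through the error interface.
        fmt.Printf("push failed: %s\n", err.Error())
        // Preferred: pass the error value directly.
        fmt.Printf("push failed: %s\n", err)
    }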
Caleb Spare
ba9aef6f2c Typo fix
Error message grammar tweak
2013-07-04 12:40:14 -07:00
Karan Lyons
dd619d2bd6 Mount /dev/shm as a tmpfs.
Fixes #1122.
2013-07-04 09:58:50 -07:00
Marco Hennings
1e2ef274cd Pushing an Image causes the docker client to give an error message instead of
writing out streamed status.

This is caused by a Buffering message that is not in the correct json format:

[...]
{"status"
:"Pushing 6bba11a28f1ca247de9a47071355ce5923a45b8fea3182389f992f4
24b93edae"}Buffering to disk 244/? (n/a)..
{"status":"Pushing",[...]

The "Buffering to disk" message is originated in
srv.runtime.graph.TempLayerArchive

I am now using the StreamFormatter provided by the context from which the
method is called.
2013-07-04 10:50:37 +02:00
Guillaume J. Charmes
bcb5e36dd9 Merge pull request #1111 from cespare/style-fixes
Style fixes
2013-07-03 14:46:05 -07:00
Caleb Spare
19121c16d9 Implement several golint suggestions, including:
* Removing type declarations where they're inferred
* Changing Url -> URL, Id -> ID in names
* Fixing snake-case names
2013-07-03 14:36:04 -07:00
Caleb Spare
27ee261e60 Simplify the NopWriter code. 2013-07-03 14:35:18 -07:00
Caleb Spare
da3962266a Gofmt -s (simplify) 2013-07-03 14:35:18 -07:00
Caleb Spare
e93afcdd2b Use fmt.Errorf when appropriate. 2013-07-03 14:35:18 -07:00
Caleb Spare
dd1b9e38e9 Typo correction: Excepted -> Expected 2013-07-03 14:35:18 -07:00
Guillaume J. Charmes
96bc9ea7c1 Merge pull request #1112 from cespare/mutex-style
Mutex style change.
2013-07-03 10:34:32 -07:00
Guillaume J. Charmes
16c8a10ef9 Merge pull request #1053 from dynport/do-not-copy-hostname-from-image
do not merge hostname from image
2013-07-03 10:34:15 -07:00
Joffrey F
5dcd11be16 Merge pull request #1109 from dynport/remote-lookup-fix
Fix remote lookup when pushing into registry
2013-07-03 06:29:19 -07:00
Ken Cochrane
dc91a7b641 Merge pull request #1113 from metalivedev/docs20130702
* Documentation: fix broken link on the documentation index page
2013-07-03 05:52:25 -07:00
Andy Rothfusz
11998ae7d6 Fix installation link from welcome page. 2013-07-02 16:48:57 -07:00
Caleb Spare
1cf9c80e97 Mutex style change.
For structs protected by a single mutex, embed the mutex for more
concise usage.

Also use a sync.Mutex directly, rather than a pointer, to avoid the
need for initialization (because a Mutex's zero-value is valid and
ready to be used).
2013-07-02 15:53:08 -07:00
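A minimal Go sketch of the convention this commit describes (hypothetical type, not code from the repository): embedding sync.Mutex by value promotes Lock/Unlock onto the struct, and its zero value needs no initialization.

    package main

    import "sync"

    // counter embeds sync.Mutex by value: the zero value is ready to use,
    // and Lock/Unlock are promoted onto the struct itself.
    type counter struct {
        sync.Mutex
        n int
    }

    func (c *counter) incr() {
        c.Lock()
        defer c.Unlock()
        c.n++
    }

    func main() {
        var c counter // no mutex initialization required
        c.incr()
    }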
Andy Rothfusz
6dbcdd3ed5 Merge pull request #1095 from metalivedev/docs20130627
Docs and images updates
2013-07-02 15:13:31 -07:00
Tobias Schwab
9632cf09bf fix two obvious bugs??? 2013-07-02 22:11:03 +00:00
Andy Rothfusz
96ab3c540d Merge branch 'docs20130627' of github.com:metalivedev/docker into docs20130627
Conflicts:
	docs/sources/concepts/manifesto.rst
	docs/sources/index.rst
2013-07-02 15:10:07 -07:00
Andy Rothfusz
ff964d327d Cleaning up the welcome page, terminology, and images. 2013-07-02 15:03:29 -07:00
Andy Rothfusz
4b8688f1e5 Fix broken quickstart link 2013-07-02 14:10:06 -07:00
Andy Rothfusz
55b5889a0f Merge branch 'docs20130627' of github.com:metalivedev/docker into docs20130627
Conflicts:
	docs/sources/concepts/manifesto.rst
2013-07-02 13:00:08 -07:00
Andy Rothfusz
dd4c6f6a09 Shortened lines to 80 columns 2013-07-02 12:09:57 -07:00
Andy Rothfusz
6058261a26 Clean up image text, minor updates to docs. 2013-07-02 12:09:57 -07:00
Andy Rothfusz
b461e4607d Adding files for terms 2013-07-02 12:09:57 -07:00
Andy Rothfusz
d399f72098 Cleaning up the welcome page, terminology. 2013-07-02 12:09:57 -07:00
Guillaume J. Charmes
c9e1c65c64 Merge pull request #1107 from eliasp/issue-1020
* Runtime: Don't remove the container= environment variable.
2013-07-02 11:42:44 -07:00
Victor Vieux
3042f11666 never remove the file and try to load it in start 2013-07-02 18:02:16 +00:00
Elias Probst
e5e47c9862 Don't remove the container= environment variable, as it is crucial for a lot of tools to detect, whether they're run inside an LXC container or not. 2013-07-02 19:13:37 +02:00
Victor Vieux
1c5083315d Merge pull request #1103 from shin-/1060-pull-only-tagged-images
*Registry: When no tag is specified in docker pull, skip images that are not tagged
2013-07-02 10:08:21 -07:00
Victor Vieux
27a137ccab change file location 2013-07-02 17:02:42 +00:00
shin-
7cc294e777 When no tag is specified in docker pull, skip images that are not tagged 2013-07-02 18:25:06 +02:00
Daniel Mizyrycki
a20dcfb049 Merge pull request #987 from dotcloud/601-packaging-ubuntu
Packaging|ubuntu, issue #601: Allow packaging prerm to do its job
2013-07-02 08:51:01 -07:00
Victor Vieux
06b53e3fc7 store hostConfig to /tmp while container is running 2013-07-02 12:19:25 +00:00
Victor Vieux
8f9dd86146 Merge pull request #1101 from dotcloud/fix-unit_tests
add sleep in tests and go fmt
2013-07-02 03:48:24 -07:00
Victor Vieux
ebba0a6024 add sleep in tests and go fmt 2013-07-02 10:47:37 +00:00
Victor Vieux
c9236d99d2 Merge pull request #1099 from lopter/master
* Test: Fix
2013-07-02 02:42:21 -07:00
Louis Opter
f03c1b8eeb More unit test fixes
- Fix TestGetImagesJSON when there is more than one image in the test
  repository;
- Remove a hardcoded constant used in TestGetImagesByName;
- Wait in a loop in TestKillDifferentUser;
- Use env instead of /usr/bin/env in TestEnv;
- Create a daemon user in contrib/mkimage-unittest.sh.
2013-07-01 17:24:21 -07:00
Guillaume J. Charmes
6f23e39e6b Merge pull request #1097 from dotcloud/bump_0.4.8
Bump version to 0.4.8
2013-07-01 17:13:19 -07:00
Solomon Hykes
fe0378e9b3 Rephrase changelog 2013-07-01 17:05:49 -07:00
Guillaume J. Charmes
96a1d7c645 Bump version to 0.4.8 2013-07-01 16:58:25 -07:00
Guillaume J. Charmes
79ee8b46f4 Merge pull request #1046 from dotcloud/1043-output_id_non_attach-fix
- Runtime: Make sure the ID is displayed using run -d
2013-07-01 16:49:43 -07:00
Victor Vieux
55a7a8b8c9 Merge pull request #1092 from lopter/master
Fix TestGetInfo when there is more than one image in the test repository
2013-07-01 16:41:01 -07:00
Andy Rothfusz
b47873c5ac Clean up image text, minor updates to docs. 2013-07-01 16:37:13 -07:00
Andy Rothfusz
adf75d402a Adding files for terms 2013-07-01 16:37:13 -07:00
Andy Rothfusz
cb1fdb2f03 Cleaning up the welcome page, terminology. 2013-07-01 16:37:13 -07:00
Victor Vieux
d1d66b9c5f Merge pull request #1078 from kstaken/fix_json_error
* Remote API: Small fix in /start if empty host config
2013-07-01 16:36:58 -07:00
Louis Opter
6dacbb451f Fix TestGetInfo when there is more than one image in the test repository
See also #1089, #1072.
2013-07-01 15:06:08 -07:00
Guillaume J. Charmes
ead9cefadb Merge pull request #1089 from dotcloud/multiple_test_images-fix
- Tests: Fix unit tests when there is more than one tag within the test image
2013-07-01 13:58:04 -07:00
Guillaume J. Charmes
185a2fc55e Merge pull request #1086 from crosbymichael/1008-image-entrypoint
+ Builder: Add Entrypoint to builder and container config
2013-07-01 13:33:12 -07:00
Ken Cochrane
fb8fac6c60 Merge pull request #1088 from kpelykh/master
* Documentation: Update Docker Remote API client list to include Java library
2013-07-01 12:50:31 -07:00
Guillaume J. Charmes
b6f288a1ce Fix unit tests when there is more than one tag within the test image 2013-07-01 11:45:45 -07:00
zettaset-kpelykh
aa9bec96b1 Issue #1087 Docker Java API client -- added java to Docker Remote API Client document 2013-07-01 11:28:40 -07:00
Victor Vieux
11e28842ac change to top 2013-07-01 15:19:42 +00:00
Michael Crosby
b16ff9f859 Add Entrypoint to builder and container config
By setting an entrypoint in the Dockerfile this
allows one to run an image and only pass arguments.
2013-07-01 05:34:27 -09:00
Victor Vieux
348c5c4838 Merge pull request #1085 from dotcloud/1076-doc_delete-fix
fix status code in doc
2013-07-01 06:30:21 -07:00
Victor Vieux
8dcc6a0280 fix status code in doc 2013-07-01 13:28:58 +00:00
Victor Vieux
3b5ad44647 rebase master 2013-07-01 12:31:16 +00:00
Victor Vieux
5e029f7600 Merge pull request #1061 from proppy/fix-slices-ref
api,server: slices are already refs, no need to return ptr
2013-07-01 04:51:56 -07:00
Keli Hu
52cebe19e5 Keep debian package up-to-date 2013-07-01 16:15:56 +08:00
Kimbro Staken
d8d33e8b8b Adding check for content-type header 2013-06-30 10:46:09 -07:00
Solomon Hykes
b37f7d49d8 Documented release process for maintainers in hack/RELEASE.md 2013-06-29 22:08:25 -07:00
Solomon Hykes
d67d5dd963 Merge pull request #1065 from dotcloud/bump_0.4.7
Bump version to 0.4.7
2013-06-29 21:23:59 -07:00
Solomon Hykes
273e0d42b7 * Hack: change builder tests to always use the current unit test image, instead of hardcoding 'docker-ut' 2013-06-29 21:22:15 -07:00
Solomon Hykes
ca497a82ab Bump version to 0.4.7 2013-06-29 21:12:29 -07:00
Solomon Hykes
b7226316c7 * Hack: move unit tests to a different source image, to work around issues when docker-ut has more than 1 tag. 2013-06-28 19:43:55 -07:00
Guillaume J. Charmes
84f41954ae Merge pull request #1052 from lopter/master
Fix a couple critical bugs on the test suite
2013-06-28 17:00:51 -07:00
Johan Euphrosine
54da339b2c api,server: slices are already refs, no need to return ptr 2013-06-28 12:41:09 -07:00
Sam Alba
ac37fcf6f3 Fixed conflicts 2013-06-28 12:36:59 -07:00
Sam Alba
893c974b08 Resolve conflict 2013-06-28 12:32:41 -07:00
Joffrey F
30342efa37 Merge pull request #700 from dotcloud/615-pushbyid
Allow to push/pull on independent registries (by repository or image ID)
2013-06-28 10:29:10 -07:00
Daniel Mizyrycki
6165c246d4 Merge pull request #1057 from dotcloud/973-testing-stabilization
testing|stabilization, issue 973: Use docker-golang PPA and lts-raring kernel
2013-06-28 09:54:06 -07:00
shin-
72befeef24 Fixed issue in registry.GetRemoteTags 2013-06-28 18:42:37 +02:00
Victor Vieux
648c4f198b Add test 2013-06-28 16:27:00 +00:00
Daniel Mizyrycki
af2a92f22b testing|stabilization, issue 973: Use docker-golang PPA and lts-raring kernel 2013-06-28 09:23:25 -07:00
shin-
ad2f826a82 go fmt pass 2013-06-28 18:19:58 +02:00
shin-
e095a1572f Allow push by ID when using a custom registry 2013-06-28 18:19:58 +02:00
shin-
c3dd6e1926 Several fixes and updates to make this work with latest changes in master 2013-06-28 18:19:58 +02:00
Guillaume J. Charmes
67ecd2cb82 Reenable writeflusher for pull 2013-06-28 18:19:58 +02:00
Guillaume J. Charmes
57d751c377 Remove https prefix from registry 2013-06-28 18:19:58 +02:00
shin-
50075106b6 Rolled back of previous commit (skip cert verification) 2013-06-28 18:19:58 +02:00
shin-
2a1f8f6fda Ignore 'registry not found' when pushing on independent registries 2013-06-28 18:19:58 +02:00
shin-
1c817913ee Skip certificate check (don't error out on self-signed certs) 2013-06-28 18:19:58 +02:00
shin-
de0a48bd6f Tentative support for independent registries 2013-06-28 18:19:58 +02:00
Victor Vieux
8589fd6db8 Add doc 2013-06-28 18:05:41 +02:00
Victor Vieux
2e79719622 add /proc to list running processes inside a container 2013-06-28 15:51:58 +00:00
Tobias Schwab
9bfec5a538 do not merge hostname from image 2013-06-28 15:22:01 +02:00
Victor Vieux
a11fc9f067 Merge pull request #1032 from andrewsmedina/govet
following the 'go vet' suggestions for the docker package.
2013-06-28 05:27:53 -07:00
Thatcher
e12a204bcc Merge pull request #1028 from dhrp/bugfixes-on-docs
Bugfixes on documentation code
2013-06-27 19:01:13 -07:00
Louis Opter
fe014a8e6c Always return the correct test image.
And not a *random* one from its history.
2013-06-27 18:01:20 -07:00
Louis Opter
aa8ea84d11 Fix a nil dereference in buildfile_test.go
The test runtime object wasn't properly initialized.
2013-06-27 18:01:10 -07:00
Sam Alba
3175e56ad0 URL schemes of both Registry and Index are now consistent 2013-06-27 17:55:17 -07:00
Guillaume J. Charmes
800d900688 Ignore stderr while doing tests 2013-06-27 15:25:31 -07:00
Guillaume J. Charmes
1a201d2433 Merge pull request #1035 from dotcloud/fix_panic_cast
- Runtime: fix panic with unix socket
2013-06-27 15:22:18 -07:00
Guillaume J. Charmes
750c94efbb Merge pull request #1041 from unclejack/fix_minor_kernel_version_for_git_kernels
remove + from minor kernel version for kernels built from git
2013-06-27 15:21:34 -07:00
Guillaume J. Charmes
bd144a64f6 Make sure the ID is displayed using run -d 2013-06-27 12:48:25 -07:00
Guillaume J. Charmes
2a20e85203 Improve last log output 2013-06-27 11:10:19 -07:00
unclejack
5ed4386bbf remove + from minor kernel version 2013-06-27 17:51:17 +03:00
Victor Vieux
9d3ec7b39f fix panic with unix socket 2013-06-27 12:57:19 +00:00
Victor Vieux
e68a23bdc1 Merge pull request #1019 from dotcloud/1002-change_update_progress_bar_rate-feature
*Remote API: update progressbar every MIN(1%, 512kB)
2013-06-27 04:19:42 -07:00
Andrews Medina
6cf493bea7 following 'go vet' in utils pkg. 2013-06-27 01:40:13 -03:00
Andrews Medina
3d5633a0a0 following the 'go vet' suggestions. 2013-06-27 01:33:55 -03:00
Solomon Hykes
c4a44f6f0b Merge pull request #1029 from lopter/master
* Hack: add a script to create the docker-ut image (busybox + socat)
2013-06-26 16:28:58 -07:00
Solomon Hykes
3e29695c1f Merge pull request #602 from gabrtv/111-bind-mounts
+ Runtime: mount volumes from a host directory with 'docker run -b'
2013-06-26 15:59:35 -07:00
Solomon Hykes
46a9f29bae - Runtime: small bugfixes in external mount-bind integration 2013-06-26 15:26:47 -07:00
Gabriel Monroy
67239957c9 - Fix a few bugs in external mount-bind integration 2013-06-26 15:10:38 -07:00
Solomon Hykes
d4e62101ab * Runtime: better integration of external bind-mounts (run -b) into the volume subsystem (run -v) 2013-06-26 15:08:07 -07:00
Gabriel Monroy
4fdf11b2e6 + Runtime: mount volumes from a host directory with 'docker run -b' 2013-06-26 15:07:31 -07:00
Guillaume J. Charmes
cd0f22ef72 Merge pull request #1005 from dotcloud/1004-stdin_piping-fix
- Runtime: Fix issue when attaching stdin alone
2013-06-26 12:56:14 -07:00
Guillaume J. Charmes
27d6777376 Display containers logs in case of build failure 2013-06-26 12:50:20 -07:00
Louis Opter
e5c0b31107 Add a script to create the docker-ut image
It's a fork of the mkimage-busybox.sh script and it adds socat to the
image. (socat being needed to add udp support, see #33).

This script, like mkimage-busybox.sh, probably only works on
Debian/Ubuntu.
2013-06-26 12:35:14 -07:00
Guillaume J. Charmes
5cdbd2ed7a Merge pull request #1021 from errnoh/fix-test-filled-tmpfs
TestKill and TestMultipleContainers: run sleep instead of cat /bin/zero....
2013-06-26 11:59:06 -07:00
Guillaume J. Charmes
b44e2e71aa Merge pull request #1010 from dotcloud/1009-testing-hack
Testing|hack, issue #1009: Update make hack environment
2013-06-25 17:07:18 -07:00
Thatcher Peskens
73afc6311d Bugfixes on docs
* fixed canonical link from index
* added http redirect from builder/basics
* fixed url in redirect_home
2013-06-25 15:31:22 -07:00
Guillaume J. Charmes
6127d757a7 Add missing fprintf instead of printf 2013-06-25 10:39:11 -07:00
Erno Hopearuoho
fb86dcfb17 TestKill and TestMultipleContainers: run sleep instead of cat /bin/zero. fixes #737 2013-06-25 17:52:10 +03:00
Victor Vieux
bccf06c748 update progressbar every MIN(1%, 512kB) 2013-06-25 14:03:15 +00:00
Victor Vieux
862e223cec Merge branch 'add-daemon-storage-path-param' of https://github.com/heavenlyhash/docker into heavenlyhash-add-daemon-storage-path-param 2013-06-25 13:33:45 +00:00
Ken Cochrane
e1e2ff52fe Merge pull request #1018 from nahiluhmot/add-swipely-docker-gem
* Documentation: Added Swipely's `docker-api` gem to the table of Remote API Client Libraries.
2013-06-25 05:49:52 -07:00
Tom Hulihan
d03edf12e4 Added Swipely's docker-api gem to the table of Remote API Client
Libraries.
2013-06-25 08:26:41 -04:00
Guillaume J. Charmes
ec1dfc521c Merge pull request #992 from unclejack/use_numeric_owner_for_tar
* Runtime: use --numeric-owner for Tar and Untar
2013-06-24 18:40:43 -07:00
Guillaume J. Charmes
5190f7f33a Implement regression test for stdin attach 2013-06-24 18:36:04 -07:00
Guillaume J. Charmes
873a5aa8e7 Make NewDockerCli handle terminal 2013-06-24 18:29:08 -07:00
Guillaume J. Charmes
672d3a6c6c Make term function consistent with each other 2013-06-24 18:27:57 -07:00
Guillaume J. Charmes
a749fb2130 Make DockerCli use its own stdin/out/err instead of the os.Std* 2013-06-24 18:27:57 -07:00
Guillaume J. Charmes
25d1bc2c09 Fix issue when attaching stdin alone 2013-06-24 18:27:57 -07:00
Daniel Mizyrycki
cc63c1b584 Testing|hack, issue #1009: Update make hack environment 2013-06-24 15:01:51 -07:00
Solomon Hykes
145c622aba Merge pull request #990 from dotcloud/fix-tests-cgo
* Hack: remove dependency of unit tests on 'os/user', which cannot be used with CGO_ENABLED=0
2013-06-24 12:31:54 -07:00
Ken Cochrane
e2516c01b4 Merge pull request #932 from metalivedev/docs20130614
+ Documentation: Add terminology section
2013-06-24 10:39:25 -07:00
Victor Vieux
a3cb18d0f0 Merge pull request #1003 from dotcloud/fix_utils_tests
fix regression in utils tests introduced by #980
2013-06-24 09:14:13 -07:00
Victor Vieux
eca9f9c1a1 fix regression in utils tests introduced by #980 2013-06-24 16:12:39 +00:00
Daniel Mizyrycki
aee845682f Merge pull request #998 from dotcloud/861-hack-vagrant
Fixing hack/Vagrantfile to use uname for aufs linux extras
2013-06-22 20:10:00 -07:00
Anthony Bishopric
e3dbe2f2ba Fixing hack/Vagrantfile to use uname for aufs linux extras
Conflicts: hack/Vagrantfile
Resolved by: Daniel Mizyrycki <daniel@dotcloud.com>
2013-06-22 19:58:33 -07:00
Solomon Hykes
193888a2b4 Merge pull request #997 from dotcloud/bump_0.4.6
Bump version to 0.4.6
2013-06-22 13:41:48 -07:00
Solomon Hykes
9fe8bfb2bc Bump version to 0.4.6 2013-06-22 13:36:45 -07:00
Solomon Hykes
fc25973371 Merge pull request #996 from dotcloud/995-volumes-crash
- Runtime: fix a bug which caused creation of empty images (and volumes) to crash.
2013-06-22 13:35:26 -07:00
Solomon Hykes
f9acd605dc - Runtime: add regression test for issue #995 2013-06-22 13:33:43 -07:00
Solomon Hykes
290b1973a9 Fix a bug which caused creation of empty images (and volumes) to crash. FIxes #995. 2013-06-22 12:29:42 -07:00
unclejack
d7d42ff4fe use --numeric-owner for tar and untar 2013-06-22 18:53:10 +03:00
Solomon Hykes
ce9e50f4ee Remove dependency on 'os/user', which cannot be used with CGO_ENABLED=0. This allows running the tests without CGO. 2013-06-21 19:40:42 -07:00
Solomon Hykes
ecd1fff9b0 Fix formatting in Changelog 2013-06-21 20:22:14 -06:00
Daniel Mizyrycki
b0f12bd5e8 Merge pull request #988 from dotcloud/526-packaging-ubuntu
Packaging|Ubuntu, issue #526: Follow init permission pattern for docker.conf
2013-06-21 18:03:12 -07:00
Daniel Mizyrycki
5d61ec11e3 Packaging|Ubuntu, issue #526: Follow init permission pattern for docker.conf 2013-06-21 17:57:45 -07:00
Daniel Mizyrycki
41cdd9b27f Packaging|ubuntu, issue #601: Allow packaging prerm to do its job 2013-06-21 17:37:32 -07:00
Victor Vieux
ec6b35240e fix raw terminal 2013-06-22 00:37:02 +00:00
Guillaume J. Charmes
c792c0a6c9 Merge pull request #986 from dotcloud/bump_0.4.5
Bump version to 0.4.5
2013-06-21 17:13:38 -07:00
Solomon Hykes
d9d2540162 Small copy improvements in Changelog 2013-06-21 17:09:49 -07:00
Guillaume J. Charmes
f5d08fc49c Bump version to 0.4.5 2013-06-21 17:01:01 -07:00
Victor Vieux
b24759af1c Merge pull request #959 from dotcloud/947-add_ps_size_option
+ Client: Add docker ps -s option to display containers' sizes
2013-06-21 16:10:45 -07:00
Victor Vieux
4d1692726b merge master and add doc 2013-06-22 01:08:20 +02:00
Victor Vieux
1581ed52ba consistent codebase fix 2013-06-21 22:55:33 +00:00
Guillaume J. Charmes
de1a5a75cc Merge pull request #848 from dotcloud/builder_server-3
Improve Docker build
2013-06-21 14:55:08 -07:00
Guillaume J. Charmes
169ef21de7 Remove deprecated INSERT from documentation 2013-06-21 14:12:52 -07:00
Guillaume J. Charmes
d0fa6927f8 Update api docs 2013-06-21 13:51:48 -07:00
Victor Vieux
63e8a4ac74 Merge pull request #970 from titanous/go1.1-unreachable
Remove code unreachable using Go 1.1
2013-06-21 10:44:40 -07:00
Victor Vieux
459230d3f9 Merge pull request #980 from ToothlessGear/fix-progressbar
* Client: fix progressbar output
2013-06-21 10:42:14 -07:00
Guillaume J. Charmes
070e1aec7e Merge pull request #975 from unclejack/891-mark_command_as_optional
891 mark command as optional for docker run
2013-06-21 10:22:52 -07:00
Marcus Farkas
3ac68f1966 Client: better progressbar output 2013-06-21 17:19:27 +02:00
Victor Vieux
42bcfcc927 add options to docker login 2013-06-21 10:00:25 +00:00
Victor Vieux
5ccde4dffc Merge pull request #801 from dotcloud/build_docker_static
* Makefile: Add link flags in order to link statically and without debug symbols
2013-06-21 02:30:57 -07:00
Victor Vieux
dc847001a5 Merge pull request #888 from dotcloud/fix-auth
* Auth: fix auth small issue in case you change your password on index.io
2013-06-21 02:22:01 -07:00
Victor Vieux
639833aaf5 fix tests 2013-06-21 09:20:57 +00:00
Victor Vieux
8f2a80804c Merge branch 'master' into fix-auth 2013-06-21 09:18:03 +00:00
Victor Vieux
78842970cf Merge pull request #976 from dotcloud/fix_doc_post
fix typo in documentation
2013-06-21 02:14:39 -07:00
Victor Vieux
8e7d4cda07 fix doc post 2013-06-21 11:13:13 +02:00
Victor Vieux
5b3ad0023b inverse if 2013-06-21 09:06:09 +00:00
unclejack
6a1279fb90 docs: mark command as optional for docker run 2013-06-21 11:07:14 +03:00
unclejack
66910a7602 mark command as optional for docker run 2013-06-21 11:06:41 +03:00
Solomon Hykes
d9bce2defd - Builder: return an error when the build fails 2013-06-20 22:15:19 -07:00
Solomon Hykes
352991bdf4 Merge branch 'simpler-build-upload' (#900) into builder_server-3 (#848) 2013-06-20 22:02:36 -07:00
Solomon Hykes
4383d7b603 Merge pull request #953 from rhysh/952-swapaccount-docs
- Documentation: fix inconsistent/outdated instructions for setting up "swapaccount"
2013-06-20 20:49:44 -07:00
Solomon Hykes
89ae56820a Merge pull request #965 from freegenie/patch-1
* Documentation: use sudo for quick install script
2013-06-20 20:48:42 -07:00
Solomon Hykes
17489cac1a Merge pull request #972 from titanous/update-authors
Update AUTHORS
2013-06-20 20:42:52 -07:00
Solomon Hykes
c1a5318d8e Merge branch 'build-add-file' into simpler-build-upload
Conflicts:
	buildfile_test.go
2013-06-20 20:42:19 -07:00
Jonathan Rudenberg
b0b690cf23 Update AUTHORS 2013-06-20 23:29:20 -04:00
Solomon Hykes
86e83186b5 Merge branch 'master' into simpler-build-upload
Conflicts:
	commands.go
2013-06-20 20:25:59 -07:00
Solomon Hykes
36d610a388 - Builder: fixed a regression in ADD. Improved regression tests for build behavior. 2013-06-20 20:20:16 -07:00
Jonathan Rudenberg
50b70eeb68 Remove code unreachable using Go 1.1 2013-06-20 23:19:44 -04:00
Solomon Hykes
cc0f59742f * Builder: simplified unit tests. The tests are now embedded in the build itself. Yeah baby. 2013-06-20 20:16:39 -07:00
Solomon Hykes
09dd7f14de Merge pull request #969 from dotcloud/fix_logs
- Remote API: Fix a bug which caused 'docker logs' to fail
2013-06-20 18:33:33 -07:00
Guillaume J. Charmes
b419699ab8 Use hijack for logs instead of stream 2013-06-20 18:18:36 -07:00
Guillaume J. Charmes
08825fa611 remove unused files 2013-06-20 16:31:11 -07:00
Solomon Hykes
02f0c1e46d Bump version to 0.4.4 2013-06-20 14:33:59 -07:00
Eric Myhre
e44f62a95c Add argument to allow setting base directory for docker daemon's storage to values other than "/var/lib/docker". 2013-06-20 16:29:54 -05:00
Solomon Hykes
dbfb3eb923 - Builder: hotfix for bug introduced in 3adf9ce04e 2013-06-20 14:29:34 -07:00
Solomon Hykes
e43323221b Merge branch 'master' into simpler-build-upload
Conflicts:
	api.go
	builder_client.go
	commands.go
2013-06-20 14:19:09 -07:00
Fabrizio Regini
da06349723 Update README.md 2013-06-20 23:15:38 +02:00
Solomon Hykes
cff2187a4c Fixed API version numbers in api docs 2013-06-20 12:30:02 -07:00
Guillaume J. Charmes
a078d3c872 Merge pull request #950 from dotcloud/bump_0.4.3
Bumped version to 0.4.3
2013-06-20 12:24:55 -07:00
Guillaume J. Charmes
da5bb4db96 Bumped version to 0.4.3 2013-06-20 12:23:14 -07:00
Daniel Mizyrycki
1b19939742 Merge pull request #946 from unclejack/speed_up_dockerbuilder_image_creation
dockerbuilder : batch apt-get install operations for speed
2013-06-20 11:51:43 -07:00
Guillaume J. Charmes
930e1d8830 Merge pull request #941 from dotcloud/makefile_test_subpackages
gofmt and test sub directories in makefile
2013-06-20 11:18:37 -07:00
Guillaume J. Charmes
fa68fe6ff3 Merge pull request #938 from dotcloud/add_unix_socket-feature
* Runtime: Add unix socket and multiple -H
2013-06-20 11:17:16 -07:00
Guillaume J. Charmes
21a5a6202d Merge pull request #907 from dotcloud/go1.1_cookie_jar-feature
* Runtime: use go 1.1 cookiejar and remove ResetClient
2013-06-20 10:48:36 -07:00
Solomon Hykes
db60337598 Makefile: added missing -a option 2013-06-20 10:39:09 -07:00
Guillaume J. Charmes
c5be64fec4 Add link flags in order to link statically and without debug symbols 2013-06-20 10:33:54 -07:00
Guillaume J. Charmes
659e846006 Merge branch 'master' into builder_server-3
Conflicts:
	docs/sources/use/builder.rst
2013-06-20 10:27:12 -07:00
Solomon Hykes
d8f56352da Merge pull request #961 from dotcloud/933-warning_rm-running
* Runtime: refuse to remove a running container
2013-06-20 10:25:02 -07:00
Guillaume J. Charmes
d1a3d020aa Merge pull request #913 from dotcloud/fix_detach_eof
- Runtime: Impossible to detach from attached container fix
2013-06-20 10:21:19 -07:00
Guillaume J. Charmes
8807b7dd46 Merge pull request #909 from dotcloud/fix_auth_tests
Fix the auth tests
2013-06-20 10:14:55 -07:00
Daniel Mizyrycki
cd155a1f25 Merge pull request #962 from dotcloud/960-packaging-ubuntu
Packaging|ubuntu, issue #960: Add docker PPA staging in release process
2013-06-20 09:03:15 -07:00
Daniel Mizyrycki
d8887f3488 Packaging|ubuntu, issue #960: Add docker PPA staging in release process 2013-06-20 08:57:28 -07:00
Victor Vieux
1c841d4fee add warning when you rm a running container 2013-06-20 15:45:30 +00:00
Victor Vieux
da199846d2 use strconv.ParseBool in getBoolParam 2013-06-20 14:34:58 +00:00
Victor Vieux
bd04d7d475 add ps -s 2013-06-20 14:19:50 +00:00
Victor Vieux
5f93aa0ecf rebase master 2013-06-20 13:56:36 +00:00
Victor Vieux
05796bed57 update docs 2013-06-20 12:34:08 +00:00
Solomon Hykes
8a131dffb6 Merge pull request #948 from dotcloud/registry_pathencode
* Registry: Use opaque requests when we need to preserve urlencoding in registry requests
2013-06-19 22:41:16 -07:00
Solomon Hykes
79efcb545d Merge branch 'master' into simpler-build-upload 2013-06-19 18:48:19 -07:00
Daniel Mizyrycki
88dcba3482 Packaging|ubuntu, issue #954: Generate debian/changelog from main CHANGELOG.md 2013-06-19 16:32:51 -07:00
Guillaume J. Charmes
754609ab69 Merge pull request #945 from globalcitizen/master
Security warnings in LXC configuration
2013-06-19 15:06:48 -07:00
Solomon Hykes
d6ab71f450 * Remote API: updated docs for 1.3 2013-06-19 15:03:33 -07:00
Solomon Hykes
55edbcd02f * Builder: remove duplicate unit test 2013-06-19 14:59:42 -07:00
Solomon Hykes
90dde9beab *Builder: warn pre-1.3 clients that they need to upgrade. This breaks semver, but our API should still be in 0.X versioning, in which case semver allows breaking changes. 2013-06-19 14:59:28 -07:00
Rhys Hiltner
5fc1329b2f the kernel needs "swapaccount=1" set - some docs are updated, this one seems to have slipped through
Fixes #952
2013-06-19 14:45:03 -07:00
Solomon Hykes
9c8085a0aa Merge pull request #951 from dotcloud/add-fix
* Builder: correct the behavior of ADD when copying directories.
2013-06-19 14:36:09 -07:00
Solomon Hykes
507ea757a5 * Builder: correct the behavior of ADD when copying directories. 2013-06-19 14:26:11 -07:00
Guillaume J. Charmes
7e065aaacd Merge pull request #917 from dotcloud/pull_pool
- Runtime: Forbid parallel push/pull for a single image/repo. Fixes #311
2013-06-19 14:11:29 -07:00
shin-
0312bbc535 Use opaque requests when we need to preserve urlencoding in registry requests 2013-06-19 13:49:45 -07:00
Solomon Hykes
a056f1deec Merge pull request #924 from eliasp/remove-ifconfig-usage
* Documentation: replace `ifconfig` in docs with `iproute`
2013-06-19 13:20:35 -07:00
Solomon Hykes
fdaefe6997 Merge pull request #944 from hansent/patch-1
* Documentation: use https repo url to clone for dev setup instructions
2013-06-19 13:17:08 -07:00
unclejack
88279439af batch apt-get install operations for speed
The dockerbuilder Dockerfile was installing one package per apt-get
install operation.

This changes it so that consecutive RUN apt-get install operations are
batched into a single operation.
2013-06-19 22:07:56 +03:00
Victor Vieux
2d6a49215c add testall rule 2013-06-19 18:21:53 +00:00
Guillaume J. Charmes
a7e14a3065 hotfix: nil pointer upon some registry error 2013-06-19 11:08:19 -07:00
Guillaume J. Charmes
a660cc0d01 Merge pull request #934 from dotcloud/fix-add-behavior
* Build: Stabilize ADD behavior
2013-06-19 10:56:39 -07:00
globalcitizen
788d66f409 Add note about lxc.cap.keep > lxc.cap.drop 2013-06-20 00:39:35 +07:00
globalcitizen
96988a37f5 Add healthy procfs/sysfs warnings 2013-06-20 00:37:08 +07:00
Solomon Hykes
b368d21568 Merge branch 'master' into fix-add-behavior 2013-06-19 10:31:50 -07:00
Thomas Hansen
c88b763e80 use https repo url to clone for dev setup instructions
the git clone line in the dev setup instructions does not work as is, unless the user has write access
2013-06-19 11:38:58 -05:00
Victor Vieux
ec3c89e57c Merge pull request #849 from dotcloud/improve_progressbar_pull
* Client: HumanReadable ProgressBar sizes in pull
2013-06-19 08:02:40 -07:00
Victor Vieux
5dcab2d361 gofmt and test sub directories in makefile 2013-06-19 14:50:58 +00:00
Victor Vieux
5f7e98be20 Merge pull request #930 from andrewmunsell/patch-1
Fix Mac OS X installation instructions URL
2013-06-19 07:31:24 -07:00
Victor Vieux
d52af3f58f Merge branch 'master' into add_unix_socket-feature 2013-06-19 12:49:27 +00:00
Victor Vieux
063c838c92 update docs 2013-06-19 12:48:50 +00:00
Victor Vieux
9632bf2287 add tests 2013-06-19 12:40:01 +00:00
Victor Vieux
dede1585ee add the possibility to use multiple -H 2013-06-19 12:31:54 +00:00
Solomon Hykes
5be7b9af3e * Builder: fixed the behavior of ADD to be (mostly) reverse-compatible, predictable and well-documented. 2013-06-18 20:28:49 -07:00
Andy Rothfusz
5183399f50 Added multilayer example image. 2013-06-18 19:31:35 -07:00
Andy Rothfusz
a780b7c6b5 New Terminology section and illustrations. 2013-06-18 19:31:35 -07:00
Daniel Mizyrycki
0ae778c881 Merge pull request #788 from samjsharpe/master
Vagrantfile: Add support for VMWare Fusion provider
2013-06-18 18:35:25 -07:00
Andrew Munsell
1f8b679b18 Fix Mac OS X installation instructions URL 2013-06-18 19:19:07 -06:00
Guillaume J. Charmes
ee5df76579 Merge pull request #885 from dotcloud/remove_bsdtar
* Runtime: Remove bsdtar dependency
2013-06-18 17:24:26 -07:00
Guillaume J. Charmes
b431720dac Merge branch 'remove_bsdtar' of https://github.com/dotcloud/docker into remove_bsdtar 2013-06-18 17:23:22 -07:00
Guillaume J. Charmes
42ce68894a Fix issue within TestDelete. The archive is now consumed by graph functions 2013-06-18 17:22:32 -07:00
Guillaume J. Charmes
c063fc0238 Merge branch 'master' into fix_detach_eof
Conflicts:
	commands.go
2013-06-18 17:15:31 -07:00
Guillaume J. Charmes
0a9ac63a05 Merge pull request #916 from dotcloud/race_attach-fix
- Runtime: Fix race condition within Run command when attaching.
2013-06-18 17:13:38 -07:00
Guillaume J. Charmes
6dccdd657f remove offline mode from auth unit tests 2013-06-18 17:09:47 -07:00
Guillaume J. Charmes
34a434616a Merge branch 'master' into builder_server-3
Conflicts:
	buildfile.go
2013-06-18 16:12:30 -07:00
Elias Probst
bc9b91e501 Use the canonical 'ip' commands to make it easier for new 'iproute2' users to understand the usage. 2013-06-19 00:57:43 +02:00
Solomon Hykes
edbd3da33a Merge pull request #927 from dotcloud/nicer-build-output
* Builder: nicer output for 'docker build'
2013-06-18 15:48:00 -07:00
Guillaume J. Charmes
32e8f9beca Merge pull request #918 from dotcloud/hisotry_lookup
Add image lookup to history command
2013-06-18 15:36:05 -07:00
Guillaume J. Charmes
84ceeaa870 Update documentation 2013-06-18 14:36:35 -07:00
Solomon Hykes
cdeaba2acf Updated FIXME 2013-06-18 13:02:12 -07:00
Solomon Hykes
c0b82bd807 Fix incorrect docs for 'docker build' 2013-06-18 12:52:37 -07:00
Solomon Hykes
88e35b6f80 Merge pull request #926 from josephholsten/fix-znc-example
- Documentation: fix missing command in irc bouncer example
2013-06-18 12:37:21 -07:00
Guillaume J. Charmes
6e17cc45ea Fix merge issue 2013-06-18 12:33:06 -07:00
Solomon Hykes
cb9d0fd3bc Nicer output for 'docker build' 2013-06-18 12:26:56 -07:00
Victor Vieux
3adf9ce04e add basic support for unix sockets 2013-06-18 18:59:56 +00:00
Elias Probst
c2e95997d4 Fixed #923 by replacing the usage of 'ifconfig' with 'ip a' where appropriate and added a note to use 'ip a' instead of 'ifconfig' for a screencast transscript. 2013-06-18 19:55:59 +02:00
Guillaume J. Charmes
808faa6371 * API: Send all tags on History API call 2013-06-18 10:31:07 -07:00
Solomon Hykes
6f511ac29b Remove bsdtar dependency in various install scripts 2013-06-18 10:23:45 -07:00
Guillaume J. Charmes
3dc93e390a Remove useless goroutine 2013-06-18 10:10:03 -07:00
Guillaume J. Charmes
e2d034e488 Remove useless goroutine 2013-06-18 10:06:26 -07:00
Guillaume J. Charmes
86205540d8 Merge branch 'master' into race_attach-fix 2013-06-18 10:03:34 -07:00
Andy Rothfusz
702c3538a4 Merge pull request #921 from dhrp/docs-redirects-and-fixes
Fixes on documentation. LGTM
2013-06-18 09:58:24 -07:00
Victor Vieux
069a7c1e99 Merge pull request #914 from ToothlessGear/fix-version-output
* Client: Fix docker version's git commit output
2013-06-18 05:56:42 -07:00
Daniel Mizyrycki
2e7649beda Merge pull request #920 from dotcloud/919-packaging
Packaging, issue 919: Bump version to 0.4.2
2013-06-18 01:08:03 -07:00
Sam J Sharpe
8281a0fa1c Vagrantfile: Add support for VMWare Fusion provider
As a user who has blown $150 on VMWare Fusion and vagrant-vmware, I
would like to use my new shiny to hack on Docker. Docker already has a
multi-provider Vagrantfile, so adding another one presents little risk.

Known Issues:

- The docker install of a new kernel breaks the Vagrant shared folder.
    - This seems to be because the VMWare hgfs module doesn't build
      against a 3.8 kernel.
    - I don't believe that shared folder support is actually in use
2013-06-18 06:19:14 +01:00
Thatcher Peskens
3491d7d2f1 Fixes on documentation.
* replaced previously removed concepts/containers and concepts/introcution by a redirect
* moved orphan index/varable to workingwiththerepository
* added favicon to the layout.html
* added redirect_home which is a http refresh redirect. It works like a 301 for google
* fixed an issue in the layout that would make it break when on small screens
2013-06-17 20:16:56 -07:00
Daniel Mizyrycki
e664a46ff3 Packaging, issue 919: Bump version to 0.4.2 2013-06-17 19:50:31 -07:00
Solomon Hykes
0809f649d3 * Builder: upload progress bar
Fix progress bar
2013-06-17 18:49:16 -07:00
Guillaume J. Charmes
02a002d264 Update documentation 2013-06-17 18:41:13 -07:00
Guillaume J. Charmes
3bfc822578 * API: Add tag lookup to history command. Fixes #882 2013-06-17 18:39:30 -07:00
Guillaume J. Charmes
02c291d13b Fix bug in compression detection when chunk < 10 bytes 2013-06-17 18:11:58 -07:00
Marcus Farkas
b25bcf1a66 fix docker version git output 2013-06-17 23:32:48 +00:00
Guillaume J. Charmes
fe204e6f48 - Runtime: Forbid parallel push/pull for a single image/repo. Fixes #311 2013-06-17 16:10:00 -07:00
Guillaume J. Charmes
2b6ca38728 Remove Run race condition 2013-06-17 15:45:08 -07:00
Guillaume J. Charmes
c106ed32ea Move the attach prevention from server to client 2013-06-17 15:40:04 -07:00
Joseph Anthony Pasquale Holsten
2626d88a21 fix missing command in irc bouncer example 2013-06-17 14:50:58 -07:00
Guillaume J. Charmes
3a0ffbc772 - Runtime: Fixes #884 enforce stdout/err sync by merging the stream 2013-06-17 14:44:35 -07:00
Daniel Mizyrycki
bd9bf9b646 Packaging|dockerbuilder, issue #761: use docker-golang PPA. Add Dockerfile header 2013-06-17 14:08:50 -07:00
Victor Vieux
7b6f50772c Merge pull request #912 from dotcloud/bump_0.4.1
Bumped version to 0.4.1
2013-06-17 14:06:17 -07:00
Guillaume J. Charmes
555552340d Merge branch 'master' into builder_server-3
Conflicts:
	buildfile_test.go
2013-06-17 14:01:32 -07:00
Solomon Hykes
6e2c32eb9a Merge pull request #911 from dotcloud/add_port_redirection_doc
* Documentation: add port redirection doc
2013-06-17 13:57:08 -07:00
Solomon Hykes
22b0a38df5 Merge pull request #897 from dotcloud/fix-overlapping-add
* Builder: ADD improvements: use tar for copy + automatically unpack local archives
2013-06-17 13:32:34 -07:00
Solomon Hykes
cb58e63fc5 Typo 2013-06-17 14:28:04 -06:00
Solomon Hykes
8626598753 Added content to port redirect doc 2013-06-17 13:25:50 -07:00
Victor Vieux
36231345f1 add port redirection doc 2013-06-17 22:05:58 +02:00
Victor Vieux
e8f001d451 Bumped version to 0.4.1 2013-06-17 19:15:21 +00:00
Guillaume J. Charmes
13e03a6911 Fix the auth tests and add the offline mode 2013-06-17 11:29:02 -07:00
Victor Vieux
fde82f448f use go 1.1 cookiejar and remove ResetClient 2013-06-17 18:13:40 +00:00
Guillaume J. Charmes
79b3265ef1 Merge branch 'master' into builder_server-3
Conflicts:
	buildfile.go
2013-06-17 11:09:53 -07:00
Daniel Mizyrycki
389db5f598 Merge pull request #887 from dotcloud/761-packaging-dockerbuilder
Packaging, dockerbuilder: Automate pushing docker binary packages
2013-06-17 08:40:01 -07:00
Solomon Hykes
fe88b5068d Merge branch 'master' into simpler-build-upload 2013-06-15 12:35:26 -07:00
Solomon Hykes
6746c385bd FIXMEs 2013-06-15 12:24:20 -07:00
Solomon Hykes
f50e40008f * Builder: added a regression test for #895 2013-06-15 11:35:56 -07:00
Solomon Hykes
061f8d12e0 * Builder: reorganized unit tests for better code reuse, and to test non-empty contexts 2013-06-15 11:07:49 -07:00
Solomon Hykes
38554fc2a7 * Builder: simplify the upload of the build context. Simply stream a tarball instead of multipart upload with 4 intermediary buffers. Simpler, less memory usage, less disk usage, and faster. 2013-06-15 09:38:18 -07:00
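The commit above replaces a buffered multipart upload with a single streamed tar body. A minimal sketch of that pattern in Go, assuming a hypothetical endpoint and helper names (an illustration of the idea, not the code from the pull request):

```go
// Sketch: stream a directory as one tar request body instead of
// buffering a multipart upload. Names and endpoint are illustrative.
package buildcontext

import (
	"archive/tar"
	"io"
	"net/http"
	"os"
	"path/filepath"
)

// tarDir writes the regular files under dir to w as a tar stream.
func tarDir(dir string, w io.Writer) error {
	tw := tar.NewWriter(w)
	defer tw.Close()
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name, _ = filepath.Rel(dir, path)
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

// PostContext uploads dir to url as a single streamed tar body.
func PostContext(url, dir string) (*http.Response, error) {
	pr, pw := io.Pipe()
	go func() { pw.CloseWithError(tarDir(dir, pw)) }()
	req, err := http.NewRequest("POST", url, pr)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/x-tar")
	return http.DefaultClient.Do(req)
}
```

Streaming through an io.Pipe keeps memory usage roughly constant regardless of the context size, which is the gain the commit message describes.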
Solomon Hykes
cc7de8df75 Removed deprecated file builder_client.go 2013-06-15 09:30:52 -07:00
Solomon Hykes
30f604517a Merge pull request #895 from dotcloud/build-fixes
- Builder: fix a bug which caused builds to fail if ADD was the first command
2013-06-15 09:23:41 -07:00
Solomon Hykes
080f35fe65 Fix a bug which caused builds to fail if ADD was the first command 2013-06-15 09:16:35 -07:00
Guillaume J. Charmes
78f86ea502 Merge branch 'master' into builder_server-3
Conflicts:
	utils/utils.go
2013-06-14 17:08:39 -07:00
Solomon Hykes
5b8287617d + Builder: ADD of a local file will detect tar archives and unpack them
into the container instead of copying them as a regular file.

* Builder: ADD uses tar/untar for copies instead of calling 'cp -ar'.
	This is more consistent, reduces the number of dependencies, and
	fixes #896.
2013-06-14 16:43:39 -07:00
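The ADD changes above (together with the later "Remove bsdtar by checking magic" and "chunk < 10 bytes" entries in this log) rely on sniffing the archive type from its first bytes. A hedged sketch of that detection with illustrative names; the magic numbers are the standard gzip/bzip2/xz prefixes:

```go
// Sketch: detect whether a local ADD source is a compressed archive by
// peeking at its first bytes. Names are illustrative.
package addsketch

import "bytes"

type Compression int

const (
	Uncompressed Compression = iota
	Bzip2
	Gzip
	Xz
)

// DetectCompression inspects the first bytes of a stream. The length
// guard matters when the peeked chunk is shorter than the longest magic
// (compare the "chunk < 10 bytes" fix above).
func DetectCompression(source []byte) Compression {
	for compression, magic := range map[Compression][]byte{
		Bzip2: {0x42, 0x5A, 0x68},
		Gzip:  {0x1F, 0x8B, 0x08},
		Xz:    {0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00},
	} {
		if len(source) >= len(magic) && bytes.Equal(magic, source[:len(magic)]) {
			return compression
		}
	}
	return Uncompressed
}
```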
Solomon Hykes
5799806414 FIXMEs 2013-06-14 16:29:19 -07:00
Guillaume J. Charmes
76a568fc97 Fix merge issue 2013-06-14 16:08:08 -07:00
Solomon Hykes
14265d9a18 Various FIXME items 2013-06-14 15:11:34 -07:00
Solomon Hykes
17235eb089 Merge branch 'master' of ssh://github.com/dotcloud/docker 2013-06-14 15:07:05 -07:00
Solomon Hykes
7f118519eb Remove duplicate 'WARNING' 2013-06-14 14:46:08 -07:00
Solomon Hykes
250e47e2eb Merge branch 'dns_server_side'
+ Configure dns configuration host-wide with 'docker -d -dns'
+ Detect faulty DNS configuration and replace it with a public default
2013-06-14 14:39:05 -07:00
Guillaume J. Charmes
f413fb8e56 Merge pull request #857 from edx/856-vagrant-port-forwarding
* Vagrantfile: Add an option to forward all ports to the vagrant host that have been ex...
2013-06-14 14:37:55 -07:00
Guillaume J. Charmes
f0e43dcdb1 Merge pull request #607 from dotcloud/expose_api_port_vagrant-feature
* Vagrantfile: Add the rest api port to vagrantfile's port_forward
2013-06-14 14:35:53 -07:00
Guillaume J. Charmes
abf85b2508 Merge branch 'master' into remove_bsdtar
Conflicts:
	docs/sources/contributing/devenvironment.rst
2013-06-14 14:34:30 -07:00
Guillaume J. Charmes
813771e6b7 Merge pull request #892 from unclejack/validate_memory_limits
* Runtime: validate memory limits & error out if it's less than 524288
2013-06-14 14:32:28 -07:00
unclejack
d3f83a6592 add test: fail to create container if mem limit < 512KB 2013-06-14 22:55:00 +03:00
Andy Rothfusz
7958f1f694 Add examples for local import. 2013-06-14 13:42:59 -06:00
Guillaume J. Charmes
4a02c6dab1 Merge pull request #816 from unclejack/524-fix_aufs_links_related_warnings
524 fix aufs links related warnings
2013-06-14 12:32:47 -07:00
Guillaume J. Charmes
165d343d06 Merge pull request #663 from dotcloud/662-fix_push_html_404-fix
* Registry: add regexp check on repo's name
2013-06-14 12:30:44 -07:00
Guillaume J. Charmes
60fd7d686d Merge branch 'master' into improve_progressbar_pull 2013-06-14 12:01:40 -07:00
Solomon Hykes
c701de939f Merge branch 'master' of ssh://github.com/dotcloud/docker 2013-06-14 11:58:46 -07:00
Guillaume J. Charmes
05b87d2d5b Merge pull request #868 from dotcloud/postupload-endpoints-header
- Registry: Send X-Docker-Endpoints at the end of a push
2013-06-14 11:53:54 -07:00
Guillaume J. Charmes
78e4a385f7 Merge branch 'master' into postupload-endpoints-header
Conflicts:
	server.go
2013-06-14 11:50:58 -07:00
unclejack
822abab17e install aufs-tools when setting up the testing vagrant box 2013-06-14 21:41:12 +03:00
unclejack
f1d16ea003 install aufs-tools when setting up the hack vagrant box 2013-06-14 21:41:12 +03:00
unclejack
fb7eaf67d1 add aufs-tools package to dev env docs page 2013-06-14 21:41:12 +03:00
unclejack
e53721ef69 add aufs-tools to lxc-docker dependencies 2013-06-14 21:38:15 +03:00
unclejack
2f67a62b5b run auplink before unmounting aufs 2013-06-14 21:38:15 +03:00
Guillaume J. Charmes
79fe864d9a Update docs 2013-06-14 10:58:16 -07:00
Guillaume J. Charmes
6f7de49aa8 Add unit tests for tar/untar with multiple compression + detection 2013-06-14 10:47:49 -07:00
unclejack
9ee11161bf validate memory limits & error out if less than 512 KB 2013-06-14 19:52:44 +03:00
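A minimal sketch of the guard this commit describes, with an illustrative function name; 524288 bytes is the 512 KB threshold mentioned in the pull request title:

```go
// Sketch: reject container memory limits below 512 KB. Names are
// illustrative, not the repository's exact code.
package memcheck

import "fmt"

const minMemoryLimit = 524288 // 512 KB in bytes

func validateMemoryLimit(memory int64) error {
	if memory > 0 && memory < minMemoryLimit {
		return fmt.Errorf("memory limit must be at least %d bytes (512 KB), got %d", minMemoryLimit, memory)
	}
	return nil
}
```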
Victor Vieux
90f6bdd6e4 update docs, remove config file on 401 2013-06-14 13:38:51 +00:00
Victor Vieux
e49f82b9e1 update docs 2013-06-14 10:10:26 +00:00
Victor Vieux
ddf5a1940f Merge branch 'master' into 22-add_sizes_images_and_containers-feature 2013-06-14 10:05:06 +00:00
Victor Vieux
00cf2a1fa2 fix virtual size on images 2013-06-14 10:05:01 +00:00
Victor Vieux
9cc72ff1a9 fix auth in case you change your password on index.io 2013-06-14 09:53:48 +00:00
Daniel Mizyrycki
3384943cd3 Packaging, dockerbuilder: Automate pushing docker binary packages 2013-06-13 22:14:43 -07:00
Guillaume J. Charmes
2f14dae83f Add build UT 2013-06-13 18:52:41 -07:00
Guillaume J. Charmes
f03ebc20aa Fix issue with ADD 2013-06-13 18:42:27 -07:00
Guillaume J. Charmes
4b4918f2a7 Merge branch 'master' into builder_server-3
Conflicts:
	buildfile.go
	commands.go
	docs/sources/api/docker_remote_api.rst
2013-06-13 18:11:22 -07:00
Guillaume J. Charmes
0425f65e63 Remove bsdtar by checking magic 2013-06-13 17:53:38 -07:00
Guillaume J. Charmes
452128f0da Remove run() where it is not needed within the builder 2013-06-13 15:18:15 -07:00
Guillaume J. Charmes
f5fe3ce34e Remove run from non-running commands 2013-06-13 15:08:53 -07:00
Guillaume J. Charmes
d0084ce5f2 Remove run from the ADD instruction 2013-06-13 14:57:50 -07:00
Guillaume J. Charmes
2eaa0a1dd7 Fix non-tty run issue 2013-06-13 12:57:35 -07:00
Guillaume J. Charmes
8085754507 Merge pull request #751 from dotcloud/660-auth_client-feature
* Registry: Move auth to the client
2013-06-13 11:52:40 -07:00
Victor Vieux
c46382ba29 rebase master 2013-06-13 17:58:06 +00:00
Guillaume J. Charmes
b38c6929be Updated build usage 2013-06-13 10:50:55 -07:00
Guillaume J. Charmes
42d1c36a5c Fix merge issue 2013-06-13 10:25:43 -07:00
Guillaume J. Charmes
51a4b65101 Merge pull request #883 from unclejack/build-docker-with-go1.1.1
use Go 1.1.1 to build docker
2013-06-13 10:20:38 -07:00
Guillaume J. Charmes
30fb45c494 Merge pull request #799 from dotcloud/691-run_id-feature
* Runtime: allow docker run <name>:<id>
2013-06-13 10:14:40 -07:00
Victor Vieux
9cdd39e0d7 Merge branch 'master' into 691-run_id-feature 2013-06-13 13:18:43 +00:00
Victor Vieux
45a8945746 added test 2013-06-13 13:17:56 +00:00
Victor Vieux
697282d6ad Merge pull request #804 from dotcloud/no_stdout_stale-fix
*Runtime: Fix stale command when stdout is not allocated
2013-06-13 04:22:29 -07:00
unclejack
78a76ad50e use Go 1.1.1 to build docker 2013-06-13 08:59:41 +03:00
Solomon Hykes
5ecfe13be9 Merge branch '610-improve_rmi-feature'
* Runtime: improved image removal to garbage-collect unreferenced parents
- Runtime: fixed image removal to cleanly remove tags and repositories
2013-06-12 20:30:07 -07:00
Guillaume J. Charmes
0bc1c6d57a Merge pull request #826 from dotcloud/825-move_xino_shm-fix
- Runtime: fix aufs mount on ubuntu13.04+btrfs
2013-06-12 17:20:42 -07:00
Solomon Hykes
f57175cbad FIXME: a loose collection of FIXMEs for internal use by the maintainers 2013-06-12 15:28:59 -07:00
Sam Alba
81a11a3c30 Update NOTICE 2013-06-12 15:50:30 -06:00
Sam Alba
04cca097ae Update README.md 2013-06-12 15:50:09 -06:00
Andy Rothfusz
48897b5fa1 Merge pull request #845 from unclejack/841-update_docs_no_add_without_context
841 - docs: warn about the transmission of data to the docker daemon & ADD without context
2013-06-12 14:25:44 -07:00
Solomon Hykes
ecae342434 New roadmap item: advanced port redirections 2013-06-12 10:50:47 -07:00
Victor Vieux
f2383151cb bump to master 2013-06-12 17:39:32 +00:00
Solomon Hykes
b4565af256 Merge branch 'master' of ssh://github.com/dotcloud/docker 2013-06-12 10:23:14 -07:00
Guillaume J. Charmes
c85e775162 Merge pull request #844 from dotcloud/843-inspect_multiple_params-feature
* Runtime: allow multiple params in inspect
2013-06-12 10:18:42 -07:00
Guillaume J. Charmes
3491df6edb Merge pull request #852 from dotcloud/556-docker-search-fmt
Remove CR/NL from description in docker CLI
2013-06-12 10:17:05 -07:00
Guillaume J. Charmes
0e6ec57996 Merge pull request #874 from fsouza/fix-build-newline
- Builder: don't ignore last line in Dockerfile when it doesn't end with \n
2013-06-12 10:15:00 -07:00
Guillaume J. Charmes
f37b158982 Merge pull request #877 from fsouza/fix-hijack
- Runtime: use in instead of os.Stdin in hijack
2013-06-12 10:14:37 -07:00
Francisco Souza
da54abaf2e commands: use in instead of os.Stdin in hijack 2013-06-12 09:54:37 -03:00
Solomon Hykes
092c761cec Merge pull request #853 from kencochrane/registry-api-1.1-fix
* Documentation: separate the registry and index APIs into their own docs
2013-06-11 18:34:10 -07:00
Solomon Hykes
5edafd6284 Merge branch 'master' of ssh://github.com/dotcloud/docker 2013-06-11 11:21:57 -07:00
Solomon Hykes
d64f105b44 Added a readme explaining the role of each API 2013-06-11 10:39:02 -07:00
Victor Vieux
2d5eda5141 Merge pull request #864 from dotcloud/851-choose_public_port-feature
* Runtime: you can now specify public port (ex: -p 80:4500)
2013-06-11 10:11:10 -07:00
Solomon Hykes
be15d5f2d9 Merge branch 'master' of ssh://github.com/dotcloud/docker 2013-06-11 10:09:34 -07:00
Solomon Hykes
5918a5a322 More principles. Raw and unstructured to spawn discussion. 2013-06-11 09:27:36 -07:00
Solomon Hykes
f8af296e6f Merge pull request #865 from dotcloud/errors_commands-fix
Display StatusText as error when empty body in commands.go
2013-06-11 09:16:54 -07:00
Solomon Hykes
432e18990b Merge branch 'master' of ssh://github.com/dotcloud/docker 2013-06-11 08:58:23 -07:00
Francisco Souza
2e9403b047 build: don't ignore last line in Dockerfile when it doesn't end with \n 2013-06-11 11:39:06 -03:00
Victor Vieux
3ea6a2c7c3 add Michael Crosby to AUTHORS 2013-06-11 10:17:39 +00:00
Victor Vieux
20bf0e00e8 * Remote Api: Add flag to enable cross domain requests 2013-06-11 10:12:36 +00:00
Michael Crosby
dd53c457d7 Add OPTIONS to route map
Move the OPTIONS method registration into the existing
route map.  Also add support for empty paths in
the map.
2013-06-10 16:10:40 -09:00
Michael Crosby
ac599d6528 Add explicit status response to OPTIONS handler
Write the http.StatusOK header in the OPTIONS
handler and update the unit tests to refer to the
response code using the const from the http package.
2013-06-10 14:44:10 -09:00
Andy Rothfusz
ca4597e9d7 Add links to libraries, fix #800 2013-06-10 15:22:34 -07:00
Andy Rothfusz
eeea9ac946 Add list of Docker Remote API Client Libraries. Fixes #800. 2013-06-10 15:17:27 -07:00
Michael Crosby
0a28628c02 Add Cors and OPTIONS route unit tests
Move creating the router and populating the
routes to a separate function outside of
ListenAndServe to allow unit tests to make
assertions on the configured routes and handler
funcs.
2013-06-10 13:02:40 -09:00
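The commit above factors route registration out of ListenAndServe so unit tests can drive the handlers directly. A small sketch of that pattern using only the standard library; the paths and names are placeholders, not Docker's actual routes:

```go
// Sketch: build the route table in its own constructor so tests can
// exercise handlers without starting a listener.
package apisketch

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func createRouter(enableCors bool) *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/version", func(w http.ResponseWriter, r *http.Request) {
		if enableCors {
			w.Header().Set("Access-Control-Allow-Origin", "*")
		}
		w.WriteHeader(http.StatusOK)
	})
	return mux
}

// TestVersionRoute (which would normally live in a _test.go file) shows
// how the extracted constructor keeps the routes unit-testable.
func TestVersionRoute(t *testing.T) {
	rec := httptest.NewRecorder()
	req, _ := http.NewRequest("GET", "/version", nil)
	createRouter(true).ServeHTTP(rec, req)
	if rec.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d", rec.Code)
	}
}
```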
Victor Vieux
bcc4754dc1 Merge pull request #869 from fsouza/fix-api-docs
docs/api/remote: fix rst syntax in the "Search images" section
2013-06-10 14:21:39 -07:00
Victor Vieux
66d9a73362 rebump 2013-06-10 21:05:54 +00:00
Andy Rothfusz
5712e37437 Merge pull request #840 from dhrp/just-fixed-some-links
Fixed some links. Closes #839 #838 #835
2013-06-10 13:51:43 -07:00
Francisco Souza
b1ed75078e docs/api/remote: fix rst syntax in the "Search images" section 2013-06-10 16:07:57 -03:00
Joffrey F
47d7486bbe Merge pull request #814 from dotcloud/708-pushpull-multislash
Support for special namespace 'src' (highland support)
2013-06-10 11:29:39 -07:00
shin-
d227af1edd Escape remote names on repo push/pull 2013-06-10 11:28:27 -07:00
shin-
4e18010731 Support for special namespace 'src' (highland support) 2013-06-10 11:28:26 -07:00
shin-
db3242e4bb Send X-Docker-Endpoints header when validating the images upload with the index at the end of a push 2013-06-10 11:21:56 -07:00
Guillaume J. Charmes
7169212683 Fix typo 2013-06-10 11:08:40 -07:00
Guillaume J. Charmes
2a6a1d439c Merge pull request #867 from Turbo87/patch-1
Fixed broken link in README
2013-06-10 10:57:09 -07:00
Guillaume J. Charmes
8984aef899 Fix typo in docs 2013-06-10 09:32:31 -07:00
Guillaume J. Charmes
b103ac70bf Allow multiple tab/spaces between instructions and arguments 2013-06-10 09:31:59 -07:00
Tobias Bieniek
37c20fa64b Fixed broken link in README 2013-06-10 19:03:54 +03:00
Victor Vieux
ab0d0a28a8 fix errors when no body 2013-06-10 15:06:52 +00:00
Victor Vieux
0de3f1ca9a add tests 2013-06-10 14:14:54 +00:00
Victor Vieux
95d66ebc6b specify public port 2013-06-10 13:56:43 +00:00
Michael Crosby
393e873d25 Add Access-Control-Allow-Methods header
Add the Access-Control-Allow-Methods header so that
DELETE operations are allowed.

Also move the write CORS headers method before
docker writes a 404 not found so that the client
receives the correct response and not an invalid
CORS request.
2013-06-09 17:17:35 -09:00
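The ordering matters because headers set after the status line has been written are dropped. A short sketch of the fix described above, with illustrative names:

```go
// Sketch: emit the CORS headers (including DELETE in Allow-Methods)
// before any status code such as a 404 is written.
package corsorder

import "net/http"

func writeCorsHeaders(w http.ResponseWriter) {
	w.Header().Set("Access-Control-Allow-Origin", "*")
	w.Header().Set("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept")
	w.Header().Set("Access-Control-Allow-Methods", "GET, POST, DELETE, PUT, OPTIONS")
}

func notFoundHandler(w http.ResponseWriter, r *http.Request) {
	writeCorsHeaders(w) // must run before the status is written below
	http.NotFound(w, r) // writes the 404 status and body
}
```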
Guillaume J. Charmes
956491f853 Merge pull request #855 from samjsharpe/fix_missing_hyphen
Build from Dockerfile on stdin requires a hyphen
2013-06-07 13:04:27 -07:00
Calen Pennington
302660e362 Add an option to forward all ports to the vagrant host that have been exported from docker containers 2013-06-07 15:56:39 -04:00
Sam J Sharpe
5e6cd21f8b Build from Dockerfile on stdin requires a hyphen
There is a missing hyphen in the documentation:
    `docker build < Dockerfile` will complain
    `docker build - < Dockerfile` will not complain
2013-06-07 20:35:34 +01:00
Ken Cochrane
9e1cd37bbc separated the registry and index APIs into their own docs
separated the registry and index APIs into their own docs and merged
the index search API into the index API. Also renamed the original
registry API to registry_index_spec.
2013-06-07 13:42:52 -04:00
shin-
8d4282cd36 Remove CR/NL from description in docker CLI. Also moved description shortening to the client 2013-06-07 06:09:24 -07:00
Guillaume J. Charmes
1e0738f63f Make the progressbar human readable 2013-06-06 18:42:52 -07:00
Guillaume J. Charmes
f355d33b5f Make the progressbar take the image size into consideration 2013-06-06 18:16:16 -07:00
Solomon Hykes
968e08a9ba Merge branch 'master' of ssh://github.com/dotcloud/docker 2013-06-07 02:41:08 +02:00
Guillaume J. Charmes
2cc22de696 Update documentation for docker build 2013-06-06 16:48:36 -07:00
Guillaume J. Charmes
12c9b9b3c9 Implement build from git 2013-06-06 16:41:41 -07:00
Guillaume J. Charmes
a11e61677c Move the docker build URL form client to server, prepare for GIT support 2013-06-06 16:09:46 -07:00
Guillaume J. Charmes
01f446e908 Allow to docker build URL 2013-06-06 15:56:09 -07:00
Guillaume J. Charmes
f4a4cfd2cc Move isUrl to utils.IsURL 2013-06-06 15:50:09 -07:00
Guillaume J. Charmes
eaa2183d77 Fix issue EXPOSE override CMD within builder 2013-06-06 15:48:12 -07:00
Guillaume J. Charmes
31d2b258c1 Allow remote url to be passed to the ADD instruction within the builder 2013-06-06 15:40:46 -07:00
unclejack
4b3a381f39 docs: build: ADD copies just needed data w/ full path 2013-06-07 00:39:43 +03:00
Solomon Hykes
56473d4cce Typo in MAINTAINERS file 2013-06-06 22:52:55 +02:00
unclejack
efa7ea592c docs: warn about build data tx & ADD w/o context
This updates the documentation to mention that:
1. a lot of data may get sent to the docker daemon if there is a lot of
data in the directory passed to docker build
2. ADD doesn't work in the absence of the context
3. running without a context doesn't send file data to the docker
daemon
4. explain that the data sent to the docker daemon will be used by ADD
commands
2013-06-06 22:06:12 +03:00
Guillaume J. Charmes
afd325a884 Solve an issue with the -dns in daemon mode 2013-06-06 11:01:29 -07:00
Guillaume J. Charmes
a3f6054f97 Check for local dns server and output a warning 2013-06-06 11:01:09 -07:00
Sam Alba
da937bf214 Update README.md 2013-06-06 11:09:11 -06:00
Solomon Hykes
42b63eb818 Daniel Mizyrycki is maintainer of dockerbuilder, the official build environment for docker binary releases 2013-06-06 19:06:54 +02:00
Solomon Hykes
0d6db333d6 docker-build contrib script is deprecated by the new 'build' command 2013-06-06 19:05:21 +02:00
Solomon Hykes
3999465c85 Merge branch 'master' of ssh://github.com/dotcloud/docker 2013-06-06 19:04:17 +02:00
Daniel Mizyrycki
1cc4049e82 Merge pull request #827 from dotcloud/dev_environment_update
Update "Setting Up a Dev Environment" doc, with modern golang  PPA and stable lxc kernel
2013-06-06 09:48:19 -07:00
Victor Vieux
4107701062 add [] and move errors to stderr 2013-06-06 15:45:08 +00:00
Victor Vieux
a799cdad3e allow multiple params in inspect 2013-06-06 15:22:54 +00:00
Victor Vieux
a118ad90ed changed to 12.04 and add kernel 2013-06-06 12:36:28 +00:00
Thatcher Peskens
0f23fb949d Fixed some links
* Added Google group to FAQ on docs
* Changed IRC link
* Fixed link to contributing broken by 326faec
2013-06-05 18:06:51 -07:00
Thatcher
f1992eeea5 Merge pull request #817 from dhrp/blog-in-navigation
Modified the navigation in both website and documentation to include the blog.
2013-06-05 17:28:19 -07:00
Guillaume J. Charmes
84d68007cb Add -dns to docker daemon 2013-06-05 14:20:54 -07:00
Victor Vieux
bf63cb9045 bump to master again 2013-06-05 16:01:36 +00:00
Victor Vieux
ce0041832c bump to master 2013-06-05 15:30:45 +00:00
Solomon Hykes
97d5f525f4 hack/PRINCIPLES.md: a list of principles guiding Docker's design. The goal is to scale the decision-making in the project and remove @shykes as a bottleneck as much as possible 2013-06-05 17:27:53 +02:00
Solomon Hykes
2ea29ce0ef hack/ROADMAP.md: a high-level roadmap. Make a pull request to suggest changes 2013-06-05 17:26:26 +02:00
Solomon Hykes
068076f775 Merge pull request #822 from lopter/master
* Client: Print the container id before the hijack in `docker run` (see also #804)
2013-06-05 08:08:30 -07:00
Solomon Hykes
34c8b24211 Merge pull request #812 from dotcloud/809-progress_message-fix
* Remote API: Fix progress message in client
2013-06-05 07:27:31 -07:00
Victor Vieux
e3cc625315 update doc to newer go 2013-06-05 13:19:49 +00:00
Victor Vieux
f67ea78cce move xino stuff to /dev/shm 2013-06-05 12:59:05 +00:00
Victor Vieux
6255112926 updated doc 2013-06-05 13:19:57 +02:00
Victor Vieux
c906239220 bump to master 2013-06-05 10:23:45 +00:00
Victor Vieux
b4682e6707 bump to master 2013-06-05 10:19:51 +00:00
Solomon Hykes
04050c4173 Merge pull request #818 from johncosta/ubuntu-1304-add-apt-repository
Remove provider-specific language
2013-06-05 02:40:09 -07:00
Louis Opter
7e6ede6379 Print the container id before the hijack in docker run (see also #804)
This is useful when you want to get the container id before you start to
interact with stdin (which is what I'm doing in dotcloud/sandbox).
2013-06-04 15:32:59 -07:00
Guillaume J. Charmes
63e80384ea Fix nil pointer in some situations 2013-06-04 14:35:32 -07:00
Guillaume J. Charmes
7ef9833dbb Put back panic for go1.0.3 compatibility 2013-06-04 14:26:40 -07:00
Guillaume J. Charmes
c1ee9bf881 Merge pull request #808 from dotcloud/795-lintify
Cleanup source
2013-06-04 14:20:38 -07:00
John Costa
c000ef194c Remove provider-specific language 2013-06-04 16:01:38 -04:00
Ken Cochrane
479ac9afa7 Merge pull request #802 from johncosta/ubuntu-1304-add-apt-repository
- Documentation: adding missing dependency to the ubuntu linux install page.
2013-06-04 11:55:34 -07:00
Thatcher Peskens
716892b95d Modified the navigation in both website and documentation to include the Blog. 2013-06-04 11:41:54 -07:00
Thatcher
d7a6485dfe Merge pull request #796 from dhrp/added-and-fixed-links
Added and fixed some links (closes #502)
2013-06-04 11:28:40 -07:00
Victor Vieux
fd224ee590 linted names 2013-06-04 18:00:22 +00:00
Victor Vieux
3922691fb9 fix progress message in client 2013-06-04 16:09:08 +00:00
Sam Alba
c566c8efc7 Merge pull request #810 from dotcloud/proxy-fix
fix regression on proxy
2013-06-04 08:46:40 -07:00
Victor Vieux
06b585ce8a fix proxy 2013-06-04 15:44:27 +00:00
John Costa
e61af8bc62 Some installations of Ubuntu 13.04 (Digital Ocean Ubuntu 13.04 x64 Server in this case) require software-properties-common to be installed before add-apt-repository can be used 2013-06-04 10:40:44 -04:00
Victor Vieux
b6825f98c0 bump to master 2013-06-04 14:00:18 +00:00
Victor Vieux
86ada2fa5d drop/omit 2013-06-04 13:51:12 +00:00
Victor Vieux
b515a5a9ec go vet 2013-06-04 13:24:58 +00:00
Michael Crosby
6d5bdff394 Add flag to enable cross-domain requests in Api
Add the -api-enable-cors flag when running docker
in daemon mode to allow CORS requests to be made to
the Remote Api. The default value is false, so cross-origin
requests are not allowed unless the flag is set.

Also added a handler for OPTIONS requests, since the standard
for cross-domain requests is to make an initial
OPTIONS request to the api.
2013-06-03 21:39:00 -04:00
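A hedged sketch of how such a flag can gate the CORS headers and answer preflight OPTIONS requests with an explicit 200 (the flag name comes from the commit message; the rest of the wiring is illustrative):

```go
// Sketch: a daemon-level flag that enables CORS headers, plus a
// preflight handler. flag.Parse() is assumed to run in the daemon's main.
package corsflag

import (
	"flag"
	"net/http"
)

var apiEnableCors = flag.Bool("api-enable-cors", false, "Enable CORS headers in the remote API")

func optionsHandler(w http.ResponseWriter, r *http.Request) {
	if *apiEnableCors {
		w.Header().Set("Access-Control-Allow-Origin", "*")
		w.Header().Set("Access-Control-Allow-Methods", "GET, POST, DELETE, PUT, OPTIONS")
	}
	// Preflight requests expect an explicit 200 with no body.
	w.WriteHeader(http.StatusOK)
}
```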
Guillaume J. Charmes
0ca8844398 Fix stale command when stdout is not allocated 2013-06-03 17:39:29 -07:00
Sam Alba
10ef4f7f39 Merge pull request #797 from dotcloud/registry-fix-missing-body-close
registry.go: Fixed missing Body.Close()
2013-06-03 14:43:50 -07:00
Sam Alba
cff3b37a61 Disabled HTTP keep-alive in the default HTTP client for Registry calls 2013-06-03 14:42:21 -07:00
Victor Vieux
d26a3b37a6 allow docker run <name>:<id> 2013-06-03 20:00:15 +00:00
Guillaume J. Charmes
82dd963e08 Minor changes in registry.go 2013-06-03 12:20:52 -07:00
Sam Alba
830c458fe7 Fixed missing Body.Close when doing some HTTP requests. It should improve some request issues. 2013-06-03 12:14:57 -07:00
Thatcher Peskens
38f29f7d0c Changed some text on the website to include docker-club
Fixed a link in the FAQ on the docs
Fixed a detail in the Makefile for pushing the site to dotcloud
2013-06-03 11:45:19 -07:00
Solomon Hykes
a8ae398bf5 Bumped version to 0.4.0 2013-06-03 10:59:48 -07:00
Victor Vieux
7e59b83053 removed auth in pull 2013-06-03 17:51:52 +00:00
Guillaume J. Charmes
7a4408f608 Merge pull request #794 from dotcloud/780-diff-fix2
- Runtime: remove TrimLeft as it's go1.1
2013-06-03 10:41:08 -07:00
Victor Vieux
854039b6ba remove TrimLeft as it's go1.1 2013-06-03 17:25:51 +00:00
Guillaume J. Charmes
070923b14f Merge pull request #792 from dotcloud/780-diff-fix
- Runtime: fix Path corruption in 'docker diff'
2013-06-03 10:06:25 -07:00
Victor Vieux
71b1657e8d added test 2013-06-03 17:02:57 +00:00
Guillaume J. Charmes
1bafe9da26 Merge pull request #793 from dotcloud/774-remove_login_check_on_pull-fix
- Registry: remove login check on pull
2013-06-03 09:37:02 -07:00
Victor Vieux
1ce4ba6c9f remove check on login 2013-06-03 15:33:29 +00:00
Victor Vieux
a55a0d370d ([a-z0-9_]{4,30})/([a-zA-Z0-9-_.]+) 2013-06-03 14:23:57 +00:00
Guillaume J. Charmes
2b1b3c1270 Merge pull request #784 from dotcloud/remove_cgo_dependency
* Runtime: Remove cgo dependency
2013-06-03 07:03:17 -07:00
Guillaume J. Charmes
8243f2510e Update test to reflect new ApiInfo struct 2013-06-03 06:44:00 -07:00
Guillaume J. Charmes
0443cc351d Merge pull request #772 from dotcloud/improve_version_info_cmds
* API: Improve version info cmds
2013-06-03 06:36:09 -07:00
Victor Vieux
ca902b6be4 bump master 2013-06-03 12:37:51 +00:00
Victor Vieux
844a8db6c6 add debug 2013-06-03 12:21:22 +00:00
Victor Vieux
3dd1e4d58c added docs and moved to api version 1.2 2013-06-03 12:09:16 +00:00
Victor Vieux
62c78696cd bump to master 2013-06-03 11:06:13 +00:00
Victor Vieux
e16c93486d fix Path corruption in 'docker diff' 2013-06-03 10:19:20 +00:00
Solomon Hykes
e42eb7fa8c Meta: added Guillaume as primary maintainer for tty code 2013-06-02 23:42:18 -07:00
Solomon Hykes
cebfde9ea5 Merge pull request #787 from gasi/nodejs-centos-docs
* Documentation: Deploying a Node.js Web App on CentOS
* Documentation: small formatting improvements
2013-06-02 23:27:29 -07:00
Solomon Hykes
eff7a15bea Merge pull request #785 from jweede/master
* Documentation: spelling correction in website
2013-06-02 23:14:09 -07:00
Daniel Gasienica
82dadc2005 Document installation of npm dependencies /ht @grigio 2013-06-02 20:10:22 -07:00
Guillaume J. Charmes
2d52d4d614 Merge pull request #789 from samjsharpe/fix-extra-brace
Removes a brace in the description of the wait command
2013-06-02 15:22:11 -07:00
Sam J Sharpe
ca5ae266b7 Removes a brace in the description of the wait command 2013-06-02 22:40:56 +01:00
Daniel Gasienica
464765b940 Add link at the beginning 2013-06-01 22:16:26 -07:00
Daniel Gasienica
e9ffc1e499 Add Node.js web app example using CentOS 2013-06-01 22:06:53 -07:00
Daniel Gasienica
4fb9a6eafb Use code blocks 2013-06-01 22:03:41 -07:00
Daniel Gasienica
157547845a Name examples consistently 2013-06-01 22:03:28 -07:00
Daniel Gasienica
2935ca7ee2 Use title case for consistency 2013-06-01 22:03:12 -07:00
Daniel Gasienica
23452f1573 Use em dash in title 2013-06-01 22:02:24 -07:00
Daniel Gasienica
f6f345b1fe Fix typo 2013-06-01 21:55:01 -07:00
Daniel Gasienica
e3fd61ad74 Add more tags 2013-06-01 21:27:27 -07:00
Daniel Gasienica
01ce63aacd Make style consistent 2013-06-01 21:26:58 -07:00
Daniel Gasienica
3ca9c11110 Add Mac OS X instructions for doc tools 2013-06-01 21:26:18 -07:00
Daniel Gasienica
b4df0b17af Add make server command to preview docs 2013-06-01 21:25:51 -07:00
Jon Wedaman
7f65bf508e Spelling correction in docs 2013-06-01 21:48:32 -04:00
Guillaume J. Charmes
a70dd65964 Move Termios struct to os specific file 2013-06-01 16:19:50 -07:00
Guillaume J. Charmes
3cc0963ad1 Remove unused constants 2013-06-01 15:55:52 -07:00
Guillaume J. Charmes
31eb01ae8a Use uintptr instead of int for Fd 2013-06-01 15:55:05 -07:00
Guillaume J. Charmes
64f346779f Remove cgo from term 2013-06-01 15:51:45 -07:00
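Dropping cgo from the term package means terminal queries go through raw syscalls instead. As one example of the kind of call involved, a sketch of reading the window size with ioctl on Linux (TIOCGWINSZ and the struct layout are the standard Linux ones; the function name is illustrative):

```go
// Sketch: query the terminal window size with a raw ioctl, no cgo.
package termsketch

import (
	"syscall"
	"unsafe"
)

type Winsize struct {
	Height uint16
	Width  uint16
	xpixel uint16
	ypixel uint16
}

// GetWinsize issues TIOCGWINSZ (0x5413 on Linux) against the given fd.
func GetWinsize(fd uintptr) (*Winsize, error) {
	ws := &Winsize{}
	_, _, errno := syscall.Syscall(syscall.SYS_IOCTL, fd, uintptr(0x5413), uintptr(unsafe.Pointer(ws)))
	if errno != 0 {
		return nil, errno
	}
	return ws, nil
}
```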
Guillaume J. Charmes
078a19d725 Merge pull request #777 from dotcloud/bsdtar-env
Pass a guaranteed environment to bsdtar for predictable behaviour
2013-06-01 11:28:55 -07:00
Solomon Hykes
561ceac55d * Runtime: pass a guaranteed environment to bsdtar for predictable behavior without depending on the underlying host configuration. 2013-05-31 22:25:48 -07:00
Guillaume J. Charmes
a373c770b6 Merge pull request #707 from unclejack/411-add-arch-field
411 add architecture field
2013-05-31 17:26:11 -07:00
Solomon Hykes
90b8c5ce67 Merge pull request #762 from dotcloud/761-packaging-dockerbuilder
Packaging: Ensure dockerbuilder can build docker PPA
2013-05-31 16:28:44 -07:00
Guillaume J. Charmes
9bc71c101c Merge pull request #719 from dotcloud/json_stream-feature
* API: push, pull, import, insert -> Json Stream
2013-05-31 16:05:15 -07:00
Guillaume J. Charmes
f41d2ec4d9 Update api docs 2013-05-31 15:56:30 -07:00
Guillaume J. Charmes
1dae7a25b9 Improve the docker version and docker info commands 2013-05-31 15:53:57 -07:00
Daniel Mizyrycki
926c1d45aa Merge pull request #533 from tianon/mkimage-debian
Update mkimage-debian.sh now that wheezy is officially the stable release
MD: Ref issue #447
2013-05-31 14:53:37 -07:00
Guillaume J. Charmes
80b8756da3 Merge pull request #727 from dotcloud/remove_hijack_logs-feature
* API: remove hijack on the client in logs, and split stdout / stderr
2013-05-31 14:43:59 -07:00
Guillaume J. Charmes
7d167590bc Merge pull request #766 from dotcloud/prevent_attach_stopped_container-feature
returns an error if the container we want to attach to is not running
2013-05-31 14:41:57 -07:00
Solomon Hykes
76bb920449 Merge pull request #753 from dotcloud/registry_api_remove_teams
Remove teams from the registry API
2013-05-31 12:41:12 -07:00
Solomon Hykes
1ac36a3adf Merge pull request #767 from gasi/master
Fix minor documentation error in ‘Running Redis Service’ example
2013-05-31 12:24:25 -07:00
Solomon Hykes
46bdbbabba Merge pull request #755 from dotcloud/infrastructure2
+ Meta: organize the project infrastructure: servers, dns, email etc.
2013-05-31 12:23:55 -07:00
Daniel Gasienica
766a2db0d9 Add Daniel Gasienica to AUTHORS 2013-05-31 12:19:57 -07:00
Daniel Gasienica
fd0c501e6d Fix minor documentation error in ‘Running Redis Service’ example 2013-05-31 12:19:49 -07:00
Victor Vieux
468e4c4b56 returns an error if the container we want to attach to is not running 2013-05-31 15:34:23 +00:00
Guillaume J. Charmes
9eda9154a7 Merge pull request #764 from dotcloud/add_t_parameter_doc_build-fix
add missing -t parameter in the doc
2013-05-31 07:49:29 -07:00
Victor Vieux
2baea24879 add -t parameter in the doc 2013-05-31 14:40:09 +00:00
Victor Vieux
9060b5c2f5 added proper return types, moved the auto-prune to the v1.1 api 2013-05-31 14:37:02 +00:00
Daniel Mizyrycki
1040225e36 Packaging: Ensure dockerbuilder can build docker PPA 2013-05-31 00:59:18 -07:00
Andy Rothfusz
1c091657d4 Merge pull request #745 from rogaha/doc_ssh_service_example_uptd
Add screencast summary. Makes it easy to reproduce steps.
2013-05-30 18:49:32 -07:00
Solomon Hykes
8d73740343 Bumped version to 0.3.4 2013-05-30 17:27:45 -07:00
Solomon Hykes
a148301a03 + Runtime: 'docker build' builds a container, layer by layer, from a source repository containing a Dockerfile
+ Runtime: 'docker build -t FOO' applies the tag FOO to the newly built container.
2013-05-30 16:49:40 -07:00
Victor Vieux
3afdd82e42 bump to master 2013-05-30 23:38:40 +00:00
Victor Vieux
bd38b47552 bump to master 2013-05-30 23:32:57 +00:00
Solomon Hykes
caaea2e08f * Build: never remove temporary images and containers 2013-05-30 16:24:26 -07:00
Solomon Hykes
de7ce7c10d Merge pull request #752 from dotcloud/victor_maintainer
* Meta: make Victor Vieux a core maintainer
2013-05-30 16:19:18 -07:00
Victor Vieux
5aa95b667c WIP needs to fix HTTP error codes 2013-05-30 22:53:45 +00:00
Solomon Hykes
c903a6baf8 * Builder: keep temporary images after a build fails, to allow caching 2013-05-30 15:52:09 -07:00
Solomon Hykes
4205b6bb1d Merge branch 'master' into builder_server-2 2013-05-30 14:09:42 -07:00
Guillaume J. Charmes
ca6409059d Merge pull request #749 from dotcloud/unit_tests_use_available_port-fix
- Test: if `address already in use` in unit tests, try a different port
2013-05-30 13:11:44 -07:00
Guillaume J. Charmes
459a2867dd Merge pull request #741 from dotcloud/split_stdout_stderr_run-feature
* API: Split stdout stderr in docker run if no -t
2013-05-30 12:38:11 -07:00
Guillaume J. Charmes
28d5b2c15a Update docker build docs 2013-05-30 12:35:19 -07:00
Guillaume J. Charmes
054451fd19 NON-WORKING: Beginning of rmi refactor 2013-05-30 12:30:21 -07:00
Solomon Hykes
43f369ea0c Organize the project infrastructure: servers, dns, email etc. 2013-05-30 12:28:24 -07:00
Guillaume J. Charmes
6d2e3d2ec0 Fix issue with CMD instruction within docker build 2013-05-30 12:21:57 -07:00
Guillaume J. Charmes
a4e6025cc1 Deprecate INSERT and COPY 2013-05-30 12:10:54 -07:00
Guillaume J. Charmes
56431d3130 Add -t to docker build in order to tag resulting image 2013-05-30 12:08:21 -07:00
Solomon Hykes
5324614410 Merge pull request #747 from riccieri/patch-1
+ * Documentation: add IP forwarding config to archlinux guide
2013-05-30 11:59:53 -07:00
Solomon Hykes
531b30119a Remove special case for 'teams' from registry api 2013-05-30 11:37:58 -07:00
Solomon Hykes
fc788956c5 Make Victor Vieux a core maintainer 2013-05-30 11:31:49 -07:00
Solomon Hykes
2ed1092dad * Documentation: removed 'building blocks' for now. 2013-05-30 11:26:47 -07:00
Victor Vieux
cd002a4d16 ensure progress downloader is well formatted 2013-05-30 17:00:42 +00:00
Victor Vieux
49e656839f move auth to the client WIP 2013-05-30 15:39:43 +00:00
Guillaume J. Charmes
194fca8347 Merge pull request #750 from dotcloud/ensure_order_help-fix
Always display help in the same order
2013-05-30 07:40:59 -07:00
Victor Vieux
2c14d3949d always display help in the same order 2013-05-30 14:08:26 +00:00
Victor Vieux
2a53717e8f if address already in use in unit tests, try a different port 2013-05-30 13:45:39 +00:00
Solomon Hykes
97247c5c73 'docker build': remove INSERT command (should add support for remote sources in ADD instead) 2013-05-29 21:57:36 -07:00
Renato Riccieri Santos Zannon
b2084a9c59 Add IP forwarding config to archlinux guide
I had this small issue when following this guide on my Arch box, and I don't think it is specific to any configuration I have.
2013-05-30 01:56:08 -03:00
Guillaume J. Charmes
9a39404127 Fix issue with mkdir within docker build 2013-05-29 18:55:00 -07:00
Solomon Hykes
fc864d2f0f Simplified syntax of 'docker build': 'docker build PATH | -' 2013-05-29 18:18:57 -07:00
Solomon Hykes
dcab408f6a Fixed phrasing, typos and formatting in 'docker build' 2013-05-29 18:14:50 -07:00
Guillaume J. Charmes
faafbf2118 Fix ADD behavior on single files 2013-05-29 17:58:05 -07:00
Guillaume J. Charmes
881fdc59ed Remove cache optimization that caused wrong cache misses 2013-05-29 17:04:46 -07:00
Guillaume J. Charmes
560a74af15 Fix cache miss issue within docker build 2013-05-29 16:11:04 -07:00
Guillaume J. Charmes
b6165daa77 Create a layer for each operation (expose, cmd, etc) 2013-05-29 15:03:00 -07:00
Guillaume J. Charmes
ae0d555022 Fix autorun issue within docker build 2013-05-29 14:37:03 -07:00
Solomon Hykes
7ff2e6b797 Merge branch 'master' of ssh://github.com/dotcloud/docker 2013-05-29 12:31:11 -07:00
Guillaume J. Charmes
92939569ab Update build doc 2013-05-29 11:53:44 -07:00
Guillaume J. Charmes
d97fff60a9 Update docker build docs 2013-05-29 11:50:49 -07:00
Guillaume J. Charmes
33ea1483d5 Allow docker build from stdin 2013-05-29 11:43:29 -07:00
Roberto Gandolfo Hashioka
c7af917d13 updated the running ssh service example with the video's transcription 2013-05-29 11:29:30 -07:00
Guillaume J. Charmes
c05e9f856d Error output fix for docker rmi 2013-05-29 11:11:19 -07:00
Guillaume J. Charmes
6cbc7757b2 Fix issue with unknown instruction within docker build 2013-05-29 10:53:24 -07:00
Guillaume J. Charmes
75d2244023 Update docker build UI 2013-05-29 10:51:47 -07:00
Victor Vieux
7e92302c4f auto prune WIP 2013-05-29 17:27:32 +00:00
Victor Vieux
94f0d478de refacto 2013-05-29 17:01:54 +00:00
Victor Vieux
2eb4e2a0b8 removed the -f 2013-05-29 16:31:47 +00:00
Guillaume J. Charmes
08e5f12954 Merge pull request #739 from dotcloud/push_issue-1
- Registry: Create a new registry object for each request (~session)
2013-05-29 09:22:12 -07:00
Victor Vieux
e33ba9b36d split stdout and stderr in run if not -t 2013-05-29 14:14:51 +00:00
Victor Vieux
a5fe6f8af4 Merge branch 'master' into split_stdout_stderr_run-feature 2013-05-29 13:56:27 +00:00
Victor Vieux
f339fc2eb9 bump to master 2013-05-29 13:52:18 +00:00
Victor Vieux
8f829eb5e4 Merge pull request #740 from dotcloud/ps_output-fix
fix ps output
2013-05-29 06:43:03 -07:00
Victor Vieux
044bdc1b5f fix ps output 2013-05-29 13:42:20 +00:00
Victor Vieux
ea9095c562 merge master 2013-05-29 11:49:39 +00:00
Victor Vieux
c00d1a6ebe improve attach 2013-05-29 11:40:54 +00:00
Solomon Hykes
11550c6063 Removed debug output in 'docker version' 2013-05-28 21:38:35 -07:00
Solomon Hykes
dc1fa0745f Improved wording of a maintainer's responsibilities 2013-05-28 22:33:48 -06:00
Solomon Hykes
3bac27f240 Fixed formatting of CONTRIBUTING.md 2013-05-28 22:26:42 -06:00
Solomon Hykes
286ce266b4 allmaintainers.sh: print a flat list of all maintainers of a directory (including sub-directories) 2013-05-28 20:55:07 -07:00
Solomon Hykes
aa42c6f2a2 Added Victor and Daniel as maintainers for api.go and Vagrantfile, respectively 2013-05-28 20:45:12 -07:00
Solomon Hykes
7181edf4b2 getmaintainer.sh: parse MAINTAINERS file to determine who should review changes to a particular file or directory 2013-05-28 20:44:41 -07:00
Solomon Hykes
c7985808ae + Runtime: stable implementation of 'docker build' 2013-05-28 19:40:38 -07:00
Solomon Hykes
83db1f36e3 Merge branch 'master' of ssh://github.com/dotcloud/docker 2013-05-28 19:39:20 -07:00
Solomon Hykes
24ddfe3f25 Documented who decides what and how. 2013-05-28 19:39:09 -07:00
Guillaume J. Charmes
b76d6120ac Update tests with new cookies for registry 2013-05-28 17:35:10 -07:00
Guillaume J. Charmes
cd0de83917 Create a new registry object for each request (~session) 2013-05-28 17:12:24 -07:00
Guillaume J. Charmes
e84306ca61 Merge pull request #730 from dotcloud/invert_status_created-fix
- Runtime: invert status created in docker ps
2013-05-28 15:45:52 -07:00
Guillaume J. Charmes
f65327555e Merge pull request #731 from dotcloud/change_containersPs_containersJson_api-feature
* API: rename containers/ps to containers/json
2013-05-28 15:44:20 -07:00
Guillaume J. Charmes
7aa0d11171 Merge pull request #732 from dotcloud/718-return_no_such_container_as_404-fix
- API: return 404 on no such containers in /attach
2013-05-28 15:43:07 -07:00
Guillaume J. Charmes
54af053623 Make sure the last line of docker build is the image id 2013-05-28 15:40:22 -07:00
Guillaume J. Charmes
5b33b2463a Readd build tests 2013-05-28 15:31:06 -07:00
Guillaume J. Charmes
2127f8d6ad Fill the multipart writer directly instead of using reader 2013-05-28 15:22:34 -07:00
Guillaume J. Charmes
2897cb0476 Add directory contents instead of whole directory for docker build 2013-05-28 15:22:01 -07:00
Guillaume J. Charmes
fe0c0c208c Send error without headers when using chunks 2013-05-28 15:21:06 -07:00
Solomon Hykes
522f399d68 Merge branch 'master' of ssh://github.com/dotcloud/docker 2013-05-28 14:57:50 -07:00
Solomon Hykes
326faec664 De-duplicated contribution instructions. The authoritative instructions are in CONTRIBUTING.md at the root of the repo. 2013-05-28 14:57:36 -07:00
Thatcher
28a30eda88 Merge pull request #702 from kim0/patch-2
Properly install ppa, avoid GPG key warning. Also closes #715
2013-05-28 14:17:59 -07:00
Solomon Hykes
387eb5295a Removed deprecated SPECS directory 2013-05-28 14:13:20 -07:00
Thatcher
f3cc1d985e Merge pull request #733 from meejah/master
Documentation: Use add-apt-repository instead of editing sources.list
2013-05-28 13:55:11 -07:00
Guillaume J. Charmes
cfb8cbe521 Small fix 2013-05-28 13:51:21 -07:00
Guillaume J. Charmes
582a9e0a67 Make docker build flush output each line 2013-05-28 13:47:04 -07:00
Guillaume J. Charmes
a48799016a Fix merge issue 2013-05-28 13:46:52 -07:00
Guillaume J. Charmes
dce82bc856 Merge master 2013-05-28 13:42:50 -07:00
Guillaume J. Charmes
90ffcda055 Update the UI for docker build 2013-05-28 13:38:40 -07:00
Guillaume J. Charmes
6ae3800151 Implement the CmdAdd instruction 2013-05-28 13:38:26 -07:00
Guillaume J. Charmes
54db18625a Add Extension() method to Compression type 2013-05-28 13:37:49 -07:00
Andy Rothfusz
235ae9cd43 Merge pull request #709 from dhrp/links-to-kernel-doc
Added links to @jpetazzo 's kernel article, cleaned puppet doc layout
2013-05-28 12:07:46 -07:00
meejah
444f7020cb Add sudo to commands. 2013-05-28 10:56:26 -06:00
meejah
525080100d Use Ubuntu's built-in method to add a PPA repository, which correctly handles keys for you. 2013-05-28 10:54:32 -06:00
Victor Vieux
e5fa4a4956 return 404 on no such containers in /attach 2013-05-28 16:19:12 +00:00
Victor Vieux
4f9443927e rename containers/ps to containers/json 2013-05-28 16:08:05 +00:00
Victor Vieux
8699805756 documentation 2013-05-28 15:49:57 +00:00
Victor Vieux
d9670f4275 invert status created 2013-05-28 15:06:26 +00:00
Ken Cochrane
3d8da80611 Merge pull request #729 from mmcgrana/fix-attach-api-docs
- Documentation: Fix attach API docs typo.
2013-05-28 06:53:41 -07:00
Mark McGranaghan
7d6ff7be12 Fix attach API docs. 2013-05-28 06:27:12 -07:00
Victor Vieux
fbcd8503b3 remove hijack on the client in logs, and split stdout / stderr 2013-05-27 16:07:05 +00:00
Victor Vieux
5a36efb61f fix json encoding, and use fewer casts 2013-05-26 23:45:45 +00:00
Victor Vieux
14212930e4 ensure valid json 2013-05-25 15:51:26 +00:00
Victor Vieux
c8c7094b2e improved error handling for push, import, insert 2013-05-25 15:09:46 +00:00
Victor Vieux
cb0bc4adc2 add error handling 2013-05-25 14:12:02 +00:00
unclejack
5f69a53dba set architecture to x86_64 by default
We're going to hardcode architecture to amd64 for now.
This is a stub and will have to be changed to set the actual arch.
2013-05-25 13:03:49 +03:00
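A tiny sketch of the stub described above, which ties in with the later "add arch field to image struct" entry in this log; the package, struct, and field names are illustrative, not the repository's exact types:

```go
// Sketch: image metadata gains an architecture field that is hardcoded
// for now, as the commit message describes.
package imagesketch

type Image struct {
	ID           string `json:"id"`
	Architecture string `json:"architecture"`
}

func newImage(id string) *Image {
	// Hardcoded stub; a later change would detect the real architecture.
	return &Image{ID: id, Architecture: "x86_64"}
}
```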
Guillaume J. Charmes
d3b9733507 Merge pull request #657 from offby1/master
Minor tweaks to python_web_app.rst
2013-05-24 19:19:32 -07:00
Solomon Hykes
df23a1e675 * Registry: specified naming restrictions for usernames and repository names 2013-05-24 18:58:24 -07:00
Solomon Hykes
bb4b35a892 Fix a unit test broken by pull request #703 2013-05-24 18:32:21 -07:00
Solomon Hykes
194f487749 Added FIXME about possible race condition in a unit test 2013-05-24 18:31:47 -07:00
Solomon Hykes
9775f0bd14 * Remote API: send push/pull progress bar as json 2013-05-24 17:59:27 -07:00
Guillaume J. Charmes
a05bfb246f Merge pull request #710 from dotcloud/tty_resize
+ Tty: Handle terminal size and resize in tty mode
2013-05-24 15:12:22 -07:00
Guillaume J. Charmes
b438565609 Fix merge issue 2013-05-24 14:48:13 -07:00
Guillaume J. Charmes
ffd9e06deb Merge branch 'master' into tty_resize
Conflicts:
	commands.go
2013-05-24 14:45:31 -07:00
Guillaume J. Charmes
88ef309a94 Finish resize implementation client and server 2013-05-24 14:44:16 -07:00
Thatcher Peskens
c5f15dcd3d Added links to @jpetazzo 's kernel article, removed quote indents from puppet.rst 2013-05-24 14:42:00 -07:00
Guillaume J. Charmes
d8e60b797f Merge pull request #630 from dotcloud/check_if_logged_before_pull-fix
add login check before pull user's repo
2013-05-24 12:29:59 -07:00
Guillaume J. Charmes
064101d82e Merge pull request #689 from dotcloud/help_command-fix
- Runtime: bring back Error: Command not found: <command>
2013-05-24 12:01:19 -07:00
Guillaume J. Charmes
a3293ed854 Fix merge issue 2013-05-24 11:56:21 -07:00
Guillaume J. Charmes
3d1bc2660c Merge branch 'master' into help_command-fix
Conflicts:
	commands.go
2013-05-24 11:46:18 -07:00
unclejack
2cf92abf0e add arch field to image struct 2013-05-24 21:41:30 +03:00
Guillaume J. Charmes
48fd8ae79c Merge branch '573-add_host_port-feature' 2013-05-24 11:36:05 -07:00
Guillaume J. Charmes
c167d603f2 Merge pull request #661 from dotcloud/573-add_host_port-feature
+ Runtime: add -H flag to listen/connect to different ip/port
2013-05-24 11:32:57 -07:00
Guillaume J. Charmes
bfb65b733a Simplify the Host flag parsing 2013-05-24 11:31:36 -07:00
Guillaume J. Charmes
ae72c2f4d6 Gofmt 2013-05-24 11:31:19 -07:00
Guillaume J. Charmes
0146f65a44 Fix issue within auth test 2013-05-24 11:31:11 -07:00
Guillaume J. Charmes
deb9963e6e Catch sigwinch client 2013-05-24 11:07:32 -07:00
Victor Vieux
d7b01c049d bump to master 2013-05-24 16:50:16 +00:00
Victor Vieux
92e4a51965 use -H 2013-05-24 16:49:18 +00:00
Victor Vieux
3c7bca7a21 first version of Pull 2013-05-24 16:34:03 +00:00
Ken Cochrane
1b5ab5afe0 Merge pull request #701 from kim0/patch-1
- Documentation: Updated ubuntu install docs to avoid hardcoding kernel 3.8 version, and allow Ubuntu updates to work
2013-05-24 08:58:02 -07:00
kim0
4e576f047f Properly install ppa, avoid GPG key warning 2013-05-24 18:55:32 +03:00
kim0
8dc2ad2c06 Avoid hardcoding kernel 3.8 version, allow Ubuntu updates to work 2013-05-24 17:44:02 +02:00
Victor Vieux
58ce66e553 Merge pull request #699 from dotcloud/docker_login-fix
fix docker login when same username
2013-05-24 07:32:38 -07:00
Victor Vieux
1f23b4caae fix docker login when same username 2013-05-24 14:23:43 +00:00
Victor Vieux
1c946ef003 fix: Can't lookup root of unregistered image 2013-05-24 13:03:09 +00:00
Victor Vieux
4dab2fccd3 removed useless params 2013-05-24 12:43:24 +00:00
Victor Vieux
a7d7a06655 change %f to %g 2013-05-24 12:23:28 +00:00
Victor Vieux
9ebfcc9a15 bump to master 2013-05-24 12:17:44 +00:00
Guillaume J. Charmes
70d2123efd Add resize endpoint to api 2013-05-23 19:33:28 -07:00
Guillaume J. Charmes
2cd00a47a5 remove unused function 2013-05-23 18:34:38 -07:00
Guillaume J. Charmes
e3f0429859 Add missing buildfile.go 2013-05-23 18:33:31 -07:00
Guillaume J. Charmes
d42c10aa09 Implement Context within docker build (not yet in use) 2013-05-23 18:32:56 -07:00
Guillaume J. Charmes
ecb64be6a8 Merge pull request #695 from dtzWill/694-fix-merge-config
- Runtime: utils.go: Fix merge logic for user and hostname.
2013-05-23 16:00:34 -07:00
Will Dietz
83bc5b7435 utils.go: Fix merge logic for user and hostname.
Fall back to image-specified hostname if user doesn't
provide one, instead of only using image-specified
hostname if the user *does* try to set one.
(ditto for username)

Closes #694.
2013-05-23 15:54:51 -05:00
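A minimal sketch of the corrected fallback described in this commit, with illustrative struct and function names:

```go
// Sketch: only copy the image-specified value when the user left the
// field empty, instead of the inverted logic the commit fixes.
package mergesketch

type Config struct {
	Hostname string
	User     string
}

func mergeConfig(userConf, imageConf *Config) {
	if userConf.Hostname == "" {
		userConf.Hostname = imageConf.Hostname
	}
	if userConf.User == "" {
		userConf.User = imageConf.User
	}
}
```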
Guillaume J. Charmes
822056094a Bumped version to 0.3.3 2013-05-23 12:46:14 -07:00
Daniel Mizyrycki
d17c0b8368 Packaging: Update changelog for release 0.3.3 2013-05-23 12:45:00 -07:00
Solomon Hykes
e0e385ac69 * Build: leave temporary containers untouched after a failure to help debugging 2013-05-23 11:12:54 -06:00
Solomon Hykes
66f3a96983 * Remote API: add versioning 2013-05-23 10:44:36 -06:00
Victor Vieux
31c98bdaaf bring back Error: Command not found: <command>
Usage: docker COMMAND [arg...]

A self-sufficient runtime for linux containers.

Commands:
    attach    Attach to a running container
    insert    Insert a file in an image
    login     Register or Login to the docker registry server
    export    Stream the contents of a container as a tar archive
    diff      Inspect changes on a container's filesystem
    logs      Fetch the logs of a container
    pull      Pull an image or a repository from the docker registry server
    restart   Restart a running container
    build     Build a container from Dockerfile or via stdin
    history   Show the history of an image
    kill      Kill a running container
    rmi       Remove an image
    start     Start a stopped container
    tag       Tag an image into a repository
    commit    Create a new image from a container's changes
    import    Create a new filesystem image from the contents of a tarball
    ps        List containers
    rm        Remove a container
    run       Run a command in a new container
    wait      Block until a container stops, then print its exit code
    images    List images
    port      Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
    info      Display system-wide information
    inspect   Return low-level information on a container
    push      Push an image or a repository to the docker registry server
    search    Search for an image in the docker index
    stop      Stop a running container
    version   Show the docker version information back
2013-05-23 16:32:39 +00:00
Victor Vieux
59835135c5 added warning 2013-05-23 16:15:36 +00:00
Victor Vieux
13f1939a63 switch to default 127.0.0.1, and merged the two flags into one: -h 2013-05-23 16:09:28 +00:00
Solomon Hykes
dbb7b60cfc Re-ordered and re-titled kernel requirement details to match the shortlist 2013-05-23 09:49:53 -06:00
Solomon Hykes
e77263010c Simplified and clarified kernel install instructions 2013-05-23 09:47:20 -06:00
Solomon Hykes
2cf29893b3 Merge branch 'master' of ssh://github.com/dotcloud/docker 2013-05-23 09:33:34 -06:00
Solomon Hykes
5f30453bfe Merge pull request #688 from pxlpnk/patch-1
Fixing two typos in the run help
2013-05-23 08:21:05 -07:00
Victor Vieux
cf35e8ed81 jsonstream WIP 2013-05-23 15:16:35 +00:00
Andreas Tiefenthaler
9e0427081e Fixing two typos in the run help 2013-05-23 18:09:59 +03:00
Solomon Hykes
bc3675d471 Merge branch 'master' of ssh://github.com/dotcloud/docker 2013-05-23 09:01:40 -06:00
Victor Vieux
b45143da9b switch to SI standard and add test 2013-05-23 10:29:09 +00:00
Victor Vieux
ed56b6a905 fix typo 2013-05-23 09:35:20 +00:00
Guillaume J. Charmes
0f135ad7f3 Start moving the docker builder into the server 2013-05-22 20:07:26 -07:00
Thatcher
422edd513a Merge pull request #683 from rogaha/doc_meta_info_update
Looks good. Thanks!  Closes #637
2013-05-22 19:09:32 -07:00
rogaha
18cb5c9314 added/modified title, description and keywords
changed the title prefix to suffix + Documentation
2013-05-22 17:52:48 -07:00
Thatcher
4d80924f7c Merge pull request #681 from dhrp/680-fix-readme-image
Fix broken image on README, closes #680
2013-05-22 16:09:21 -07:00
Thatcher Peskens
f008d1107c Fix broken image on README, closes #680 2013-05-22 16:04:33 -07:00
Eric Hanchrow
056698b676 Use 127.0.0.1 instead of hostname in the "access the web app" section. 2013-05-22 12:54:50 -07:00
Eric Hanchrow
b4198de6bf Merge remote-tracking branch 'upstream/master'
Conflicts:
	docs/sources/examples/python_web_app.rst
2013-05-22 12:52:14 -07:00
Victor Vieux
800b401f0b improved doc and usage 2013-05-22 16:15:52 +00:00
Victor Vieux
faae7220c0 api versionning 2013-05-22 15:29:54 +00:00
Ken Cochrane
1ccc7bdd90 Merge pull request #668 from christophercurrie/patch-1
- Documentation: Fix typos in the python web app example docs.
2013-05-22 07:52:46 -07:00
Victor Vieux
52113cc443 Merge pull request #670 from dotcloud/669-wrong_content_type_create-fix
Fix content type in doc
2013-05-22 06:54:39 -07:00
Victor Vieux
949a649cc2 fix content type in doc 2013-05-22 13:49:12 +00:00
Victor Vieux
6fce89e60b bump to master 2013-05-22 13:41:29 +00:00
Christopher Currie
5818813183 Apparent typos in the docs. 2013-05-21 21:45:27 -06:00
Victor Vieux
4489005cb2 add regexp check on repo's name 2013-05-21 12:53:05 +00:00
Victor Vieux
a3ccec197e add -host and -port 2013-05-21 10:14:58 +00:00
Guillaume J. Charmes
e6e13d6ade Merge pull request #658 from dotcloud/builder_client_doc
Builder client doc
2013-05-20 18:14:44 -07:00
Guillaume J. Charmes
3f22842542 Remove no longer needed tests 2013-05-20 17:54:54 -07:00
Guillaume J. Charmes
218812eb3c Update docker builder doc 2013-05-20 17:52:39 -07:00
Guillaume J. Charmes
d37dea318e Merge pull request #653 from dotcloud/builder_client
* Builder: Move the Builder to the client
2013-05-20 17:50:17 -07:00
Guillaume J. Charmes
49505c599b Fix an issue trying to pull specific tag 2013-05-20 17:30:33 -07:00
Guillaume J. Charmes
f35f084059 Use pointers for the object methods 2013-05-20 16:35:28 -07:00
Eric Hanchrow
3a9ef5f9bb Install curl; nix stray backslash; use proper IP address 2013-05-20 16:31:39 -07:00
Guillaume J. Charmes
0d2fb29537 Merge fix 2013-05-20 16:21:35 -07:00
Guillaume J. Charmes
13e687e579 Allow multiple syntaxes for CMD 2013-05-20 16:00:51 -07:00
Guillaume J. Charmes
b06784b0dd Make docker client an interface 2013-05-20 16:00:16 -07:00
Guillaume J. Charmes
d756ae4cb3 Tag all images after pulling them 2013-05-20 15:19:05 -07:00
Guillaume J. Charmes
c6bc90d02d Isolate run() from commit 2013-05-20 15:02:32 -07:00
Guillaume J. Charmes
b51303cddc Make sure to have a command to execute upon commit 2013-05-20 13:50:50 -07:00
Guillaume J. Charmes
c2a14bb196 Add "Cmd" prefix to builder instructions 2013-05-20 12:09:15 -07:00
Victor Vieux
67b20f2c8c add check to see if the image isn't parent of another and add -f to force 2013-05-20 18:31:45 +00:00
Guillaume J. Charmes
bebbbf914b Merge pull request #625 from dotcloud/remove-hijack
Remove hijack from api when not necessary
2013-05-20 11:27:07 -07:00
Guillaume J. Charmes
372d81c325 Merge remove-hijack 2013-05-20 11:07:34 -07:00
Guillaume J. Charmes
ae9d7a5167 Avoid casting on each write for flusher 2013-05-20 10:58:35 -07:00
Guillaume J. Charmes
0bb54813d4 Merge pull request #652 from unclejack/fix-compilation-on-linux
fix compilation on linux
2013-05-20 10:55:17 -07:00
unclejack
1b007828c9 fix compilation on linux 2013-05-20 20:43:09 +03:00
Guillaume J. Charmes
98b0fd173b Make the printflfush an interface 2013-05-20 10:22:50 -07:00
Guillaume J. Charmes
2879ef4642 Merge pull request #639 from fsouza/fix-utils-darwin
utils: fix compilation on Darwin
2013-05-20 10:10:30 -07:00
Ken Cochrane
b3101beb47 Merge pull request #645 from fsouza/fix-api-docs
- Documentation: remove trunc_cmd from /containers/ps example no longer needed.
2013-05-20 07:52:22 -07:00
Ken Cochrane
580df1dfc6 Merge pull request #646 from kirang89/master
- Documentation: Fixed typos in README
2013-05-19 16:53:33 -07:00
Guillaume J. Charmes
0f312113d3 Move docker build to client 2013-05-19 10:46:24 -07:00
Kiran Gangadharan
0b785487fe Fixed typos 2013-05-19 21:04:34 +05:30
Francisco Souza
8291f8b85c docs/remote_api: remove trunc_cmd from /containers/ps example
Apparently, this parameter does not exist anymore.
2013-05-19 11:57:45 -03:00
Ken Cochrane
6471fd3825 Merge pull request #644 from philspitler/master
- Documentation: Fixed typo in remote api docs
2013-05-19 06:41:26 -07:00
Phil Spitler
ea7fdecd41 Fixed typo in remote API doc 2013-05-19 09:37:02 -03:00
Francisco Souza
2b55874584 utils: fix compilation on Darwin
Although Docker daemon does not work on Darwin, the API client will have
to work. That said, I'm fixing the compilation of the package on Darwin.
2013-05-19 00:02:42 -03:00
Ken Cochrane
66e9f155c3 Merge pull request #632 from jpetazzo/626-kernel-docs
+ Documentation: Added a section for Kernel requirements to the Docker Docs.
2013-05-18 07:53:45 -07:00
Victor Vieux
6102552d61 Merge branch 'master' into 610-improve_rmi-feature 2013-05-18 14:29:37 +00:00
Victor Vieux
d7673274d2 check if the image to delete isn't parent of another 2013-05-18 14:29:32 +00:00
Victor Vieux
0143be42a1 add flush after each write when needed 2013-05-18 14:03:53 +00:00
Thatcher Peskens
537cce16f2 Added a "we are hiring" to the frontpage of the docker website, and fixed two broken links in the docs. 2013-05-17 19:35:46 -07:00
Jérôme Petazzoni
6301373c68 Add some details about pre-3.8 kernels 2013-05-17 18:58:00 -07:00
Jérôme Petazzoni
72360b2cdf Add information about kernel requirements
This page will be helpful for people who:
- want to run a custom kernel
- want to enable memory/swap accounting on Debian/Ubuntu
2013-05-17 18:57:56 -07:00
Ken Cochrane
45c5180516 Merge pull request #631 from garethr/puppet_usage_docs
+ Documentation: Added a section on how to use Docker with puppet.
2013-05-17 13:43:17 -07:00
Victor Vieux
f01990aad2 fix 2013-05-17 17:57:44 +00:00
Guillaume J. Charmes
ffb38ce0ad Merge pull request #633 from dotcloud/fix_insert
Fix insert
2013-05-17 10:16:03 -07:00
Victor Vieux
3703a65405 fixed insert 2013-05-17 17:09:09 +00:00
Gareth Rushgrove
55dd0afb5d add instructions for using docker with puppet 2013-05-17 17:08:37 +01:00
Victor Vieux
1b0b962b43 add login check before pull user's repo 2013-05-17 13:23:12 +00:00
Guillaume J. Charmes
08121c8f6b Update Push to reflect the correct API 2013-05-16 14:33:29 -07:00
Guillaume J. Charmes
6145812444 Update tests to reflect new behavior 2013-05-16 14:33:10 -07:00
Guillaume J. Charmes
f498dd2a2b - Registry: Fix issue preventing to pull specific tag 2013-05-16 12:29:16 -07:00
Guillaume J. Charmes
f29e5dc8a1 Remove hijack from api when not necessary 2013-05-16 12:09:06 -07:00
Guillaume J. Charmes
db1e965b65 Merge fixes + cleanup 2013-05-16 11:27:50 -07:00
Guillaume J. Charmes
2ae8aaa106 Merge branch 'master' into 610-improve_rmi-feature 2013-05-16 11:15:16 -07:00
Guillaume J. Charmes
e86fe7e5af Merge pull request #622 from dotcloud/616-support_wider_range_bool-feature
* API: Support wider range of bool parameters in the api: 1 or 0 -> 1/True/true or 0/False/false
2013-05-16 10:37:23 -07:00
Victor Vieux
0c5443571d 1 or 0 -> 1/True/true or 0/False/false 2013-05-16 13:45:29 +00:00
Thatcher
6677c7964a Merge pull request #620 from dhrp/documentation-review
Pulling the installation documentation update. Includes some timely fixes of broken links.
2013-05-15 20:12:09 -07:00
Thatcher Peskens
8454a86e97 Mostly changes to the html, moved pages, fixed links.
* Moved the introduction page to the en/latest/index.html home of the documentation and pointed all links there
* Fixed a broken link from+to homepage
* Fixed the javascript of the documentation navigation to allow expand and collapse multiple times.
* Fixed the one typo Andy pointed out
2013-05-15 20:00:20 -07:00
Guillaume J. Charmes
fef816163c Merge pull request #618 from titanous/cleanup
Misc. cleanup
2013-05-15 18:05:31 -07:00
Guillaume J. Charmes
f53ec3373d Merge pull request #604 from dotcloud/603-docker-ci
+ Buildbot: testing; issue #603: Create AWS testing buildbot instance for docker CI
2013-05-15 18:05:02 -07:00
Guillaume J. Charmes
0ad34c6cd5 Merge pull request #605 from dotcloud/251-packaging-debian
* Packaging: packaging-debian; issue #251: Update files for prerelease
2013-05-15 18:04:26 -07:00
Guillaume J. Charmes
acf1d5bf0e Merge pull request #592 from dotcloud/refactor_commands_object-feature
* Runtime: refactor commands.go into an object, will be easier for tests
2013-05-15 18:03:30 -07:00
Guillaume J. Charmes
d8bf5af79b Merge pull request #578 from dotcloud/registry-update-tests
+ Registry: Allow INDEX override
2013-05-15 18:00:13 -07:00
Guillaume J. Charmes
7cd7dcda0b Merge pull request #608 from dotcloud/split_subpackage
* Project: Split the project into sub-packages
2013-05-15 17:59:10 -07:00
Guillaume J. Charmes
1b04ccf62c Disable registry unit tests 2013-05-15 17:57:53 -07:00
Guillaume J. Charmes
10f5b6486c Update utils_test.go 2013-05-15 17:46:01 -07:00
Guillaume J. Charmes
822cf67dae Remove test file 2013-05-15 17:44:37 -07:00
Guillaume J. Charmes
f3bab52df4 Move getKernelVersion to utils package 2013-05-15 17:40:47 -07:00
Guillaume J. Charmes
10e19e4b97 Update tests to reflect new AuthConfig 2013-05-15 17:31:11 -07:00
Guillaume J. Charmes
95dd6d31a4 Move authConfig from runtime to registry 2013-05-15 17:17:33 -07:00
Thatcher Peskens
b5c1b17b50 Updated getting started on website. 2013-05-15 16:19:18 -07:00
Thatcher Peskens
2ebbd2d636 Clean up of unnecessary files. 2013-05-15 16:13:20 -07:00
Thatcher Peskens
ce35f5d899 Updated the entire installation section text, checked everything, cleaned up rackspace installation. 2013-05-15 16:11:59 -07:00
Guillaume J. Charmes
bb85ce9aff Allow to change login 2013-05-15 13:39:24 -07:00
Guillaume J. Charmes
dc9d6c1c1f Upload images only when necessary 2013-05-15 13:22:57 -07:00
Jonathan Rudenberg
aa0d40747c Remove broken, redundant struct tag 2013-05-15 16:02:24 -04:00
Jonathan Rudenberg
3a339b2bb3 Update AUTHORS 2013-05-15 15:57:21 -04:00
Jonathan Rudenberg
52ef89f9c2 Fix mistaken call to fmt.Println 2013-05-15 15:52:19 -04:00
Jonathan Rudenberg
04748a7766 Fix logic flaw in builder.mergeConfig 2013-05-15 15:51:04 -04:00
Guillaume J. Charmes
97880a223e Move httpClient within registry object 2013-05-15 19:22:08 +00:00
Guillaume J. Charmes
2f4de3867d Reenable docker push 2013-05-15 19:21:37 +00:00
Guillaume J. Charmes
398a6317a0 Remove stdout from registry 2013-05-15 18:50:52 +00:00
Guillaume J. Charmes
49b61af1f8 Refactor registry Push 2013-05-15 18:30:40 +00:00
Victor Vieux
f27415540f rename client to DockerCli to prepare the split CLI/API 2013-05-15 14:16:46 +00:00
Victor Vieux
c80448c4d1 improve rmi 2013-05-15 13:51:50 +00:00
Guillaume J. Charmes
828d1aa507 Begin to implement push with new project structure 2013-05-15 03:27:15 +00:00
Guillaume J. Charmes
e7077320ff Update tests to reflect new project structure 2013-05-15 01:52:09 +00:00
Guillaume J. Charmes
9bb3dc9843 Split registry into subpackage 2013-05-15 01:41:39 +00:00
Victor Vieux
c75942c79d add 4243 port forward 2013-05-15 02:41:19 +02:00
Guillaume J. Charmes
2e69e1727b Create a subpackage for utils 2013-05-14 22:37:35 +00:00
shin-
539c876727 Make tests less verbose 2013-05-14 22:00:53 +00:00
shin-
e6324ff400 Added small doc page for DOCKER_INDEX_URL variable 2013-05-14 22:00:24 +00:00
shin-
17ad00a35e gofmt pass 2013-05-14 22:00:24 +00:00
shin-
6c7dc1de86 Repository name can not exceed 30 characters 2013-05-14 22:00:24 +00:00
shin-
20a57f15b9 Added login/account creation tests 2013-05-14 22:00:24 +00:00
shin-
54871e3683 Added test for PushRepository 2013-05-14 22:00:24 +00:00
shin-
2b620efffd Allow index server address to vary during execution 2013-05-14 22:00:24 +00:00
shin-
fc1d1d871b Find docker index URL in ENV before using default value. Unit tests for docker pull 2013-05-14 22:00:24 +00:00
Daniel Mizyrycki
341739d916 packaging-debian; issue #251: Update files for prerelease 2013-05-14 03:28:59 -07:00
Daniel Mizyrycki
085ee6fcc4 testing; issue #603: Create AWS testing buildbot instance for docker CI 2013-05-13 23:10:00 -07:00
Guillaume J. Charmes
d586662ce5 Merge pull request #598 from unclejack/build-docker-with-go1.1
* Dockerbuilder - build docker using Go 1.1
2013-05-13 15:20:13 -07:00
unclejack
48fcde8193 build docker using Go 1.1 2013-05-14 01:10:24 +03:00
Guillaume J. Charmes
ccc4fae30a Merge pull request #586 from tbikeev/master
* Vagrant: AWS region and ami name can now be set as ENV variables.
2013-05-13 14:06:48 -07:00
Ken Cochrane
01cecfe2f3 Merge pull request #588 from kencochrane/rackspace-docs
+ Documentation: Added install instructions for RackSpace cloud using Ubuntu 12.04, 12.10, 13.04 and fixed a few minor issues with the ubuntu install docs.
2013-05-13 14:06:15 -07:00
Guillaume J. Charmes
3491957e35 Merge pull request #597 from dotcloud/589-volumes_crash-fix
- Runtime: Make sure that the layer exists prior to store it
2013-05-13 13:09:03 -07:00
Guillaume J. Charmes
be9dd7b85f Make sure that the layer exists prior to store it 2013-05-13 13:08:16 -07:00
Guillaume J. Charmes
edc7c092d9 Merge pull request #551 from jpetazzo/471-cpu-limit
+ Runtime: implement "-c" option to allocate a number of CPU shares to a container
2013-05-13 11:59:09 -07:00
Ken Cochrane
90b4e097cf Merge pull request #581 from dhrp/actually_rebased_docs
* Documentation - rearranged the documentation so that it is easier to manage and use.
2013-05-13 11:47:26 -07:00
Thatcher Peskens
4551fbd700 Merge remote-tracking branch 'dotcloud/master' into clean-reshuffle 2013-05-13 11:44:52 -07:00
Guillaume J. Charmes
37b80325d0 Merge pull request #593 from dotcloud/579-move_display_options_to_client-feature
* Api: Move display options to client
2013-05-13 11:40:46 -07:00
Victor Vieux
4fb89027e6 add host 2013-05-13 20:34:09 +02:00
Thatcher Peskens
6390d25010 Moved docker-remote-api doc to api section 2013-05-13 11:21:28 -07:00
Thatcher Peskens
097531d853 Merge remote-tracking branch 'dotcloud/master' into clean-reshuffle 2013-05-13 11:12:19 -07:00
Guillaume J. Charmes
02d255457a Merge pull request #591 from dotcloud/590_error_message-fix
Fix error message in export
2013-05-13 10:58:05 -07:00
Guillaume J. Charmes
38bc134db3 Merge pull request #587 from zimbatm/fix/readme-build-example
Fixes the README build example to make it work
2013-05-13 10:50:51 -07:00
Guillaume J. Charmes
9023db5c5d Merge pull request #595 from dotcloud/typo_error-fix
Small fix about the error in push
2013-05-13 10:46:38 -07:00
Guillaume J. Charmes
824ad4274a Change t.Skip to t.Log in tests for go1.0.3 compatibility 2013-05-13 10:45:50 -07:00
Victor Vieux
182842e3c3 fix error push 2013-05-13 19:19:27 +02:00
Victor Vieux
a91b710961 add sizes in images and containers 2013-05-13 15:14:20 +02:00
Victor Vieux
e82ff22fae add -notrunc on docker images 2013-05-13 12:26:18 +02:00
Victor Vieux
1990c49a62 removed only_ids and trunc_cmd from the api 2013-05-13 12:18:55 +02:00
Victor Vieux
761731215f refactor commands.go into an object, will be easier for tests 2013-05-13 11:48:27 +02:00
Victor Vieux
8b31d30601 fix error message in export 2013-05-13 11:39:24 +02:00
Ken Cochrane
87038311fc fixed the git clone URL; it was the read-write one that required SSH access to the docker account, changed to the read-only version 2013-05-12 21:46:35 -04:00
Ken Cochrane
f074e983bd updated the rackspace docs to have more information, and fixed a couple of mistakes 2013-05-12 13:12:44 -04:00
Ken Cochrane
823bc74935 Added install instructions for RackSpace cloud using Ubuntu 12.04, 12.10, 13.04 2013-05-11 19:03:41 -04:00
Jonas Pfenniger
a47d8799b1 Fixes the README build example to make it work 2013-05-11 23:50:06 +01:00
Ken Cochrane
908e4797a6 Merge pull request #584 from boffbowsh/builder-documentation
* Documentation: Updated Docker builder docs to clean up format and add new information and examples
2013-05-11 12:39:44 -07:00
Paul Bowsher
9416574569 Make examples bash-highlighted
Remove DOCKER-VERSION, easier to maintain
Add example of multiple FROM steps
2013-05-11 18:34:26 +01:00
Thomas Bikeev
9cfcfbae52 Made AWS region and ami configurable through the ENV variables. 2013-05-11 19:27:42 +02:00
Guillaume J. Charmes
dfbea4ad9f Merge pull request #582 from dotcloud/refactor_api_file
* Api: refactor api.go file to ease the streaming
2013-05-10 15:26:32 -07:00
Guillaume J. Charmes
1cce8572e2 Merge branch 'refactor_api_file' of github.com:dotcloud/docker into refactor_api_file 2013-05-10 15:24:25 -07:00
Guillaume J. Charmes
b99446831f Update api_test.go to reflect new api.go 2013-05-10 15:24:07 -07:00
Victor Vieux
f7beba3acc add writeJson 2013-05-10 15:10:15 -07:00
Victor Vieux
7cc082347f refactor api.go 2013-05-10 15:10:15 -07:00
Paul Bowsher
a98eafaf58 Expand upon Builder documentation and tidy
Standardizes on uppercase for instructions, gives example usage.
Wrap at 80 columns
2013-05-10 22:49:42 +01:00
Jérôme Petazzoni
6f3e868a7b Merge branch 'master' of github.com:dotcloud/docker into 471-cpu-limit 2013-05-10 14:44:50 -07:00
Guillaume J. Charmes
d7adeb8a45 Merge pull request #583 from dotcloud/api-tests-1
* Api: Improve remote api unit tests
2013-05-10 14:34:02 -07:00
Guillaume J. Charmes
052a15ace9 - Registry: Fix the checksums file path 2013-05-10 14:25:45 -07:00
Guillaume J. Charmes
ae80c37054 Small fix within TestGetImagesJson 2013-05-10 13:58:21 -07:00
Guillaume J. Charmes
5bec9275c0 Improve remote api unit tests 2013-05-10 12:28:07 -07:00
Thatcher Peskens
8946b75a0b Updated commands index to show a better title.
Small syntax fix in CouchDB
2013-05-10 12:25:02 -07:00
Victor Vieux
67cdfc0c83 add writeJson 2013-05-10 21:11:59 +02:00
Thatcher Peskens
7557af19d8 Major rearrange of the documentation. 2013-05-10 12:00:12 -07:00
Victor Vieux
d270501de2 Merge pull request #575 from odk-/notrunc-container-id
full container.Id display when notrunc is used
2013-05-10 11:35:20 -07:00
Victor Vieux
dbc899130c refactor api.go 2013-05-10 20:20:49 +02:00
Victor Vieux
17c1704f4a fix run 2013-05-10 17:00:26 +02:00
Victor Vieux
53a8229ce7 fix tests in api_test.go 2013-05-10 16:49:47 +02:00
odk-
82313b1de9 full container.Id display when notrunc is used 2013-05-10 10:18:01 +02:00
Guillaume J. Charmes
1c5e9f8a88 Merge pull request #571 from dotcloud/569-run_after_rmi-fix
fix run after rmi
2013-05-09 23:02:36 -07:00
Guillaume J. Charmes
fae63aa42e Merge pull request #567 from dotcloud/516-xorg-ppa
+ Vagrant: packaging-ubuntu; issue #516: Add xorg kernel to Vagrantfile for stable ubuntu LTS docker distro
2013-05-09 23:01:48 -07:00
Guillaume J. Charmes
26bfeb1d67 Merge pull request #432 from dotcloud/remote-api
+ Remote API: Implement the remote API.
2013-05-09 22:43:47 -07:00
Guillaume J. Charmes
9897e78e32 Update the doc for remote-api 2013-05-09 22:29:12 -07:00
Guillaume J. Charmes
1941c79195 make commands use the correct routes 2013-05-09 22:28:52 -07:00
Guillaume J. Charmes
15ae314cbb Mock Hijack and Implement Unit test for Attach 2013-05-09 21:55:08 -07:00
Guillaume J. Charmes
310fec4819 More unit tests for remote api 2013-05-09 21:54:43 -07:00
Guillaume J. Charmes
28fd289b44 Reduce the Destroy timeout from 10 to 3 seconds 2013-05-09 21:53:59 -07:00
Guillaume J. Charmes
483c942520 Fix typos within unit tests 2013-05-09 21:53:28 -07:00
Guillaume J. Charmes
eeaea4e873 Update the routes within commands.go 2013-05-09 20:19:21 -07:00
Guillaume J. Charmes
24816a8b80 Add/improve unit tests 2013-05-09 20:13:52 -07:00
Guillaume J. Charmes
0c6380cc32 Rename "v" in "removeVolume" 2013-05-09 19:19:55 -07:00
Guillaume J. Charmes
2a303dab85 Add unit tests 2013-05-09 19:19:24 -07:00
Guillaume J. Charmes
ede0793d94 Add unit tests for remote api 2013-05-09 17:51:27 -07:00
Guillaume J. Charmes
152ebeea43 Change API route for containers/ and images/ in order to avoid conflict 2013-05-09 17:50:56 -07:00
Guillaume J. Charmes
ff67da9c86 Add the variable maps to the Api functions 2013-05-09 16:28:47 -07:00
Victor Vieux
add73641e6 fixed private caller 2013-05-10 01:23:50 +02:00
Victor Vieux
9af5d9c527 Merge branch 'remote-api' of https://github.com/dotcloud/docker into remote-api 2013-05-10 01:20:33 +02:00
Victor Vieux
3115c5a225 private tests api fixed 2013-05-10 01:20:07 +02:00
Guillaume J. Charmes
7e8b413bcf Implement some unit tests without ListenAndServe 2013-05-09 16:20:06 -07:00
Guillaume J. Charmes
b2b59ddb10 Remove ListenAndServe from unit tests 2013-05-09 15:47:06 -07:00
Victor Vieux
c423a790d6 fixed issue with viz 2013-05-09 23:52:12 +02:00
Guillaume J. Charmes
074a566164 Small fix in graph_test.go 2013-05-09 14:48:10 -07:00
Victor Vieux
93dc2c331e removed hijack in export 2013-05-09 23:28:03 +02:00
Victor Vieux
0ecf5e245d removed hijack on viz 2013-05-09 23:10:26 +02:00
Victor Vieux
0862183c86 fix status code and error detection 2013-05-09 21:42:29 +02:00
Victor Vieux
0bcc5d3bee re-add previously deleted doc files 2013-05-09 20:36:08 +02:00
Victor Vieux
b5831eda1e merge master 2013-05-09 20:32:53 +02:00
Victor Vieux
7c7619ecf8 display warning on the server in debug if versions don't match 2013-05-09 20:24:49 +02:00
Victor Vieux
0a5d86d7be fix run after rmi 2013-05-09 18:57:47 +02:00
Victor Vieux
4576e11121 fix attach and remove cli doc 2013-05-09 17:54:41 +02:00
Daniel Mizyrycki
73321f27f4 packaging-ubuntu; issue #516: Add xorg kernel to Vagrantfile for stable ubuntu LTS docker distro 2013-05-08 19:36:06 -07:00
Victor Vieux
842cb8909e pretty print json in inspect 2013-05-09 03:46:39 +02:00
Victor Vieux
fc29f01528 bump to master 2013-05-09 02:20:16 +02:00
Victor Vieux
24c785bc06 fix login 2013-05-08 23:57:14 +02:00
Victor Vieux
1d42cbaa21 removed useless returns 2013-05-08 23:19:24 +02:00
Victor Vieux
bf605fcfc7 fix commit without run parameter 2013-05-08 19:21:52 +02:00
Victor Vieux
954ecac388 fix doc and empty content-type 2013-05-08 18:52:01 +02:00
Victor Vieux
4a1e0d321e change content-type and small fix in run 2013-05-08 18:36:37 +02:00
Victor Vieux
bc3fa506e9 added pagination on ps 2013-05-08 18:28:11 +02:00
Victor Vieux
075e1ebb0e remove useless port endpoint 2013-05-08 18:06:43 +02:00
Victor Vieux
60ddcaa15d changes 2 endpoints to avoid confusion, changed some parameters, fix doc, add api unit tests 2013-05-08 17:35:50 +02:00
Guillaume J. Charmes
cacc7e564a Fix non exiting client issue 2013-05-07 23:32:17 -07:00
Guillaume J. Charmes
2ac4e662f1 Small fix 2013-05-07 18:16:24 -07:00
Guillaume J. Charmes
57cfe72e8c Replace os.File with io.ReadCloser and io.Writer 2013-05-07 18:06:49 -07:00
Guillaume J. Charmes
755604a2bd Fix routes in api.go 2013-05-07 17:35:33 -07:00
Guillaume J. Charmes
891c5202ea Factorize api.go 2013-05-07 17:27:09 -07:00
Guillaume J. Charmes
ab96da8eb2 Use bool instead of string for flags 2013-05-07 16:47:43 -07:00
Guillaume J. Charmes
279db68b46 Use Fprintf instead of Fprintln 2013-05-07 16:36:49 -07:00
Guillaume J. Charmes
b56b2da5c5 Refactor api.go to use a factory with named functions 2013-05-07 16:33:12 -07:00
Victor Vieux
a0880edc63 removed useless buffered pipe in import 2013-05-07 23:56:45 +02:00
Victor Vieux
4d30a32c68 removed RAW mode on server 2013-05-07 23:15:42 +02:00
Victor Vieux
a5b765a769 remove useless wait in run 2013-05-07 22:52:58 +02:00
Victor Vieux
ac0e27699c display id on run -s stdin 2013-05-07 21:36:24 +02:00
Victor Vieux
32cbd72ebe Merge branch 'master' into remote-api 2013-05-07 21:02:32 +02:00
Victor Vieux
4079411375 fix run no parameter 2013-05-07 20:59:04 +02:00
Jérôme Petazzoni
eef8b0d406 propagate CpuShares in mergeConfig 2013-05-07 11:44:38 -07:00
Jérôme Petazzoni
af9f559f2e in the tests, use a non-default value for cpu.shares 2013-05-07 11:44:24 -07:00
Jérôme Petazzoni
e36752e033 if -c is not specified, do not set cpu.shares (instead of using the default value of 1024) 2013-05-07 11:43:45 -07:00
Victor Vieux
59a6316f5e added search 2013-05-07 20:43:31 +02:00
Jérôme Petazzoni
efd9becb78 implement "-c" option to allocate a number of CPU shares to a container 2013-05-07 11:16:30 -07:00
Victor Vieux
97fdfab446 Merge branch 'master' into remote-api 2013-05-07 19:42:36 +02:00
Victor Vieux
10c0e99037 update to master 2013-05-07 19:23:50 +02:00
Victor Vieux
0b6c79b303 first draft of the doc, split commit and fix some issues in api.go 2013-05-07 17:19:41 +02:00
Tianon Gravi
2f89315bf8 Update mkimage-debian.sh now that wheezy is officially the stable release - also, we can't rely on "release" versions for testing or unstable - only "stable" has reliable release versions 2013-05-06 17:00:21 -06:00
Solomon Hykes
b6af9d3d2e * Website: put Adam's and Mitchell's nice tweets on top :) 2013-05-06 14:26:58 -07:00
Solomon Hykes
f796b9c76e * Website: Bigger twitter profile pictures 2013-05-06 14:26:07 -07:00
Solomon Hykes
627f7fdbfd + Website: new quotes 2013-05-06 14:24:14 -07:00
Victor Vieux
5a2a5ccdaf removed rcli completely 2013-05-06 16:59:33 +02:00
Victor Vieux
f37399d22b added login and push 2013-05-06 13:34:31 +02:00
Victor Vieux
6f9b574f25 bump to 0.2.2 2013-05-06 11:53:00 +02:00
Victor Vieux
04cd20fa62 split api and server. run returns exit code. import, pull and commit use the same endpoint. non-zero status code on failure 2013-05-06 11:31:22 +02:00
Victor Vieux
75418ec849 Merge branch 'master' into remote-api 2013-05-03 17:59:11 +02:00
Victor Vieux
c6963da54e makefile from master 2013-05-03 17:51:11 +02:00
Victor Vieux
4f0bda2dd5 up to date with master 2013-05-02 18:36:23 +02:00
Victor Vieux
a4bcf7e1ac refactoring run/attach/logs 2013-05-02 05:07:06 +02:00
Victor Vieux
36b968bb09 [] instead of null, timestamps and wip import 2013-04-30 17:04:31 +02:00
Victor Vieux
131c6ab3e6 more accurate http errors, attach handles tty correctly now 2013-04-29 17:46:41 +02:00
Victor Vieux
e5104a4cb4 working tty 2013-04-29 15:12:18 +02:00
Victor Vieux
22f5e35579 fix last commit 2013-04-29 11:49:04 +02:00
Victor Vieux
30cb4b351f run now tries to pull if the image is unknown 2013-04-29 11:46:31 +02:00
Victor Vieux
a48eb4dff8 run now tries to pull if the image is unknown 2013-04-26 15:08:33 +02:00
Victor Vieux
75c0dc9526 fixed inspect 2013-04-24 18:50:26 +02:00
Victor Vieux
c7bbe7ca79 added export 2013-04-24 16:32:51 +02:00
Victor Vieux
79512b2a80 added commit 2013-04-24 16:06:30 +02:00
Victor Vieux
1e357c6969 changed not found errors to 404, added inspect, wait and diff 2013-04-24 14:01:40 +02:00
Victor Vieux
cf19be44a8 added run (wip), fixed ps and images, added port and tag 2013-04-23 18:20:53 +02:00
Victor Vieux
6ce475dbdf added push hijack (wip) 2013-04-22 23:37:22 +02:00
Victor Vieux
1aa7f1392d restify api 2013-04-22 18:17:47 +02:00
Victor Vieux
b295239de2 added: info, history, logs, ps, start, stop, restart, rm, rmi 2013-04-19 15:24:37 +02:00
Victor Vieux
79e9105806 added kill and images(wip) 2013-04-18 18:56:22 +02:00
Victor Vieux
c0d5d5969b skeleton remote API, only version working (wip) 2013-04-18 03:13:43 +02:00
Victor Vieux
92cd75607b Merge branch 'remote-api' of https://github.com/dotcloud/docker into remote-api 2013-04-17 12:21:56 +02:00
Solomon Hykes
a11b31399b Skeleton of http API 2013-04-16 19:53:08 +02:00
Solomon Hykes
38e2d00199 Skeleton of http API 2013-04-10 19:48:21 -07:00
233 changed files with 21704 additions and 6290 deletions

1
.gitignore vendored

@@ -15,3 +15,4 @@ docs/_build
docs/_static
docs/_templates
.gopath/
.dotcloud


@@ -2,7 +2,7 @@
<charles.hooper@dotcloud.com> <chooper@plumata.com>
<daniel.mizyrycki@dotcloud.com> <daniel@dotcloud.com>
<daniel.mizyrycki@dotcloud.com> <mzdaniel@glidelink.net>
Guillaume J. Charmes <guillaume.charmes@dotcloud.com> creack <charmes.guillaume@gmail.com>
Guillaume J. Charmes <guillaume.charmes@dotcloud.com> <charmes.guillaume@gmail.com>
<guillaume.charmes@dotcloud.com> <guillaume@dotcloud.com>
<kencochrane@gmail.com> <KenCochrane@gmail.com>
<sridharr@activestate.com> <github@srid.name>
@@ -16,4 +16,10 @@ Tim Terhorst <mynamewastaken+git@gmail.com>
Andy Smith <github@anarkystic.com>
<kalessin@kalessin.fr> <louis@dotcloud.com>
<victor.vieux@dotcloud.com> <victor@dotcloud.com>
<victor.vieux@dotcloud.com> <dev@vvieux.com>
<dominik@honnef.co> <dominikh@fork-bomb.org>
Thatcher Peskens <thatcher@dotcloud.com>
<ehanchrow@ine.com> <eric.hanchrow@gmail.com>
Walter Stanish <walter@pratyeka.org>
<daniel@gasienica.ch> <dgasienica@zynga.com>
Roberto Hashioka <roberto_hashioka@hotmail.com>

46
AUTHORS

@@ -1,45 +1,89 @@
# This file lists all individuals having contributed content to the repository.
# If you're submitting a patch, please add your name here in alphabetical order as part of the patch.
#
# For a list of active project maintainers, see the MAINTAINERS file.
#
Al Tobey <al@ooyala.com>
Alexey Shamrin <shamrin@gmail.com>
Andrea Luzzardi <aluzzardi@gmail.com>
Andreas Tiefenthaler <at@an-ti.eu>
Andrew Munsell <andrew@wizardapps.net>
Andy Rothfusz <github@metaliveblog.com>
Andy Smith <github@anarkystic.com>
Antony Messerli <amesserl@rackspace.com>
Barry Allard <barry.allard@gmail.com>
Brandon Liu <bdon@bdon.org>
Brian McCallister <brianm@skife.org>
Bruno Bigras <bigras.bruno@gmail.com>
Caleb Spare <cespare@gmail.com>
Calen Pennington <cale@edx.org>
Charles Hooper <charles.hooper@dotcloud.com>
Christopher Currie <codemonkey+github@gmail.com>
Daniel Gasienica <daniel@gasienica.ch>
Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com>
Daniel Robinson <gottagetmac@gmail.com>
Daniel Von Fange <daniel@leancoder.com>
Dominik Honnef <dominik@honnef.co>
Don Spaulding <donspauldingii@gmail.com>
Dr Nic Williams <drnicwilliams@gmail.com>
Elias Probst <mail@eliasprobst.eu>
Eric Hanchrow <ehanchrow@ine.com>
Evan Wies <evan@neomantra.net>
Eric Myhre <hash@exultant.us>
ezbercih <cem.ezberci@gmail.com>
Flavio Castelli <fcastelli@suse.com>
Francisco Souza <f@souza.cc>
Frederick F. Kautz IV <fkautz@alumni.cmu.edu>
Gareth Rushgrove <gareth@morethanseven.net>
Guillaume J. Charmes <guillaume.charmes@dotcloud.com>
Harley Laue <losinggeneration@gmail.com>
Hunter Blanks <hunter@twilio.com>
Jeff Lindsay <progrium@gmail.com>
Jeremy Grosser <jeremy@synack.me>
Joffrey F <joffrey@dotcloud.com>
John Costa <john.costa@gmail.com>
Jon Wedaman <jweede@gmail.com>
Jonas Pfenniger <jonas@pfenniger.name>
Jonathan Rudenberg <jonathan@titanous.com>
Joseph Anthony Pasquale Holsten <joseph@josephholsten.com>
Julien Barbier <write0@gmail.com>
Jérôme Petazzoni <jerome.petazzoni@dotcloud.com>
Ken Cochrane <kencochrane@gmail.com>
Kevin J. Lynagh <kevin@keminglabs.com>
kim0 <email.ahmedkamal@googlemail.com>
Kiran Gangadharan <kiran.daredevil@gmail.com>
Louis Opter <kalessin@kalessin.fr>
Marcus Farkas <toothlessgear@finitebox.com>
Mark McGranaghan <mmcgrana@gmail.com>
Maxim Treskin <zerthurd@gmail.com>
meejah <meejah@meejah.ca>
Michael Crosby <crosby.michael@gmail.com>
Mikhail Sobolev <mss@mawhrin.net>
Nate Jones <nate@endot.org>
Nelson Chen <crazysim@gmail.com>
Niall O'Higgins <niallo@unworkable.org>
odk- <github@odkurzacz.org>
Paul Bowsher <pbowsher@globalpersonals.co.uk>
Paul Hammond <paul@paulhammond.org>
Phil Spitler <pspitler@gmail.com>
Piotr Bogdan <ppbogdan@gmail.com>
Renato Riccieri Santos Zannon <renato.riccieri@gmail.com>
Robert Obryk <robryk@gmail.com>
Roberto Hashioka <roberto_hashioka@hotmail.com>
Sam Alba <sam.alba@gmail.com>
Sam J Sharpe <sam.sharpe@digital.cabinet-office.gov.uk>
Shawn Siefkas <shawn.siefkas@meredith.com>
Silas Sewell <silas@sewell.org>
Solomon Hykes <solomon@dotcloud.com>
Sridhar Ratnakumar <sridharr@activestate.com>
Thatcher Peskens <thatcher@dotcloud.com>
Thomas Bikeev <thomas.bikeev@mac.com>
Thomas Hansen <thomas.hansen@gmail.com>
Tianon Gravi <admwiggin@gmail.com>
Tim Terhorst <mynamewastaken+git@gmail.com>
Troy Howard <thoward37@gmail.com>
Tobias Bieniek <Tobias.Bieniek@gmx.de>
unclejack <unclejacksons@gmail.com>
Victor Vieux <victor.vieux@dotcloud.com>
Vivek Agarwal <me@vivek.im>
Walter Stanish <walter@pratyeka.org>
Will Dietz <w@wdtz.org>


@@ -1,5 +1,106 @@
# Changelog
## 0.5.0 (2013-07-17)
+ Runtime: List all processes running inside a container with 'docker top'
+ Runtime: Host directories can be mounted as volumes with 'docker run -v'
+ Runtime: Containers can expose public UDP ports (eg, '-p 123/udp')
+ Runtime: Optionally specify an exact public port (eg. '-p 80:4500')
+ Registry: New image naming scheme inspired by Go packaging convention allows arbitrary combinations of registries
+ Builder: ENTRYPOINT instruction sets a default binary entry point to a container
+ Builder: VOLUME instruction marks a part of the container as persistent data
* Builder: 'docker build' displays the full output of a build by default
* Runtime: 'docker login' supports additional options
- Runtime: Dont save a container's hostname when committing an image.
- Registry: Fix issues when uploading images to a private registry
## 0.4.8 (2013-07-01)
+ Builder: New build operation ENTRYPOINT adds an executable entry point to the container.
- Runtime: Fix a bug which caused 'docker run -d' to no longer print the container ID.
- Tests: Fix issues in the test suite
## 0.4.7 (2013-06-28)
* Registry: easier push/pull to a custom registry
* Remote API: the progress bar updates faster when downloading and uploading large files
- Remote API: fix a bug in the optional unix socket transport
* Runtime: improve detection of kernel version
+ Runtime: host directories can be mounted as volumes with 'docker run -b'
- Runtime: fix an issue when only attaching to stdin
* Runtime: use 'tar --numeric-owner' to avoid uid mismatch across multiple hosts
* Hack: improve test suite and dev environment
* Hack: remove dependency on unit tests on 'os/user'
+ Documentation: add terminology section
## 0.4.6 (2013-06-22)
- Runtime: fix a bug which caused creation of empty images (and volumes) to crash.
## 0.4.5 (2013-06-21)
+ Builder: 'docker build git://URL' fetches and builds a remote git repository
* Runtime: 'docker ps -s' optionally prints container size
* Tests: Improved and simplified
- Runtime: fix a regression introduced in 0.4.3 which caused the logs command to fail.
- Builder: fix a regression when using ADD with single regular file.
## 0.4.4 (2013-06-19)
- Builder: fix a regression introduced in 0.4.3 which caused builds to fail on new clients.
## 0.4.3 (2013-06-19)
+ Builder: ADD of a local file will detect tar archives and unpack them
* Runtime: Remove bsdtar dependency
* Runtime: Add unix socket and multiple -H support
* Runtime: Prevent rm of running containers
* Runtime: Use go1.1 cookiejar
* Builder: ADD improvements: use tar for copy + automatically unpack local archives
* Builder: ADD uses tar/untar for copies instead of calling 'cp -ar'
* Builder: nicer output for 'docker build'
* Builder: fixed the behavior of ADD to be (mostly) reverse-compatible, predictable and well-documented.
* Client: HumanReadable ProgressBar sizes in pull
* Client: Fix docker version's git commit output
* API: Send all tags on History API call
* API: Add tag lookup to history command. Fixes #882
- Runtime: Fix issue detaching from running TTY container
- Runtime: Forbid parralel push/pull for a single image/repo. Fixes #311
- Runtime: Fix race condition within Run command when attaching.
- Builder: fix a bug which caused builds to fail if ADD was the first command
- Documentation: fix missing command in irc bouncer example
## 0.4.2 (2013-06-17)
- Packaging: Bumped version to work around an Ubuntu bug
## 0.4.1 (2013-06-17)
+ Remote Api: Add flag to enable cross domain requests
+ Remote Api/Client: Add images and containers sizes in docker ps and docker images
+ Runtime: Configure dns configuration host-wide with 'docker -d -dns'
+ Runtime: Detect faulty DNS configuration and replace it with a public default
+ Runtime: allow docker run <name>:<id>
+ Runtime: you can now specify public port (ex: -p 80:4500)
* Client: allow multiple params in inspect
* Client: Print the container id before the hijack in `docker run`
* Registry: add regexp check on repo's name
* Registry: Move auth to the client
* Runtime: improved image removal to garbage-collect unreferenced parents
* Vagrantfile: Add the rest api port to vagrantfile's port_forward
* Upgrade to Go 1.1
- Builder: don't ignore last line in Dockerfile when it doesn't end with \n
- Registry: Remove login check on pull
## 0.4.0 (2013-06-03)
+ Introducing Builder: 'docker build' builds a container, layer by layer, from a source repository containing a Dockerfile
+ Introducing Remote API: control Docker programmatically using a simple HTTP/json API
* Runtime: various reliability and usability improvements
## 0.3.4 (2013-05-30)
+ Builder: 'docker build' builds a container, layer by layer, from a source repository containing a Dockerfile
+ Builder: 'docker build -t FOO' applies the tag FOO to the newly built container.
+ Runtime: interactive TTYs correctly handle window resize
* Runtime: fix how configuration is merged between layers
+ Remote API: split stdout and stderr on 'docker run'
+ Remote API: optionally listen on a different IP and port (use at your own risk)
* Documentation: improved install instructions.
## 0.3.3 (2013-05-23)
- Registry: Fix push regression
- Various bugfixes
## 0.3.2 (2013-05-09)
* Runtime: Store the actual archive on commit
* Registry: Improve the checksum process
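As a rough, hedged sketch of the new 0.5.0 runtime options listed at the top of this changelog (the image name, host path, and container ID below are placeholders, not values taken from this diff):

```bash
# Mount a host directory as a volume inside the container ('docker run -v')
docker run -v /tmp/logs:/var/log base ls /var/log

# Expose a public UDP port and pick an exact public port ('-p 123/udp', '-p 80:4500')
docker run -p 123/udp -p 80:4500 base echo "ports published"

# List all processes running inside a container ('docker top')
docker top $CONTAINER_ID
```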


@@ -1,9 +1,6 @@
# Contributing to Docker
Want to hack on Docker? Awesome! There are instructions to get you
started on the website: http://docker.io/gettingstarted.html
They are probably not perfect, please let us know if anything feels
Want to hack on Docker? Awesome! Here are instructions to get you started. They are probably not perfect, please let us know if anything feels
wrong or incomplete.
## Contribution guidelines
@@ -91,3 +88,73 @@ Add your name to the AUTHORS file, but make sure the list is sorted and your
name and email address match your git configuration. The AUTHORS file is
regenerated occasionally from the git commit history, so a mismatch may result
in your changes being overwritten.
## Decision process
### How are decisions made?
Short answer: with pull requests to the docker repository.
Docker is an open-source project with an open design philosophy. This means that the repository is the source of truth for EVERY aspect of the project,
including its philosophy, design, roadmap and APIs. *If it's part of the project, it's in the repo. It's in the repo, it's part of the project.*
As a result, all decisions can be expressed as changes to the repository. An implementation change is a change to the source code. An API change is a change to
the API specification. A philosophy change is a change to the philosophy manifesto. And so on.
All decisions affecting docker, big and small, follow the same 3 steps:
* Step 1: Open a pull request. Anyone can do this.
* Step 2: Discuss the pull request. Anyone can do this.
* Step 3: Accept or refuse a pull request. The relevant maintainer does this (see below "Who decides what?")
### Who decides what?
So all decisions are pull requests, and the relevant maintainer makes the decision by accepting or refusing the pull request.
But how do we identify the relevant maintainer for a given pull request?
Docker follows the timeless, highly efficient and totally unfair system known as [Benevolent dictator for life](http://en.wikipedia.org/wiki/Benevolent_Dictator_for_Life),
with yours truly, Solomon Hykes, in the role of BDFL.
This means that all decisions are made by default by me. Since making every decision myself would be highly unscalable, in practice decisions are spread across multiple maintainers.
The relevant maintainer for a pull request is assigned in 3 steps:
* Step 1: Determine the subdirectory affected by the pull request. This might be src/registry, docs/source/api, or any other part of the repo.
* Step 2: Find the MAINTAINERS file which affects this directory. If the directory itself does not have a MAINTAINERS file, work your way up the the repo hierarchy until you find one.
* Step 3: The first maintainer listed is the primary maintainer. The pull request is assigned to him. He may assign it to other listed maintainers, at his discretion.
### I'm a maintainer, should I make pull requests too?
Primary maintainers are not required to create pull requests when changing their own subdirectory, but secondary maintainers are.
### Who assigns maintainers?
Solomon.
### How can I become a maintainer?
* Step 1: learn the component inside out
* Step 2: make yourself useful by contributing code, bugfixes, support etc.
* Step 3: volunteer on the irc channel (#docker@freenode)
Don't forget: being a maintainer is a time investment. Make sure you will have time to make yourself available.
You don't have to be a maintainer to make a difference on the project!
### What are a maintainer's responsibility?
It is every maintainer's responsibility to:
* 1) Expose a clear roadmap for improving their component.
* 2) Deliver prompt feedback and decisions on pull requests.
* 3) Be available to anyone with questions, bug reports, criticism etc. on their component. This includes irc, github requests and the mailing list.
* 4) Make sure their component respects the philosophy, design and roadmap of the project.
### How is this process changed?
Just like everything else: by making a pull request :)

30
Dockerfile Normal file

@@ -0,0 +1,30 @@
# This file describes the standard way to build Docker, using docker
docker-version 0.4.2
from ubuntu:12.04
maintainer Solomon Hykes <solomon@dotcloud.com>
# Build dependencies
run apt-get install -y -q curl
run apt-get install -y -q git
# Install Go
run curl -s https://go.googlecode.com/files/go1.1.1.linux-amd64.tar.gz | tar -v -C /usr/local -xz
env PATH /usr/local/go/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin
env GOPATH /go
env CGO_ENABLED 0
run cd /tmp && echo 'package main' > t.go && go test -a -i -v
# Download dependencies
run PKG=github.com/kr/pty REV=27435c699; git clone http://$PKG /go/src/$PKG && cd /go/src/$PKG && git checkout -f $REV
run PKG=github.com/gorilla/context/ REV=708054d61e5; git clone http://$PKG /go/src/$PKG && cd /go/src/$PKG && git checkout -f $REV
run PKG=github.com/gorilla/mux/ REV=9b36453141c; git clone http://$PKG /go/src/$PKG && cd /go/src/$PKG && git checkout -f $REV
# Run dependencies
run apt-get install -y iptables
# lxc requires updating ubuntu sources
run echo 'deb http://archive.ubuntu.com/ubuntu precise main universe' > /etc/apt/sources.list
run apt-get update
run apt-get install -y lxc
run apt-get install -y aufs-tools
# Upload docker source
add . /go/src/github.com/dotcloud/docker
# Build the binary
run cd /go/src/github.com/dotcloud/docker/docker && go install -ldflags "-X main.GITCOMMIT '??' -d -w"
env PATH /usr/local/go/bin:/go/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin
cmd ["docker"]
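A brief, hedged example of how a Dockerfile like the one above would typically be used; the image tag is an assumption, not something defined in this diff:

```bash
# Build an image from the Dockerfile in the current directory and tag it
docker build -t docker-dev .

# Run the image; its default command ("cmd") invokes the freshly built docker binary
docker run docker-dev
```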

36
FIXME Normal file

@@ -0,0 +1,36 @@
## FIXME
This file is a loose collection of things to improve in the codebase, for the internal
use of the maintainers.
They are not big enough to be in the roadmap, not user-facing enough to be github issues,
and not important enough to be discussed in the mailing list.
They are just like FIXME comments in the source code, except we're not sure where in the source
to put them - so we put them here :)
* Merge Runtime, Server and Builder into Runtime
* Run linter on codebase
* Unify build commands and regular commands
* Move source code into src/ subdir for clarity
* Clean up the Makefile, it's a mess
* docker build: on non-existent local path for ADD, don't show full absolute path on the host
* mount into /dockerinit rather than /sbin/init
* docker tag foo REPO:TAG
* use size header for progress bar in pull
* Clean up context upload in build!!!
* Parallel pull
* Ensure /proc/sys/net/ipv4/ip_forward is 1
* Force DNS to public!
* Always generate a resolv.conf per container, to avoid changing resolv.conf under thne container's feet
* Save metadata with import/export
* Upgrade dockerd without stopping containers
* bring back git revision info, looks like it was lost
* Simple command to remove all untagged images
* Simple command to clean up containers for disk space
* Caching after an ADD
* entry point config
* bring back git revision info, looks like it was lost
* Clean up the ProgressReader api, it's a PITA to use

5
MAINTAINERS Normal file

@@ -0,0 +1,5 @@
Solomon Hykes <solomon@dotcloud.com>
Guillaume Charmes <guillaume@dotcloud.com>
Victor Vieux <victor@dotcloud.com>
api.go: Victor Vieux <victor@dotcloud.com>
Vagrantfile: Daniel Mizyrycki <daniel@dotcloud.com>


@@ -2,6 +2,8 @@ DOCKER_PACKAGE := github.com/dotcloud/docker
RELEASE_VERSION := $(shell git tag | grep -E "v[0-9\.]+$$" | sort -nr | head -n 1)
SRCRELEASE := docker-$(RELEASE_VERSION)
BINRELEASE := docker-$(RELEASE_VERSION).tgz
BUILD_SRC := build_src
BUILD_PATH := ${BUILD_SRC}/src/${DOCKER_PACKAGE}
GIT_ROOT := $(shell git rev-parse --show-toplevel)
BUILD_DIR := $(CURDIR)/.gopath
@@ -17,7 +19,7 @@ endif
GIT_COMMIT = $(shell git rev-parse --short HEAD)
GIT_STATUS = $(shell test -n "`git status --porcelain`" && echo "+CHANGES")
BUILD_OPTIONS = -ldflags "-X main.GIT_COMMIT $(GIT_COMMIT)$(GIT_STATUS)"
BUILD_OPTIONS = -a -ldflags "-X main.GITCOMMIT $(GIT_COMMIT)$(GIT_STATUS) -d -w"
SRC_DIR := $(GOPATH)/src
@@ -33,7 +35,7 @@ all: $(DOCKER_BIN)
$(DOCKER_BIN): $(DOCKER_DIR)
@mkdir -p $(dir $@)
@(cd $(DOCKER_MAIN); go build $(GO_OPTIONS) $(BUILD_OPTIONS) -o $@)
@(cd $(DOCKER_MAIN); CGO_ENABLED=0 go build $(GO_OPTIONS) $(BUILD_OPTIONS) -o $@)
@echo $(DOCKER_BIN_RELATIVE) is created.
$(DOCKER_DIR):
@@ -46,6 +48,8 @@ whichrelease:
release: $(BINRELEASE)
s3cmd -P put $(BINRELEASE) s3://get.docker.io/builds/`uname -s`/`uname -m`/docker-$(RELEASE_VERSION).tgz
s3cmd -P put docker-latest.tgz s3://get.docker.io/builds/`uname -s`/`uname -m`/docker-latest.tgz
s3cmd -P put $(SRCRELEASE)/bin/docker s3://get.docker.io/builds/`uname -s`/`uname -m`/docker
srcrelease: $(SRCRELEASE)
deps: $(DOCKER_DIR)
@@ -60,6 +64,7 @@ $(SRCRELEASE):
$(BINRELEASE): $(SRCRELEASE)
rm -f $(BINRELEASE)
cd $(SRCRELEASE); make; cp -R bin docker-$(RELEASE_VERSION); tar -f ../$(BINRELEASE) -zv -c docker-$(RELEASE_VERSION)
cd $(SRCRELEASE); cp -R bin docker-latest; tar -f ../docker-latest.tgz -zv -c docker-latest
clean:
@rm -rf $(dir $(DOCKER_BIN))
@@ -69,8 +74,16 @@ else ifneq ($(DOCKER_DIR), $(realpath $(DOCKER_DIR)))
@rm -f $(DOCKER_DIR)
endif
test: all
@(cd $(DOCKER_DIR); sudo -E go test $(GO_OPTIONS))
test:
# Copy docker source and dependencies for testing
rm -rf ${BUILD_SRC}; mkdir -p ${BUILD_PATH}
tar --exclude=${BUILD_SRC} -cz . | tar -xz -C ${BUILD_PATH}
GOPATH=${CURDIR}/${BUILD_SRC} go get -d
# Do the test
sudo -E GOPATH=${CURDIR}/${BUILD_SRC} go test ${GO_OPTIONS}
testall: all
@(cd $(DOCKER_DIR); sudo -E go test ./... $(GO_OPTIONS))
fmt:
@gofmt -s -l -w .

7
NOTICE

@@ -3,4 +3,9 @@ Copyright 2012-2013 dotCloud, inc.
This product includes software developed at dotCloud, inc. (http://www.dotcloud.com).
This product contains software (https://github.com/kr/pty) developed by Keith Rarick, licensed under the MIT License.
This product contains software (https://github.com/kr/pty) developed by Keith Rarick, licensed under the MIT License.
Transfers of Docker shall be in accordance with applicable export controls of any country and all other applicable
legal requirements. Docker shall not be distributed or downloaded to or in Cuba, Iran, North Korea, Sudan or Syria
and shall not be distributed or downloaded to any person on the Denied Persons List administered by the U.S.
Department of Commerce.


@@ -12,7 +12,7 @@ Docker is an open-source implementation of the deployment engine which powers [d
It benefits directly from the experience accumulated over several years of large-scale operation and support of hundreds of thousands
of applications and databases.
![Docker L](docs/sources/static_files/lego_docker.jpg "Docker")
![Docker L](docs/sources/concepts/images/lego_docker.jpg "Docker")
## Better than VMs
@@ -23,19 +23,19 @@ happens, for a few reasons:
* *Size*: VMs are very large which makes them impractical to store and transfer.
* *Performance*: running VMs consumes significant CPU and memory, which makes them impractical in many scenarios, for example local development of multi-tier applications, and
large-scale deployment of cpu and memory-intensive applications on large numbers of machines.
large-scale deployment of cpu and memory-intensive applications on large numbers of machines.
* *Portability*: competing VM environments don't play well with each other. Although conversion tools do exist, they are limited and add even more overhead.
* *Hardware-centric*: VMs were designed with machine operators in mind, not software developers. As a result, they offer very limited tooling for what developers need most:
building, testing and running their software. For example, VMs offer no facilities for application versioning, monitoring, configuration, logging or service discovery.
building, testing and running their software. For example, VMs offer no facilities for application versioning, monitoring, configuration, logging or service discovery.
By contrast, Docker relies on a different sandboxing method known as *containerization*. Unlike traditional virtualization,
containerization takes place at the kernel level. Most modern operating system kernels now support the primitives necessary
for containerization, including Linux with [openvz](http://openvz.org), [vserver](http://linux-vserver.org) and more recently [lxc](http://lxc.sourceforge.net),
Solaris with [zones](http://docs.oracle.com/cd/E26502_01/html/E29024/preface-1.html#scrolltoc) and FreeBSD with [Jails](http://www.freebsd.org/doc/handbook/jails.html).
Solaris with [zones](http://docs.oracle.com/cd/E26502_01/html/E29024/preface-1.html#scrolltoc) and FreeBSD with [Jails](http://www.freebsd.org/doc/handbook/jails.html).
Docker builds on top of these low-level primitives to offer developers a portable format and runtime environment that solves
all 4 problems. Docker containers are small (and their transfer can be optimized with layers), they have basically zero memory and cpu overhead,
the are completely portable and are designed from the ground up with an application-centric design.
they are completely portable and are designed from the ground up with an application-centric design.
The best part: because docker operates at the OS level, it can still be run inside a VM!
@@ -46,7 +46,7 @@ Docker does not require that you buy into a particular programming language, fra
Is your application a unix process? Does it use files, tcp connections, environment variables, standard unix streams and command-line
arguments as inputs and outputs? Then docker can run it.
Can your application's build be expressed a sequence of such commands? Then docker can build it.
Can your application's build be expressed as a sequence of such commands? Then docker can build it.
## Escape dependency hell
@@ -56,35 +56,35 @@ A common problem for developers is the difficulty of managing all their applicat
This is usually difficult for several reasons:
* *Cross-platform dependencies*. Modern applications often depend on a combination of system libraries and binaries, language-specific packages, framework-specific modules,
internal components developed for another project, etc. These dependencies live in different "worlds" and require different tools - these tools typically don't work
well with each other, requiring awkward custom integrations.
internal components developed for another project, etc. These dependencies live in different "worlds" and require different tools - these tools typically don't work
well with each other, requiring awkward custom integrations.
* Conflicting dependencies. Different applications may depend on different versions of the same dependency. Packaging tools handle these situations with various degrees of ease -
but they all handle them in different and incompatible ways, which again forces the developer to do extra work.
but they all handle them in different and incompatible ways, which again forces the developer to do extra work.
* Custom dependencies. A developer may need to prepare a custom version of his application's dependency. Some packaging systems can handle custom versions of a dependency,
others can't - and all of them handle it differently.
* Custom dependencies. A developer may need to prepare a custom version of their application's dependency. Some packaging systems can handle custom versions of a dependency,
others can't - and all of them handle it differently.
Docker solves dependency hell by giving the developer a simple way to express *all* his application's dependencies in one place,
Docker solves dependency hell by giving the developer a simple way to express *all* their application's dependencies in one place,
and streamline the process of assembling them. If this makes you think of [XKCD 927](http://xkcd.com/927/), don't worry. Docker doesn't
*replace* your favorite packaging systems. It simply orchestrates their use in a simple and repeatable way. How does it do that? With layers.
Docker defines a build as running a sequence unix commands, one after the other, in the same container. Build commands modify the contents of the container
Docker defines a build as running a sequence of unix commands, one after the other, in the same container. Build commands modify the contents of the container
(usually by installing new files on the filesystem), the next command modifies it some more, etc. Since each build command inherits the result of the previous
commands, the *order* in which the commands are executed expresses *dependencies*.
Here's a typical docker build process:
```bash
from ubuntu:12.10
run apt-get update
run apt-get install python
run apt-get install python-pip
run pip install django
run apt-get install curl
run curl http://github.com/shykes/helloflask/helloflask/master.tar.gz | tar -zxv
run cd master && pip install -r requirements.txt
from ubuntu:12.10
run apt-get update
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y python
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y python-pip
run pip install django
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y curl
run curl -L https://github.com/shykes/helloflask/archive/master.tar.gz | tar -xzv
run cd helloflask-master && pip install -r requirements.txt
```
Note that Docker doesn't care *how* dependencies are built - as long as they can be built by running a unix command in a container.
@@ -97,7 +97,7 @@ Quick install on Ubuntu 12.04 and 12.10
---------------------------------------
```bash
curl get.docker.io | sh -x
curl get.docker.io | sudo sh -x
```
Binary installs
@@ -108,7 +108,7 @@ Note that some methods are community contributions and not yet officially suppor
* [Ubuntu 12.04 and 12.10 (officially supported)](http://docs.docker.io/en/latest/installation/ubuntulinux/)
* [Arch Linux](http://docs.docker.io/en/latest/installation/archlinux/)
* [MacOS X (with Vagrant)](http://docs.docker.io/en/latest/installation/macos/)
* [Mac OS X (with Vagrant)](http://docs.docker.io/en/latest/installation/vagrant/)
* [Windows (with Vagrant)](http://docs.docker.io/en/latest/installation/windows/)
* [Amazon EC2 (with Vagrant)](http://docs.docker.io/en/latest/installation/amazon/)
@@ -181,7 +181,7 @@ Running an irc bouncer
----------------------
```bash
BOUNCER_ID=$(docker run -d -p 6667 -u irc shykes/znc $USER $PASSWORD)
BOUNCER_ID=$(docker run -d -p 6667 -u irc shykes/znc zncrun $USER $PASSWORD)
echo "Configure your irc client to connect to port $(docker port $BOUNCER_ID 6667) of this machine"
```
@@ -216,7 +216,8 @@ PORT=$(docker port $JOB 4444)
# Connect to the public port via the host's public address
# Please note that because of how routing works connecting to localhost or 127.0.0.1 $PORT will not work.
IP=$(ifconfig eth0 | perl -n -e 'if (m/inet addr:([\d\.]+)/g) { print $1 }')
# Replace *eth0* according to your local interface name.
IP=$(ip -o -4 addr list eth0 | perl -n -e 'if (m{inet\s([\d\.]+)\/\d+\s}xms) { print $1 }')
echo hello world | nc $IP $PORT
# Verify that the network connection worked
@@ -251,7 +252,7 @@ Note
----
We also keep the documentation in this repository. The website documentation is generated using sphinx using these sources.
Please find it under docs/sources/ and read more about it https://github.com/dotcloud/docker/master/docs/README.md
Please find it under docs/sources/ and read more about it https://github.com/dotcloud/docker/tree/master/docs/README.md
Please feel free to fix / update the documentation and send us pull requests. More tutorials are also welcome.
@@ -262,14 +263,14 @@ Setting up a dev environment
Instructions that have been verified to work on Ubuntu 12.10,
```bash
sudo apt-get -y install lxc wget bsdtar curl golang git
sudo apt-get -y install lxc curl xz-utils golang git
export GOPATH=~/go/
export PATH=$GOPATH/bin:$PATH
mkdir -p $GOPATH/src/github.com/dotcloud
cd $GOPATH/src/github.com/dotcloud
git clone git@github.com:dotcloud/docker.git
git clone https://github.com/dotcloud/docker.git
cd docker
go get -v github.com/dotcloud/docker/...
@@ -293,7 +294,7 @@ a format that is self-describing and portable, so that any compliant runtime can
The spec for Standard Containers is currently a work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment.
A great analogy for this is the shipping container. Just like Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.
A great analogy for this is the shipping container. Just like how Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.
### 1. STANDARD OPERATIONS
@@ -321,7 +322,7 @@ Similarly, before Standard Containers, by the time a software component ran in p
### 5. INDUSTRIAL-GRADE DELIVERY
There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded on the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.
There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded onto the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.
With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality.
@@ -371,4 +372,10 @@ Standard Container Specification
#### Security
### Legal
Transfers of Docker shall be in accordance with applicable export controls of any country and all other applicable
legal requirements. Docker shall not be distributed or downloaded to or in Cuba, Iran, North Korea, Sudan or Syria
and shall not be distributed or downloaded to any person on the Denied Persons List administered by the U.S.
Department of Commerce.


@@ -1,71 +0,0 @@
## Spec for data volumes
Spec owner: Solomon Hykes <solomon@dotcloud.com>
Data volumes (issue #111) are a much-requested feature which trigger much discussion and debate. Below is the current authoritative spec for implementing data volumes.
This spec will be deprecated once the feature is fully implemented.
Discussion, requests, trolls, demands, offerings, threats and other forms of supplications concerning this spec should be addressed to Solomon here: https://github.com/dotcloud/docker/issues/111
### 1. Creating data volumes
At container creation, parts of a container's filesystem can be mounted as separate data volumes. Volumes are defined with the -v flag.
For example:
```bash
$ docker run -v /var/lib/postgres -v /var/log postgres /usr/bin/postgres
```
In this example, a new container is created from the 'postgres' image. At the same time, docker creates 2 new data volumes: one will be mapped to the container at /var/lib/postgres, the other at /var/log.
2 important notes:
1) Volumes don't have top-level names. At no point does the user provide a name, or is a name given to him. Volumes are identified by the path at which they are mounted inside their container.
2) The user doesn't choose the source of the volume. Docker only mounts volumes it created itself, in the same way that it only runs containers that it created itself. That is by design.
### 2. Sharing data volumes
Instead of creating its own volumes, a container can share another container's volumes. For example:
```bash
$ docker run --volumes-from $OTHER_CONTAINER_ID postgres /usr/local/bin/postgres-backup
```
In this example, a new container is created from the 'postgres' example. At the same time, docker will *re-use* the 2 data volumes created in the previous example. One volume will be mounted on the /var/lib/postgres of *both* containers, and the other will be mounted on the /var/log of both containers.
### 3. Under the hood
Docker stores volumes in /var/lib/docker/volumes. Each volume receives a globally unique ID at creation, and is stored at /var/lib/docker/volumes/ID.
At creation, volumes are attached to a single container - the source of truth for this mapping will be the container's configuration.
Mounting a volume consists of calling "mount --bind" from the volume's directory to the appropriate sub-directory of the container mountpoint. This may be done by Docker itself, or farmed out to lxc (which supports mount-binding) if possible.
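A minimal sketch of the bind-mount step described above; the volume ID and the container rootfs location are illustrative placeholders, not paths defined by this spec:

```bash
# Hypothetical volume directory, named after its globally unique ID
VOLUME=/var/lib/docker/volumes/4a2b6f0c9d1e

# Hypothetical root filesystem of the target container
CONTAINER_ROOT=/path/to/container/rootfs

# Bind-mount the volume onto the container's /var/lib/postgres
mount --bind "$VOLUME" "$CONTAINER_ROOT/var/lib/postgres"
```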
### 4. Backups, transfers and other volume operations
Volumes sometimes need to be backed up, transfered between hosts, synchronized, etc. These operations typically are application-specific or site-specific, eg. rsync vs. S3 upload vs. replication vs...
Rather than attempting to implement all these scenarios directly, Docker will allow for custom implementations using an extension mechanism.
### 5. Custom volume handlers
Docker allows for arbitrary code to be executed against a container's volumes, to implement any custom action: backup, transfer, synchronization across hosts, etc.
Here's an example:
```bash
$ DB=$(docker run -d -v /var/lib/postgres -v /var/log postgres /usr/bin/postgres)
$ BACKUP_JOB=$(docker run -d --volumes-from $DB shykes/backuper /usr/local/bin/backup-postgres --s3creds=$S3CREDS)
$ docker wait $BACKUP_JOB
```
Congratulations, you just implemented a custom volume handler, using Docker's built-in ability to 1) execute arbitrary code and 2) share volumes between containers.

73
Vagrantfile vendored

@@ -3,41 +3,63 @@
BOX_NAME = ENV['BOX_NAME'] || "ubuntu"
BOX_URI = ENV['BOX_URI'] || "http://files.vagrantup.com/precise64.box"
PPA_KEY = "E61D797F63561DC6"
VF_BOX_URI = ENV['BOX_URI'] || "http://files.vagrantup.com/precise64_vmware_fusion.box"
AWS_REGION = ENV['AWS_REGION'] || "us-east-1"
AWS_AMI = ENV['AWS_AMI'] || "ami-d0f89fb9"
FORWARD_DOCKER_PORTS = ENV['FORWARD_DOCKER_PORTS']
Vagrant::Config.run do |config|
# Setup virtual machine box. This VM configuration code is always executed.
config.vm.box = BOX_NAME
config.vm.box_url = BOX_URI
# Add docker PPA key to the local repository and install docker
pkg_cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys #{PPA_KEY}; "
pkg_cmd << "echo 'deb http://ppa.launchpad.net/dotcloud/lxc-docker/ubuntu precise main' >/etc/apt/sources.list.d/lxc-docker.list; "
pkg_cmd << "apt-get update -qq; apt-get install -q -y lxc-docker"
if ARGV.include?("--provider=aws".downcase)
# Add AUFS dependency to amazon's VM
pkg_cmd << "; apt-get install linux-image-extra-3.2.0-40-virtual"
config.vm.forward_port 4243, 4243
# Provision docker and new kernel if deployment was not done
if Dir.glob("#{File.dirname(__FILE__)}/.vagrant/machines/default/*/id").empty?
# Add lxc-docker package
pkg_cmd = "apt-get update -qq; apt-get install -q -y python-software-properties; " \
"add-apt-repository -y ppa:dotcloud/lxc-docker; apt-get update -qq; " \
"apt-get install -q -y lxc-docker; "
# Add X.org Ubuntu backported 3.8 kernel
pkg_cmd << "add-apt-repository -y ppa:ubuntu-x-swat/r-lts-backport; " \
"apt-get update -qq; apt-get install -q -y linux-image-3.8.0-19-generic; "
# Add guest additions if local vbox VM
is_vbox = true
ARGV.each do |arg| is_vbox &&= !arg.downcase.start_with?("--provider") end
if is_vbox
pkg_cmd << "apt-get install -q -y linux-headers-3.8.0-19-generic dkms; " \
"echo 'Downloading VBox Guest Additions...'; " \
"wget -q http://dlc.sun.com.edgesuite.net/virtualbox/4.2.12/VBoxGuestAdditions_4.2.12.iso; "
# Prepare the VM to add guest additions after reboot
pkg_cmd << "echo -e 'mount -o loop,ro /home/vagrant/VBoxGuestAdditions_4.2.12.iso /mnt\n" \
"echo yes | /mnt/VBoxLinuxAdditions.run\numount /mnt\n" \
"rm /root/guest_additions.sh; ' > /root/guest_additions.sh; " \
"chmod 700 /root/guest_additions.sh; " \
"sed -i -E 's#^exit 0#[ -x /root/guest_additions.sh ] \\&\\& /root/guest_additions.sh#' /etc/rc.local; " \
"echo 'Installation of VBox Guest Additions is proceeding in the background.'; " \
"echo '\"vagrant reload\" can be used in about 2 minutes to activate the new guest additions.'; "
end
# Activate new kernel
pkg_cmd << "shutdown -r +1; "
config.vm.provision :shell, :inline => pkg_cmd
end
config.vm.provision :shell, :inline => pkg_cmd
end
# Providers were added on Vagrant >= 1.1.0
Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
config.vm.provider :aws do |aws, override|
config.vm.box = "dummy"
config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
aws.access_key_id = ENV["AWS_ACCESS_KEY_ID"]
aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
aws.keypair_name = ENV["AWS_KEYPAIR_NAME"]
override.ssh.private_key_path = ENV["AWS_SSH_PRIVKEY"]
override.ssh.username = "ubuntu"
aws.region = "us-east-1"
aws.ami = "ami-d0f89fb9"
aws.region = AWS_REGION
aws.ami = AWS_AMI
aws.instance_type = "t1.micro"
end
config.vm.provider :rackspace do |rs|
config.vm.box = "dummy"
config.vm.box_url = "https://github.com/mitchellh/vagrant-rackspace/raw/master/dummy.box"
config.ssh.private_key_path = ENV["RS_PRIVATE_KEY"]
rs.username = ENV["RS_USERNAME"]
rs.api_key = ENV["RS_API_KEY"]
@@ -46,8 +68,29 @@ Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
rs.image = /Ubuntu/
end
config.vm.provider :vmware_fusion do |f, override|
override.vm.box = BOX_NAME
override.vm.box_url = VF_BOX_URI
override.vm.synced_folder ".", "/vagrant", disabled: true
f.vmx["displayName"] = "docker"
end
config.vm.provider :virtualbox do |vb|
config.vm.box = BOX_NAME
config.vm.box_url = BOX_URI
end
end
if !FORWARD_DOCKER_PORTS.nil?
Vagrant::VERSION < "1.1.0" and Vagrant::Config.run do |config|
(49000..49900).each do |port|
config.vm.forward_port port, port
end
end
Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
(49000..49900).each do |port|
config.vm.network :forwarded_port, :host => port, :guest => port
end
end
end

api.go Normal file

@@ -0,0 +1,963 @@
package docker
import (
"encoding/json"
"fmt"
"github.com/dotcloud/docker/auth"
"github.com/dotcloud/docker/utils"
"github.com/gorilla/mux"
"io"
"io/ioutil"
"log"
"net"
"net/http"
"os"
"os/exec"
"strconv"
"strings"
)
const APIVERSION = 1.3
const DEFAULTHTTPHOST string = "127.0.0.1"
const DEFAULTHTTPPORT int = 4243
func hijackServer(w http.ResponseWriter) (io.ReadCloser, io.Writer, error) {
conn, _, err := w.(http.Hijacker).Hijack()
if err != nil {
return nil, nil, err
}
// Flush the options to make sure the client sets the raw mode
conn.Write([]byte{})
return conn, conn, nil
}
// If we don't do this, a POST request without a Content-Type header (even with an empty body) will fail
func parseForm(r *http.Request) error {
if err := r.ParseForm(); err != nil && !strings.HasPrefix(err.Error(), "mime:") {
return err
}
return nil
}
func parseMultipartForm(r *http.Request) error {
if err := r.ParseMultipartForm(4096); err != nil && !strings.HasPrefix(err.Error(), "mime:") {
return err
}
return nil
}
func httpError(w http.ResponseWriter, err error) {
statusCode := http.StatusInternalServerError
if strings.HasPrefix(err.Error(), "No such") {
statusCode = http.StatusNotFound
} else if strings.HasPrefix(err.Error(), "Bad parameter") {
statusCode = http.StatusBadRequest
} else if strings.HasPrefix(err.Error(), "Conflict") {
statusCode = http.StatusConflict
} else if strings.HasPrefix(err.Error(), "Impossible") {
statusCode = http.StatusNotAcceptable
} else if strings.HasPrefix(err.Error(), "Wrong login/password") {
statusCode = http.StatusUnauthorized
} else if strings.Contains(err.Error(), "hasn't been activated") {
statusCode = http.StatusForbidden
}
utils.Debugf("[error %d] %s", statusCode, err)
http.Error(w, err.Error(), statusCode)
}
func writeJSON(w http.ResponseWriter, b []byte) {
w.Header().Set("Content-Type", "application/json")
w.Write(b)
}
func getBoolParam(value string) (bool, error) {
if value == "" {
return false, nil
}
ret, err := strconv.ParseBool(value)
if err != nil {
return false, fmt.Errorf("Bad parameter")
}
return ret, nil
}
func getAuth(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if version > 1.1 {
w.WriteHeader(http.StatusNotFound)
return nil
}
authConfig, err := auth.LoadConfig(srv.runtime.root)
if err != nil {
if err != auth.ErrConfigFileMissing {
return err
}
authConfig = &auth.AuthConfig{}
}
b, err := json.Marshal(&auth.AuthConfig{Username: authConfig.Username, Email: authConfig.Email})
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postAuth(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
authConfig := &auth.AuthConfig{}
err := json.NewDecoder(r.Body).Decode(authConfig)
if err != nil {
return err
}
status := ""
if version > 1.1 {
status, err = auth.Login(authConfig, false)
if err != nil {
return err
}
} else {
localAuthConfig, err := auth.LoadConfig(srv.runtime.root)
if err != nil {
if err != auth.ErrConfigFileMissing {
return err
}
}
if authConfig.Username == localAuthConfig.Username {
authConfig.Password = localAuthConfig.Password
}
newAuthConfig := auth.NewAuthConfig(authConfig.Username, authConfig.Password, authConfig.Email, srv.runtime.root)
status, err = auth.Login(newAuthConfig, true)
if err != nil {
return err
}
}
if status != "" {
b, err := json.Marshal(&APIAuth{Status: status})
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func getVersion(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
m := srv.DockerVersion()
b, err := json.Marshal(m)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postContainersKill(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerKill(name); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func getContainersExport(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerExport(name, w); err != nil {
utils.Debugf("%s", err)
return err
}
return nil
}
func getImagesJSON(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
all, err := getBoolParam(r.Form.Get("all"))
if err != nil {
return err
}
filter := r.Form.Get("filter")
outs, err := srv.Images(all, filter)
if err != nil {
return err
}
b, err := json.Marshal(outs)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func getImagesViz(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := srv.ImagesViz(w); err != nil {
return err
}
return nil
}
func getInfo(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
out := srv.DockerInfo()
b, err := json.Marshal(out)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func getImagesHistory(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
outs, err := srv.ImageHistory(name)
if err != nil {
return err
}
b, err := json.Marshal(outs)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func getContainersChanges(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
changesStr, err := srv.ContainerChanges(name)
if err != nil {
return err
}
b, err := json.Marshal(changesStr)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func getContainersTop(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
procsStr, err := srv.ContainerTop(name)
if err != nil {
return err
}
b, err := json.Marshal(procsStr)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func getContainersJSON(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
all, err := getBoolParam(r.Form.Get("all"))
if err != nil {
return err
}
size, err := getBoolParam(r.Form.Get("size"))
if err != nil {
return err
}
since := r.Form.Get("since")
before := r.Form.Get("before")
n, err := strconv.Atoi(r.Form.Get("limit"))
if err != nil {
n = -1
}
outs := srv.Containers(all, size, n, since, before)
b, err := json.Marshal(outs)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postImagesTag(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
repo := r.Form.Get("repo")
tag := r.Form.Get("tag")
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
force, err := getBoolParam(r.Form.Get("force"))
if err != nil {
return err
}
if err := srv.ContainerTag(name, repo, tag, force); err != nil {
return err
}
w.WriteHeader(http.StatusCreated)
return nil
}
func postCommit(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
config := &Config{}
if err := json.NewDecoder(r.Body).Decode(config); err != nil {
utils.Debugf("%s", err)
}
repo := r.Form.Get("repo")
tag := r.Form.Get("tag")
container := r.Form.Get("container")
author := r.Form.Get("author")
comment := r.Form.Get("comment")
id, err := srv.ContainerCommit(container, repo, tag, author, comment, config)
if err != nil {
return err
}
b, err := json.Marshal(&APIID{id})
if err != nil {
return err
}
w.WriteHeader(http.StatusCreated)
writeJSON(w, b)
return nil
}
// Creates an image from Pull or from Import
func postImagesCreate(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
src := r.Form.Get("fromSrc")
image := r.Form.Get("fromImage")
tag := r.Form.Get("tag")
repo := r.Form.Get("repo")
if version > 1.0 {
w.Header().Set("Content-Type", "application/json")
}
sf := utils.NewStreamFormatter(version > 1.0)
if image != "" { //pull
if err := srv.ImagePull(image, tag, w, sf, &auth.AuthConfig{}); err != nil {
if sf.Used() {
w.Write(sf.FormatError(err))
return nil
}
return err
}
} else { //import
if err := srv.ImageImport(src, repo, tag, r.Body, w, sf); err != nil {
if sf.Used() {
w.Write(sf.FormatError(err))
return nil
}
return err
}
}
return nil
}
func getImagesSearch(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
term := r.Form.Get("term")
outs, err := srv.ImagesSearch(term)
if err != nil {
return err
}
b, err := json.Marshal(outs)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postImagesInsert(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
url := r.Form.Get("url")
path := r.Form.Get("path")
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if version > 1.0 {
w.Header().Set("Content-Type", "application/json")
}
sf := utils.NewStreamFormatter(version > 1.0)
imgID, err := srv.ImageInsert(name, url, path, w, sf)
if err != nil {
if sf.Used() {
w.Write(sf.FormatError(err))
return nil
}
}
b, err := json.Marshal(&APIID{ID: imgID})
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postImagesPush(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
authConfig := &auth.AuthConfig{}
if version > 1.1 {
if err := json.NewDecoder(r.Body).Decode(authConfig); err != nil {
return err
}
} else {
localAuthConfig, err := auth.LoadConfig(srv.runtime.root)
if err != nil && err != auth.ErrConfigFileMissing {
return err
}
authConfig = localAuthConfig
}
if err := parseForm(r); err != nil {
return err
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if version > 1.0 {
w.Header().Set("Content-Type", "application/json")
}
sf := utils.NewStreamFormatter(version > 1.0)
if err := srv.ImagePush(name, w, sf, authConfig); err != nil {
if sf.Used() {
w.Write(sf.FormatError(err))
return nil
}
return err
}
return nil
}
func postContainersCreate(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
config := &Config{}
out := &APIRun{}
if err := json.NewDecoder(r.Body).Decode(config); err != nil {
return err
}
if len(config.Dns) == 0 && len(srv.runtime.Dns) == 0 && utils.CheckLocalDns() {
out.Warnings = append(out.Warnings, fmt.Sprintf("Docker detected local DNS server on resolv.conf. Using default external servers: %v", defaultDns))
config.Dns = defaultDns
}
id, err := srv.ContainerCreate(config)
if err != nil {
return err
}
out.ID = id
if config.Memory > 0 && !srv.runtime.capabilities.MemoryLimit {
log.Println("WARNING: Your kernel does not support memory limit capabilities. Limitation discarded.")
out.Warnings = append(out.Warnings, "Your kernel does not support memory limit capabilities. Limitation discarded.")
}
if config.Memory > 0 && !srv.runtime.capabilities.SwapLimit {
log.Println("WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.")
out.Warnings = append(out.Warnings, "Your kernel does not support memory swap capabilities. Limitation discarded.")
}
b, err := json.Marshal(out)
if err != nil {
return err
}
w.WriteHeader(http.StatusCreated)
writeJSON(w, b)
return nil
}
func postContainersRestart(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
t, err := strconv.Atoi(r.Form.Get("t"))
if err != nil || t < 0 {
t = 10
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerRestart(name, t); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func deleteContainers(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
removeVolume, err := getBoolParam(r.Form.Get("v"))
if err != nil {
return err
}
if err := srv.ContainerDestroy(name, removeVolume); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func deleteImages(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
imgs, err := srv.ImageDelete(name, version > 1.1)
if err != nil {
return err
}
if imgs != nil {
if len(imgs) != 0 {
b, err := json.Marshal(imgs)
if err != nil {
return err
}
writeJSON(w, b)
} else {
return fmt.Errorf("Conflict, %s wasn't deleted", name)
}
} else {
w.WriteHeader(http.StatusNoContent)
}
return nil
}
func postContainersStart(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
hostConfig := &HostConfig{}
// allow a nil body for backwards compatibility
if r.Body != nil {
if r.Header.Get("Content-Type") == "application/json" {
if err := json.NewDecoder(r.Body).Decode(hostConfig); err != nil {
return err
}
}
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerStart(name, hostConfig); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func postContainersStop(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
t, err := strconv.Atoi(r.Form.Get("t"))
if err != nil || t < 0 {
t = 10
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerStop(name, t); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func postContainersWait(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
status, err := srv.ContainerWait(name)
if err != nil {
return err
}
b, err := json.Marshal(&APIWait{StatusCode: status})
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postContainersResize(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
height, err := strconv.Atoi(r.Form.Get("h"))
if err != nil {
return err
}
width, err := strconv.Atoi(r.Form.Get("w"))
if err != nil {
return err
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerResize(name, height, width); err != nil {
return err
}
return nil
}
func postContainersAttach(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
logs, err := getBoolParam(r.Form.Get("logs"))
if err != nil {
return err
}
stream, err := getBoolParam(r.Form.Get("stream"))
if err != nil {
return err
}
stdin, err := getBoolParam(r.Form.Get("stdin"))
if err != nil {
return err
}
stdout, err := getBoolParam(r.Form.Get("stdout"))
if err != nil {
return err
}
stderr, err := getBoolParam(r.Form.Get("stderr"))
if err != nil {
return err
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if _, err := srv.ContainerInspect(name); err != nil {
return err
}
in, out, err := hijackServer(w)
if err != nil {
return err
}
defer func() {
if tcpc, ok := in.(*net.TCPConn); ok {
tcpc.CloseWrite()
} else {
in.Close()
}
}()
defer func() {
if tcpc, ok := out.(*net.TCPConn); ok {
tcpc.CloseWrite()
} else if closer, ok := out.(io.Closer); ok {
closer.Close()
}
}()
fmt.Fprintf(out, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n")
if err := srv.ContainerAttach(name, logs, stream, stdin, stdout, stderr, in, out); err != nil {
fmt.Fprintf(out, "Error: %s\n", err)
}
return nil
}
func getContainersByName(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
container, err := srv.ContainerInspect(name)
if err != nil {
return err
}
b, err := json.Marshal(container)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func getImagesByName(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
image, err := srv.ImageInspect(name)
if err != nil {
return err
}
b, err := json.Marshal(image)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postImagesGetCache(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
apiConfig := &APIImageConfig{}
if err := json.NewDecoder(r.Body).Decode(apiConfig); err != nil {
return err
}
image, err := srv.ImageGetCached(apiConfig.ID, apiConfig.Config)
if err != nil {
return err
}
if image == nil {
w.WriteHeader(http.StatusNotFound)
return nil
}
apiID := &APIID{ID: image.ID}
b, err := json.Marshal(apiID)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postBuild(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if version < 1.3 {
return fmt.Errorf("Multipart upload for build is no longer supported. Please upgrade your docker client.")
}
remoteURL := r.FormValue("remote")
repoName := r.FormValue("t")
rawSuppressOutput := r.FormValue("q")
tag := ""
if strings.Contains(repoName, ":") {
remoteParts := strings.Split(repoName, ":")
tag = remoteParts[1]
repoName = remoteParts[0]
}
var context io.Reader
if remoteURL == "" {
context = r.Body
} else if utils.IsGIT(remoteURL) {
if !strings.HasPrefix(remoteURL, "git://") {
remoteURL = "https://" + remoteURL
}
root, err := ioutil.TempDir("", "docker-build-git")
if err != nil {
return err
}
defer os.RemoveAll(root)
if output, err := exec.Command("git", "clone", remoteURL, root).CombinedOutput(); err != nil {
return fmt.Errorf("Error trying to use git: %s (%s)", err, output)
}
c, err := Tar(root, Bzip2)
if err != nil {
return err
}
context = c
} else if utils.IsURL(remoteURL) {
f, err := utils.Download(remoteURL, ioutil.Discard)
if err != nil {
return err
}
defer f.Body.Close()
dockerFile, err := ioutil.ReadAll(f.Body)
if err != nil {
return err
}
c, err := mkBuildContext(string(dockerFile), nil)
if err != nil {
return err
}
context = c
}
suppressOutput, err := getBoolParam(rawSuppressOutput)
if err != nil {
return err
}
b := NewBuildFile(srv, utils.NewWriteFlusher(w), !suppressOutput)
id, err := b.Build(context)
if err != nil {
fmt.Fprintf(w, "Error build: %s\n", err)
return err
}
if repoName != "" {
srv.runtime.repositories.Set(repoName, tag, id, false)
}
return nil
}
func optionsHandler(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
w.WriteHeader(http.StatusOK)
return nil
}
func writeCorsHeaders(w http.ResponseWriter, r *http.Request) {
w.Header().Add("Access-Control-Allow-Origin", "*")
w.Header().Add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept")
w.Header().Add("Access-Control-Allow-Methods", "GET, POST, DELETE, PUT, OPTIONS")
}
func createRouter(srv *Server, logging bool) (*mux.Router, error) {
r := mux.NewRouter()
m := map[string]map[string]func(*Server, float64, http.ResponseWriter, *http.Request, map[string]string) error{
"GET": {
"/auth": getAuth,
"/version": getVersion,
"/info": getInfo,
"/images/json": getImagesJSON,
"/images/viz": getImagesViz,
"/images/search": getImagesSearch,
"/images/{name:.*}/history": getImagesHistory,
"/images/{name:.*}/json": getImagesByName,
"/containers/ps": getContainersJSON,
"/containers/json": getContainersJSON,
"/containers/{name:.*}/export": getContainersExport,
"/containers/{name:.*}/changes": getContainersChanges,
"/containers/{name:.*}/json": getContainersByName,
"/containers/{name:.*}/top": getContainersTop,
},
"POST": {
"/auth": postAuth,
"/commit": postCommit,
"/build": postBuild,
"/images/create": postImagesCreate,
"/images/{name:.*}/insert": postImagesInsert,
"/images/{name:.*}/push": postImagesPush,
"/images/{name:.*}/tag": postImagesTag,
"/images/getCache": postImagesGetCache,
"/containers/create": postContainersCreate,
"/containers/{name:.*}/kill": postContainersKill,
"/containers/{name:.*}/restart": postContainersRestart,
"/containers/{name:.*}/start": postContainersStart,
"/containers/{name:.*}/stop": postContainersStop,
"/containers/{name:.*}/wait": postContainersWait,
"/containers/{name:.*}/resize": postContainersResize,
"/containers/{name:.*}/attach": postContainersAttach,
},
"DELETE": {
"/containers/{name:.*}": deleteContainers,
"/images/{name:.*}": deleteImages,
},
"OPTIONS": {
"": optionsHandler,
},
}
for method, routes := range m {
for route, fct := range routes {
utils.Debugf("Registering %s, %s", method, route)
// NOTE: scope issue, make sure the variables are local and won't be changed
localRoute := route
localMethod := method
localFct := fct
f := func(w http.ResponseWriter, r *http.Request) {
utils.Debugf("Calling %s %s from %s", localMethod, localRoute, r.RemoteAddr)
if logging {
log.Println(r.Method, r.RequestURI)
}
if strings.Contains(r.Header.Get("User-Agent"), "Docker-Client/") {
userAgent := strings.Split(r.Header.Get("User-Agent"), "/")
if len(userAgent) == 2 && userAgent[1] != VERSION {
utils.Debugf("Warning: client and server don't have the same version (client: %s, server: %s)", userAgent[1], VERSION)
}
}
version, err := strconv.ParseFloat(mux.Vars(r)["version"], 64)
if err != nil {
version = APIVERSION
}
if srv.enableCors {
writeCorsHeaders(w, r)
}
if version == 0 || version > APIVERSION {
w.WriteHeader(http.StatusNotFound)
return
}
if err := localFct(srv, version, w, r, mux.Vars(r)); err != nil {
httpError(w, err)
}
}
if localRoute == "" {
r.Methods(localMethod).HandlerFunc(f)
} else {
r.Path("/v{version:[0-9.]+}" + localRoute).Methods(localMethod).HandlerFunc(f)
r.Path(localRoute).Methods(localMethod).HandlerFunc(f)
}
}
}
return r, nil
}
func ListenAndServe(proto, addr string, srv *Server, logging bool) error {
log.Printf("Listening for HTTP on %s (%s)\n", addr, proto)
r, err := createRouter(srv, logging)
if err != nil {
return err
}
l, e := net.Listen(proto, addr)
if e != nil {
return e
}
// As the daemon is launched as root, change the permission of the socket to allow non-root users to connect
if proto == "unix" {
os.Chmod(addr, 0777)
}
httpSrv := http.Server{Addr: addr, Handler: r}
return httpSrv.Serve(l)
}

api_params.go Normal file

@@ -0,0 +1,87 @@
package docker
type APIHistory struct {
ID string `json:"Id"`
Tags []string `json:",omitempty"`
Created int64
CreatedBy string `json:",omitempty"`
}
type APIImages struct {
Repository string `json:",omitempty"`
Tag string `json:",omitempty"`
ID string `json:"Id"`
Created int64
Size int64
VirtualSize int64
}
type APIInfo struct {
Debug bool
Containers int
Images int
NFd int `json:",omitempty"`
NGoroutines int `json:",omitempty"`
MemoryLimit bool `json:",omitempty"`
SwapLimit bool `json:",omitempty"`
}
type APITop struct {
PID string
Tty string
Time string
Cmd string
}
type APIRmi struct {
Deleted string `json:",omitempty"`
Untagged string `json:",omitempty"`
}
type APIContainers struct {
ID string `json:"Id"`
Image string
Command string
Created int64
Status string
Ports string
SizeRw int64
SizeRootFs int64
}
type APISearch struct {
Name string
Description string
}
type APIID struct {
ID string `json:"Id"`
}
type APIRun struct {
ID string `json:"Id"`
Warnings []string `json:",omitempty"`
}
type APIPort struct {
Port string
}
type APIVersion struct {
Version string
GitCommit string `json:",omitempty"`
GoVersion string `json:",omitempty"`
}
type APIWait struct {
StatusCode int
}
type APIAuth struct {
Status string
}
type APIImageConfig struct {
ID string `json:"Id"`
*Config
}

api_test.go Normal file

File diff suppressed because it is too large

@@ -1,11 +1,16 @@
package docker
import (
"errors"
"archive/tar"
"bytes"
"fmt"
"github.com/dotcloud/docker/utils"
"io"
"io/ioutil"
"os"
"os/exec"
"path"
"path/filepath"
)
type Archive io.Reader
@@ -19,6 +24,33 @@ const (
Xz
)
func DetectCompression(source []byte) Compression {
sourceLen := len(source)
for compression, m := range map[Compression][]byte{
Bzip2: {0x42, 0x5A, 0x68},
Gzip: {0x1F, 0x8B, 0x08},
Xz: {0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00},
} {
fail := false
if len(m) > sourceLen {
utils.Debugf("Len too short")
continue
}
i := 0
for _, b := range m {
if b != source[i] {
fail = true
break
}
i++
}
if !fail {
return compression
}
}
return Uncompressed
}
func (compression *Compression) Flag() string {
switch *compression {
case Bzip2:
@@ -31,21 +63,167 @@ func (compression *Compression) Flag() string {
return ""
}
func Tar(path string, compression Compression) (io.Reader, error) {
cmd := exec.Command("bsdtar", "-f", "-", "-C", path, "-c"+compression.Flag(), ".")
return CmdStream(cmd)
func (compression *Compression) Extension() string {
switch *compression {
case Uncompressed:
return "tar"
case Bzip2:
return "tar.bz2"
case Gzip:
return "tar.gz"
case Xz:
return "tar.xz"
}
return ""
}
// Tar creates an archive from the directory at `path`, and returns it as a
// stream of bytes.
func Tar(path string, compression Compression) (io.Reader, error) {
return TarFilter(path, compression, nil)
}
// TarFilter creates an archive from the directory at `path`, only including files whose relative
// paths are included in `filter`. If `filter` is nil, then all files are included.
func TarFilter(path string, compression Compression, filter []string) (io.Reader, error) {
args := []string{"tar", "--numeric-owner", "-f", "-", "-C", path}
if filter == nil {
filter = []string{"."}
}
for _, f := range filter {
args = append(args, "-c"+compression.Flag(), f)
}
return CmdStream(exec.Command(args[0], args[1:]...))
}
// Untar reads a stream of bytes from `archive`, parses it as a tar archive,
// and unpacks it into the directory at `path`.
// The archive may be compressed with one of the following algorithms:
// identity (uncompressed), gzip, bzip2, xz.
// FIXME: specify behavior when target path exists vs. doesn't exist.
func Untar(archive io.Reader, path string) error {
cmd := exec.Command("bsdtar", "-f", "-", "-C", path, "-x")
cmd.Stdin = archive
if archive == nil {
return fmt.Errorf("Empty archive")
}
buf := make([]byte, 10)
totalN := 0
for totalN < 10 {
if n, err := archive.Read(buf[totalN:]); err != nil {
if err == io.EOF {
return fmt.Errorf("Tarball too short")
}
return err
} else {
totalN += n
utils.Debugf("[tar autodetect] n: %d", n)
}
}
compression := DetectCompression(buf)
utils.Debugf("Archive compression detected: %s", compression.Extension())
cmd := exec.Command("tar", "--numeric-owner", "-f", "-", "-C", path, "-x"+compression.Flag())
cmd.Stdin = io.MultiReader(bytes.NewReader(buf), archive)
// Hardcode locale environment for predictable outcome regardless of host configuration.
// (see https://github.com/dotcloud/docker/issues/355)
cmd.Env = []string{"LANG=en_US.utf-8", "LC_ALL=en_US.utf-8"}
output, err := cmd.CombinedOutput()
if err != nil {
return errors.New(err.Error() + ": " + string(output))
return fmt.Errorf("%s: %s", err, output)
}
return nil
}
// TarUntar is a convenience function which calls Tar and Untar, with
// the output of one piped into the other. If either Tar or Untar fails,
// TarUntar aborts and returns the error.
func TarUntar(src string, filter []string, dst string) error {
utils.Debugf("TarUntar(%s %s %s)", src, filter, dst)
archive, err := TarFilter(src, Uncompressed, filter)
if err != nil {
return err
}
return Untar(archive, dst)
}
// UntarPath is a convenience function which looks for an archive
// at filesystem path `src`, and unpacks it at `dst`.
func UntarPath(src, dst string) error {
if archive, err := os.Open(src); err != nil {
return err
} else if err := Untar(archive, dst); err != nil {
return err
}
return nil
}
// CopyWithTar creates a tar archive of filesystem path `src`, and
// unpacks it at filesystem path `dst`.
// The archive is streamed directly with fixed buffering and no
// intermediary disk IO.
//
func CopyWithTar(src, dst string) error {
srcSt, err := os.Stat(src)
if err != nil {
return err
}
if !srcSt.IsDir() {
return CopyFileWithTar(src, dst)
}
// Create dst, copy src's content into it
utils.Debugf("Creating dest directory: %s", dst)
if err := os.MkdirAll(dst, 0700); err != nil && !os.IsExist(err) {
return err
}
utils.Debugf("Calling TarUntar(%s, %s)", src, dst)
return TarUntar(src, nil, dst)
}
// CopyFileWithTar emulates the behavior of the 'cp' command-line
// for a single file. It copies a regular file from path `src` to
// path `dst`, and preserves all its metadata.
//
// If `dst` ends with a trailing slash '/', the final destination path
// will be `dst/base(src)`.
func CopyFileWithTar(src, dst string) error {
utils.Debugf("CopyFileWithTar(%s, %s)", src, dst)
srcSt, err := os.Stat(src)
if err != nil {
return err
}
if srcSt.IsDir() {
return fmt.Errorf("Can't copy a directory")
}
// Clean up the trailing /
if dst[len(dst)-1] == '/' {
dst = path.Join(dst, filepath.Base(src))
}
// Create the holding directory if necessary
if err := os.MkdirAll(filepath.Dir(dst), 0700); err != nil && !os.IsExist(err) {
return err
}
buf := new(bytes.Buffer)
tw := tar.NewWriter(buf)
hdr, err := tar.FileInfoHeader(srcSt, "")
if err != nil {
return err
}
hdr.Name = filepath.Base(dst)
if err := tw.WriteHeader(hdr); err != nil {
return err
}
srcF, err := os.Open(src)
if err != nil {
return err
}
if _, err := io.Copy(tw, srcF); err != nil {
return err
}
tw.Close()
return Untar(buf, filepath.Dir(dst))
}
// CmdStream executes a command, and returns its stdout as a stream.
// If the command fails to run or doesn't complete successfully, an error
// will be returned, including anything written on stderr.
@@ -76,7 +254,7 @@ func CmdStream(cmd *exec.Cmd) (io.Reader, error) {
}
errText := <-errChan
if err := cmd.Wait(); err != nil {
pipeW.CloseWithError(errors.New(err.Error() + ": " + string(errText)))
pipeW.CloseWithError(fmt.Errorf("%s: %s", err, errText))
} else {
pipeW.Close()
}


@@ -1,10 +1,13 @@
package docker
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"path"
"testing"
"time"
)
@@ -13,7 +16,7 @@ func TestCmdStreamLargeStderr(t *testing.T) {
cmd := exec.Command("/bin/sh", "-c", "dd if=/dev/zero bs=1k count=1000 of=/dev/stderr; echo hello")
out, err := CmdStream(cmd)
if err != nil {
t.Fatalf("Failed to start command: " + err.Error())
t.Fatalf("Failed to start command: %s", err)
}
errCh := make(chan error)
go func() {
@@ -23,7 +26,7 @@ func TestCmdStreamLargeStderr(t *testing.T) {
select {
case err := <-errCh:
if err != nil {
t.Fatalf("Command should not have failed (err=%s...)", err.Error()[:100])
t.Fatalf("Command should not have failed (err=%.100s...)", err)
}
case <-time.After(5 * time.Second):
t.Fatalf("Command did not complete in 5 seconds; probable deadlock")
@@ -34,12 +37,12 @@ func TestCmdStreamBad(t *testing.T) {
badCmd := exec.Command("/bin/sh", "-c", "echo hello; echo >&2 error couldn\\'t reverse the phase pulser; exit 1")
out, err := CmdStream(badCmd)
if err != nil {
t.Fatalf("Failed to start command: " + err.Error())
t.Fatalf("Failed to start command: %s", err)
}
if output, err := ioutil.ReadAll(out); err == nil {
t.Fatalf("Command should have failed")
} else if err.Error() != "exit status 1: error couldn't reverse the phase pulser\n" {
t.Fatalf("Wrong error value (%s)", err.Error())
t.Fatalf("Wrong error value (%s)", err)
} else if s := string(output); s != "hello\n" {
t.Fatalf("Command output should be '%s', not '%s'", "hello\\n", output)
}
@@ -58,20 +61,58 @@ func TestCmdStreamGood(t *testing.T) {
}
}
func TestTarUntar(t *testing.T) {
archive, err := Tar(".", Uncompressed)
func tarUntar(t *testing.T, origin string, compression Compression) error {
archive, err := Tar(origin, compression)
if err != nil {
t.Fatal(err)
}
buf := make([]byte, 10)
if _, err := archive.Read(buf); err != nil {
return err
}
archive = io.MultiReader(bytes.NewReader(buf), archive)
detectedCompression := DetectCompression(buf)
if detectedCompression.Extension() != compression.Extension() {
return fmt.Errorf("Wrong compression detected. Actual compression: %s, found %s", compression.Extension(), detectedCompression.Extension())
}
tmp, err := ioutil.TempDir("", "docker-test-untar")
if err != nil {
t.Fatal(err)
return err
}
defer os.RemoveAll(tmp)
if err := Untar(archive, tmp); err != nil {
t.Fatal(err)
return err
}
if _, err := os.Stat(tmp); err != nil {
t.Fatalf("Error stating %s: %s", tmp, err.Error())
return err
}
return nil
}
func TestTarUntar(t *testing.T) {
origin, err := ioutil.TempDir("", "docker-test-untar-origin")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(origin)
if err := ioutil.WriteFile(path.Join(origin, "1"), []byte("hello world"), 0700); err != nil {
t.Fatal(err)
}
if err := ioutil.WriteFile(path.Join(origin, "2"), []byte("welcome!"), 0700); err != nil {
t.Fatal(err)
}
for _, c := range []Compression{
Uncompressed,
Gzip,
Bzip2,
Xz,
} {
if err := tarUntar(t, origin, c); err != nil {
t.Fatalf("Error tar/untar for compression %s: %s", c.Extension(), err)
}
}
}

auth/MAINTAINERS Symbolic link

@@ -0,0 +1 @@
../registry/MAINTAINERS


@@ -3,6 +3,7 @@ package auth
import (
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"net/http"
@@ -14,14 +15,20 @@ import (
// Where we store the config file
const CONFIGFILE = ".dockercfg"
// the registry server we want to login against
const INDEX_SERVER = "https://index.docker.io"
// Only used for user auth + account creation
const INDEXSERVER = "https://index.docker.io/v1/"
//const INDEXSERVER = "http://indexstaging-docker.dotcloud.com/"
var (
ErrConfigFileMissing = errors.New("The Auth config file is missing")
)
type AuthConfig struct {
Username string `json:"username"`
Password string `json:"password"`
Email string `json:"email"`
rootPath string `json:-`
rootPath string
}
func NewAuthConfig(username, password, email, rootPath string) *AuthConfig {
@@ -33,8 +40,12 @@ func NewAuthConfig(username, password, email, rootPath string) *AuthConfig {
}
}
func IndexServerAddress() string {
return INDEXSERVER
}
// create a base64 encoded auth string to store in config
func EncodeAuth(authConfig *AuthConfig) string {
func encodeAuth(authConfig *AuthConfig) string {
authStr := authConfig.Username + ":" + authConfig.Password
msg := []byte(authStr)
encoded := make([]byte, base64.StdEncoding.EncodedLen(len(msg)))
@@ -43,7 +54,7 @@ func EncodeAuth(authConfig *AuthConfig) string {
}
// decode the auth string
func DecodeAuth(authStr string) (*AuthConfig, error) {
func decodeAuth(authStr string) (*AuthConfig, error) {
decLen := base64.StdEncoding.DecodedLen(len(authStr))
decoded := make([]byte, decLen)
authByte := []byte(authStr)
@@ -60,7 +71,6 @@ func DecodeAuth(authStr string) (*AuthConfig, error) {
}
password := strings.Trim(arr[1], "\x00")
return &AuthConfig{Username: arr[0], Password: password}, nil
}
// load up the auth config information and return values
@@ -68,7 +78,7 @@ func DecodeAuth(authStr string) (*AuthConfig, error) {
func LoadConfig(rootPath string) (*AuthConfig, error) {
confFile := path.Join(rootPath, CONFIGFILE)
if _, err := os.Stat(confFile); err != nil {
return &AuthConfig{}, fmt.Errorf("The Auth config file is missing")
return &AuthConfig{rootPath: rootPath}, ErrConfigFileMissing
}
b, err := ioutil.ReadFile(confFile)
if err != nil {
@@ -80,7 +90,7 @@ func LoadConfig(rootPath string) (*AuthConfig, error) {
}
origAuth := strings.Split(arr[0], " = ")
origEmail := strings.Split(arr[1], " = ")
authConfig, err := DecodeAuth(origAuth[1])
authConfig, err := decodeAuth(origAuth[1])
if err != nil {
return nil, err
}
@@ -90,13 +100,13 @@ func LoadConfig(rootPath string) (*AuthConfig, error) {
}
// save the auth config
func saveConfig(rootPath, authStr string, email string) error {
confFile := path.Join(rootPath, CONFIGFILE)
if len(email) == 0 {
func SaveConfig(authConfig *AuthConfig) error {
confFile := path.Join(authConfig.rootPath, CONFIGFILE)
if len(authConfig.Email) == 0 {
os.Remove(confFile)
return nil
}
lines := "auth = " + authStr + "\n" + "email = " + email + "\n"
lines := "auth = " + encodeAuth(authConfig) + "\n" + "email = " + authConfig.Email + "\n"
b := []byte(lines)
err := ioutil.WriteFile(confFile, b, 0600)
if err != nil {
@@ -106,7 +116,7 @@ func saveConfig(rootPath, authStr string, email string) error {
}
// try to register/login to the registry server
func Login(authConfig *AuthConfig) (string, error) {
func Login(authConfig *AuthConfig, store bool) (string, error) {
storeConfig := false
client := &http.Client{}
reqStatusCode := 0
@@ -119,7 +129,7 @@ func Login(authConfig *AuthConfig) (string, error) {
// using `bytes.NewReader(jsonBody)` here causes the server to respond with a 411 status.
b := strings.NewReader(string(jsonBody))
req1, err := http.Post(INDEX_SERVER+"/v1/users/", "application/json; charset=utf-8", b)
req1, err := http.Post(IndexServerAddress()+"users/", "application/json; charset=utf-8", b)
if err != nil {
return "", fmt.Errorf("Server Error: %s", err)
}
@@ -132,14 +142,14 @@ func Login(authConfig *AuthConfig) (string, error) {
if reqStatusCode == 201 {
status = "Account created. Please use the confirmation link we sent" +
" to your e-mail to activate it.\n"
" to your e-mail to activate it."
storeConfig = true
} else if reqStatusCode == 403 {
return "", fmt.Errorf("Login: Your account hasn't been activated. " +
"Please check your e-mail for a confirmation link.")
} else if reqStatusCode == 400 {
if string(reqBody) == "\"Username or email already exists\"" {
req, err := http.NewRequest("GET", INDEX_SERVER+"/v1/users/", nil)
req, err := http.NewRequest("GET", IndexServerAddress()+"users/", nil)
req.SetBasicAuth(authConfig.Username, authConfig.Password)
resp, err := client.Do(req)
if err != nil {
@@ -151,10 +161,15 @@ func Login(authConfig *AuthConfig) (string, error) {
return "", err
}
if resp.StatusCode == 200 {
status = "Login Succeeded\n"
status = "Login Succeeded"
storeConfig = true
} else if resp.StatusCode == 401 {
saveConfig(authConfig.rootPath, "", "")
if store {
authConfig.Email = ""
if err := SaveConfig(authConfig); err != nil {
return "", err
}
}
return "", fmt.Errorf("Wrong login/password, please try again")
} else {
return "", fmt.Errorf("Login: %s (Code: %d; Headers: %s)", body,
@@ -166,9 +181,10 @@ func Login(authConfig *AuthConfig) (string, error) {
} else {
return "", fmt.Errorf("Unexpected status code [%d] : %s", reqStatusCode, reqBody)
}
if storeConfig {
authStr := EncodeAuth(authConfig)
saveConfig(authConfig.rootPath, authStr, authConfig.Email)
if storeConfig && store {
if err := SaveConfig(authConfig); err != nil {
return "", err
}
}
return status, nil
}


@@ -1,13 +1,17 @@
package auth
import (
"crypto/rand"
"encoding/hex"
"os"
"strings"
"testing"
)
func TestEncodeAuth(t *testing.T) {
newAuthConfig := &AuthConfig{Username: "ken", Password: "test", Email: "test@example.com"}
authStr := EncodeAuth(newAuthConfig)
decAuthConfig, err := DecodeAuth(authStr)
authStr := encodeAuth(newAuthConfig)
decAuthConfig, err := decodeAuth(authStr)
if err != nil {
t.Fatal(err)
}
@@ -21,3 +25,49 @@ func TestEncodeAuth(t *testing.T) {
t.Fatal("AuthString encoding isn't correct.")
}
}
func TestLogin(t *testing.T) {
os.Setenv("DOCKER_INDEX_URL", "https://indexstaging-docker.dotcloud.com")
defer os.Setenv("DOCKER_INDEX_URL", "")
authConfig := NewAuthConfig("unittester", "surlautrerivejetattendrai", "noise+unittester@dotcloud.com", "/tmp")
status, err := Login(authConfig, false)
if err != nil {
t.Fatal(err)
}
if status != "Login Succeeded" {
t.Fatalf("Expected status \"Login Succeeded\", found \"%s\" instead", status)
}
}
func TestCreateAccount(t *testing.T) {
os.Setenv("DOCKER_INDEX_URL", "https://indexstaging-docker.dotcloud.com")
defer os.Setenv("DOCKER_INDEX_URL", "")
tokenBuffer := make([]byte, 16)
_, err := rand.Read(tokenBuffer)
if err != nil {
t.Fatal(err)
}
token := hex.EncodeToString(tokenBuffer)[:12]
username := "ut" + token
authConfig := NewAuthConfig(username, "test42", "docker-ut+"+token+"@example.com", "/tmp")
status, err := Login(authConfig, false)
if err != nil {
t.Fatal(err)
}
expectedStatus := "Account created. Please use the confirmation link we sent" +
" to your e-mail to activate it."
if status != expectedStatus {
t.Fatalf("Expected status: \"%s\", found \"%s\" instead.", expectedStatus, status)
}
status, err = Login(authConfig, false)
if err == nil {
t.Fatalf("Expected error but found nil instead")
}
expectedError := "Login: Account is not Active"
if !strings.Contains(err.Error(), expectedError) {
t.Fatalf("Expected message \"%s\" but found \"%s\" instead", expectedError, err)
}
}


@@ -1,20 +0,0 @@
Buildbot
========
Buildbot is a continuous integration system designed to automate the
build/test cycle. By automatically rebuilding and testing the tree each time
something has changed, build problems are pinpointed quickly, before other
developers are inconvenienced by the failure.
When running 'make hack' at the docker root directory, it spawns a virtual
machine in the background running a buildbot instance and adds a git
post-commit hook that automatically runs docker tests for you.
You can check your buildbot instance at http://192.168.33.21:8010/waterfall
Buildbot dependencies
---------------------
vagrant, virtualbox packages and python package requests

buildbot/Vagrantfile vendored

@@ -1,28 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
$BUILDBOT_IP = '192.168.33.21'
def v10(config)
config.vm.box = "quantal64_3.5.0-25"
config.vm.box_url = "http://get.docker.io/vbox/ubuntu/12.10/quantal64_3.5.0-25.box"
config.vm.share_folder 'v-data', '/data/docker', File.dirname(__FILE__) + '/..'
config.vm.network :hostonly, $BUILDBOT_IP
# Ensure puppet is installed on the instance
config.vm.provision :shell, :inline => 'apt-get -qq update; apt-get install -y puppet'
config.vm.provision :puppet do |puppet|
puppet.manifests_path = '.'
puppet.manifest_file = 'buildbot.pp'
puppet.options = ['--templatedir','.']
end
end
Vagrant::VERSION < '1.1.0' and Vagrant::Config.run do |config|
v10(config)
end
Vagrant::VERSION >= '1.1.0' and Vagrant.configure('1') do |config|
v10(config)
end


@@ -1,43 +0,0 @@
#!/bin/bash
# Auto setup of buildbot configuration. Package installation is being done
# on buildbot.pp
# Dependencies: buildbot, buildbot-slave, supervisor
SLAVE_NAME='buildworker'
SLAVE_SOCKET='localhost:9989'
BUILDBOT_PWD='pass-docker'
USER='vagrant'
ROOT_PATH='/data/buildbot'
DOCKER_PATH='/data/docker'
BUILDBOT_CFG="$DOCKER_PATH/buildbot/buildbot-cfg"
IP=$(grep BUILDBOT_IP /data/docker/buildbot/Vagrantfile | awk -F "'" '{ print $2; }')
function run { su $USER -c "$1"; }
export PATH=/bin:sbin:/usr/bin:/usr/sbin:/usr/local/bin
# Exit if buildbot has already been installed
[ -d "$ROOT_PATH" ] && exit 0
# Setup buildbot
run "mkdir -p ${ROOT_PATH}"
cd ${ROOT_PATH}
run "buildbot create-master master"
run "cp $BUILDBOT_CFG/master.cfg master"
run "sed -i 's/localhost/$IP/' master/master.cfg"
run "buildslave create-slave slave $SLAVE_SOCKET $SLAVE_NAME $BUILDBOT_PWD"
# Allow buildbot subprocesses (docker tests) to properly run in containers,
# in particular with docker -u
run "sed -i 's/^umask = None/umask = 000/' ${ROOT_PATH}/slave/buildbot.tac"
# Setup supervisor
cp $BUILDBOT_CFG/buildbot.conf /etc/supervisor/conf.d/buildbot.conf
sed -i "s/^chmod=0700.*0700./chmod=0770\nchown=root:$USER/" /etc/supervisor/supervisord.conf
kill -HUP `pgrep -f "/usr/bin/python /usr/bin/supervisord"`
# Add git hook
cp $BUILDBOT_CFG/post-commit $DOCKER_PATH/.git/hooks
sed -i "s/localhost/$IP/" $DOCKER_PATH/.git/hooks/post-commit


@@ -1,32 +0,0 @@
node default {
$USER = 'vagrant'
$ROOT_PATH = '/data/buildbot'
$DOCKER_PATH = '/data/docker'
exec {'apt_update': command => '/usr/bin/apt-get update' }
Package { require => Exec['apt_update'] }
group {'puppet': ensure => 'present'}
# Install dependencies
Package { ensure => 'installed' }
package { ['python-dev','python-pip','supervisor','lxc','bsdtar','git','golang']: }
file{[ '/data' ]:
owner => $USER, group => $USER, ensure => 'directory' }
file {'/var/tmp/requirements.txt':
content => template('requirements.txt') }
exec {'requirements':
require => [ Package['python-dev'], Package['python-pip'],
File['/var/tmp/requirements.txt'] ],
cwd => '/var/tmp',
command => "/bin/sh -c '(/usr/bin/pip install -r requirements.txt;
rm /var/tmp/requirements.txt)'" }
exec {'buildbot-cfg-sh':
require => [ Package['supervisor'], Exec['requirements']],
path => '/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin',
cwd => '/data',
command => "$DOCKER_PATH/buildbot/buildbot-cfg/buildbot-cfg.sh" }
}


@@ -1,20 +1,22 @@
package docker
import (
"bufio"
"encoding/json"
"fmt"
"io"
"github.com/dotcloud/docker/utils"
"os"
"path"
"strings"
"time"
)
var defaultDns = []string{"8.8.8.8", "8.8.4.4"}
type Builder struct {
runtime *Runtime
repositories *TagStore
graph *Graph
config *Config
image *Image
}
func NewBuilder(runtime *Runtime) *Builder {
@@ -25,42 +27,6 @@ func NewBuilder(runtime *Runtime) *Builder {
}
}
func (builder *Builder) mergeConfig(userConf, imageConf *Config) {
if userConf.Hostname != "" {
userConf.Hostname = imageConf.Hostname
}
if userConf.User != "" {
userConf.User = imageConf.User
}
if userConf.Memory == 0 {
userConf.Memory = imageConf.Memory
}
if userConf.MemorySwap == 0 {
userConf.MemorySwap = imageConf.MemorySwap
}
if userConf.PortSpecs == nil || len(userConf.PortSpecs) == 0 {
userConf.PortSpecs = imageConf.PortSpecs
}
if !userConf.Tty {
userConf.Tty = userConf.Tty
}
if !userConf.OpenStdin {
userConf.OpenStdin = imageConf.OpenStdin
}
if !userConf.StdinOnce {
userConf.StdinOnce = imageConf.StdinOnce
}
if userConf.Env == nil || len(userConf.Env) == 0 {
userConf.Env = imageConf.Env
}
if userConf.Cmd == nil || len(userConf.Cmd) == 0 {
userConf.Cmd = imageConf.Cmd
}
if userConf.Dns == nil || len(userConf.Dns) == 0 {
userConf.Dns = imageConf.Dns
}
}
func (builder *Builder) Create(config *Config) (*Container, error) {
// Lookup image
img, err := builder.repositories.LookupImage(config.Image)
@@ -69,7 +35,7 @@ func (builder *Builder) Create(config *Config) (*Container, error) {
}
if img.Config != nil {
builder.mergeConfig(config, img.Config)
MergeConfig(config, img.Config)
}
if config.Cmd == nil || len(config.Cmd) == 0 {
@@ -77,41 +43,63 @@ func (builder *Builder) Create(config *Config) (*Container, error) {
}
// Generate id
id := GenerateId()
id := GenerateID()
// Generate default hostname
// FIXME: the lxc template no longer needs to set a default hostname
if config.Hostname == "" {
config.Hostname = id[:12]
}
var args []string
var entrypoint string
if len(config.Entrypoint) != 0 {
entrypoint = config.Entrypoint[0]
args = append(config.Entrypoint[1:], config.Cmd...)
} else {
entrypoint = config.Cmd[0]
args = config.Cmd[1:]
}
container := &Container{
// FIXME: we should generate the ID here instead of receiving it as an argument
Id: id,
ID: id,
Created: time.Now(),
Path: config.Cmd[0],
Args: config.Cmd[1:], //FIXME: de-duplicate from config
Path: entrypoint,
Args: args, //FIXME: de-duplicate from config
Config: config,
Image: img.Id, // Always use the resolved image id
Image: img.ID, // Always use the resolved image id
NetworkSettings: &NetworkSettings{},
// FIXME: do we need to store this in the container?
SysInitPath: sysInitPath,
}
container.root = builder.runtime.containerRoot(container.Id)
container.root = builder.runtime.containerRoot(container.ID)
// Step 1: create the container directory.
// This doubles as a barrier to avoid race conditions.
if err := os.Mkdir(container.root, 0700); err != nil {
return nil, err
}
if len(config.Dns) == 0 && len(builder.runtime.Dns) == 0 && utils.CheckLocalDns() {
//"WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: %v", defaultDns
builder.runtime.Dns = defaultDns
}
// If custom dns exists, then create a resolv.conf for the container
if len(config.Dns) > 0 {
if len(config.Dns) > 0 || len(builder.runtime.Dns) > 0 {
var dns []string
if len(config.Dns) > 0 {
dns = config.Dns
} else {
dns = builder.runtime.Dns
}
container.ResolvConfPath = path.Join(container.root, "resolv.conf")
f, err := os.Create(container.ResolvConfPath)
if err != nil {
return nil, err
}
defer f.Close()
for _, dns := range config.Dns {
for _, dns := range dns {
if _, err := f.Write([]byte("nameserver " + dns + "\n")); err != nil {
return nil, err
}
@@ -147,317 +135,9 @@ func (builder *Builder) Commit(container *Container, repository, tag, comment, a
}
// Register the image if needed
if repository != "" {
if err := builder.repositories.Set(repository, tag, img.Id, true); err != nil {
if err := builder.repositories.Set(repository, tag, img.ID, true); err != nil {
return img, err
}
}
return img, nil
}
func (builder *Builder) clearTmp(containers, images map[string]struct{}) {
for c := range containers {
tmp := builder.runtime.Get(c)
builder.runtime.Destroy(tmp)
Debugf("Removing container %s", c)
}
for i := range images {
builder.runtime.graph.Delete(i)
Debugf("Removing image %s", i)
}
}
func (builder *Builder) getCachedImage(image *Image, config *Config) (*Image, error) {
// Retrieve all images
images, err := builder.graph.All()
if err != nil {
return nil, err
}
// Store the tree in a map of map (map[parentId][childId])
imageMap := make(map[string]map[string]struct{})
for _, img := range images {
if _, exists := imageMap[img.Parent]; !exists {
imageMap[img.Parent] = make(map[string]struct{})
}
imageMap[img.Parent][img.Id] = struct{}{}
}
// Loop on the children of the given image and check the config
for elem := range imageMap[image.Id] {
img, err := builder.graph.Get(elem)
if err != nil {
return nil, err
}
if CompareConfig(&img.ContainerConfig, config) {
return img, nil
}
}
return nil, nil
}
func (builder *Builder) Build(dockerfile io.Reader, stdout io.Writer) (*Image, error) {
var (
image, base *Image
config *Config
maintainer string
env map[string]string = make(map[string]string)
tmpContainers map[string]struct{} = make(map[string]struct{})
tmpImages map[string]struct{} = make(map[string]struct{})
)
defer builder.clearTmp(tmpContainers, tmpImages)
file := bufio.NewReader(dockerfile)
for {
line, err := file.ReadString('\n')
if err != nil {
if err == io.EOF {
break
}
return nil, err
}
line = strings.Replace(strings.TrimSpace(line), " ", " ", 1)
// Skip comments and empty line
if len(line) == 0 || line[0] == '#' {
continue
}
tmp := strings.SplitN(line, " ", 2)
if len(tmp) != 2 {
return nil, fmt.Errorf("Invalid Dockerfile format")
}
instruction := strings.Trim(tmp[0], " ")
arguments := strings.Trim(tmp[1], " ")
switch strings.ToLower(instruction) {
case "from":
fmt.Fprintf(stdout, "FROM %s\n", arguments)
image, err = builder.runtime.repositories.LookupImage(arguments)
if err != nil {
if builder.runtime.graph.IsNotExist(err) {
var tag, remote string
if strings.Contains(arguments, ":") {
remoteParts := strings.Split(arguments, ":")
tag = remoteParts[1]
remote = remoteParts[0]
} else {
remote = arguments
}
if err := builder.runtime.graph.PullRepository(stdout, remote, tag, builder.runtime.repositories, builder.runtime.authConfig); err != nil {
return nil, err
}
image, err = builder.runtime.repositories.LookupImage(arguments)
if err != nil {
return nil, err
}
} else {
return nil, err
}
}
config = &Config{}
break
case "maintainer":
fmt.Fprintf(stdout, "MAINTAINER %s\n", arguments)
maintainer = arguments
break
case "run":
fmt.Fprintf(stdout, "RUN %s\n", arguments)
if image == nil {
return nil, fmt.Errorf("Please provide a source image with `from` prior to run")
}
config, err := ParseRun([]string{image.Id, "/bin/sh", "-c", arguments}, nil, builder.runtime.capabilities)
if err != nil {
return nil, err
}
for key, value := range env {
config.Env = append(config.Env, fmt.Sprintf("%s=%s", key, value))
}
if cache, err := builder.getCachedImage(image, config); err != nil {
return nil, err
} else if cache != nil {
image = cache
fmt.Fprintf(stdout, "===> %s\n", image.ShortId())
break
}
Debugf("Env -----> %v ------ %v\n", config.Env, env)
// Create the container and start it
c, err := builder.Create(config)
if err != nil {
return nil, err
}
if os.Getenv("DEBUG") != "" {
out, _ := c.StdoutPipe()
err2, _ := c.StderrPipe()
go io.Copy(os.Stdout, out)
go io.Copy(os.Stdout, err2)
}
if err := c.Start(); err != nil {
return nil, err
}
tmpContainers[c.Id] = struct{}{}
// Wait for it to finish
if result := c.Wait(); result != 0 {
return nil, fmt.Errorf("!!! '%s' return non-zero exit code '%d'. Aborting.", arguments, result)
}
// Commit the container
base, err = builder.Commit(c, "", "", "", maintainer, nil)
if err != nil {
return nil, err
}
tmpImages[base.Id] = struct{}{}
fmt.Fprintf(stdout, "===> %s\n", base.ShortId())
// use the base as the new image
image = base
break
case "env":
tmp := strings.SplitN(arguments, " ", 2)
if len(tmp) != 2 {
return nil, fmt.Errorf("Invalid ENV format")
}
key := strings.Trim(tmp[0], " ")
value := strings.Trim(tmp[1], " ")
fmt.Fprintf(stdout, "ENV %s %s\n", key, value)
env[key] = value
if image != nil {
fmt.Fprintf(stdout, "===> %s\n", image.ShortId())
} else {
fmt.Fprintf(stdout, "===> <nil>\n")
}
break
case "cmd":
fmt.Fprintf(stdout, "CMD %s\n", arguments)
// Create the container and start it
c, err := builder.Create(&Config{Image: image.Id, Cmd: []string{"", ""}})
if err != nil {
return nil, err
}
if err := c.Start(); err != nil {
return nil, err
}
tmpContainers[c.Id] = struct{}{}
cmd := []string{}
if err := json.Unmarshal([]byte(arguments), &cmd); err != nil {
return nil, err
}
config.Cmd = cmd
// Commit the container
base, err = builder.Commit(c, "", "", "", maintainer, config)
if err != nil {
return nil, err
}
tmpImages[base.Id] = struct{}{}
fmt.Fprintf(stdout, "===> %s\n", base.ShortId())
image = base
break
case "expose":
ports := strings.Split(arguments, " ")
fmt.Fprintf(stdout, "EXPOSE %v\n", ports)
if image == nil {
return nil, fmt.Errorf("Please provide a source image with `from` prior to copy")
}
// Create the container and start it
c, err := builder.Create(&Config{Image: image.Id, Cmd: []string{"", ""}})
if err != nil {
return nil, err
}
if err := c.Start(); err != nil {
return nil, err
}
tmpContainers[c.Id] = struct{}{}
config.PortSpecs = append(ports, config.PortSpecs...)
// Commit the container
base, err = builder.Commit(c, "", "", "", maintainer, config)
if err != nil {
return nil, err
}
tmpImages[base.Id] = struct{}{}
fmt.Fprintf(stdout, "===> %s\n", base.ShortId())
image = base
break
case "insert":
if image == nil {
return nil, fmt.Errorf("Please provide a source image with `from` prior to copy")
}
tmp = strings.SplitN(arguments, " ", 2)
if len(tmp) != 2 {
return nil, fmt.Errorf("Invalid INSERT format")
}
sourceUrl := strings.Trim(tmp[0], " ")
destPath := strings.Trim(tmp[1], " ")
fmt.Fprintf(stdout, "COPY %s to %s in %s\n", sourceUrl, destPath, base.ShortId())
file, err := Download(sourceUrl, stdout)
if err != nil {
return nil, err
}
defer file.Body.Close()
config, err := ParseRun([]string{base.Id, "echo", "insert", sourceUrl, destPath}, nil, builder.runtime.capabilities)
if err != nil {
return nil, err
}
c, err := builder.Create(config)
if err != nil {
return nil, err
}
if err := c.Start(); err != nil {
return nil, err
}
// Wait for echo to finish
if result := c.Wait(); result != 0 {
return nil, fmt.Errorf("!!! '%s' return non-zero exit code '%d'. Aborting.", arguments, result)
}
if err := c.Inject(file.Body, destPath); err != nil {
return nil, err
}
base, err = builder.Commit(c, "", "", "", maintainer, nil)
if err != nil {
return nil, err
}
fmt.Fprintf(stdout, "===> %s\n", base.ShortId())
image = base
break
default:
fmt.Fprintf(stdout, "Skipping unknown instruction %s\n", strings.ToUpper(instruction))
}
}
if image != nil {
// The build is successful, keep the temporary containers and images
for i := range tmpImages {
delete(tmpImages, i)
}
for i := range tmpContainers {
delete(tmpContainers, i)
}
fmt.Fprintf(stdout, "Build finished. image id: %s\n", image.ShortId())
return image, nil
}
return nil, fmt.Errorf("An error occured during the build\n")
}
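
The removed builder above caches RUN steps by indexing every image by its parent and then scanning only the children of the current image for one whose recorded build config matches (via CompareConfig). A minimal standalone sketch of that lookup idea, with a plain string standing in for the real *Config (hypothetical types and names, not the project's code):

package main

import "fmt"

// image is a stand-in for docker's *Image; Config stands in for the
// ContainerConfig that CompareConfig inspects in getCachedImage above.
type image struct {
	ID, Parent, Config string
}

// cachedChild indexes images by parent, then returns the first child of
// parentID whose config matches -- the same shape as getCachedImage.
func cachedChild(images []image, parentID, config string) *image {
	children := make(map[string][]image)
	for _, img := range images {
		children[img.Parent] = append(children[img.Parent], img)
	}
	for _, img := range children[parentID] {
		if img.Config == config {
			match := img
			return &match
		}
	}
	return nil
}

func main() {
	imgs := []image{
		{ID: "b", Parent: "a", Config: "RUN apt-get update"},
		{ID: "c", Parent: "a", Config: "RUN echo hi"},
	}
	if hit := cachedChild(imgs, "a", "RUN echo hi"); hit != nil {
		fmt.Println("cache hit:", hit.ID) // cache hit: c
	}
}
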


@@ -1,88 +0,0 @@
package docker
import (
"strings"
"testing"
)
const Dockerfile = `
# VERSION 0.1
# DOCKER-VERSION 0.2
from ` + unitTestImageName + `
run sh -c 'echo root:testpass > /tmp/passwd'
run mkdir -p /var/run/sshd
insert https://raw.github.com/dotcloud/docker/master/CHANGELOG.md /tmp/CHANGELOG.md
`
func TestBuild(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
builder := NewBuilder(runtime)
img, err := builder.Build(strings.NewReader(Dockerfile), &nopWriter{})
if err != nil {
t.Fatal(err)
}
container, err := builder.Create(
&Config{
Image: img.Id,
Cmd: []string{"cat", "/tmp/passwd"},
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container)
output, err := container.Output()
if err != nil {
t.Fatal(err)
}
if string(output) != "root:testpass\n" {
t.Fatalf("Unexpected output. Read '%s', expected '%s'", output, "root:testpass\n")
}
container2, err := builder.Create(
&Config{
Image: img.Id,
Cmd: []string{"ls", "-d", "/var/run/sshd"},
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container2)
output, err = container2.Output()
if err != nil {
t.Fatal(err)
}
if string(output) != "/var/run/sshd\n" {
t.Fatal("/var/run/sshd has not been created")
}
container3, err := builder.Create(
&Config{
Image: img.Id,
Cmd: []string{"cat", "/tmp/CHANGELOG.md"},
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container3)
output, err = container3.Output()
if err != nil {
t.Fatal(err)
}
if len(output) == 0 {
t.Fatal("/tmp/CHANGELOG.md has not been copied")
}
}

buildfile.go Normal file

@@ -0,0 +1,444 @@
package docker
import (
"bufio"
"encoding/json"
"fmt"
"github.com/dotcloud/docker/utils"
"io"
"io/ioutil"
"os"
"path"
"reflect"
"strings"
)
type BuildFile interface {
Build(io.Reader) (string, error)
CmdFrom(string) error
CmdRun(string) error
}
type buildFile struct {
runtime *Runtime
builder *Builder
srv *Server
image string
maintainer string
config *Config
context string
verbose bool
tmpContainers map[string]struct{}
tmpImages map[string]struct{}
out io.Writer
}
func (b *buildFile) clearTmp(containers, images map[string]struct{}) {
for c := range containers {
tmp := b.runtime.Get(c)
b.runtime.Destroy(tmp)
utils.Debugf("Removing container %s", c)
}
for i := range images {
b.runtime.graph.Delete(i)
utils.Debugf("Removing image %s", i)
}
}
func (b *buildFile) CmdFrom(name string) error {
image, err := b.runtime.repositories.LookupImage(name)
if err != nil {
if b.runtime.graph.IsNotExist(err) {
remote, tag := utils.ParseRepositoryTag(name)
if err := b.srv.ImagePull(remote, tag, b.out, utils.NewStreamFormatter(false), nil); err != nil {
return err
}
image, err = b.runtime.repositories.LookupImage(name)
if err != nil {
return err
}
} else {
return err
}
}
b.image = image.ID
b.config = &Config{}
return nil
}
func (b *buildFile) CmdMaintainer(name string) error {
b.maintainer = name
return b.commit("", b.config.Cmd, fmt.Sprintf("MAINTAINER %s", name))
}
func (b *buildFile) CmdRun(args string) error {
if b.image == "" {
return fmt.Errorf("Please provide a source image with `from` prior to run")
}
config, _, _, err := ParseRun([]string{b.image, "/bin/sh", "-c", args}, nil)
if err != nil {
return err
}
cmd := b.config.Cmd
b.config.Cmd = nil
MergeConfig(b.config, config)
utils.Debugf("Command to be executed: %v", b.config.Cmd)
if cache, err := b.srv.ImageGetCached(b.image, b.config); err != nil {
return err
} else if cache != nil {
fmt.Fprintf(b.out, " ---> Using cache\n")
utils.Debugf("[BUILDER] Use cached version")
b.image = cache.ID
return nil
} else {
utils.Debugf("[BUILDER] Cache miss")
}
cid, err := b.run()
if err != nil {
return err
}
if err := b.commit(cid, cmd, "run"); err != nil {
return err
}
b.config.Cmd = cmd
return nil
}
func (b *buildFile) CmdEnv(args string) error {
tmp := strings.SplitN(args, " ", 2)
if len(tmp) != 2 {
return fmt.Errorf("Invalid ENV format")
}
key := strings.Trim(tmp[0], " \t")
value := strings.Trim(tmp[1], " \t")
for i, elem := range b.config.Env {
if strings.HasPrefix(elem, key+"=") {
b.config.Env[i] = key + "=" + value
return nil
}
}
b.config.Env = append(b.config.Env, key+"="+value)
return b.commit("", b.config.Cmd, fmt.Sprintf("ENV %s=%s", key, value))
}
func (b *buildFile) CmdCmd(args string) error {
var cmd []string
if err := json.Unmarshal([]byte(args), &cmd); err != nil {
utils.Debugf("Error unmarshalling: %s, setting cmd to /bin/sh -c", err)
cmd = []string{"/bin/sh", "-c", args}
}
if err := b.commit("", cmd, fmt.Sprintf("CMD %v", cmd)); err != nil {
return err
}
b.config.Cmd = cmd
return nil
}
func (b *buildFile) CmdExpose(args string) error {
ports := strings.Split(args, " ")
b.config.PortSpecs = append(ports, b.config.PortSpecs...)
return b.commit("", b.config.Cmd, fmt.Sprintf("EXPOSE %v", ports))
}
func (b *buildFile) CmdInsert(args string) error {
return fmt.Errorf("INSERT has been deprecated. Please use ADD instead")
}
func (b *buildFile) CmdCopy(args string) error {
return fmt.Errorf("COPY has been deprecated. Please use ADD instead")
}
func (b *buildFile) CmdEntrypoint(args string) error {
if args == "" {
return fmt.Errorf("Entrypoint cannot be empty")
}
var entrypoint []string
if err := json.Unmarshal([]byte(args), &entrypoint); err != nil {
b.config.Entrypoint = []string{"/bin/sh", "-c", args}
} else {
b.config.Entrypoint = entrypoint
}
if err := b.commit("", b.config.Cmd, fmt.Sprintf("ENTRYPOINT %s", args)); err != nil {
return err
}
return nil
}
func (b *buildFile) CmdVolume(args string) error {
if args == "" {
return fmt.Errorf("Volume cannot be empty")
}
var volume []string
if err := json.Unmarshal([]byte(args), &volume); err != nil {
volume = []string{args}
}
if b.config.Volumes == nil {
b.config.Volumes = NewPathOpts()
}
for _, v := range volume {
b.config.Volumes[v] = struct{}{}
}
if err := b.commit("", b.config.Cmd, fmt.Sprintf("VOLUME %s", args)); err != nil {
return err
}
return nil
}
func (b *buildFile) addRemote(container *Container, orig, dest string) error {
file, err := utils.Download(orig, ioutil.Discard)
if err != nil {
return err
}
defer file.Body.Close()
return container.Inject(file.Body, dest)
}
func (b *buildFile) addContext(container *Container, orig, dest string) error {
origPath := path.Join(b.context, orig)
destPath := path.Join(container.RootfsPath(), dest)
// Preserve the trailing '/'
if dest[len(dest)-1] == '/' {
destPath = destPath + "/"
}
fi, err := os.Stat(origPath)
if err != nil {
return err
}
if fi.IsDir() {
if err := CopyWithTar(origPath, destPath); err != nil {
return err
}
// First try to unpack the source as an archive
} else if err := UntarPath(origPath, destPath); err != nil {
utils.Debugf("Couldn't untar %s to %s: %s", origPath, destPath, err)
// If that fails, just copy it as a regular file
if err := os.MkdirAll(path.Dir(destPath), 0700); err != nil {
return err
}
if err := CopyWithTar(origPath, destPath); err != nil {
return err
}
}
return nil
}
func (b *buildFile) CmdAdd(args string) error {
if b.context == "" {
return fmt.Errorf("No context given. Impossible to use ADD")
}
tmp := strings.SplitN(args, " ", 2)
if len(tmp) != 2 {
return fmt.Errorf("Invalid ADD format")
}
orig := strings.Trim(tmp[0], " \t")
dest := strings.Trim(tmp[1], " \t")
cmd := b.config.Cmd
b.config.Cmd = []string{"/bin/sh", "-c", fmt.Sprintf("#(nop) ADD %s in %s", orig, dest)}
b.config.Image = b.image
// Create the container and start it
container, err := b.builder.Create(b.config)
if err != nil {
return err
}
b.tmpContainers[container.ID] = struct{}{}
if err := container.EnsureMounted(); err != nil {
return err
}
defer container.Unmount()
if utils.IsURL(orig) {
if err := b.addRemote(container, orig, dest); err != nil {
return err
}
} else {
if err := b.addContext(container, orig, dest); err != nil {
return err
}
}
if err := b.commit(container.ID, cmd, fmt.Sprintf("ADD %s in %s", orig, dest)); err != nil {
return err
}
b.config.Cmd = cmd
return nil
}
func (b *buildFile) run() (string, error) {
if b.image == "" {
return "", fmt.Errorf("Please provide a source image with `from` prior to run")
}
b.config.Image = b.image
// Create the container and start it
c, err := b.builder.Create(b.config)
if err != nil {
return "", err
}
b.tmpContainers[c.ID] = struct{}{}
fmt.Fprintf(b.out, " ---> Running in %s\n", utils.TruncateID(c.ID))
// override the entry point that may have been picked up from the base image
c.Path = b.config.Cmd[0]
c.Args = b.config.Cmd[1:]
//start the container
hostConfig := &HostConfig{}
if err := c.Start(hostConfig); err != nil {
return "", err
}
if b.verbose {
err = <-c.Attach(nil, nil, b.out, b.out)
if err != nil {
return "", err
}
}
// Wait for it to finish
if ret := c.Wait(); ret != 0 {
return "", fmt.Errorf("The command %v returned a non-zero code: %d", b.config.Cmd, ret)
}
return c.ID, nil
}
// Commit the container <id> with the autorun command <autoCmd>
func (b *buildFile) commit(id string, autoCmd []string, comment string) error {
if b.image == "" {
return fmt.Errorf("Please provide a source image with `from` prior to commit")
}
b.config.Image = b.image
if id == "" {
cmd := b.config.Cmd
b.config.Cmd = []string{"/bin/sh", "-c", "#(nop) " + comment}
defer func(cmd []string) { b.config.Cmd = cmd }(cmd)
if cache, err := b.srv.ImageGetCached(b.image, b.config); err != nil {
return err
} else if cache != nil {
fmt.Fprintf(b.out, " ---> Using cache\n")
utils.Debugf("[BUILDER] Use cached version")
b.image = cache.ID
return nil
} else {
utils.Debugf("[BUILDER] Cache miss")
}
container, err := b.builder.Create(b.config)
if err != nil {
return err
}
b.tmpContainers[container.ID] = struct{}{}
fmt.Fprintf(b.out, " ---> Running in %s\n", utils.TruncateID(container.ID))
id = container.ID
if err := container.EnsureMounted(); err != nil {
return err
}
defer container.Unmount()
}
container := b.runtime.Get(id)
if container == nil {
return fmt.Errorf("An error occured while creating the container")
}
// Note: Actually copy the struct
autoConfig := *b.config
autoConfig.Cmd = autoCmd
// Commit the container
image, err := b.builder.Commit(container, "", "", "", b.maintainer, &autoConfig)
if err != nil {
return err
}
b.tmpImages[image.ID] = struct{}{}
b.image = image.ID
return nil
}
func (b *buildFile) Build(context io.Reader) (string, error) {
// FIXME: @creack any reason for using /tmp instead of ""?
// FIXME: @creack "name" is a terrible variable name
name, err := ioutil.TempDir("/tmp", "docker-build")
if err != nil {
return "", err
}
if err := Untar(context, name); err != nil {
return "", err
}
defer os.RemoveAll(name)
b.context = name
dockerfile, err := os.Open(path.Join(name, "Dockerfile"))
if err != nil {
return "", fmt.Errorf("Can't build a directory with no Dockerfile")
}
// FIXME: "file" is also a terrible variable name ;)
file := bufio.NewReader(dockerfile)
stepN := 0
for {
line, err := file.ReadString('\n')
if err != nil {
if err == io.EOF && line == "" {
break
} else if err != io.EOF {
return "", err
}
}
line = strings.Trim(strings.Replace(line, "\t", " ", -1), " \t\r\n")
// Skip comments and empty line
if len(line) == 0 || line[0] == '#' {
continue
}
tmp := strings.SplitN(line, " ", 2)
if len(tmp) != 2 {
return "", fmt.Errorf("Invalid Dockerfile format")
}
instruction := strings.ToLower(strings.Trim(tmp[0], " "))
arguments := strings.Trim(tmp[1], " ")
stepN += 1
// FIXME: only count known instructions as build steps
fmt.Fprintf(b.out, "Step %d : %s %s\n", stepN, strings.ToUpper(instruction), arguments)
method, exists := reflect.TypeOf(b).MethodByName("Cmd" + strings.ToUpper(instruction[:1]) + strings.ToLower(instruction[1:]))
if !exists {
fmt.Fprintf(b.out, "# Skipping unknown instruction %s\n", strings.ToUpper(instruction))
continue
}
ret := method.Func.Call([]reflect.Value{reflect.ValueOf(b), reflect.ValueOf(arguments)})[0].Interface()
if ret != nil {
return "", ret.(error)
}
fmt.Fprintf(b.out, " ---> %v\n", utils.TruncateID(b.image))
}
if b.image != "" {
fmt.Fprintf(b.out, "Successfully built %s\n", utils.TruncateID(b.image))
return b.image, nil
}
return "", fmt.Errorf("An error occured during the build\n")
}
func NewBuildFile(srv *Server, out io.Writer, verbose bool) BuildFile {
return &buildFile{
builder: NewBuilder(srv.runtime),
runtime: srv.runtime,
srv: srv,
config: &Config{},
out: out,
tmpContainers: make(map[string]struct{}),
tmpImages: make(map[string]struct{}),
verbose: verbose,
}
}
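
The new buildFile.Build resolves each Dockerfile instruction to a method at runtime: instruction "run" becomes CmdRun, looked up with reflect.TypeOf(b).MethodByName and invoked with the receiver as the first call argument. A self-contained sketch of that dispatch pattern (hypothetical builder type, not the project's code):

package main

import (
	"fmt"
	"reflect"
	"strings"
)

type builder struct{}

func (b *builder) CmdFrom(args string) error { fmt.Println("from:", args); return nil }
func (b *builder) CmdRun(args string) error  { fmt.Println("run:", args); return nil }

// dispatch maps an instruction name to a Cmd<Instruction> method and calls it,
// mirroring the reflection lookup in buildFile.Build above.
func dispatch(b *builder, instruction, args string) error {
	name := "Cmd" + strings.ToUpper(instruction[:1]) + strings.ToLower(instruction[1:])
	method, ok := reflect.TypeOf(b).MethodByName(name)
	if !ok {
		return fmt.Errorf("unknown instruction: %s", instruction)
	}
	ret := method.Func.Call([]reflect.Value{reflect.ValueOf(b), reflect.ValueOf(args)})[0].Interface()
	if ret != nil {
		return ret.(error)
	}
	return nil
}

func main() {
	_ = dispatch(&builder{}, "from", "ubuntu")
	_ = dispatch(&builder{}, "run", "echo hello")
}
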

buildfile_test.go Normal file

@@ -0,0 +1,216 @@
package docker
import (
"fmt"
"io/ioutil"
"testing"
)
// mkTestContext generates a build context from the contents of the provided dockerfile.
// This context is suitable for use as an argument to BuildFile.Build()
func mkTestContext(dockerfile string, files [][2]string, t *testing.T) Archive {
context, err := mkBuildContext(fmt.Sprintf(dockerfile, unitTestImageID), files)
if err != nil {
t.Fatal(err)
}
return context
}
// A testContextTemplate describes a build context and how to test it
type testContextTemplate struct {
// Contents of the Dockerfile
dockerfile string
// Additional files in the context, e.g. [][2]string{{"./passwd", "gordon"}}
files [][2]string
}
// A table of all the contexts to build and test.
// A new docker runtime will be created and torn down for each context.
var testContexts = []testContextTemplate{
{
`
from %s
run sh -c 'echo root:testpass > /tmp/passwd'
run mkdir -p /var/run/sshd
run [ "$(cat /tmp/passwd)" = "root:testpass" ]
run [ "$(ls -d /var/run/sshd)" = "/var/run/sshd" ]
`,
nil,
},
{
`
from %s
add foo /usr/lib/bla/bar
run [ "$(cat /usr/lib/bla/bar)" = 'hello world!' ]
`,
[][2]string{{"foo", "hello world!"}},
},
{
`
from %s
add f /
run [ "$(cat /f)" = "hello" ]
add f /abc
run [ "$(cat /abc)" = "hello" ]
add f /x/y/z
run [ "$(cat /x/y/z)" = "hello" ]
add f /x/y/d/
run [ "$(cat /x/y/d/f)" = "hello" ]
add d /
run [ "$(cat /ga)" = "bu" ]
add d /somewhere
run [ "$(cat /somewhere/ga)" = "bu" ]
add d /anotherplace/
run [ "$(cat /anotherplace/ga)" = "bu" ]
add d /somewheeeere/over/the/rainbooow
run [ "$(cat /somewheeeere/over/the/rainbooow/ga)" = "bu" ]
`,
[][2]string{
{"f", "hello"},
{"d/ga", "bu"},
},
},
{
`
from %s
env FOO BAR
run [ "$FOO" = "BAR" ]
`,
nil,
},
{
`
from %s
ENTRYPOINT /bin/echo
CMD Hello world
`,
nil,
},
{
`
from %s
VOLUME /test
CMD Hello world
`,
nil,
},
}
// FIXME: test building with 2 successive overlapping ADD commands
func TestBuild(t *testing.T) {
for _, ctx := range testContexts {
buildImage(ctx, t)
}
}
func buildImage(context testContextTemplate, t *testing.T) *Image {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{
runtime: runtime,
pullingPool: make(map[string]struct{}),
pushingPool: make(map[string]struct{}),
}
buildfile := NewBuildFile(srv, ioutil.Discard, false)
id, err := buildfile.Build(mkTestContext(context.dockerfile, context.files, t))
if err != nil {
t.Fatal(err)
}
img, err := srv.ImageInspect(id)
if err != nil {
t.Fatal(err)
}
return img
}
func TestVolume(t *testing.T) {
img := buildImage(testContextTemplate{`
from %s
volume /test
cmd Hello world
`, nil}, t)
if len(img.Config.Volumes) == 0 {
t.Fail()
}
for key := range img.Config.Volumes {
if key != "/test" {
t.Fail()
}
}
}
func TestBuildMaintainer(t *testing.T) {
img := buildImage(testContextTemplate{`
from %s
maintainer dockerio
`, nil}, t)
if img.Author != "dockerio" {
t.Fail()
}
}
func TestBuildEnv(t *testing.T) {
img := buildImage(testContextTemplate{`
from %s
env port 4243
`,
nil}, t)
if img.Config.Env[0] != "port=4243" {
t.Fail()
}
}
func TestBuildCmd(t *testing.T) {
img := buildImage(testContextTemplate{`
from %s
cmd ["/bin/echo", "Hello World"]
`,
nil}, t)
if img.Config.Cmd[0] != "/bin/echo" {
t.Log(img.Config.Cmd[0])
t.Fail()
}
if img.Config.Cmd[1] != "Hello World" {
t.Log(img.Config.Cmd[1])
t.Fail()
}
}
func TestBuildExpose(t *testing.T) {
img := buildImage(testContextTemplate{`
from %s
expose 4243
`,
nil}, t)
if img.Config.PortSpecs[0] != "4243" {
t.Fail()
}
}
func TestBuildEntrypoint(t *testing.T) {
img := buildImage(testContextTemplate{`
from %s
entrypoint ["/bin/echo"]
`,
nil}, t)
if img.Config.Entrypoint[0] != "/bin/echo" {
t.Fail()
}
}


@@ -65,7 +65,7 @@ func Changes(layers []string, rw string) ([]Change, error) {
file := filepath.Base(path)
// If there is a whiteout, then the file was removed
if strings.HasPrefix(file, ".wh.") {
originalFile := strings.TrimLeft(file, ".wh.")
originalFile := file[len(".wh."):]
change.Path = filepath.Join(filepath.Dir(path), originalFile)
change.Kind = ChangeDelete
} else {
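
For context on the one-line change in the hunk above: strings.TrimLeft takes a cutset of characters, not a prefix, so it keeps stripping any leading '.', 'w', or 'h' and can mangle the recovered filename, whereas slicing off len(".wh.") removes exactly the whiteout prefix. A quick illustration with a made-up filename:

package main

import (
	"fmt"
	"strings"
)

func main() {
	file := ".wh.hidden"
	fmt.Println(strings.TrimLeft(file, ".wh.")) // "idden" -- wrong, 'h' is in the cutset
	fmt.Println(file[len(".wh."):])             // "hidden" -- the intended result
}
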

File diff suppressed because it is too large.


@@ -3,7 +3,7 @@ package docker
import (
"bufio"
"fmt"
"github.com/dotcloud/docker/rcli"
"github.com/dotcloud/docker/utils"
"io"
"io/ioutil"
"strings"
@@ -59,304 +59,58 @@ func assertPipe(input, output string, r io.Reader, w io.Writer, count int) error
return nil
}
func cmdWait(srv *Server, container *Container) error {
stdout, stdoutPipe := io.Pipe()
go func() {
srv.CmdWait(nil, stdoutPipe, container.Id)
}()
if _, err := bufio.NewReader(stdout).ReadString('\n'); err != nil {
return err
}
// Cleanup pipes
return closeWrap(stdout, stdoutPipe)
}
func cmdImages(srv *Server, args ...string) (string, error) {
stdout, stdoutPipe := io.Pipe()
go func() {
if err := srv.CmdImages(nil, stdoutPipe, args...); err != nil {
return
}
// force the pipe closed, so that the code below gets an EOF
stdoutPipe.Close()
}()
output, err := ioutil.ReadAll(stdout)
if err != nil {
return "", err
}
// Cleanup pipes
return string(output), closeWrap(stdout, stdoutPipe)
}
// TestImages checks that 'docker images' displays information correctly
func TestImages(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
output, err := cmdImages(srv)
if !strings.Contains(output, "REPOSITORY") {
t.Fatal("'images' should have a header")
}
if !strings.Contains(output, "docker-ut") {
t.Fatal("'images' should show the docker-ut image")
}
if !strings.Contains(output, "e9aa60c60128") {
t.Fatal("'images' should show the docker-ut image id")
}
output, err = cmdImages(srv, "-q")
if strings.Contains(output, "REPOSITORY") {
t.Fatal("'images -q' should not have a header")
}
if strings.Contains(output, "docker-ut") {
t.Fatal("'images' should not show the docker-ut image name")
}
if !strings.Contains(output, "e9aa60c60128") {
t.Fatal("'images' should show the docker-ut image id")
}
output, err = cmdImages(srv, "-viz")
if !strings.HasPrefix(output, "digraph docker {") {
t.Fatal("'images -v' should start with the dot header")
}
if !strings.HasSuffix(output, "}\n") {
t.Fatal("'images -v' should end with a '}'")
}
if !strings.Contains(output, "base -> \"e9aa60c60128\" [style=invis]") {
t.Fatal("'images -v' should have the docker-ut image id node")
}
// todo: add checks for -a
}
// TestRunHostname checks that 'docker run -h' correctly sets a custom hostname
func TestRunHostname(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, _ := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(nil, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
c := make(chan struct{})
go func() {
if err := srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-h", "foobar", GetTestImage(runtime).Id, "hostname"); err != nil {
defer close(c)
if err := cli.CmdRun("-h", "foobar", unitTestImageID, "hostname"); err != nil {
t.Fatal(err)
}
close(c)
}()
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != "foobar\n" {
t.Fatalf("'hostname' should display '%s', not '%s'", "foobar\n", cmdOutput)
}
utils.Debugf("--")
setTimeout(t, "Reading command output time out", 2*time.Second, func() {
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != "foobar\n" {
t.Fatalf("'hostname' should display '%s', not '%s'", "foobar\n", cmdOutput)
}
})
setTimeout(t, "CmdRun timed out", 2*time.Second, func() {
setTimeout(t, "CmdRun timed out", 5*time.Second, func() {
<-c
cmdWait(srv, srv.runtime.List()[0])
})
}
func TestRunExit(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
c1 := make(chan struct{})
go func() {
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", GetTestImage(runtime).Id, "/bin/cat")
close(c1)
}()
setTimeout(t, "Read/Write assertion timed out", 2*time.Second, func() {
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
t.Fatal(err)
}
})
container := runtime.List()[0]
// Closing /bin/cat stdin, expect it to exit
p, err := container.StdinPipe()
if err != nil {
t.Fatal(err)
}
if err := p.Close(); err != nil {
t.Fatal(err)
}
// as the process exited, CmdRun must finish and unblock. Wait for it
setTimeout(t, "Waiting for CmdRun timed out", 2*time.Second, func() {
<-c1
cmdWait(srv, container)
})
// Make sure that the client has been disconnected
setTimeout(t, "The client should have been disconnected once the remote process exited.", 2*time.Second, func() {
// Expecting pipe i/o error, just check that read does not block
stdin.Read([]byte{})
})
// Cleanup pipes
if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
t.Fatal(err)
}
}
// Expected behaviour: the process dies when the client disconnects
func TestRunDisconnect(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
c1 := make(chan struct{})
go func() {
// We're simulating a disconnect so the return value doesn't matter. What matters is the
// fact that CmdRun returns.
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", GetTestImage(runtime).Id, "/bin/cat")
close(c1)
}()
setTimeout(t, "Read/Write assertion timed out", 2*time.Second, func() {
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
t.Fatal(err)
}
})
// Close pipes (simulate disconnect)
if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
t.Fatal(err)
}
// As the pipes are closed, we expect the process to die,
// therefore CmdRun to unblock. Wait for CmdRun
setTimeout(t, "Waiting for CmdRun timed out", 2*time.Second, func() {
<-c1
})
// Client disconnect after run -i should cause stdin to be closed, which should
// cause /bin/cat to exit.
setTimeout(t, "Waiting for /bin/cat to exit timed out", 2*time.Second, func() {
container := runtime.List()[0]
container.Wait()
if container.State.Running {
t.Fatalf("/bin/cat is still running after closing stdin")
}
})
}
// Expected behaviour: the process dies when the client disconnects
func TestRunDisconnectTty(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
c1 := make(chan struct{})
go func() {
// We're simulating a disconnect so the return value doesn't matter. What matters is the
// fact that CmdRun returns.
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", "-t", GetTestImage(runtime).Id, "/bin/cat")
close(c1)
}()
setTimeout(t, "Waiting for the container to be started timed out", 2*time.Second, func() {
for {
// Client disconnect after run -i should keep stdin out in TTY mode
l := runtime.List()
if len(l) == 1 && l[0].State.Running {
break
}
time.Sleep(10 * time.Millisecond)
}
})
// Client disconnect after run -i should keep stdin out in TTY mode
container := runtime.List()[0]
setTimeout(t, "Read/Write assertion timed out", 2*time.Second, func() {
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
t.Fatal(err)
}
})
// Close pipes (simulate disconnect)
if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
t.Fatal(err)
}
// In tty mode, we expect the process to stay alive even after client's stdin closes.
// Do not wait for run to finish
// Give the monitor some time to do its thing
container.WaitTimeout(500 * time.Millisecond)
if !container.State.Running {
t.Fatalf("/bin/cat should still be running after closing stdin (tty mode)")
}
}
// TestAttachStdin checks attaching to stdin without stdout and stderr.
// 'docker run -i -a stdin' should send the client's stdin to the command,
// then detach from it and print the container id.
func TestRunAttachStdin(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
ch := make(chan struct{})
go func() {
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", "-a", "stdin", GetTestImage(runtime).Id, "sh", "-c", "echo hello; cat")
close(ch)
defer close(ch)
cli.CmdRun("-i", "-a", "stdin", unitTestImageID, "sh", "-c", "echo hello && cat")
}()
// Send input to the command, close stdin
setTimeout(t, "Write timed out", 2*time.Second, func() {
setTimeout(t, "Write timed out", 10*time.Second, func() {
if _, err := stdinPipe.Write([]byte("hi there\n")); err != nil {
t.Fatal(err)
}
@@ -365,23 +119,27 @@ func TestRunAttachStdin(t *testing.T) {
}
})
container := runtime.List()[0]
container := globalRuntime.List()[0]
// Check output
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != container.ShortId()+"\n" {
t.Fatalf("Wrong output: should be '%s', not '%s'\n", container.ShortId()+"\n", cmdOutput)
}
setTimeout(t, "Reading command output time out", 10*time.Second, func() {
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != container.ShortID()+"\n" {
t.Fatalf("Wrong output: should be '%s', not '%s'\n", container.ShortID()+"\n", cmdOutput)
}
})
// wait for CmdRun to return
setTimeout(t, "Waiting for CmdRun timed out", 2*time.Second, func() {
setTimeout(t, "Waiting for CmdRun timed out", 5*time.Second, func() {
// Unblock hijack end
stdout.Read([]byte{})
<-ch
})
setTimeout(t, "Waiting for command to exit timed out", 2*time.Second, func() {
setTimeout(t, "Waiting for command to exit timed out", 5*time.Second, func() {
container.Wait()
})
@@ -399,71 +157,3 @@ func TestRunAttachStdin(t *testing.T) {
}
}
}
// Expected behaviour: the process stays alive when the client disconnects
func TestAttachDisconnect(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
container, err := NewBuilder(runtime).Create(
&Config{
Image: GetTestImage(runtime).Id,
Memory: 33554432,
Cmd: []string{"/bin/cat"},
OpenStdin: true,
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container)
// Start the process
if err := container.Start(); err != nil {
t.Fatal(err)
}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
// Attach to it
c1 := make(chan struct{})
go func() {
// We're simulating a disconnect so the return value doesn't matter. What matters is the
// fact that CmdAttach returns.
srv.CmdAttach(stdin, rcli.NewDockerLocalConn(stdoutPipe), container.Id)
close(c1)
}()
setTimeout(t, "First read/write assertion timed out", 2*time.Second, func() {
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
t.Fatal(err)
}
})
// Close pipes (client disconnects)
if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
t.Fatal(err)
}
// Wait for attach to finish; the client disconnected, therefore Attach finished its job
setTimeout(t, "Waiting for CmdAttach timed out", 2*time.Second, func() {
<-c1
})
// We closed stdin, expect /bin/cat to still be running
// Wait a little bit to make sure container.monitor() did its thing
err = container.WaitTimeout(500 * time.Millisecond)
if err == nil || !container.State.Running {
t.Fatalf("/bin/cat is not running after closing stdin")
}
// Try to avoid the timeout in destroy. Best effort, don't check error
cStdin, _ := container.StdinPipe()
cStdin.Close()
container.Wait()
}
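
The tests above lean on a setTimeout helper that is not shown in this diff. A minimal sketch of what such a helper plausibly does, assuming the fail-on-deadline semantics implied by the call sites (an assumption, not the project's implementation):

package main

import (
	"testing"
	"time"
)

// setTimeout runs f in a goroutine and fails the test with msg if f has not
// finished within d. This mirrors how the call sites above use it.
func setTimeout(t *testing.T, msg string, d time.Duration, f func()) {
	done := make(chan struct{})
	go func() {
		defer close(done)
		f()
	}()
	select {
	case <-done:
	case <-time.After(d):
		t.Fatal(msg)
	}
}
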


@@ -2,8 +2,10 @@ package docker
import (
"encoding/json"
"flag"
"fmt"
"github.com/dotcloud/docker/rcli"
"github.com/dotcloud/docker/term"
"github.com/dotcloud/docker/utils"
"github.com/kr/pty"
"io"
"io/ioutil"
@@ -11,6 +13,7 @@ import (
"os"
"os/exec"
"path"
"path/filepath"
"sort"
"strconv"
"strings"
@@ -21,7 +24,7 @@ import (
type Container struct {
root string
Id string
ID string
Created time.Time
@@ -39,8 +42,8 @@ type Container struct {
ResolvConfPath string
cmd *exec.Cmd
stdout *writeBroadcaster
stderr *writeBroadcaster
stdout *utils.WriteBroadcaster
stderr *utils.WriteBroadcaster
stdin io.ReadCloser
stdinPipe io.WriteCloser
ptyMaster io.Closer
@@ -49,6 +52,9 @@ type Container struct {
waitLock chan struct{}
Volumes map[string]string
// Store rw/ro in a separate structure to preserve backward compatibility on-disk.
// Easier than migrating older container configs :)
VolumesRW map[string]bool
}
type Config struct {
@@ -56,6 +62,7 @@ type Config struct {
User string
Memory int64 // Memory limit (in bytes)
MemorySwap int64 // Total memory usage (memory + swap); set `-1' to disable swap
CpuShares int64 // CPU shares (relative weight vs. other containers)
AttachStdin bool
AttachStdout bool
AttachStderr bool
@@ -69,10 +76,21 @@ type Config struct {
Image string // Name of the image as it was passed by the operator (eg. could be symbolic)
Volumes map[string]struct{}
VolumesFrom string
Entrypoint []string
}
func ParseRun(args []string, stdout io.Writer, capabilities *Capabilities) (*Config, error) {
cmd := rcli.Subcmd(stdout, "run", "[OPTIONS] IMAGE COMMAND [ARG...]", "Run a command in a new container")
type HostConfig struct {
Binds []string
}
type BindMap struct {
SrcPath string
DstPath string
Mode string
}
func ParseRun(args []string, capabilities *Capabilities) (*Config, *HostConfig, *flag.FlagSet, error) {
cmd := Subcmd("run", "[OPTIONS] IMAGE [COMMAND] [ARG...]", "Run a command in a new container")
if len(args) > 0 && args[0] != "--help" {
cmd.SetOutput(ioutil.Discard)
}
@@ -86,11 +104,13 @@ func ParseRun(args []string, stdout io.Writer, capabilities *Capabilities) (*Con
flTty := cmd.Bool("t", false, "Allocate a pseudo-tty")
flMemory := cmd.Int64("m", 0, "Memory limit (in bytes)")
if *flMemory > 0 && !capabilities.MemoryLimit {
fmt.Fprintf(stdout, "WARNING: Your kernel does not support memory limit capabilities. Limitation discarded.\n")
if capabilities != nil && *flMemory > 0 && !capabilities.MemoryLimit {
//fmt.Fprintf(stdout, "WARNING: Your kernel does not support memory limit capabilities. Limitation discarded.\n")
*flMemory = 0
}
flCpuShares := cmd.Int64("c", 0, "CPU shares (relative weight)")
var flPorts ListOpts
cmd.Var(&flPorts, "p", "Expose a container's port to the host (use 'docker port' to see the actual mapping)")
@@ -101,15 +121,16 @@ func ParseRun(args []string, stdout io.Writer, capabilities *Capabilities) (*Con
cmd.Var(&flDns, "dns", "Set custom dns servers")
flVolumes := NewPathOpts()
cmd.Var(flVolumes, "v", "Attach a data volume")
cmd.Var(flVolumes, "v", "Bind mount a volume (e.g. from the host: -v /host:/container, from docker: -v /container)")
flVolumesFrom := cmd.String("volumes-from", "", "Mount volumes from the specified container")
flEntrypoint := cmd.String("entrypoint", "", "Overwrite the default entrypoint of the image")
if err := cmd.Parse(args); err != nil {
return nil, err
return nil, nil, cmd, err
}
if *flDetach && len(flAttach) > 0 {
return nil, fmt.Errorf("Conflicting options: -a and -d")
return nil, nil, cmd, fmt.Errorf("Conflicting options: -a and -d")
}
// If neither -d or -a are set, attach to everything by default
if len(flAttach) == 0 && !*flDetach {
@@ -121,8 +142,23 @@ func ParseRun(args []string, stdout io.Writer, capabilities *Capabilities) (*Con
}
}
}
var binds []string
// add any bind targets to the list of container volumes
for bind := range flVolumes {
arr := strings.Split(bind, ":")
if len(arr) > 1 {
dstDir := arr[1]
flVolumes[dstDir] = struct{}{}
binds = append(binds, bind)
delete(flVolumes, bind)
}
}
parsedArgs := cmd.Args()
runCmd := []string{}
entrypoint := []string{}
image := ""
if len(parsedArgs) >= 1 {
image = cmd.Arg(0)
@@ -130,6 +166,10 @@ func ParseRun(args []string, stdout io.Writer, capabilities *Capabilities) (*Con
if len(parsedArgs) > 1 {
runCmd = parsedArgs[1:]
}
if *flEntrypoint != "" {
entrypoint = []string{*flEntrypoint}
}
config := &Config{
Hostname: *flHostname,
PortSpecs: flPorts,
@@ -137,6 +177,7 @@ func ParseRun(args []string, stdout io.Writer, capabilities *Capabilities) (*Con
Tty: *flTty,
OpenStdin: *flStdin,
Memory: *flMemory,
CpuShares: *flCpuShares,
AttachStdin: flAttach.Get("stdin"),
AttachStdout: flAttach.Get("stdout"),
AttachStderr: flAttach.Get("stderr"),
@@ -146,10 +187,14 @@ func ParseRun(args []string, stdout io.Writer, capabilities *Capabilities) (*Con
Image: image,
Volumes: flVolumes,
VolumesFrom: *flVolumesFrom,
Entrypoint: entrypoint,
}
hostConfig := &HostConfig{
Binds: binds,
}
if *flMemory > 0 && !capabilities.SwapLimit {
fmt.Fprintf(stdout, "WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.\n")
if capabilities != nil && *flMemory > 0 && !capabilities.SwapLimit {
//fmt.Fprintf(stdout, "WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.\n")
config.MemorySwap = -1
}
@@ -157,23 +202,28 @@ func ParseRun(args []string, stdout io.Writer, capabilities *Capabilities) (*Con
if config.OpenStdin && config.AttachStdin {
config.StdinOnce = true
}
return config, nil
return config, hostConfig, cmd, nil
}
type PortMapping map[string]string
type NetworkSettings struct {
IpAddress string
IpPrefixLen int
IPAddress string
IPPrefixLen int
Gateway string
Bridge string
PortMapping map[string]string
PortMapping map[string]PortMapping
}
// String returns a human-readable description of the port mapping defined in the settings
func (settings *NetworkSettings) PortMappingHuman() string {
var mapping []string
for private, public := range settings.PortMapping {
for private, public := range settings.PortMapping["Tcp"] {
mapping = append(mapping, fmt.Sprintf("%s->%s", public, private))
}
for private, public := range settings.PortMapping["Udp"] {
mapping = append(mapping, fmt.Sprintf("%s->%s/udp", public, private))
}
sort.Strings(mapping)
return strings.Join(mapping, ", ")
}
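
The hunk above changes NetworkSettings.PortMapping from a flat map to one keyed by protocol ("Tcp"/"Udp"), with each inner map going from the container-side (backend) port to the host-side (frontend) port. A small illustration of how PortMappingHuman walks that layout, using made-up port numbers (not the project's code):

package main

import "fmt"

func main() {
	// Outer key: protocol. Inner map: container (private) port -> host (public) port.
	portMapping := map[string]map[string]string{
		"Tcp": {"80": "49153"},
		"Udp": {"53": "49154"},
	}
	for private, public := range portMapping["Tcp"] {
		fmt.Printf("%s->%s\n", public, private) // 49153->80
	}
	for private, public := range portMapping["Udp"] {
		fmt.Printf("%s->%s/udp\n", public, private) // 49154->53/udp
	}
}
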
@@ -223,6 +273,26 @@ func (container *Container) ToDisk() (err error) {
return ioutil.WriteFile(container.jsonPath(), data, 0666)
}
func (container *Container) ReadHostConfig() (*HostConfig, error) {
data, err := ioutil.ReadFile(container.hostConfigPath())
if err != nil {
return &HostConfig{}, err
}
hostConfig := &HostConfig{}
if err := json.Unmarshal(data, hostConfig); err != nil {
return &HostConfig{}, err
}
return hostConfig, nil
}
func (container *Container) SaveHostConfig(hostConfig *HostConfig) (err error) {
data, err := json.Marshal(hostConfig)
if err != nil {
return
}
return ioutil.WriteFile(container.hostConfigPath(), data, 0666)
}
func (container *Container) generateLXCConfig() error {
fo, err := os.Create(container.lxcConfigPath())
if err != nil {
@@ -247,9 +317,9 @@ func (container *Container) startPty() error {
// Copy the PTYs to our broadcasters
go func() {
defer container.stdout.CloseWriters()
Debugf("[startPty] Begin of stdout pipe")
utils.Debugf("[startPty] Begin of stdout pipe")
io.Copy(container.stdout, ptyMaster)
Debugf("[startPty] End of stdout pipe")
utils.Debugf("[startPty] End of stdout pipe")
}()
// stdin
@@ -258,9 +328,9 @@ func (container *Container) startPty() error {
container.cmd.SysProcAttr = &syscall.SysProcAttr{Setctty: true, Setsid: true}
go func() {
defer container.stdin.Close()
Debugf("[startPty] Begin of stdin pipe")
utils.Debugf("[startPty] Begin of stdin pipe")
io.Copy(ptyMaster, container.stdin)
Debugf("[startPty] End of stdin pipe")
utils.Debugf("[startPty] End of stdin pipe")
}()
}
if err := container.cmd.Start(); err != nil {
@@ -280,9 +350,9 @@ func (container *Container) start() error {
}
go func() {
defer stdin.Close()
Debugf("Begin of stdin pipe [start]")
utils.Debugf("Begin of stdin pipe [start]")
io.Copy(stdin, container.stdin)
Debugf("End of stdin pipe [start]")
utils.Debugf("End of stdin pipe [start]")
}()
}
return container.cmd.Start()
@@ -299,8 +369,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
errors <- err
} else {
go func() {
Debugf("[start] attach stdin\n")
defer Debugf("[end] attach stdin\n")
utils.Debugf("[start] attach stdin\n")
defer utils.Debugf("[end] attach stdin\n")
// No matter what, when stdin is closed (io.Copy unblock), close stdout and stderr
if cStdout != nil {
defer cStdout.Close()
@@ -312,12 +382,12 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
defer cStdin.Close()
}
if container.Config.Tty {
_, err = CopyEscapable(cStdin, stdin)
_, err = utils.CopyEscapable(cStdin, stdin)
} else {
_, err = io.Copy(cStdin, stdin)
}
if err != nil {
Debugf("[error] attach stdin: %s\n", err)
utils.Debugf("[error] attach stdin: %s\n", err)
}
// Discard error, expecting pipe error
errors <- nil
@@ -331,8 +401,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
} else {
cStdout = p
go func() {
Debugf("[start] attach stdout\n")
defer Debugf("[end] attach stdout\n")
utils.Debugf("[start] attach stdout\n")
defer utils.Debugf("[end] attach stdout\n")
// If we are in StdinOnce mode, then close stdin
if container.Config.StdinOnce {
if stdin != nil {
@@ -344,11 +414,23 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
}
_, err := io.Copy(stdout, cStdout)
if err != nil {
Debugf("[error] attach stdout: %s\n", err)
utils.Debugf("[error] attach stdout: %s\n", err)
}
errors <- err
}()
}
} else {
go func() {
if stdinCloser != nil {
defer stdinCloser.Close()
}
if cStdout, err := container.StdoutPipe(); err != nil {
utils.Debugf("Error stdout pipe")
} else {
io.Copy(&utils.NopWriter{}, cStdout)
}
}()
}
if stderr != nil {
nJobs += 1
@@ -357,8 +439,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
} else {
cStderr = p
go func() {
Debugf("[start] attach stderr\n")
defer Debugf("[end] attach stderr\n")
utils.Debugf("[start] attach stderr\n")
defer utils.Debugf("[end] attach stderr\n")
// If we are in StdinOnce mode, then close stdin
if container.Config.StdinOnce {
if stdin != nil {
@@ -370,13 +452,26 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
}
_, err := io.Copy(stderr, cStderr)
if err != nil {
Debugf("[error] attach stderr: %s\n", err)
utils.Debugf("[error] attach stderr: %s\n", err)
}
errors <- err
}()
}
} else {
go func() {
if stdinCloser != nil {
defer stdinCloser.Close()
}
if cStderr, err := container.StderrPipe(); err != nil {
utils.Debugf("Error stdout pipe")
} else {
io.Copy(&utils.NopWriter{}, cStderr)
}
}()
}
return Go(func() error {
return utils.Go(func() error {
if cStdout != nil {
defer cStdout.Close()
}
@@ -386,24 +481,28 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
// FIXME: how do we clean up the stdin goroutine without the unwanted side effect
// of closing the passed stdin? Add an intermediary io.Pipe?
for i := 0; i < nJobs; i += 1 {
Debugf("Waiting for job %d/%d\n", i+1, nJobs)
utils.Debugf("Waiting for job %d/%d\n", i+1, nJobs)
if err := <-errors; err != nil {
Debugf("Job %d returned error %s. Aborting all jobs\n", i+1, err)
utils.Debugf("Job %d returned error %s. Aborting all jobs\n", i+1, err)
return err
}
Debugf("Job %d completed successfully\n", i+1)
utils.Debugf("Job %d completed successfully\n", i+1)
}
Debugf("All jobs completed successfully\n")
utils.Debugf("All jobs completed successfully\n")
return nil
})
}
func (container *Container) Start() error {
container.State.lock()
defer container.State.unlock()
func (container *Container) Start(hostConfig *HostConfig) error {
container.State.Lock()
defer container.State.Unlock()
if len(hostConfig.Binds) == 0 {
hostConfig, _ = container.ReadHostConfig()
}
if container.State.Running {
return fmt.Errorf("The container %s is already running.", container.Id)
return fmt.Errorf("The container %s is already running.", container.ID)
}
if err := container.EnsureMounted(); err != nil {
return err
@@ -422,32 +521,89 @@ func (container *Container) Start() error {
container.Config.MemorySwap = -1
}
container.Volumes = make(map[string]string)
container.VolumesRW = make(map[string]bool)
// Create the requested bind mounts
binds := make(map[string]BindMap)
// Define illegal container destinations
illegalDsts := []string{"/", "."}
for _, bind := range hostConfig.Binds {
// FIXME: factorize bind parsing in parseBind
var src, dst, mode string
arr := strings.Split(bind, ":")
if len(arr) == 2 {
src = arr[0]
dst = arr[1]
mode = "rw"
} else if len(arr) == 3 {
src = arr[0]
dst = arr[1]
mode = arr[2]
} else {
return fmt.Errorf("Invalid bind specification: %s", bind)
}
// Bail if trying to mount to an illegal destination
for _, illegal := range illegalDsts {
if dst == illegal {
return fmt.Errorf("Illegal bind destination: %s", dst)
}
}
bindMap := BindMap{
SrcPath: src,
DstPath: dst,
Mode: mode,
}
binds[path.Clean(dst)] = bindMap
}
// FIXME: evaluate volumes-from before individual volumes, so that the latter can override the former.
// Create the requested volumes
for volPath := range container.Config.Volumes {
if c, err := container.runtime.volumes.Create(nil, container, "", "", nil); err != nil {
return err
} else {
if err := os.MkdirAll(path.Join(container.RootfsPath(), volPath), 0755); err != nil {
return nil
volPath = path.Clean(volPath)
// If an external bind is defined for this volume, use that as a source
if bindMap, exists := binds[volPath]; exists {
container.Volumes[volPath] = bindMap.SrcPath
if strings.ToLower(bindMap.Mode) == "rw" {
container.VolumesRW[volPath] = true
}
container.Volumes[volPath] = c.Id
// Otherwise create a directory in $ROOT/volumes/ and use that
} else {
c, err := container.runtime.volumes.Create(nil, container, "", "", nil)
if err != nil {
return err
}
srcPath, err := c.layer()
if err != nil {
return err
}
container.Volumes[volPath] = srcPath
container.VolumesRW[volPath] = true // RW by default
}
// Create the mountpoint
if err := os.MkdirAll(path.Join(container.RootfsPath(), volPath), 0755); err != nil {
return err
}
}
if container.Config.VolumesFrom != "" {
c := container.runtime.Get(container.Config.VolumesFrom)
if c == nil {
return fmt.Errorf("Container %s not found. Impossible to mount its volumes", container.Id)
return fmt.Errorf("Container %s not found. Impossible to mount its volumes", container.ID)
}
for volPath, id := range c.Volumes {
if _, exists := container.Volumes[volPath]; exists {
return fmt.Errorf("The requested volume %s overlap one of the volume of the container %s", volPath, c.Id)
return fmt.Errorf("The requested volume %s overlap one of the volume of the container %s", volPath, c.ID)
}
if err := os.MkdirAll(path.Join(container.RootfsPath(), volPath), 0755); err != nil {
return err
}
container.Volumes[volPath] = id
if isRW, exists := c.VolumesRW[volPath]; exists {
container.VolumesRW[volPath] = isRW
}
}
}
@@ -456,7 +612,7 @@ func (container *Container) Start() error {
}
params := []string{
"-n", container.Id,
"-n", container.ID,
"-f", container.lxcConfigPath(),
"--",
"/sbin/init",
@@ -515,12 +671,14 @@ func (container *Container) Start() error {
container.waitLock = make(chan struct{})
container.ToDisk()
container.SaveHostConfig(hostConfig)
go container.monitor()
return nil
}
func (container *Container) Run() error {
if err := container.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
return err
}
container.Wait()
@@ -533,7 +691,8 @@ func (container *Container) Output() (output []byte, err error) {
return nil, err
}
defer pipe.Close()
if err := container.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
return nil, err
}
output, err = ioutil.ReadAll(pipe)
@@ -551,13 +710,13 @@ func (container *Container) StdinPipe() (io.WriteCloser, error) {
func (container *Container) StdoutPipe() (io.ReadCloser, error) {
reader, writer := io.Pipe()
container.stdout.AddWriter(writer)
return newBufReader(reader), nil
return utils.NewBufReader(reader), nil
}
func (container *Container) StderrPipe() (io.ReadCloser, error) {
reader, writer := io.Pipe()
container.stderr.AddWriter(writer)
return newBufReader(reader), nil
return utils.NewBufReader(reader), nil
}
func (container *Container) allocateNetwork() error {
@@ -565,19 +724,23 @@ func (container *Container) allocateNetwork() error {
if err != nil {
return err
}
container.NetworkSettings.PortMapping = make(map[string]string)
container.NetworkSettings.PortMapping = make(map[string]PortMapping)
container.NetworkSettings.PortMapping["Tcp"] = make(PortMapping)
container.NetworkSettings.PortMapping["Udp"] = make(PortMapping)
for _, spec := range container.Config.PortSpecs {
if nat, err := iface.AllocatePort(spec); err != nil {
nat, err := iface.AllocatePort(spec)
if err != nil {
iface.Release()
return err
} else {
container.NetworkSettings.PortMapping[strconv.Itoa(nat.Backend)] = strconv.Itoa(nat.Frontend)
}
proto := strings.Title(nat.Proto)
backend, frontend := strconv.Itoa(nat.Backend), strconv.Itoa(nat.Frontend)
container.NetworkSettings.PortMapping[proto][backend] = frontend
}
container.network = iface
container.NetworkSettings.Bridge = container.runtime.networkManager.bridgeIface
container.NetworkSettings.IpAddress = iface.IPNet.IP.String()
container.NetworkSettings.IpPrefixLen, _ = iface.IPNet.Mask.Size()
container.NetworkSettings.IPAddress = iface.IPNet.IP.String()
container.NetworkSettings.IPPrefixLen, _ = iface.IPNet.Mask.Size()
container.NetworkSettings.Gateway = iface.Gateway.String()
return nil
}
@@ -591,36 +754,35 @@ func (container *Container) releaseNetwork() {
// FIXME: replace this with a control socket within docker-init
func (container *Container) waitLxc() error {
for {
if output, err := exec.Command("lxc-info", "-n", container.Id).CombinedOutput(); err != nil {
output, err := exec.Command("lxc-info", "-n", container.ID).CombinedOutput()
if err != nil {
return err
} else {
if !strings.Contains(string(output), "RUNNING") {
return nil
}
}
if !strings.Contains(string(output), "RUNNING") {
return nil
}
time.Sleep(500 * time.Millisecond)
}
return nil
}
func (container *Container) monitor() {
// Wait for the program to exit
Debugf("Waiting for process")
utils.Debugf("Waiting for process")
// If the command does not exists, try to wait via lxc
if container.cmd == nil {
if err := container.waitLxc(); err != nil {
Debugf("%s: Process: %s", container.Id, err)
utils.Debugf("%s: Process: %s", container.ID, err)
}
} else {
if err := container.cmd.Wait(); err != nil {
// Discard the error as any signals or non 0 returns will generate an error
Debugf("%s: Process: %s", container.Id, err)
utils.Debugf("%s: Process: %s", container.ID, err)
}
}
Debugf("Process finished")
utils.Debugf("Process finished")
var exitCode int = -1
exitCode := -1
if container.cmd != nil {
exitCode = container.cmd.ProcessState.Sys().(syscall.WaitStatus).ExitStatus()
}
@@ -629,24 +791,24 @@ func (container *Container) monitor() {
container.releaseNetwork()
if container.Config.OpenStdin {
if err := container.stdin.Close(); err != nil {
Debugf("%s: Error close stdin: %s", container.Id, err)
utils.Debugf("%s: Error close stdin: %s", container.ID, err)
}
}
if err := container.stdout.CloseWriters(); err != nil {
Debugf("%s: Error close stdout: %s", container.Id, err)
utils.Debugf("%s: Error close stdout: %s", container.ID, err)
}
if err := container.stderr.CloseWriters(); err != nil {
Debugf("%s: Error close stderr: %s", container.Id, err)
utils.Debugf("%s: Error close stderr: %s", container.ID, err)
}
if container.ptyMaster != nil {
if err := container.ptyMaster.Close(); err != nil {
Debugf("%s: Error closing Pty master: %s", container.Id, err)
utils.Debugf("%s: Error closing Pty master: %s", container.ID, err)
}
}
if err := container.Unmount(); err != nil {
log.Printf("%v: Failed to umount filesystem: %v", container.Id, err)
log.Printf("%v: Failed to umount filesystem: %v", container.ID, err)
}
// Re-create a brand new stdin pipe once the container exited
@@ -667,7 +829,7 @@ func (container *Container) monitor() {
// This is because State.setStopped() has already been called, and has caused Wait()
// to return.
// FIXME: why are we serializing running state to disk in the first place?
//log.Printf("%s: Failed to dump configuration to the disk: %s", container.Id, err)
//log.Printf("%s: Failed to dump configuration to the disk: %s", container.ID, err)
}
}
@@ -677,17 +839,17 @@ func (container *Container) kill() error {
}
// Sending SIGKILL to the process via lxc
output, err := exec.Command("lxc-kill", "-n", container.Id, "9").CombinedOutput()
output, err := exec.Command("lxc-kill", "-n", container.ID, "9").CombinedOutput()
if err != nil {
log.Printf("error killing container %s (%s, %s)", container.Id, output, err)
log.Printf("error killing container %s (%s, %s)", container.ID, output, err)
}
// 2. Wait for the process to die; as a last resort, try to kill the process directly
if err := container.WaitTimeout(10 * time.Second); err != nil {
if container.cmd == nil {
return fmt.Errorf("lxc-kill failed, impossible to kill the container %s", container.Id)
return fmt.Errorf("lxc-kill failed, impossible to kill the container %s", container.ID)
}
log.Printf("Container %s failed to exit within 10 seconds of lxc SIGKILL - trying direct SIGKILL", container.Id)
log.Printf("Container %s failed to exit within 10 seconds of lxc SIGKILL - trying direct SIGKILL", container.ID)
if err := container.cmd.Process.Kill(); err != nil {
return err
}
@@ -699,8 +861,8 @@ func (container *Container) kill() error {
}
func (container *Container) Kill() error {
container.State.lock()
defer container.State.unlock()
container.State.Lock()
defer container.State.Unlock()
if !container.State.Running {
return nil
}
@@ -708,14 +870,14 @@ func (container *Container) Kill() error {
}
func (container *Container) Stop(seconds int) error {
container.State.lock()
defer container.State.unlock()
container.State.Lock()
defer container.State.Unlock()
if !container.State.Running {
return nil
}
// 1. Send a SIGTERM
if output, err := exec.Command("lxc-kill", "-n", container.Id, "15").CombinedOutput(); err != nil {
if output, err := exec.Command("lxc-kill", "-n", container.ID, "15").CombinedOutput(); err != nil {
log.Print(string(output))
log.Print("Failed to send SIGTERM to the process, force killing")
if err := container.kill(); err != nil {
@@ -725,7 +887,7 @@ func (container *Container) Stop(seconds int) error {
// 2. Wait for the process to exit on its own
if err := container.WaitTimeout(time.Duration(seconds) * time.Second); err != nil {
log.Printf("Container %v failed to exit within %d seconds of SIGTERM - using the force", container.Id, seconds)
log.Printf("Container %v failed to exit within %d seconds of SIGTERM - using the force", container.ID, seconds)
if err := container.kill(); err != nil {
return err
}
@@ -737,7 +899,8 @@ func (container *Container) Restart(seconds int) error {
if err := container.Stop(seconds); err != nil {
return err
}
if err := container.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
return err
}
return nil
@@ -749,6 +912,14 @@ func (container *Container) Wait() int {
return container.State.ExitCode
}
func (container *Container) Resize(h, w int) error {
pty, ok := container.ptyMaster.(*os.File)
if !ok {
return fmt.Errorf("ptyMaster does not have Fd() method")
}
return term.SetWinsize(pty.Fd(), &term.Winsize{Height: uint16(h), Width: uint16(w)})
}
func (container *Container) ExportRw() (Archive, error) {
return Tar(container.rwPath(), Uncompressed)
}
@@ -758,7 +929,7 @@ func (container *Container) RwChecksum() (string, error) {
if err != nil {
return "", err
}
return HashData(rwData)
return utils.HashData(rwData)
}
func (container *Container) Export() (Archive, error) {
@@ -781,7 +952,6 @@ func (container *Container) WaitTimeout(timeout time.Duration) error {
case <-done:
return nil
}
panic("unreachable")
}
func (container *Container) EnsureMounted() error {
@@ -824,22 +994,26 @@ func (container *Container) Unmount() error {
return Unmount(container.RootfsPath())
}
// ShortId returns a shorthand version of the container's id for convenience.
// ShortID returns a shorthand version of the container's id for convenience.
// A collision with other container shorthands is very unlikely, but possible.
// In case of a collision a lookup with Runtime.Get() will fail, and the caller
// will need to use a longer prefix, or the full-length container Id.
func (container *Container) ShortId() string {
return TruncateId(container.Id)
func (container *Container) ShortID() string {
return utils.TruncateID(container.ID)
}
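// Usage sketch (an illustrative assumption, not part of this change): if
// Runtime.Get() finds nothing for a colliding or unknown shorthand, callers
// fall back to a longer prefix or to the full 64-character ID, e.g.:
//
//	if c := runtime.Get(container.ShortID()); c == nil {
//		c = runtime.Get(container.ID) // shorthand was ambiguous or unknown
//	}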
func (container *Container) logPath(name string) string {
return path.Join(container.root, fmt.Sprintf("%s-%s.log", container.Id, name))
return path.Join(container.root, fmt.Sprintf("%s-%s.log", container.ID, name))
}
func (container *Container) ReadLog(name string) (io.Reader, error) {
return os.Open(container.logPath(name))
}
func (container *Container) hostConfigPath() string {
return path.Join(container.root, "hostconfig.json")
}
func (container *Container) jsonPath() string {
return path.Join(container.root, "config.json")
}
@@ -853,29 +1027,36 @@ func (container *Container) RootfsPath() string {
return path.Join(container.root, "rootfs")
}
func (container *Container) GetVolumes() (map[string]string, error) {
ret := make(map[string]string)
for volPath, id := range container.Volumes {
volume, err := container.runtime.volumes.Get(id)
if err != nil {
return nil, err
}
root, err := volume.root()
if err != nil {
return nil, err
}
ret[volPath] = path.Join(root, "layer")
}
return ret, nil
}
func (container *Container) rwPath() string {
return path.Join(container.root, "rw")
}
func validateId(id string) error {
func validateID(id string) error {
if id == "" {
return fmt.Errorf("Invalid empty id")
}
return nil
}
// GetSize returns the real size and the virtual size of the container.
func (container *Container) GetSize() (int64, int64) {
var sizeRw, sizeRootfs int64
filepath.Walk(container.rwPath(), func(path string, fileInfo os.FileInfo, err error) error {
if fileInfo != nil {
sizeRw += fileInfo.Size()
}
return nil
})
_, err := os.Stat(container.RootfsPath())
if err == nil {
filepath.Walk(container.RootfsPath(), func(path string, fileInfo os.FileInfo, err error) error {
if fileInfo != nil {
sizeRootfs += fileInfo.Size()
}
return nil
})
}
return sizeRw, sizeRootfs
}


@@ -7,6 +7,7 @@ import (
"io/ioutil"
"math/rand"
"os"
"path"
"regexp"
"sort"
"strings"
@@ -14,39 +15,33 @@ import (
"time"
)
func TestIdFormat(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
func TestIDFormat(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
container1, err := NewBuilder(runtime).Create(
&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/sh", "-c", "echo hello world"},
},
)
if err != nil {
t.Fatal(err)
}
match, err := regexp.Match("^[0-9a-f]{64}$", []byte(container1.Id))
match, err := regexp.Match("^[0-9a-f]{64}$", []byte(container1.ID))
if err != nil {
t.Fatal(err)
}
if !match {
t.Fatalf("Invalid container ID: %s", container1.Id)
t.Fatalf("Invalid container ID: %s", container1.ID)
}
}
func TestMultipleAttachRestart(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/sh", "-c",
"i=1; while [ $i -le 5 ]; do i=`expr $i + 1`; echo hello; done"},
},
@@ -70,7 +65,8 @@ func TestMultipleAttachRestart(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if err := container.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
l1, err := bufio.NewReader(stdout1).ReadString('\n')
@@ -111,7 +107,7 @@ func TestMultipleAttachRestart(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if err := container.Start(); err != nil {
if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
@@ -142,10 +138,7 @@ func TestMultipleAttachRestart(t *testing.T) {
}
func TestDiff(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
builder := NewBuilder(runtime)
@@ -153,7 +146,7 @@ func TestDiff(t *testing.T) {
// Create a container and remove a file
container1, err := builder.Create(
&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/rm", "/etc/passwd"},
},
)
@@ -194,7 +187,7 @@ func TestDiff(t *testing.T) {
// Create a new container from the commited image
container2, err := builder.Create(
&Config{
Image: img.Id,
Image: img.ID,
Cmd: []string{"cat", "/etc/passwd"},
},
)
@@ -217,19 +210,47 @@ func TestDiff(t *testing.T) {
t.Fatalf("/etc/passwd should not be present in the diff after commit.")
}
}
}
func TestCommitAutoRun(t *testing.T) {
runtime, err := newTestRuntime()
// Create a new container
container3, err := builder.Create(
&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"rm", "/bin/httpd"},
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container3)
if err := container3.Run(); err != nil {
t.Fatal(err)
}
// Check the changelog
c, err = container3.Changes()
if err != nil {
t.Fatal(err)
}
success = false
for _, elem := range c {
if elem.Path == "/bin/httpd" && elem.Kind == 2 {
success = true
}
}
if !success {
t.Fatalf("/bin/httpd should be present in the diff after commit.")
}
}
func TestCommitAutoRun(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
builder := NewBuilder(runtime)
container1, err := builder.Create(
&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/sh", "-c", "echo hello > /world"},
},
)
@@ -260,7 +281,7 @@ func TestCommitAutoRun(t *testing.T) {
// FIXME: Make a TestCommit that stops here and check docker.root/layers/img.id/world
container2, err := builder.Create(
&Config{
Image: img.Id,
Image: img.ID,
},
)
if err != nil {
@@ -275,7 +296,8 @@ func TestCommitAutoRun(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if err := container2.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container2.Start(hostConfig); err != nil {
t.Fatal(err)
}
container2.Wait()
@@ -299,17 +321,14 @@ func TestCommitAutoRun(t *testing.T) {
}
func TestCommitRun(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
builder := NewBuilder(runtime)
container1, err := builder.Create(
&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/sh", "-c", "echo hello > /world"},
},
)
@@ -341,7 +360,7 @@ func TestCommitRun(t *testing.T) {
container2, err := builder.Create(
&Config{
Image: img.Id,
Image: img.ID,
Cmd: []string{"cat", "/world"},
},
)
@@ -357,7 +376,8 @@ func TestCommitRun(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if err := container2.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container2.Start(hostConfig); err != nil {
t.Fatal(err)
}
container2.Wait()
@@ -381,15 +401,13 @@ func TestCommitRun(t *testing.T) {
}
func TestStart(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Memory: 33554432,
CpuShares: 1000,
Cmd: []string{"/bin/cat"},
OpenStdin: true,
},
@@ -399,7 +417,13 @@ func TestStart(t *testing.T) {
}
defer runtime.Destroy(container)
if err := container.Start(); err != nil {
cStdin, err := container.StdinPipe()
if err != nil {
t.Fatal(err)
}
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
@@ -409,25 +433,21 @@ func TestStart(t *testing.T) {
if !container.State.Running {
t.Errorf("Container should be running")
}
if err := container.Start(); err == nil {
if err := container.Start(hostConfig); err == nil {
t.Fatalf("A running container should not be able to be started")
}
// Try to avoid the timeout in destroy. Best effort, don't check error
cStdin, _ := container.StdinPipe()
cStdin.Close()
container.WaitTimeout(2 * time.Second)
}
func TestRun(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"ls", "-al"},
},
)
@@ -448,14 +468,11 @@ func TestRun(t *testing.T) {
}
func TestOutput(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"echo", "-n", "foobar"},
},
)
@@ -473,13 +490,10 @@ func TestOutput(t *testing.T) {
}
func TestKillDifferentUser(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"tail", "-f", "/etc/resolv.conf"},
User: "daemon",
},
@@ -492,16 +506,19 @@ func TestKillDifferentUser(t *testing.T) {
if container.State.Running {
t.Errorf("Container shouldn't be running")
}
if err := container.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
// Give some time to lxc to spawn the process (setuid might take some time)
container.WaitTimeout(500 * time.Millisecond)
setTimeout(t, "Waiting for the container to be started timed out", 2*time.Second, func() {
for !container.State.Running {
time.Sleep(10 * time.Millisecond)
}
})
if !container.State.Running {
t.Errorf("Container should be running")
}
// Even if the state is running, let's give lxc some time to spawn the process
container.WaitTimeout(500 * time.Millisecond)
if err := container.Kill(); err != nil {
t.Fatal(err)
@@ -520,15 +537,33 @@ func TestKillDifferentUser(t *testing.T) {
}
}
func TestKill(t *testing.T) {
runtime, err := newTestRuntime()
// Test that creating a container with a volume doesn't crash. Regression test for #995.
func TestCreateVolume(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
config, hc, _, err := ParseRun([]string{"-v", "/var/lib/data", GetTestImage(runtime).ID, "echo", "hello", "world"}, nil)
if err != nil {
t.Fatal(err)
}
c, err := NewBuilder(runtime).Create(config)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(c)
if err := c.Start(hc); err != nil {
t.Fatal(err)
}
c.WaitTimeout(500 * time.Millisecond)
c.Wait()
}
func TestKill(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"cat", "/dev/zero"},
Image: GetTestImage(runtime).ID,
Cmd: []string{"sleep", "2"},
},
)
if err != nil {
@@ -539,7 +574,8 @@ func TestKill(t *testing.T) {
if container.State.Running {
t.Errorf("Container shouldn't be running")
}
if err := container.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
@@ -566,16 +602,13 @@ func TestKill(t *testing.T) {
}
func TestExitCode(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
builder := NewBuilder(runtime)
trueContainer, err := builder.Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/true", ""},
})
if err != nil {
@@ -590,7 +623,7 @@ func TestExitCode(t *testing.T) {
}
falseContainer, err := builder.Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/false", ""},
})
if err != nil {
@@ -606,13 +639,10 @@ func TestExitCode(t *testing.T) {
}
func TestRestart(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"echo", "-n", "foobar"},
},
)
@@ -639,13 +669,10 @@ func TestRestart(t *testing.T) {
}
func TestRestartStdin(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"cat"},
OpenStdin: true,
@@ -664,7 +691,8 @@ func TestRestartStdin(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if err := container.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
if _, err := io.WriteString(stdin, "hello world"); err != nil {
@@ -694,7 +722,7 @@ func TestRestartStdin(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if err := container.Start(); err != nil {
if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
if _, err := io.WriteString(stdin, "hello world #2"); err != nil {
@@ -717,17 +745,14 @@ func TestRestartStdin(t *testing.T) {
}
func TestUser(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
builder := NewBuilder(runtime)
// Default user must be root
container, err := builder.Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"id"},
},
)
@@ -745,7 +770,7 @@ func TestUser(t *testing.T) {
// Set a username
container, err = builder.Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"id"},
User: "root",
@@ -765,7 +790,7 @@ func TestUser(t *testing.T) {
// Set a UID
container, err = builder.Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"id"},
User: "0",
@@ -785,7 +810,7 @@ func TestUser(t *testing.T) {
// Set a different user by uid
container, err = builder.Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"id"},
User: "1",
@@ -807,7 +832,7 @@ func TestUser(t *testing.T) {
// Set a different user by username
container, err = builder.Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"id"},
User: "daemon",
@@ -824,20 +849,34 @@ func TestUser(t *testing.T) {
if !strings.Contains(string(output), "uid=1(daemon) gid=1(daemon)") {
t.Error(string(output))
}
}
func TestMultipleContainers(t *testing.T) {
runtime, err := newTestRuntime()
// Test a wrong username
container, err = builder.Create(&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"id"},
User: "unkownuser",
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container)
output, err = container.Output()
if container.State.ExitCode == 0 {
t.Fatal("Starting container with wrong uid should fail but it passed.")
}
}
func TestMultipleContainers(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
builder := NewBuilder(runtime)
container1, err := builder.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"cat", "/dev/zero"},
Image: GetTestImage(runtime).ID,
Cmd: []string{"sleep", "2"},
},
)
if err != nil {
@@ -846,8 +885,8 @@ func TestMultipleContainers(t *testing.T) {
defer runtime.Destroy(container1)
container2, err := builder.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"cat", "/dev/zero"},
Image: GetTestImage(runtime).ID,
Cmd: []string{"sleep", "2"},
},
)
if err != nil {
@@ -856,10 +895,11 @@ func TestMultipleContainers(t *testing.T) {
defer runtime.Destroy(container2)
// Start both containers
if err := container1.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container1.Start(hostConfig); err != nil {
t.Fatal(err)
}
if err := container2.Start(); err != nil {
if err := container2.Start(hostConfig); err != nil {
t.Fatal(err)
}
@@ -886,13 +926,10 @@ func TestMultipleContainers(t *testing.T) {
}
func TestStdin(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"cat"},
OpenStdin: true,
@@ -911,7 +948,8 @@ func TestStdin(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if err := container.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
defer stdin.Close()
@@ -933,13 +971,10 @@ func TestStdin(t *testing.T) {
}
func TestTty(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"cat"},
OpenStdin: true,
@@ -958,7 +993,8 @@ func TestTty(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if err := container.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
defer stdin.Close()
@@ -980,14 +1016,11 @@ func TestTty(t *testing.T) {
}
func TestEnv(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"/usr/bin/env"},
Image: GetTestImage(runtime).ID,
Cmd: []string{"env"},
},
)
if err != nil {
@@ -1000,7 +1033,8 @@ func TestEnv(t *testing.T) {
t.Fatal(err)
}
defer stdout.Close()
if err := container.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
container.Wait()
@@ -1028,6 +1062,29 @@ func TestEnv(t *testing.T) {
}
}
func TestEntrypoint(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
&Config{
Image: GetTestImage(runtime).ID,
Entrypoint: []string{"/bin/echo"},
Cmd: []string{"-n", "foobar"},
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container)
output, err := container.Output()
if err != nil {
t.Fatal(err)
}
if string(output) != "foobar" {
t.Error(string(output))
}
}
func grepFile(t *testing.T, path string, pattern string) {
f, err := os.Open(path)
if err != nil {
@@ -1049,22 +1106,24 @@ func grepFile(t *testing.T, path string, pattern string) {
}
func TestLXCConfig(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
runtime := mkRuntime(t)
defer nuke(runtime)
// Memory is allocated randomly for testing
rand.Seed(time.Now().UTC().UnixNano())
memMin := 33554432
memMax := 536870912
mem := memMin + rand.Intn(memMax-memMin)
// CPU shares as well
cpuMin := 100
cpuMax := 10000
cpu := cpuMin + rand.Intn(cpuMax-cpuMin)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/true"},
Hostname: "foobar",
Memory: int64(mem),
Hostname: "foobar",
Memory: int64(mem),
CpuShares: int64(cpu),
},
)
if err != nil {
@@ -1080,14 +1139,11 @@ func TestLXCConfig(t *testing.T) {
}
func BenchmarkRunSequencial(b *testing.B) {
runtime, err := newTestRuntime()
if err != nil {
b.Fatal(err)
}
runtime := mkRuntime(b)
defer nuke(runtime)
for i := 0; i < b.N; i++ {
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"echo", "-n", "foo"},
},
)
@@ -1109,10 +1165,7 @@ func BenchmarkRunSequencial(b *testing.B) {
}
func BenchmarkRunParallel(b *testing.B) {
runtime, err := newTestRuntime()
if err != nil {
b.Fatal(err)
}
runtime := mkRuntime(b)
defer nuke(runtime)
var tasks []chan error
@@ -1122,7 +1175,7 @@ func BenchmarkRunParallel(b *testing.B) {
tasks = append(tasks, complete)
go func(i int, complete chan error) {
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).Id,
Image: GetTestImage(runtime).ID,
Cmd: []string{"echo", "-n", "foo"},
},
)
@@ -1131,7 +1184,8 @@ func BenchmarkRunParallel(b *testing.B) {
return
}
defer runtime.Destroy(container)
if err := container.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
complete <- err
return
}
@@ -1160,3 +1214,88 @@ func BenchmarkRunParallel(b *testing.B) {
b.Fatal(errors)
}
}
func tempDir(t *testing.T) string {
tmpDir, err := ioutil.TempDir("", "docker-test")
if err != nil {
t.Fatal(err)
}
return tmpDir
}
func TestBindMounts(t *testing.T) {
r := mkRuntime(t)
defer nuke(r)
tmpDir := tempDir(t)
defer os.RemoveAll(tmpDir)
writeFile(path.Join(tmpDir, "touch-me"), "", t)
// Test reading from a read-only bind mount
stdout, _ := runContainer(r, []string{"-v", fmt.Sprintf("%s:/tmp:ro", tmpDir), "_", "ls", "/tmp"}, t)
if !strings.Contains(stdout, "touch-me") {
t.Fatal("Container failed to read from bind mount")
}
// test writing to bind mount
runContainer(r, []string{"-v", fmt.Sprintf("%s:/tmp:rw", tmpDir), "_", "touch", "/tmp/holla"}, t)
readFile(path.Join(tmpDir, "holla"), t) // Will fail if the file doesn't exist
// test mounting to an illegal destination directory
if _, err := runContainer(r, []string{"-v", fmt.Sprintf("%s:.", tmpDir), "ls", "."}, nil); err == nil {
t.Fatal("Container bind mounted illegal directory")
}
}
// Test that VolumesRW values are copied to the new container. Regression test for #1201
func TestVolumesFromReadonlyMount(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/echo", "-n", "foobar"},
Volumes: map[string]struct{}{"/test": {}},
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container)
_, err = container.Output()
if err != nil {
t.Fatal(err)
}
if !container.VolumesRW["/test"] {
t.Fail()
}
container2, err := NewBuilder(runtime).Create(
&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/echo", "-n", "foobar"},
VolumesFrom: container.ID,
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container2)
_, err = container2.Output()
if err != nil {
t.Fatal(err)
}
if container.Volumes["/test"] != container2.Volumes["/test"] {
t.Fail()
}
actual, exists := container2.VolumesRW["/test"]
if !exists {
t.Fail()
}
if container.VolumesRW["/test"] != actual {
t.Fail()
}
}

contrib/MAINTAINERS Normal file

@@ -0,0 +1 @@
# Maintainer wanted! Enroll on #docker@freenode


@@ -11,13 +11,13 @@ import (
"time"
)
var DOCKER_PATH string = path.Join(os.Getenv("DOCKERPATH"), "docker")
var DOCKERPATH = path.Join(os.Getenv("DOCKERPATH"), "docker")
// WARNING: this crashTest will 1) crash your host, 2) remove all containers
func runDaemon() (*exec.Cmd, error) {
os.Remove("/var/run/docker.pid")
exec.Command("rm", "-rf", "/var/lib/docker/containers").Run()
cmd := exec.Command(DOCKER_PATH, "-d")
cmd := exec.Command(DOCKERPATH, "-d")
outPipe, err := cmd.StdoutPipe()
if err != nil {
return nil, err
@@ -77,7 +77,7 @@ func crashTest() error {
stop = false
for i := 0; i < 100 && !stop; {
func() error {
cmd := exec.Command(DOCKER_PATH, "run", "base", "echo", fmt.Sprintf("%d", totalTestCount))
cmd := exec.Command(DOCKERPATH, "run", "base", "echo", fmt.Sprintf("%d", totalTestCount))
i++
totalTestCount++
outPipe, err := cmd.StdoutPipe()
@@ -116,7 +116,6 @@ func crashTest() error {
return err
}
}
return nil
}
func main() {


@@ -1,68 +0,0 @@
# docker-build: build your software with docker
## Description
docker-build is a script to build docker images from source. It will be deprecated once the 'build' feature is incorporated into docker itself (See https://github.com/dotcloud/docker/issues/278)
Author: Solomon Hykes <solomon@dotcloud.com>
## Install
docker-build requires:
1) A reasonably recent Python setup (tested on 2.7.2).
2) A running docker daemon at version 0.1.4 or more recent (http://www.docker.io/gettingstarted)
## Usage
First create a valid Changefile, which defines a sequence of changes to apply to a base image.
$ cat Changefile
# Start build from a known base image
from base:ubuntu-12.10
# Update ubuntu sources
run echo 'deb http://archive.ubuntu.com/ubuntu quantal main universe multiverse' > /etc/apt/sources.list
run apt-get update
# Install system packages
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q git
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q golang
# Insert files from the host (./myscript must be present in the current directory)
copy myscript /usr/local/bin/myscript
Run docker-build, and pass the contents of your Changefile as standard input.
$ IMG=$(./docker-build < Changefile)
This will take a while: for each line of the changefile, docker-build will:
1. Create a new container to execute the given command or insert the given file
2. Wait for the container to complete execution
3. Commit the resulting changes as a new image
4. Use the resulting image as the input of the next step
If all the steps succeed, the result will be an image containing the combined results of each build step.
You can trace back those build steps by inspecting the image's history:
$ docker history $IMG
ID CREATED CREATED BY
1e9e2045de86 A few seconds ago /bin/sh -c cat > /usr/local/bin/myscript; chmod +x /usr/local/bin/git
77db140aa62a A few seconds ago /bin/sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -q golang
77db140aa62a A few seconds ago /bin/sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl
77db140aa62a A few seconds ago /bin/sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -q git
83e85d155451 A few seconds ago /bin/sh -c apt-get update
bfd53b36d9d3 A few seconds ago /bin/sh -c echo 'deb http://archive.ubuntu.com/ubuntu quantal main universe multiverse' > /etc/apt/sources.list
base 2 weeks ago /bin/bash
27cf78414709 2 weeks ago
Note that your build started from 'base', as instructed by your Changefile. But that base image itself seems to have been built in 2 steps - hence the extra step in the history.
You can use this build technique to create any image you want: a database, a web application, or anything else that can be built by a sequence of unix commands.


@@ -1,142 +0,0 @@
#!/usr/bin/env python
# docker-build is a script to build docker images from source.
# It will be deprecated once the 'build' feature is incorporated into docker itself.
# (See https://github.com/dotcloud/docker/issues/278)
#
# Author: Solomon Hykes <solomon@dotcloud.com>
# First create a valid Changefile, which defines a sequence of changes to apply to a base image.
#
# $ cat Changefile
# # Start build from a known base image
# from base:ubuntu-12.10
# # Update ubuntu sources
# run echo 'deb http://archive.ubuntu.com/ubuntu quantal main universe multiverse' > /etc/apt/sources.list
# run apt-get update
# # Install system packages
# run DEBIAN_FRONTEND=noninteractive apt-get install -y -q git
# run DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl
# run DEBIAN_FRONTEND=noninteractive apt-get install -y -q golang
# # Insert files from the host (./myscript must be present in the current directory)
# copy myscript /usr/local/bin/myscript
#
#
# Run docker-build, and pass the contents of your Changefile as standard input.
#
# $ IMG=$(./docker-build < Changefile)
#
# This will take a while: for each line of the changefile, docker-build will:
#
# 1. Create a new container to execute the given command or insert the given file
# 2. Wait for the container to complete execution
# 3. Commit the resulting changes as a new image
# 4. Use the resulting image as the input of the next step
import sys
import subprocess
import json
import hashlib
def docker(args, stdin=None):
print "# docker " + " ".join(args)
p = subprocess.Popen(["docker"] + list(args), stdin=stdin, stdout=subprocess.PIPE)
return p.stdout
def image_exists(img):
return docker(["inspect", img]).read().strip() != ""
def image_config(img):
return json.loads(docker(["inspect", img]).read()).get("config", {})
def run_and_commit(img_in, cmd, stdin=None, author=None, run=None):
run_id = docker(["run"] + (["-i", "-a", "stdin"] if stdin else ["-d"]) + [img_in, "/bin/sh", "-c", cmd], stdin=stdin).read().rstrip()
print "---> Waiting for " + run_id
result=int(docker(["wait", run_id]).read().rstrip())
if result != 0:
print "!!! '{}' returned non-zero exit code '{}'. Aborting.".format(cmd, result)
sys.exit(1)
return docker(["commit"] + (["-author", author] if author else []) + (["-run", json.dumps(run)] if run is not None else []) + [run_id]).read().rstrip()
def insert(base, src, dst, author=None):
print "COPY {} to {} in {}".format(src, dst, base)
if dst == "":
raise Exception("Missing destination path")
stdin = file(src)
stdin.seek(0)
return run_and_commit(base, "cat > {0}; chmod +x {0}".format(dst), stdin=stdin, author=author)
def add(base, src, dst, author=None):
print "PUSH to {} in {}".format(dst, base)
if src == ".":
tar = subprocess.Popen(["tar", "-c", "."], stdout=subprocess.PIPE).stdout
else:
tar = subprocess.Popen(["curl", src], stdout=subprocess.PIPE).stdout
if dst == "":
raise Exception("Missing argument to push")
return run_and_commit(base, "mkdir -p '{0}' && tar -C '{0}' -x".format(dst), stdin=tar, author=author)
def main():
base=""
maintainer=""
steps = []
try:
for line in sys.stdin.readlines():
line = line.strip()
# Skip comments and empty lines
if line == "" or line[0] == "#":
continue
op, param = line.split(None, 1)
print op.upper() + " " + param
if op == "from":
base = param
steps.append(base)
elif op == "maintainer":
maintainer = param
elif op == "run":
result = run_and_commit(base, param, author=maintainer)
steps.append(result)
base = result
print "===> " + base
elif op == "copy":
src, dst = param.split(" ", 1)
result = insert(base, src, dst, author=maintainer)
steps.append(result)
base = result
print "===> " + base
elif op == "add":
src, dst = param.split(" ", 1)
result = add(base, src, dst, author=maintainer)
steps.append(result)
base=result
print "===> " + base
elif op == "expose":
config = image_config(base)
if config.get("PortSpecs") is None:
config["PortSpecs"] = []
portspec = param.strip()
config["PortSpecs"].append(portspec)
result = run_and_commit(base, "# (nop) expose port {}".format(portspec), author=maintainer, run=config)
steps.append(result)
base=result
print "===> " + base
elif op == "cmd":
config = image_config(base)
cmd = list(json.loads(param))
config["Cmd"] = cmd
result = run_and_commit(base, "# (nop) set default command to '{}'".format(" ".join(cmd)), author=maintainer, run=config)
steps.append(result)
base=result
print "===> " + base
else:
print "Skipping unknown op " + op
except:
docker(["rmi"] + steps[1:])
raise
print base
if __name__ == "__main__":
main()


@@ -1,13 +0,0 @@
# Start build from a known base image
maintainer Solomon Hykes <solomon@dotcloud.com>
from base:ubuntu-12.10
# Update ubuntu sources
run echo 'deb http://archive.ubuntu.com/ubuntu quantal main universe multiverse' > /etc/apt/sources.list
run apt-get update
# Install system packages
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q git
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q golang
# Insert files from the host (./myscript must be present in the current directory)
copy myscript /usr/local/bin/myscript
push /src


@@ -1,3 +0,0 @@
#!/bin/sh
echo hello, world!


@@ -8,7 +8,7 @@
echo "Ensuring basic dependencies are installed..."
apt-get -qq update
apt-get -qq install lxc wget bsdtar
apt-get -qq install lxc wget
echo "Looking in /proc/filesystems to see if we have AUFS support..."
if grep -q aufs /proc/filesystems


@@ -2,18 +2,15 @@
set -e
# these should match the names found at http://www.debian.org/releases/
stableSuite='squeeze'
testingSuite='wheezy'
stableSuite='wheezy'
testingSuite='jessie'
unstableSuite='sid'
# if suite is equal to this, it gets the "latest" tag
latestSuite="$testingSuite"
variant='minbase'
include='iproute,iputils-ping'
repo="$1"
suite="${2:-$latestSuite}"
suite="${2:-$stableSuite}"
mirror="${3:-}" # stick to the default debootstrap mirror if one is not provided
if [ ! "$repo" ]; then
@@ -41,17 +38,14 @@ img=$(sudo tar -c . | docker import -)
# tag suite
docker tag $img $repo $suite
if [ "$suite" = "$latestSuite" ]; then
# tag latest
docker tag $img $repo latest
fi
# test the image
docker run -i -t $repo:$suite echo success
# unstable's version numbers match testing (since it's mostly just a sandbox for testing), so it doesn't get a version number tag
if [ "$suite" != "$unstableSuite" -a "$suite" != 'unstable' ]; then
# tag the specific version
if [ "$suite" = "$stableSuite" -o "$suite" = 'stable' ]; then
# tag latest
docker tag $img $repo latest
# tag the specific debian release version
ver=$(docker run $repo:$suite cat /etc/debian_version)
docker tag $img $repo $ver
fi

contrib/mkimage-unittest.sh Executable file

@@ -0,0 +1,49 @@
#!/bin/bash
# Generate a very minimal filesystem based on busybox-static,
# and load it into the local docker under the name "docker-ut".
missing_pkg() {
echo "Sorry, I could not locate $1"
echo "Try 'apt-get install ${2:-$1}'?"
exit 1
}
BUSYBOX=$(which busybox)
[ "$BUSYBOX" ] || missing_pkg busybox busybox-static
SOCAT=$(which socat)
[ "$SOCAT" ] || missing_pkg socat
shopt -s extglob
set -ex
ROOTFS=`mktemp -d /tmp/rootfs-busybox.XXXXXXXXXX`
trap "rm -rf $ROOTFS" INT QUIT TERM
cd $ROOTFS
mkdir bin etc dev dev/pts lib proc sys tmp
touch etc/resolv.conf
cp /etc/nsswitch.conf etc/nsswitch.conf
echo root:x:0:0:root:/:/bin/sh > etc/passwd
echo daemon:x:1:1:daemon:/usr/sbin:/bin/sh >> etc/passwd
echo root:x:0: > etc/group
echo daemon:x:1: >> etc/group
ln -s lib lib64
ln -s bin sbin
cp $BUSYBOX $SOCAT bin
for X in $(busybox --list)
do
ln -s busybox bin/$X
done
rm bin/init
ln bin/busybox bin/init
cp -P /lib/x86_64-linux-gnu/lib{pthread*,c*(-*),dl*(-*),nsl*(-*),nss_*,util*(-*),wrap,z}.so* lib
cp /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 lib
cp -P /usr/lib/x86_64-linux-gnu/lib{crypto,ssl}.so* lib
for X in console null ptmx random stdin stdout stderr tty urandom zero
do
cp -a /dev/$X dev
done
chmod 0755 $ROOTFS # See #486
tar -cf- . | docker import - docker-ut
docker run -i -u root docker-ut /bin/echo Success.
rm -rf $ROOTFS


@@ -4,23 +4,22 @@ import (
"flag"
"fmt"
"github.com/dotcloud/docker"
"github.com/dotcloud/docker/rcli"
"github.com/dotcloud/docker/term"
"io"
"github.com/dotcloud/docker/utils"
"io/ioutil"
"log"
"os"
"os/signal"
"strconv"
"strings"
"syscall"
)
var (
GIT_COMMIT string
GITCOMMIT string
)
func main() {
if docker.SelfPath() == "/sbin/init" {
if utils.SelfPath() == "/sbin/init" {
// Running in init mode
docker.SysInit()
return
@@ -31,7 +30,19 @@ func main() {
flAutoRestart := flag.Bool("r", false, "Restart previously running containers")
bridgeName := flag.String("b", "", "Attach containers to a pre-existing network bridge")
pidfile := flag.String("p", "/var/run/docker.pid", "File containing process PID")
flGraphPath := flag.String("g", "/var/lib/docker", "Path to graph storage base dir.")
flEnableCors := flag.Bool("api-enable-cors", false, "Enable CORS requests in the remote api.")
flDns := flag.String("dns", "", "Set custom dns servers")
flHosts := docker.ListOpts{fmt.Sprintf("tcp://%s:%d", docker.DEFAULTHTTPHOST, docker.DEFAULTHTTPPORT)}
flag.Var(&flHosts, "H", "tcp://host:port to bind/connect to or unix://path/to/socket to use")
flag.Parse()
if len(flHosts) > 1 {
flHosts = flHosts[1:] // trick to display a nice default value in the usage
}
for i, flHost := range flHosts {
flHosts[i] = utils.ParseHost(docker.DEFAULTHTTPHOST, docker.DEFAULTHTTPPORT, flHost)
}
if *bridgeName != "" {
docker.NetworkBridgeIface = *bridgeName
} else {
@@ -40,18 +51,25 @@ func main() {
if *flDebug {
os.Setenv("DEBUG", "1")
}
docker.GIT_COMMIT = GIT_COMMIT
docker.GITCOMMIT = GITCOMMIT
if *flDaemon {
if flag.NArg() != 0 {
flag.Usage()
return
}
if err := daemon(*pidfile, *flAutoRestart); err != nil {
if err := daemon(*pidfile, *flGraphPath, flHosts, *flAutoRestart, *flEnableCors, *flDns); err != nil {
log.Fatal(err)
os.Exit(-1)
}
} else {
if err := runCommand(flag.Args()); err != nil {
if len(flHosts) > 1 {
log.Fatal("Please specify only one -H")
return
}
protoAddrParts := strings.SplitN(flHosts[0], "://", 2)
if err := docker.ParseCommands(protoAddrParts[0], protoAddrParts[1], flag.Args()...); err != nil {
log.Fatal(err)
os.Exit(-1)
}
}
}
@@ -83,7 +101,7 @@ func removePidFile(pidfile string) {
}
}
func daemon(pidfile string, autoRestart bool) error {
func daemon(pidfile string, flGraphPath string, protoAddrs []string, autoRestart, enableCors bool, flDns string) error {
if err := createPidFile(pidfile); err != nil {
log.Fatal(err)
}
@@ -97,51 +115,36 @@ func daemon(pidfile string, autoRestart bool) error {
removePidFile(pidfile)
os.Exit(0)
}()
service, err := docker.NewServer(autoRestart)
var dns []string
if flDns != "" {
dns = []string{flDns}
}
server, err := docker.NewServer(flGraphPath, autoRestart, enableCors, dns)
if err != nil {
return err
}
return rcli.ListenAndServe("tcp", "127.0.0.1:4242", service)
}
func runCommand(args []string) error {
// FIXME: we want to use unix sockets here, but net.UnixConn doesn't expose
// CloseWrite(), which we need to cleanly signal that stdin is closed without
// closing the connection.
// See http://code.google.com/p/go/issues/detail?id=3345
if conn, err := rcli.Call("tcp", "127.0.0.1:4242", args...); err == nil {
options := conn.GetOptions()
if options.RawTerminal &&
term.IsTerminal(int(os.Stdin.Fd())) &&
os.Getenv("NORAW") == "" {
if oldState, err := rcli.SetRawTerminal(); err != nil {
return err
} else {
defer rcli.RestoreTerminal(oldState)
chErrors := make(chan error, len(protoAddrs))
for _, protoAddr := range protoAddrs {
protoAddrParts := strings.SplitN(protoAddr, "://", 2)
if protoAddrParts[0] == "unix" {
syscall.Unlink(protoAddrParts[1])
} else if protoAddrParts[0] == "tcp" {
if !strings.HasPrefix(protoAddrParts[1], "127.0.0.1") {
log.Println("/!\\ DON'T BIND ON ANOTHER IP ADDRESS THAN 127.0.0.1 IF YOU DON'T KNOW WHAT YOU'RE DOING /!\\")
}
} else {
log.Fatal("Invalid protocol format.")
os.Exit(-1)
}
receiveStdout := docker.Go(func() error {
_, err := io.Copy(os.Stdout, conn)
return err
})
sendStdin := docker.Go(func() error {
_, err := io.Copy(conn, os.Stdin)
if err := conn.CloseWrite(); err != nil {
log.Printf("Couldn't send EOF: " + err.Error())
}
return err
})
if err := <-receiveStdout; err != nil {
go func() {
chErrors <- docker.ListenAndServe(protoAddrParts[0], protoAddrParts[1], server, true)
}()
}
for i := 0; i < len(protoAddrs); i += 1 {
err := <-chErrors
if err != nil {
return err
}
if !term.IsTerminal(int(os.Stdin.Fd())) {
if err := <-sendStdin; err != nil {
return err
}
}
} else {
return fmt.Errorf("Can't connect to docker daemon. Is 'docker -d' running on this host?")
}
return nil
}

docs/MAINTAINERS Normal file

@@ -0,0 +1,2 @@
Andy Rothfusz <andy@dotcloud.com>
Ken Cochrane <ken@dotcloud.com>


@@ -6,6 +6,7 @@ SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
PYTHON = python
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
@@ -38,17 +39,19 @@ help:
# @echo " linkcheck to check all external links for integrity"
# @echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " docs to build the docs and copy the static files to the outputdir"
@echo " server to serve the docs in your browser under \`http://localhost:8000\`"
@echo " publish to publish the app to dotcloud"
clean:
-rm -rf $(BUILDDIR)/*
docs:
-rm -rf $(BUILDDIR)/*
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The documentation pages are now in $(BUILDDIR)/html."
server: docs
@cd $(BUILDDIR)/html; $(PYTHON) -m SimpleHTTPServer 8000
site:
cp -r website $(BUILDDIR)/
@@ -58,19 +61,15 @@ site:
connect:
@echo connecting dotcloud to www.docker.io website, make sure to use user 1
@cd _build/website/ ; \
dotcloud list ; \
dotcloud connect dockerwebsite
@echo or create your own "dockerwebsite" app
@cd $(BUILDDIR)/website/ ; \
dotcloud connect dockerwebsite ; \
dotcloud list
push:
@cd _build/website/ ; \
@cd $(BUILDDIR)/website/ ; \
dotcloud push
github-deploy: docs
rm -fr github-deploy
git clone ssh://git@github.com/dotcloud/docker github-deploy
cd github-deploy && git checkout -f gh-pages && git rm -r * && rsync -avH ../_build/html/ ./ && touch .nojekyll && echo "docker.io" > CNAME && git add * && git commit -m "Updating docs"
$(VERSIONS):
@echo "Hello world"


@@ -14,20 +14,22 @@ Installation
------------
* Work in your own fork of the code, we accept pull requests.
* Install sphinx: ``pip install sphinx``
* Install sphinx httpdomain contrib package ``sphinxcontrib-httpdomain``
* Install sphinx: `pip install sphinx`
* Mac OS X: `[sudo] pip-2.7 install sphinx`
* Install sphinx httpdomain contrib package: `pip install sphinxcontrib-httpdomain`
* Mac OS X: `[sudo] pip-2.7 install sphinxcontrib-httpdomain`
* If pip is not available you can probably install it using your favorite package manager as **python-pip**
Usage
-----
* change the .rst files with your favorite editor to your liking
* run *make docs* to clean up old files and generate new ones
* your static website can now be found in the _build dir
* to preview what you have generated, cd into _build/html and then run 'python -m SimpleHTTPServer 8000'
* Change the `.rst` files with your favorite editor to your liking.
* Run `make docs` to clean up old files and generate new ones.
* Your static website can now be found in the `_build` directory.
* To preview what you have generated run `make server` and open <http://localhost:8000/> in your favorite browser.
Working using github's file editor
Working using GitHub's file editor
----------------------------------
Alternatively, for small changes and typo's you might want to use github's built in file editor. It allows
Alternatively, for small changes and typos you might want to use GitHub's built-in file editor. It allows
you to preview your changes right online. Just be careful not to create many commits.
Images
@@ -72,4 +74,4 @@ Guides on using sphinx
* Code examples
Start without $, so it's easy to copy and paste.
Start without $, so it's easy to copy and paste.


@@ -0,0 +1 @@
Solomon Hykes <solomon@dotcloud.com>


@@ -0,0 +1,5 @@
This directory holds the authoritative specifications of APIs defined and implemented by Docker. Currently this includes:
* The remote API by which a docker node can be queried over HTTP
* The registry API by which a docker node can download and upload container images for storage and sharing
* The index search API by which a docker node can search the public index for images to download


@@ -0,0 +1,140 @@
:title: Remote API
:description: API Documentation for Docker
:keywords: API, Docker, rcli, REST, documentation
=================
Docker Remote API
=================
.. contents:: Table of Contents
1. Brief introduction
=====================
- The Remote API is replacing rcli
- Default port in the docker daemon is 4243
- The API tends to be REST, but for some complex commands, like attach or pull, the HTTP connection is hijacked to transport stdout, stdin and stderr
- Since API version 1.2, the auth configuration is now handled client side, so the client has to send the authConfig as POST in /images/(name)/push
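As a quick orientation, here is a minimal client sketch in Go that lists containers through the Remote API, using the default port noted above and the /containers/json endpoint referenced in the v1.3 notes below. It is only an illustration of the calling convention and assumes a daemon is running locally; it is not code from the Docker tree.

.. sourcecode:: go

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
    )

    // Minimal sketch: query the Remote API on the default port and print the
    // raw JSON list of containers. Error handling is deliberately short.
    func main() {
        resp, err := http.Get("http://127.0.0.1:4243/containers/json?size=1")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, err := ioutil.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body))
    }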
2. Versions
===========
The current version of the API is 1.3
Calling /images/<name>/insert is the same as calling /v1.3/images/<name>/insert
You can still call an old version of the API using /v1.0/images/<name>/insert
:doc:`docker_remote_api_v1.3`
*****************************
What's new
----------
Listing processes (/top):
- List the processes inside a container
Builder (/build):
- Simplify the upload of the build context
- Simply stream a tarball instead of multipart upload with 4 intermediary buffers
- Simpler, less memory usage, less disk usage and faster
.. Note::
The /build improvements are not reverse-compatible. Pre 1.3 clients will break on /build.
List containers (/containers/json):
- You can use size=1 to get the size of the containers
Start containers (/containers/<id>/start):
- You can now pass host-specific configuration (e.g. bind mounts) in the POST body for start calls
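To illustrate the new start-time host configuration, here is a minimal Go sketch that POSTs a JSON body to /containers/<id>/start. The container ID and the ``Binds`` field name are assumptions used for illustration only; this excerpt does not spell out the exact body format.

.. sourcecode:: go

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    // Minimal sketch: start a container and pass host-specific configuration
    // (here, a bind mount) in the POST body, as introduced in API v1.3.
    // "4fa6e0f0c678" and the "Binds" field are illustrative placeholders.
    func main() {
        body := bytes.NewBufferString(`{"Binds": ["/tmp/data:/data:rw"]}`)

        resp, err := http.Post(
            "http://127.0.0.1:4243/v1.3/containers/4fa6e0f0c678/start",
            "application/json", body)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status) // expect a success status if the container started
    }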
:doc:`docker_remote_api_v1.2`
*****************************
docker v0.4.2 2e7649b_
What's new
----------
The auth configuration is now handled by the client.
The client should send its authConfig as POST on each call of /images/(name)/push
.. http:get:: /auth is now deprecated
.. http:post:: /auth only checks the configuration but doesn't store it on the server
Deleting an image is now improved: it will only untag the image if it has children, and it will remove all the untagged parents if it has any.
.. http:post:: /images/<name>/delete now returns a JSON with the list of images deleted/untagged
:doc:`docker_remote_api_v1.1`
*****************************
docker v0.4.0 a8ae398_
What's new
----------
.. http:post:: /images/create
.. http:post:: /images/(name)/insert
.. http:post:: /images/(name)/push
Uses json stream instead of HTML hijack, it looks like this:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
{"status":"Pushing..."}
{"status":"Pushing", "progress":"1/? (n/a)"}
{"error":"Invalid..."}
...
:doc:`docker_remote_api_v1.0`
*****************************
docker v0.3.4 8d73740_
What's new
----------
Initial version
.. _a8ae398: https://github.com/dotcloud/docker/commit/a8ae398bf52e97148ee7bd0d5868de2e15bd297f
.. _8d73740: https://github.com/dotcloud/docker/commit/8d73740343778651c09160cde9661f5f387b36f4
.. _2e7649b: https://github.com/dotcloud/docker/commit/2e7649beda7c820793bd46766cbc2cfeace7b168
==================================
Docker Remote API Client Libraries
==================================
These libraries have not been tested by the Docker Maintainers for
compatibility. Please file issues with the library owners. If you
find more library implementations, please list them in Docker doc bugs
and we will add the libraries here.
+----------------------+----------------+--------------------------------------------+
| Language/Framework | Name | Repository |
+======================+================+============================================+
| Python | docker-py | https://github.com/dotcloud/docker-py |
+----------------------+----------------+--------------------------------------------+
| Ruby | docker-ruby | https://github.com/ActiveState/docker-ruby |
+----------------------+----------------+--------------------------------------------+
| Ruby | docker-client | https://github.com/geku/docker-client |
+----------------------+----------------+--------------------------------------------+
| Ruby | docker-api | https://github.com/swipely/docker-api |
+----------------------+----------------+--------------------------------------------+
| Javascript | docker-js | https://github.com/dgoujard/docker-js |
+----------------------+----------------+--------------------------------------------+
| Javascript (Angular) | dockerui | https://github.com/crosbymichael/dockerui |
| **WebUI** | | |
+----------------------+----------------+--------------------------------------------+
| Java | docker-java | https://github.com/kpelykh/docker-java |
+----------------------+----------------+--------------------------------------------+

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,18 @@
:title: API Documentation
:description: docker documentation
:keywords: docker, api, documentation
APIs
====
Your programs and scripts can access Docker's functionality via these interfaces:
.. toctree::
:maxdepth: 3
registry_index_spec
registry_api
index_api
docker_remote_api


@@ -0,0 +1,553 @@
:title: Index API
:description: API Documentation for Docker Index
:keywords: API, Docker, index, REST, documentation
=================
Docker Index API
=================
.. contents:: Table of Contents
1. Brief introduction
=====================
- This is the REST API for the Docker index
- Authorization is done with basic auth over SSL
- Not all commands require authentication, only those noted as such.
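For orientation, here is a minimal Go sketch of an authenticated call against the login-check endpoint documented under "User Login" below, using the basic auth over SSL described above. The credentials are the placeholder values used in the examples of this document; substitute a real account.

.. sourcecode:: go

    package main

    import (
        "fmt"
        "net/http"
    )

    // Minimal sketch: verify a login against the Index using basic auth over
    // SSL. "foobar"/"toto42" are placeholder credentials from the examples.
    func main() {
        req, err := http.NewRequest("GET", "https://index.docker.io/v1/users", nil)
        if err != nil {
            panic(err)
        }
        req.SetBasicAuth("foobar", "toto42")
        req.Header.Set("Accept", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status) // 200 on success, 401 for bad credentials
    }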
2. Endpoints
============
2.1 Repository
^^^^^^^^^^^^^^
Repositories
*************
User Repo
~~~~~~~~~
.. http:put:: /v1/repositories/(namespace)/(repo_name)/
Create a user repository with the given ``namespace`` and ``repo_name``.
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foo/bar/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
X-Docker-Token: true
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"}]
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
WWW-Authenticate: Token signature=123abc,repository="foo/bar",access=write
X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
""
:statuscode 200: Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
.. http:delete:: /v1/repositories/(namespace)/(repo_name)/
Delete a user repository with the given ``namespace`` and ``repo_name``.
**Example Request**:
.. sourcecode:: http
DELETE /v1/repositories/foo/bar/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
X-Docker-Token: true
""
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 202
Vary: Accept
Content-Type: application/json
WWW-Authenticate: Token signature=123abc,repository="foo/bar",access=delete
X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
""
:statuscode 200: Deleted
:statuscode 202: Accepted
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
Library Repo
~~~~~~~~~~~~
.. http:put:: /v1/repositories/(repo_name)/
Create a library repository with the given ``repo_name``.
This is a restricted feature only available to docker admins.
When namespace is missing, it is assumed to be ``library``
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foobar/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
X-Docker-Token: true
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"}]
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
WWW-Authenticate: Token signature=123abc,repository="library/foobar",access=write
X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
""
:statuscode 200: Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
.. http:delete:: /v1/repositories/(repo_name)/
Delete a library repository with the given ``repo_name``.
This is a restricted feature only available to docker admins.
When namespace is missing, it is assumed to be ``library``
**Example Request**:
.. sourcecode:: http
DELETE /v1/repositories/foobar/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
X-Docker-Token: true
""
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 202
Vary: Accept
Content-Type: application/json
WWW-Authenticate: Token signature=123abc,repository="library/foobar",access=delete
X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
""
:statuscode 200: Deleted
:statuscode 202: Accepted
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
Repository Images
*****************
User Repo Images
~~~~~~~~~~~~~~~~
.. http:put:: /v1/repositories/(namespace)/(repo_name)/images
Update the images for a user repo.
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foo/bar/images HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
"checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}]
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 204
Vary: Accept
Content-Type: application/json
""
:statuscode 204: Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active or permission denied
.. http:get:: /v1/repositories/(namespace)/(repo_name)/images
get the images for a user repo.
**Example Request**:
.. sourcecode:: http
GET /v1/repositories/foo/bar/images HTTP/1.1
Host: index.docker.io
Accept: application/json
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
"checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"},
{"id": "ertwetewtwe38722009fe6857087b486531f9a779a0c1dfddgfgsdgdsgds",
"checksum": "34t23f23fc17e3ed29dae8f12c4f9e89cc6f0bsdfgfsdgdsgdsgerwgew"}]
:statuscode 200: OK
:statuscode 404: Not found
Library Repo Images
~~~~~~~~~~~~~~~~~~~
.. http:put:: /v1/repositories/(repo_name)/images
Update the images for a library repo.
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foobar/images HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
"checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}]
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 204
Vary: Accept
Content-Type: application/json
""
:statuscode 204: Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active or permission denied
.. http:get:: /v1/repositories/(repo_name)/images
get the images for a library repo.
**Example Request**:
.. sourcecode:: http
GET /v1/repositories/foobar/images HTTP/1.1
Host: index.docker.io
Accept: application/json
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
"checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"},
{"id": "ertwetewtwe38722009fe6857087b486531f9a779a0c1dfddgfgsdgdsgds",
"checksum": "34t23f23fc17e3ed29dae8f12c4f9e89cc6f0bsdfgfsdgdsgdsgerwgew"}]
:statuscode 200: OK
:statuscode 404: Not found
Repository Authorization
************************
Library Repo
~~~~~~~~~~~~
.. http:put:: /v1/repositories/(repo_name)/auth
authorize a token for a library repo
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foobar/auth HTTP/1.1
Host: index.docker.io
Accept: application/json
Authorization: Token signature=123abc,repository="library/foobar",access=write
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
"OK"
:statuscode 200: OK
:statuscode 403: Permission denied
:statuscode 404: Not found
User Repo
~~~~~~~~~
.. http:put:: /v1/repositories/(namespace)/(repo_name)/auth
authorize a token for a user repo
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foo/bar/auth HTTP/1.1
Host: index.docker.io
Accept: application/json
Authorization: Token signature=123abc,repository="foo/bar",access=write
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
"OK"
:statuscode 200: OK
:statuscode 403: Permission denied
:statuscode 404: Not found
2.2 Users
^^^^^^^^^
User Login
**********
.. http:get:: /v1/users
If you want to check your login, you can try this endpoint
**Example Request**:
.. sourcecode:: http
GET /v1/users HTTP/1.1
Host: index.docker.io
Accept: application/json
Authorization: Basic akmklmasadalkm==
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Vary: Accept
Content-Type: application/json
OK
:statuscode 200: no error
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
User Register
*************
.. http:post:: /v1/users
Registering a new account.
**Example request**:
.. sourcecode:: http
POST /v1/users HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
{"email": "sam@dotcloud.com",
"password": "toto42",
"username": "foobar"'}
:jsonparameter email: valid email address, that needs to be confirmed
:jsonparameter username: min 4 characters, max 30 characters, must match the regular expression [a-z0-9_].
:jsonparameter password: min 5 characters
**Example Response**:
.. sourcecode:: http
HTTP/1.1 201 Created
Vary: Accept
Content-Type: application/json
"User Created"
:statuscode 201: User Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
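As an illustrative sketch only, the registration call could be made with ``curl`` like this; the email, password and username are the placeholder values from the example:
.. code-block:: bash

    # Hypothetical example: register a new account on the Index.
    curl -X POST https://index.docker.io/v1/users \
         -H "Accept: application/json" \
         -H "Content-Type: application/json" \
         -d '{"email": "sam@dotcloud.com", "password": "toto42", "username": "foobar"}'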
Update User
***********
.. http:put:: /v1/users/(username)/
Change the password or email address for a given user. If you pass in an email address,
it will be added to your account; the old address is not removed. Passwords are simply
updated.
It is up to the client to verify that the password being sent is the one the user
intended; a common approach is to have them type it twice.
**Example Request**:
.. sourcecode:: http
PUT /v1/users/fakeuser/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
{"email": "sam@dotcloud.com",
"password": "toto42"}
:parameter username: username for the person you want to update
**Example Response**:
.. sourcecode:: http
HTTP/1.1 204
Vary: Accept
Content-Type: application/json
""
:statuscode 204: User Updated
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
:statuscode 404: User not found
2.3 Search
^^^^^^^^^^
If you need to search the index, this is the endpoint you would use.
Search
******
.. http:get:: /v1/search
Search the Index given a search term. It accepts :http:method:`get` only.
**Example request**:
.. sourcecode:: http
GET /v1/search?q=search_term HTTP/1.1
Host: example.com
Accept: application/json
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Vary: Accept
Content-Type: application/json
{"query":"search_term",
"num_results": 2,
"results" : [
{"name": "dotcloud/base", "description": "A base ubuntu64 image..."},
{"name": "base2", "description": "A base ubuntu64 image..."},
]
}
:query q: what you want to search for
:statuscode 200: no error
:statuscode 500: server error
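A hypothetical ``curl`` sketch of a search request; the Index host name is an assumption here, and real search terms should be URL-encoded:
.. code-block:: bash

    # Hypothetical example: search the Index for "search_term".
    curl "https://index.docker.io/v1/search?q=search_term" \
         -H "Accept: application/json"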

View File

@@ -0,0 +1,463 @@
:title: Registry API
:description: API Documentation for Docker Registry
:keywords: API, Docker, index, registry, REST, documentation
===================
Docker Registry API
===================
.. contents:: Table of Contents
1. Brief introduction
=====================
- This is the REST API for the Docker Registry
- It stores the images and the graph for a set of repositories
- It does not have user accounts data
- It has no notion of user accounts or authorization
- It delegates authentication and authorization to the Index Auth service using tokens
- It supports different storage backends (S3, cloud files, local FS)
- It doesn't have a local database
- It will be open-sourced at some point
We expect that there will be multiple registries out there. To help grasp the context, here are some examples of registries:
- **sponsor registry**: such a registry is provided by a third-party hosting infrastructure as a convenience for their customers and the docker community as a whole. Its costs are supported by the third party, but the management and operation of the registry are supported by dotCloud. It features read/write access, and delegates authentication and authorization to the Index.
- **mirror registry**: such a registry is provided by a third-party hosting infrastructure but is targeted at their customers only. Some mechanism (unspecified to date) ensures that public images are pulled from a sponsor registry to the mirror registry, to make sure that the customers of the third-party provider can “docker pull” those images locally.
- **vendor registry**: such a registry is provided by a software vendor, who wants to distribute docker images. It would be operated and managed by the vendor. Only users authorized by the vendor would be able to get write access. Some images would be public (accessible for anyone), others private (accessible only for authorized users). Authentication and authorization would be delegated to the Index. The goal of vendor registries is to let someone do “docker pull basho/riak1.3” and automatically pull from the vendor registry (instead of a sponsor registry); i.e. get all the convenience of a sponsor registry, while retaining control over the asset distribution.
- **private registry**: such a registry is located behind a firewall, or protected by an additional security layer (HTTP authorization, SSL client-side certificates, IP address authorization...). The registry is operated by a private entity, outside of dotCloud's control. It can optionally delegate additional authorization to the Index, but it is not mandatory.
.. note::
Mirror registries and private registries which do not use the Index don't even need to run the registry code. They can be implemented by any kind of transport implementing HTTP GET and PUT. Read-only registries can be powered by a simple static HTTP server.
.. note::
The latter implies that while HTTP is the protocol of choice for a registry, multiple schemes are possible (and in some cases, trivial):
- HTTP with GET (and PUT for read-write registries);
- local mount point;
- remote docker addressed through SSH.
The latter would only require two new commands in docker, e.g. “registryget” and “registryput”, wrapping access to the local filesystem (and optionally doing consistency checks). Authentication and authorization are then delegated to SSH (e.g. with public keys).
2. Endpoints
============
2.1 Images
----------
Layer
*****
.. http:get:: /v1/images/(image_id)/layer
get image layer for a given ``image_id``
**Example Request**:
.. sourcecode:: http
GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Token akmklmasadalkmsdfgsdgdge33
:parameter image_id: the id for the layer you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
{
id: "088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c",
parent: "aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f",
created: "2013-04-30T17:46:10.843673+03:00",
container: "8305672a76cc5e3d168f97221106ced35a76ec7ddbb03209b0f0d96bf74f6ef7",
container_config: {
Hostname: "host-test",
User: "",
Memory: 0,
MemorySwap: 0,
AttachStdin: false,
AttachStdout: false,
AttachStderr: false,
PortSpecs: null,
Tty: false,
OpenStdin: false,
StdinOnce: false,
Env: null,
Cmd: [
"/bin/bash",
"-c",
"apt-get -q -yy -f install libevent-dev"
],
Dns: null,
Image: "imagename/blah",
Volumes: { },
VolumesFrom: ""
},
docker_version: "0.1.7"
}
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Image not found
.. http:put:: /v1/images/(image_id)/layer
put image layer for a given ``image_id``
**Example Request**:
.. sourcecode:: http
PUT /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Token akmklmasadalkmsdfgsdgdge33
{
id: "088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c",
parent: "aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f",
created: "2013-04-30T17:46:10.843673+03:00",
container: "8305672a76cc5e3d168f97221106ced35a76ec7ddbb03209b0f0d96bf74f6ef7",
container_config: {
Hostname: "host-test",
User: "",
Memory: 0,
MemorySwap: 0,
AttachStdin: false,
AttachStdout: false,
AttachStderr: false,
PortSpecs: null,
Tty: false,
OpenStdin: false,
StdinOnce: false,
Env: null,
Cmd: [
"/bin/bash",
"-c",
"apt-get -q -yy -f install libevent-dev"
],
Dns: null,
Image: "imagename/blah",
Volumes: { },
VolumesFrom: ""
},
docker_version: "0.1.7"
}
:parameter image_id: the id for the layer you want to upload
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
""
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Image not found
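For illustration, downloading a layer with ``curl`` might look like the sketch below; the image id and token are the placeholder values used above, and the output file name is arbitrary. Uploading would use ``-X PUT`` with the payload shown in the example request.
.. code-block:: bash

    # Hypothetical example: fetch a layer and save it locally.
    # -L follows redirects in case the registry hands the download off to its storage backend.
    curl -L -o layer.out \
         -H "Authorization: Token akmklmasadalkmsdfgsdgdge33" \
         https://registry-1.docker.io/v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer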
Image
*****
.. http:put:: /v1/images/(image_id)/json
put image for a given ``image_id``
**Example Request**:
.. sourcecode:: http
PUT /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
{
"id": "088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c",
"checksum": "sha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"
}
:parameter image_id: the id of the image you want to update
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
""
:statuscode 200: OK
:statuscode 401: Requires authorization
.. http:get:: /v1/images/(image_id)/json
get image for a given ``image_id``
**Example Request**:
.. sourcecode:: http
GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
:parameter image_id: the id for the layer you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
{
"id": "088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c",
"checksum": "sha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"
}
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Image not found
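A hedged ``curl`` sketch of fetching image metadata; the cookie value stands in for whatever the Registry returned earlier in the session:
.. code-block:: bash

    # Hypothetical example: fetch the JSON metadata for an image id.
    curl -H "Cookie: session=<cookie provided by the Registry>" \
         https://registry-1.docker.io/v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json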
Ancestry
********
.. http:get:: /v1/images/(image_id)/ancestry
get ancestry for an image given an ``image_id``
**Example Request**:
.. sourcecode:: http
GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/ancestry HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
:parameter image_id: the id for the layer you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
["088b4502f51920fbd9b7c503e87c7a2c05aa3adc3d35e79c031fa126b403200f",
"aeee63968d87c7da4a5cf5d2be6bee4e21bc226fd62273d180a49c96c62e4543",
"bfa4c5326bc764280b0863b46a4b20d940bc1897ef9c1dfec060604bdc383280",
"6ab5893c6927c15a15665191f2c6cf751f5056d8b95ceee32e43c5e8a3648544"]
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Image not found
2.2 Tags
--------
.. http:get:: /v1/repositories/(namespace)/(repository)/tags
get all of the tags for the given repo.
**Example Request**:
.. sourcecode:: http
GET /v1/repositories/foo/bar/tags HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
{
"latest": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
"0.1.1": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"
}
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Repository not found
.. http:get:: /v1/repositories/(namespace)/(repository)/tags/(tag)
get a tag for the given repo.
**Example Request**:
.. sourcecode:: http
GET /v1/repositories/foo/bar/tags/latest HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
:parameter tag: name of tag you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Tag not found
.. http:delete:: /v1/repositories/(namespace)/(repository)/tags/(tag)
delete the tag for the repo
**Example Request**:
.. sourcecode:: http
DELETE /v1/repositories/foo/bar/tags/latest HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
:parameter tag: name of tag you want to delete
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
""
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Tag not found
.. http:put:: /v1/repositories/(namespace)/(repository)/tags/(tag)
put a tag for the given repo.
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foo/bar/tags/latest HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
:parameter tag: name of tag you want to add
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
""
:statuscode 200: OK
:statuscode 400: Invalid data
:statuscode 401: Requires authorization
:statuscode 404: Image not found
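Purely as an illustration of the tag endpoints above, the following ``curl`` sketches list, set and delete a tag on the ``foo/bar`` repository; the cookie and image id are the placeholder values from the examples:
.. code-block:: bash

    # Hypothetical examples against the tag endpoints for foo/bar.
    # List all tags.
    curl -H "Cookie: session=<cookie provided by the Registry>" \
         https://registry-1.docker.io/v1/repositories/foo/bar/tags
    # Point the "latest" tag at an image id.
    curl -X PUT \
         -H "Cookie: session=<cookie provided by the Registry>" \
         -H "Content-Type: application/json" \
         -d '"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"' \
         https://registry-1.docker.io/v1/repositories/foo/bar/tags/latest
    # Delete the "latest" tag.
    curl -X DELETE \
         -H "Cookie: session=<cookie provided by the Registry>" \
         https://registry-1.docker.io/v1/repositories/foo/bar/tags/latest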
2.3 Repositories
----------------
.. http:delete:: /v1/repositories/(namespace)/(repository)/
delete a repository
**Example Request**:
.. sourcecode:: http
DELETE /v1/repositories/foo/bar/ HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
""
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
""
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Repository not found
3.0 Authorization
=================
This is where we describe the authorization process, including the tokens and cookies.
TODO: add more info.

View File

@@ -1,6 +1,11 @@
===================
Docker Registry API
===================
:title: Registry Documentation
:description: Documentation for docker Registry and Registry API
:keywords: docker, registry, api, index
=====================
Registry & index Spec
=====================
.. contents:: Table of Contents
@@ -44,7 +49,7 @@ We expect that there will be multiple registries out there. To help to grasp the
.. note::
Mirror registries and private registries which do not use the Index dont even need to run the registry code. They can be implemented by any kind of transport implementing HTTP GET and PUT. Read-only registries can be powered by a simple static HTTP server.
Mirror registries and private registries which do not use the Index dont even need to run the registry code. They can be implemented by any kind of transport implementing HTTP GET and PUT. Read-only registries can be powered by a simple static HTTP server.
.. note::
@@ -80,7 +85,7 @@ On top of being a runtime for LXC, Docker is the Registry client. It supports:
5. Index returns true/false, letting the registry know if it should proceed or error out
6. Get the payload for all layers
Its possible to run docker pull https://<registry>/repositories/samalba/busybox. In this case, docker bypasses the Index. However the security is not guaranteed (in case Registry A is corrupted) because there wont be any checksum checks.
Its possible to run docker pull \https://<registry>/repositories/samalba/busybox. In this case, docker bypasses the Index. However the security is not guaranteed (in case Registry A is corrupted) because there wont be any checksum checks.
Currently registry redirects to s3 urls for downloads, going forward all downloads need to be streamed through the registry. The Registry will then abstract the calls to S3 by a top-level class which implements sub-classes for S3 and local storage.
@@ -107,7 +112,7 @@ API (pulling repository foo/bar):
Jsonified checksums (see part 4.4.1)
3. (Docker -> Registry) GET /v1/repositories/foo/bar/tags/latest
**Headers**:
**Headers**:
Authorization: Token signature=123abc,repository=”foo/bar”,access=write
4. (Registry -> Index) GET /v1/repositories/foo/bar/images
@@ -121,10 +126,10 @@ API (pulling repository foo/bar):
**Action**:
( Lookup token see if they have access to pull.)
If good:
If good:
HTTP 200 OK
Index will invalidate the token
If bad:
If bad:
HTTP 401 Unauthorized
5. (Docker -> Registry) GET /v1/images/928374982374/ancestry
@@ -186,9 +191,9 @@ API (pushing repos foo/bar):
**Headers**:
Authorization: Token signature=123abc,repository=”foo/bar”,access=write
**Action**::
- Index:
- Index:
will invalidate the token.
- Registry:
- Registry:
grants a session (if token is approved) and fetches the images id
5. (Docker -> Registry) PUT /v1/images/98765432_parent/json
@@ -223,7 +228,7 @@ API (pushing repos foo/bar):
**Body**:
(The image, ids, tags and checksums)
[{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”,
[{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”,
“checksum”: “b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”}]
**Return** HTTP 204
@@ -234,14 +239,80 @@ API (pushing repos foo/bar):
If it's a retry on the Registry, Docker has a cookie (provided by the registry after token validation). So the Index won't have to provide a new token.
2.3 Delete
----------
If you need to delete something from the index or registry, we need a nice clean way to do that. Here is the workflow.
1. Docker contacts the index to request a delete of a repository “samalba/busybox” (authentication required with user credentials)
2. If authentication works and repository is valid, “samalba/busybox” is marked as deleted and a temporary token is returned
3. Send a delete request to the registry for the repository (along with the token)
4. Registry A contacts the Index to verify the token (the token must correspond to the repository name)
5. Index validates the token. Registry A deletes the repository and everything associated with it.
6. Docker contacts the Index to let it know the repository was removed from the registry; the Index then removes all records from the database.
.. note::
The Docker client should present an "Are you sure?" prompt to confirm the deletion before starting the process. Once it starts it can't be undone.
API (deleting repository foo/bar):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. (Docker -> Index) DELETE /v1/repositories/foo/bar/
**Headers**:
Authorization: Basic sdkjfskdjfhsdkjfh==
X-Docker-Token: true
**Action**::
- in the Index, we make sure it is a valid repository and mark it as deleted (logically)
**Body**::
Empty
2. (Index -> Docker) 202 Accepted
**Headers**:
- WWW-Authenticate: Token signature=123abc,repository=”foo/bar”,access=delete
- X-Docker-Endpoints: registry.docker.io [, registry2.docker.io] # list of endpoints where this repo lives.
3. (Docker -> Registry) DELETE /v1/repositories/foo/bar/
**Headers**:
Authorization: Token signature=123abc,repository=”foo/bar”,access=delete
4. (Registry->Index) PUT /v1/repositories/foo/bar/auth
**Headers**:
Authorization: Token signature=123abc,repository=”foo/bar”,access=delete
**Action**::
- Index:
will invalidate the token.
- Registry:
deletes the repository (if token is approved)
5. (Registry -> Docker) 200 OK
200 If success
403 if forbidden
400 if bad request
404 if repository isn't found
6. (Docker -> Index) DELETE /v1/repositories/foo/bar/
**Headers**:
Authorization: Basic 123oislifjsldfj==
X-Docker-Endpoints: registry-1.docker.io (no validation on this right now)
**Body**:
Empty
**Return** HTTP 200
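For readability, here is a hypothetical ``curl`` walk-through of the client-visible steps above (steps 1, 3 and 6); the credentials, token signature and endpoint names are the placeholder values from the example headers:
.. code-block:: bash

    # 1. Ask the Index to mark foo/bar as deleted and hand back a temporary token.
    curl -X DELETE https://index.docker.io/v1/repositories/foo/bar/ \
         -u "username:password" \
         -H "X-Docker-Token: true"
    # 3. Forward the delete to one of the registry endpoints returned in step 2.
    curl -X DELETE https://registry-1.docker.io/v1/repositories/foo/bar/ \
         -H 'Authorization: Token signature=123abc,repository="foo/bar",access=delete'
    # 6. Tell the Index the registry copy is gone so it can purge its records.
    curl -X DELETE https://index.docker.io/v1/repositories/foo/bar/ \
         -u "username:password" \
         -H "X-Docker-Endpoints: registry-1.docker.io"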
3. How to use the Registry in standalone mode
=============================================
The Index has two main purposes (along with its fancy social features):
- Resolve short names (to avoid passing absolute URLs all the time)
- username/projectname -> https://registry.docker.io/users/<username>/repositories/<projectname>/
- team/projectname -> https://registry.docker.io/team/<team>/repositories/<projectname>/
- username/projectname -> \https://registry.docker.io/users/<username>/repositories/<projectname>/
- team/projectname -> \https://registry.docker.io/team/<team>/repositories/<projectname>/
- Authenticate a user as a repos owner (for a central referenced repository)
3.1 Without an Index
@@ -296,7 +367,7 @@ POST /v1/users
{"email": "sam@dotcloud.com", "password": "toto42", "username": "foobar"'}
**Validation**:
- **username** : min 4 character, max 30 characters, all lowercase no special characters.
- **username** : min 4 character, max 30 characters, must match the regular expression [a-z0-9_].
- **password**: min 5 characters
**Valid**: return HTTP 200
@@ -340,6 +411,11 @@ GET /v1/users
The Registry does not know anything about users. Even though repositories are under usernames, it's just a namespace for the registry. This allows us to implement organizations or different namespaces per user later, without modifying the Registry's API.
The following naming restrictions apply:
- Namespaces must match the same regular expression as usernames (See 4.2.1.)
- Repository names must match the regular expression [a-zA-Z0-9-_.]
4.3.1 Get all tags
^^^^^^^^^^^^^^^^^^
@@ -387,9 +463,27 @@ PUT /v1/repositories/<namespace>/<repo_name>/images
**Body**:
[ {"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f", "checksum": "sha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"} ]
**Return** 204
4.5 Repositories
----------------
4.5.1 Remove a Repository (Registry)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
DELETE /v1/repositories/<namespace>/<repo_name>
Return 200 OK
4.5.2 Remove a Repository (Index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This starts the delete process. See section 2.3 for more details.
DELETE /v1/repositories/<namespace>/<repo_name>
Return 202 Accepted
5. Chaining Registries
======================
@@ -466,3 +560,10 @@ Next request::
GET /(...)
Cookie: session="wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4=&timestamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4="
7.0 Document Version
---------------------
- 1.0 : May 6th 2013 : initial release
- 1.1 : June 1st 2013 : Added Delete Repository and way to handle new source namespace.

View File

@@ -1,130 +0,0 @@
==============
Docker Builder
==============
.. contents:: Table of Contents
1. Format
=========
The Docker builder format is quite simple:
``instruction arguments``
The first instruction must be `FROM`
All instructions are to be placed in a file named `Dockerfile`
In order to place comments within a Dockerfile, simply prefix the line with "`#`"
2. Instructions
===============
Docker builder comes with a set of instructions:
1. FROM: Set from what image to build
2. RUN: Execute a command
3. INSERT: Insert a remote file (http) into the image
2.1 FROM
--------
``FROM <image>``
The `FROM` instruction must be the first one in order for Builder to know from where to run commands.
`FROM` can also be used in order to build multiple images within a single Dockerfile
2.2 MAINTAINER
--------------
``MAINTAINER <name>``
The `MAINTAINER` instruction allows you to set the Author field of the generated images.
This instruction is never automatically reset.
2.3 RUN
-------
``RUN <command>``
The `RUN` instruction is the main one; it allows you to execute any command on the `FROM` image and save the results.
You can use as many `RUN` instructions as you want within a Dockerfile; each command is executed on the result of the previous one.
2.4 CMD
-------
``CMD <command>``
The `CMD` instruction sets the command to be executed when running the image.
It is equivalent to running `docker commit -run '{"Cmd": <command>}'` outside the builder.
.. note::
Do not confuse `RUN` with `CMD`. `RUN` actually runs a command and saves the result, while `CMD` does not execute anything.
2.5 EXPOSE
----------
``EXPOSE <port> [<port>...]``
The `EXPOSE` instruction sets ports to be publicly exposed when running the image.
This is equivalent to running `docker commit -run '{"PortSpecs": ["<port>", "<port2>"]}'` outside the builder.
2.6 ENV
-------
``ENV <key> <value>``
The `ENV` instruction sets the environment variable `<key>` to the value `<value>`. This value will be passed to all future ``RUN`` instructions.
.. note::
The environment variables are local to the Dockerfile; they will not be set automatically when the resulting image is run.
2.7 INSERT
----------
``INSERT <file url> <path>``
The `INSERT` instruction will download the file at the given url and place it within the image at the given path.
.. note::
The path must include the file name.
3. Dockerfile Examples
======================
::
# Nginx
#
# VERSION 0.0.1
# DOCKER-VERSION 0.2
from ubuntu
maintainer Guillaume J. Charmes "guillaume@dotcloud.com"
# make sure the package repository is up to date
run echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
run apt-get update
run apt-get install -y inotify-tools nginx apache2 openssh-server
insert https://raw.github.com/creack/docker-vps/master/nginx-wrapper.sh /usr/sbin/nginx-wrapper
::
# Firefox over VNC
#
# VERSION 0.3
# DOCKER-VERSION 0.2
from ubuntu
# make sure the package repository is up to date
run echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
run apt-get update
# Install vnc, xvfb in order to create a 'fake' display and firefox
run apt-get install -y x11vnc xvfb firefox
run mkdir /.vnc
# Setup a password
run x11vnc -storepasswd 1234 ~/.vnc/passwd
# Autostart firefox (might not be the best way to do it, but it does the trick)
run bash -c 'echo "firefox" >> /.bashrc'
expose 5900
cmd ["x11vnc", "-forever", "-usepw", "-create"]

View File

@@ -1,14 +0,0 @@
:title: docker documentation
:description: Documentation for docker builder
:keywords: docker, builder, dockerfile
Builder
=======
Contents:
.. toctree::
:maxdepth: 2
basics

View File

@@ -4,7 +4,7 @@
.. _cli:
Command Line Interface
Overview
======================
Docker Usage
@@ -14,7 +14,8 @@ To list available commands, either run ``docker`` with no parameters or execute
``docker help``::
$ docker
Usage: docker COMMAND [arg...]
Usage: docker [OPTIONS] COMMAND [arg...]
-H=[tcp://127.0.0.1:4243]: tcp://host:port to bind/connect to or unix://path/to/socket to use
A self-sufficient runtime for linux containers.
@@ -24,7 +25,7 @@ Available Commands
~~~~~~~~~~~~~~~~~~
.. toctree::
:maxdepth: 1
:maxdepth: 2
command/attach
command/build
@@ -51,5 +52,6 @@ Available Commands
command/start
command/stop
command/tag
command/top
command/version
command/wait

View File

@@ -1,3 +1,7 @@
:title: Attach Command
:description: Attach to a running container
:keywords: attach, container, docker, documentation
===========================================
``attach`` -- Attach to a running container
===========================================

View File

@@ -1,9 +1,44 @@
========================================================
``build`` -- Build a container from Dockerfile via stdin
========================================================
:title: Build Command
:description: Build a new image from the Dockerfile passed via stdin
:keywords: build, docker, container, documentation
================================================
``build`` -- Build a container from a Dockerfile
================================================
::
Usage: docker build -
Example: cat Dockerfile | docker build -
Build a new image from the Dockerfile passed via stdin
Usage: docker build [OPTIONS] PATH | URL | -
Build a new container image from the source code at PATH
-t="": Tag to be applied to the resulting image in case of success.
-q=false: Suppress verbose build output.
When a single Dockerfile is given as the URL, no context is set. When a git repository is set as the URL, the repository is used as the context.
Examples
--------
.. code-block:: bash
docker build .
| This will read the Dockerfile from the current directory. It will also send any other files and directories found in the current directory to the docker daemon.
| The contents of this directory would be used by ADD commands found within the Dockerfile.
| This will send a lot of data to the docker daemon if the current directory contains a lot of data.
| If the absolute path is provided instead of '.', only the files and directories required by the ADD commands from the Dockerfile will be added to the context and transferred to the docker daemon.
|
.. code-block:: bash
docker build - < Dockerfile
| This will read a Dockerfile from Stdin without context. Due to the lack of a context, no contents of any local directory will be sent to the docker daemon.
| ADD doesn't work when running in this mode because there is no context, and thus no source files to copy to the container.
.. code-block:: bash
docker build github.com/creack/docker-firefox
| This will clone the GitHub repository and use it as context. The Dockerfile at the root of the repository is used as the Dockerfile.
| Note that you can specify an arbitrary git repository by using the 'git://' schema.

View File

@@ -1,3 +1,7 @@
:title: Commit Command
:description: Create a new image from a container's changes
:keywords: commit, docker, container, documentation
===========================================================
``commit`` -- Create a new image from a container's changes
===========================================================
@@ -16,6 +20,7 @@ Full -run example::
{"Hostname": "",
"User": "",
"CpuShares": 0,
"Memory": 0,
"MemorySwap": 0,
"PortSpecs": ["22", "80", "443"],

View File

@@ -1,3 +1,7 @@
:title: Diff Command
:description: Inspect changes on a container's filesystem
:keywords: diff, docker, container, documentation
=======================================================
``diff`` -- Inspect changes on a container's filesystem
=======================================================

View File

@@ -1,3 +1,7 @@
:title: Export Command
:description: Export the contents of a filesystem as a tar archive
:keywords: export, docker, container, documentation
=================================================================
``export`` -- Stream the contents of a container as a tar archive
=================================================================

View File

@@ -1,3 +1,7 @@
:title: History Command
:description: Show the history of an image
:keywords: history, docker, container, documentation
===========================================
``history`` -- Show the history of an image
===========================================

View File

@@ -1,3 +1,7 @@
:title: Images Command
:description: List images
:keywords: images, docker, container, documentation
=========================
``images`` -- List images
=========================

View File

@@ -1,9 +1,40 @@
:title: Import Command
:description: Create a new filesystem image from the contents of a tarball
:keywords: import, tarball, docker, url, documentation
==========================================================================
``import`` -- Create a new filesystem image from the contents of a tarball
==========================================================================
::
Usage: docker import [OPTIONS] URL|- [REPOSITORY [TAG]]
Usage: docker import URL|- [REPOSITORY [TAG]]
Create a new filesystem image from the contents of a tarball
At this time, the URL must start with ``http`` and point to a single file archive (.tar, .tar.gz, .bzip)
containing a root filesystem. If you would like to import from a local directory or archive,
you can use the ``-`` parameter to take the data from standard in.
Examples
--------
Import from a remote location
.............................
``$ docker import http://example.com/exampleimage.tgz exampleimagerepo``
Import from a local file
........................
Import to docker via pipe and standard in
``$ cat exampleimage.tgz | docker import - exampleimagelocal``
Import from a local directory
.............................
``$ sudo tar -c . | docker import - exampleimagedir``
Note the ``sudo`` in this example -- you must preserve the ownership of the files (especially root ownership)
during the archiving with tar. If you are not root (or sudo) when you tar, then the ownerships might not get preserved.

View File

@@ -1,3 +1,7 @@
:title: Info Command
:description: Display system-wide information.
:keywords: info, docker, information, documentation
===========================================
``info`` -- Display system-wide information
===========================================

View File

@@ -1,3 +1,7 @@
:title: Inspect Command
:description: Return low-level information on a container
:keywords: inspect, container, docker, documentation
==========================================================
``inspect`` -- Return low-level information on a container
==========================================================

View File

@@ -1,3 +1,7 @@
:title: Kill Command
:description: Kill a running container
:keywords: kill, container, docker, documentation
====================================
``kill`` -- Kill a running container
====================================

View File

@@ -1,9 +1,17 @@
:title: Login Command
:description: Register or Login to the docker registry server
:keywords: login, docker, documentation
============================================================
``login`` -- Register or Login to the docker registry server
============================================================
::
Usage: docker login
Usage: docker login [OPTIONS]
Register or Login to the docker registry server
-e="": email
-p="": password
-u="": username

View File

@@ -1,3 +1,7 @@
:title: Logs Command
:description: Fetch the logs of a container
:keywords: logs, container, docker, documentation
=========================================
``logs`` -- Fetch the logs of a container
=========================================

View File

@@ -1,3 +1,7 @@
:title: Port Command
:description: Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
:keywords: port, docker, container, documentation
=========================================================================
``port`` -- Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
=========================================================================

View File

@@ -1,3 +1,7 @@
:title: Ps Command
:description: List containers
:keywords: ps, docker, documentation, container
=========================
``ps`` -- List containers
=========================

View File

@@ -1,3 +1,7 @@
:title: Pull Command
:description: Pull an image or a repository from the registry
:keywords: pull, image, repo, repository, documentation, docker
=========================================================================
``pull`` -- Pull an image or a repository from the docker registry server
=========================================================================

View File

@@ -1,3 +1,7 @@
:title: Push Command
:description: Push an image or a repository to the registry
:keywords: push, docker, image, repository, documentation, repo
=======================================================================
``push`` -- Push an image or a repository to the docker registry server
=======================================================================

View File

@@ -1,3 +1,7 @@
:title: Restart Command
:description: Restart a running container
:keywords: restart, container, docker, documentation
==========================================
``restart`` -- Restart a running container
==========================================

View File

@@ -1,3 +1,7 @@
:title: Rm Command
:description: Remove a container
:keywords: remove, container, docker, documentation, rm
============================
``rm`` -- Remove a container
============================
@@ -6,4 +10,4 @@
Usage: docker rm [OPTIONS] CONTAINER
Remove a container
Remove one or more containers

View File

@@ -1,9 +1,13 @@
:title: Rmi Command
:description: Remove an image
:keywords: rmi, remove, image, docker, documentation
==========================
``rmi`` -- Remove an image
==========================
::
Usage: docker rmimage [OPTIONS] IMAGE
Usage: docker rmi IMAGE [IMAGE...]
Remove an image
Remove one or more images

View File

@@ -1,14 +1,19 @@
:title: Run Command
:description: Run a command in a new container
:keywords: run, container, docker, documentation
===========================================
``run`` -- Run a command in a new container
===========================================
::
Usage: docker run [OPTIONS] IMAGE COMMAND [ARG...]
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
-a=map[]: Attach to stdin, stdout or stderr.
-c=0: CPU shares (relative weight)
-d=false: Detached mode: leave the container running in the background
-e=[]: Set environment variables
-h="": Container host name
@@ -18,5 +23,6 @@
-t=false: Allocate a pseudo-tty
-u="": Username or UID
-d=[]: Set custom dns servers for the container
-v=[]: Creates a new volumes and mount it at the specified path.
-v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro]. If "host-dir" is missing, then docker creates a new volume.
-volumes-from="": Mount all volumes from the given container.
-entrypoint="": Overwrite the default entrypoint set by the image.

View File

@@ -1,3 +1,7 @@
:title: Search Command
:description: Searches for the TERM parameter on the Docker index and prints out a list of repositories that match.
:keywords: search, docker, image, documentation
===================================================================
``search`` -- Search for an image in the docker index
===================================================================

View File

@@ -1,3 +1,7 @@
:title: Start Command
:description: Start a stopped container
:keywords: start, docker, container, documentation
======================================
``start`` -- Start a stopped container
======================================

View File

@@ -1,3 +1,7 @@
:title: Stop Command
:description: Stop a running container
:keywords: stop, container, docker, documentation
====================================
``stop`` -- Stop a running container
====================================

View File

@@ -1,3 +1,7 @@
:title: Tag Command
:description: Tag an image into a repository
:keywords: tag, docker, image, repository, documentation, repo
=========================================
``tag`` -- Tag an image into a repository
=========================================

View File

@@ -0,0 +1,13 @@
:title: Top Command
:description: Lookup the running processes of a container
:keywords: top, docker, container, documentation
=======================================================
``top`` -- Lookup the running processes of a container
=======================================================
::
Usage: docker top CONTAINER
Lookup the running processes of a container

View File

@@ -1,3 +1,7 @@
:title: Version Command
:description:
:keywords: version, docker, documentation
==================================================
``version`` -- Show the docker version information
==================================================

View File

@@ -1,3 +1,7 @@
:title: Wait Command
:description: Block until a container stops, then print its exit code.
:keywords: wait, docker, container, documentation
===================================================================
``wait`` -- Block until a container stops, then print its exit code
===================================================================

View File

@@ -1,6 +1,6 @@
:title: docker documentation
:title: Commands
:description: -- todo: change me
:keywords: todo: change me
:keywords: todo, commands, command line, help, docker, documentation
Commands
@@ -9,8 +9,33 @@ Commands
Contents:
.. toctree::
:maxdepth: 3
:maxdepth: 1
basics
workingwithrepository
cli
cli
attach <command/attach>
build <command/build>
commit <command/commit>
diff <command/diff>
export <command/export>
history <command/history>
images <command/images>
import <command/import>
info <command/info>
inspect <command/inspect>
kill <command/kill>
login <command/login>
logs <command/logs>
port <command/port>
ps <command/ps>
pull <command/pull>
push <command/push>
restart <command/restart>
rm <command/rm>
rmi <command/rmi>
run <command/run>
search <command/search>
start <command/start>
stop <command/stop>
tag <command/tag>
version <command/version>
wait <command/wait>

View File

@@ -1,42 +0,0 @@
.. _working_with_the_repository:
Working with the repository
============================
Connecting to the repository
----------------------------
You create a user on the central docker repository by running
.. code-block:: bash
docker login
If your username does not exist it will prompt you to also enter a password and your e-mail address. It will then
automatically log you in.
Committing a container to a named image
---------------------------------------
In order to push to the repository, you first need to commit your container to an image under your namespace.
.. code-block:: bash
# for example docker commit $CONTAINER_ID dhrp/kickassapp
docker commit <container_id> <your username>/<some_name>
Pushing a container to the repository
-----------------------------------------
In order to push an image to the repository you need to have committed your container to a named image (see above).
Now you can push this image to the repository:
.. code-block:: bash
# for example docker push dhrp/kickassapp
docker push <image-name>

View File

@@ -1,25 +0,0 @@
:title: Building blocks
:description: An introduction to docker and standard containers?
:keywords: containers, lxc, concepts, explanation
Building blocks
===============
.. _images:
Images
------
An original container image. These are stored on disk and are comparable with what you normally expect from a stopped virtual machine image. Images are stored in (and retrieved from) a repository.
Images are stored on your local file system under /var/lib/docker/graph
.. _containers:
Containers
----------
A container is a local version of an image. It can be running or stopped; the equivalent would be a virtual machine instance.
Containers are stored on your local file system under /var/lib/docker/containers

View File

@@ -1,8 +0,0 @@
:title: Introduction
:description: An introduction to docker and standard containers?
:keywords: containers, lxc, concepts, explanation
:note: This version of the introduction is temporary, just to make sure we don't break the links from the website when the documentation is updated
This document has been moved to :ref:`introduction`, please update your bookmarks.

Binary file not shown (new image, 15 KiB).
Binary file not shown (new image, 22 KiB).

View File

Binary image file changed (before: 194 KiB, after: 194 KiB).

View File

@@ -1,10 +1,10 @@
:title: docker documentation
:description: -- todo: change me
:keywords: todo: change me
:title: Overview
:description: Docker documentation summary
:keywords: concepts, documentation, docker, containers
Concepts
Overview
========
Contents:
@@ -12,6 +12,5 @@ Contents:
.. toctree::
:maxdepth: 1
introduction
buildingblocks
../index
manifesto

View File

@@ -1,127 +0,0 @@
:title: Introduction
:description: An introduction to docker and standard containers?
:keywords: containers, lxc, concepts, explanation
.. _introduction:
Introduction
============
Docker - The Linux container runtime
------------------------------------
Docker complements LXC with a high-level API which operates at the process level. It runs unix processes with strong guarantees of isolation and repeatability across servers.
Docker is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.
- **Heterogeneous payloads** Any combination of binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, you name it. No more juggling between domain-specific tools. Docker can deploy and run them all.
- **Any server** Docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments.
- **Isolation** docker isolates processes from each other and from the underlying host, using lightweight containers.
- **Repeatability** Because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run.
.. image:: http://www.docker.io/_static/lego_docker.jpg
What is a Standard Container?
-----------------------------
Docker defines a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in
a format that is self-describing and portable, so that any compliant runtime can run it without extra dependency, regardless of the underlying machine and the contents of the container.
The spec for Standard Containers is currently work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment.
A great analogy for this is the shipping container. Just like Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.
Standard operations
~~~~~~~~~~~~~~~~~~~
Just like shipping containers, Standard Containers define a set of STANDARD OPERATIONS. Shipping containers can be lifted, stacked, locked, loaded, unloaded and labelled. Similarly, standard containers can be started, stopped, copied, snapshotted, downloaded, uploaded and tagged.
Content-agnostic
~~~~~~~~~~~~~~~~~~~
Just like shipping containers, Standard Containers are CONTENT-AGNOSTIC: all standard operations have the same effect regardless of the contents. A shipping container will be stacked in exactly the same way whether it contains Vietnamese powder coffee or spare Maserati parts. Similarly, Standard Containers are started or uploaded in the same way whether they contain a postgres database, a php application with its dependencies and application server, or Java build artifacts.
Infrastructure-agnostic
~~~~~~~~~~~~~~~~~~~~~~~~~~
Both types of containers are INFRASTRUCTURE-AGNOSTIC: they can be transported to thousands of facilities around the world, and manipulated by a wide variety of equipment. A shipping container can be packed in a factory in Ukraine, transported by truck to the nearest routing center, stacked onto a train, loaded into a German boat by an Australian-built crane, stored in a warehouse at a US facility, etc. Similarly, a standard container can be bundled on my laptop, uploaded to S3, downloaded, run and snapshotted by a build server at Equinix in Virginia, uploaded to 10 staging servers in a home-made Openstack cluster, then sent to 30 production instances across 3 EC2 regions.
Designed for automation
~~~~~~~~~~~~~~~~~~~~~~~~~~
Because they offer the same standard operations regardless of content and infrastructure, Standard Containers, just like their physical counterpart, are extremely well-suited for automation. In fact, you could say automation is their secret weapon.
Many things that once required time-consuming and error-prone human effort can now be programmed. Before shipping containers, a bag of powder coffee was hauled, dragged, dropped, rolled and stacked by 10 different people in 10 different locations by the time it reached its destination. 1 out of 50 disappeared. 1 out of 20 was damaged. The process was slow, inefficient and cost a fortune - and was entirely different depending on the facility and the type of goods.
Similarly, before Standard Containers, by the time a software component ran in production, it had been individually built, configured, bundled, documented, patched, vendored, templated, tweaked and instrumented by 10 different people on 10 different computers. Builds failed, libraries conflicted, mirrors crashed, post-it notes were lost, logs were misplaced, cluster updates were half-broken. The process was slow, inefficient and cost a fortune - and was entirely different depending on the language and infrastructure provider.
Industrial-grade delivery
~~~~~~~~~~~~~~~~~~~~~~~~~~
There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded on the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.
With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality.
Standard Container Specification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(TODO)
Image format
~~~~~~~~~~~~
Standard operations
~~~~~~~~~~~~~~~~~~~
- Copy
- Run
- Stop
- Wait
- Commit
- Attach standard streams
- List filesystem changes
- ...
Execution environment
~~~~~~~~~~~~~~~~~~~~~
Root filesystem
^^^^^^^^^^^^^^^
Environment variables
^^^^^^^^^^^^^^^^^^^^^
Process arguments
^^^^^^^^^^^^^^^^^
Networking
^^^^^^^^^^
Process namespacing
^^^^^^^^^^^^^^^^^^^
Resource limits
^^^^^^^^^^^^^^^
Process monitoring
^^^^^^^^^^^^^^^^^^
Logging
^^^^^^^
Signals
^^^^^^^
Pseudo-terminal allocation
^^^^^^^^^^^^^^^^^^^^^^^^^^
Security
^^^^^^^^

View File

@@ -0,0 +1,190 @@
:title: Manifesto
:description: An overview of Docker and standard containers
:keywords: containers, lxc, concepts, explanation
.. _dockermanifesto:
*(This was our original Welcome page, but it is a bit forward-looking
for docs, and maybe not enough vision for a true manifesto. We'll
reveal more vision in the future to make it more Manifesto-y.)*
Docker Manifesto
----------------
Docker complements LXC with a high-level API which operates at the
process level. It runs unix processes with strong guarantees of
isolation and repeatability across servers.
Docker is a great building block for automating distributed systems:
large-scale web deployments, database clusters, continuous deployment
systems, private PaaS, service-oriented architectures, etc.
- **Heterogeneous payloads** Any combination of binaries, libraries,
configuration files, scripts, virtualenvs, jars, gems, tarballs, you
name it. No more juggling between domain-specific tools. Docker can
deploy and run them all.
- **Any server** Docker can run on any x64 machine with a modern linux
kernel - whether it's a laptop, a bare metal server or a VM. This
makes it perfect for multi-cloud deployments.
- **Isolation** docker isolates processes from each other and from the
underlying host, using lightweight containers.
- **Repeatability** Because containers are isolated in their own
filesystem, they behave the same regardless of where, when, and
alongside what they run.
.. image:: images/lego_docker.jpg
:target: http://bricks.argz.com/ins/7823-1/12
What is a Standard Container?
.............................
Docker defines a unit of software delivery called a Standard
Container. The goal of a Standard Container is to encapsulate a
software component and all its dependencies in a format that is
self-describing and portable, so that any compliant runtime can run it
without extra dependency, regardless of the underlying machine and the
contents of the container.
The spec for Standard Containers is currently work in progress, but it
is very straightforward. It mostly defines 1) an image format, 2) a
set of standard operations, and 3) an execution environment.
A great analogy for this is the shipping container. Just like Standard
Containers are a fundamental unit of software delivery, shipping
containers are a fundamental unit of physical delivery.
Standard operations
~~~~~~~~~~~~~~~~~~~
Just like shipping containers, Standard Containers define a set of
STANDARD OPERATIONS. Shipping containers can be lifted, stacked,
locked, loaded, unloaded and labelled. Similarly, standard containers
can be started, stopped, copied, snapshotted, downloaded, uploaded and
tagged.
Content-agnostic
~~~~~~~~~~~~~~~~~~~
Just like shipping containers, Standard Containers are
CONTENT-AGNOSTIC: all standard operations have the same effect
regardless of the contents. A shipping container will be stacked in
exactly the same way whether it contains Vietnamese powder coffee or
spare Maserati parts. Similarly, Standard Containers are started or
uploaded in the same way whether they contain a postgres database, a
php application with its dependencies and application server, or Java
build artifacts.
Infrastructure-agnostic
~~~~~~~~~~~~~~~~~~~~~~~~~~
Both types of containers are INFRASTRUCTURE-AGNOSTIC: they can be
transported to thousands of facilities around the world, and
manipulated by a wide variety of equipment. A shipping container can
be packed in a factory in Ukraine, transported by truck to the nearest
routing center, stacked onto a train, loaded into a German boat by an
Australian-built crane, stored in a warehouse at a US facility,
etc. Similarly, a standard container can be bundled on my laptop,
uploaded to S3, downloaded, run and snapshotted by a build server at
Equinix in Virginia, uploaded to 10 staging servers in a home-made
Openstack cluster, then sent to 30 production instances across 3 EC2
regions.
Designed for automation
~~~~~~~~~~~~~~~~~~~~~~~
Because they offer the same standard operations regardless of content
and infrastructure, Standard Containers, just like their physical
counterpart, are extremely well-suited for automation. In fact, you
could say automation is their secret weapon.
Many things that once required time-consuming and error-prone human
effort can now be programmed. Before shipping containers, a bag of
powder coffee was hauled, dragged, dropped, rolled and stacked by 10
different people in 10 different locations by the time it reached its
destination. 1 out of 50 disappeared. 1 out of 20 was damaged. The
process was slow, inefficient and cost a fortune - and was entirely
different depending on the facility and the type of goods.
Similarly, before Standard Containers, by the time a software
component ran in production, it had been individually built,
configured, bundled, documented, patched, vendored, templated, tweaked
and instrumented by 10 different people on 10 different
computers. Builds failed, libraries conflicted, mirrors crashed,
post-it notes were lost, logs were misplaced, cluster updates were
half-broken. The process was slow, inefficient and cost a fortune -
and was entirely different depending on the language and
infrastructure provider.
Industrial-grade delivery
~~~~~~~~~~~~~~~~~~~~~~~~~
There are 17 million shipping containers in existence, packed with
every physical good imaginable. Every single one of them can be loaded
on the same boats, by the same cranes, in the same facilities, and
sent anywhere in the World with incredible efficiency. It is
embarrassing to think that a 30 ton shipment of coffee can safely
travel half-way across the World in *less time* than it takes a
software team to deliver its code from one datacenter to another
sitting 10 miles away.
With Standard Containers we can put an end to that embarrassment, by
making INDUSTRIAL-GRADE DELIVERY of software a reality.
Standard Container Specification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(TODO)
Image format
~~~~~~~~~~~~
Standard operations
~~~~~~~~~~~~~~~~~~~
- Copy
- Run
- Stop
- Wait
- Commit
- Attach standard streams
- List filesystem changes
- ...
Execution environment
~~~~~~~~~~~~~~~~~~~~~
Root filesystem
^^^^^^^^^^^^^^^
Environment variables
^^^^^^^^^^^^^^^^^^^^^
Process arguments
^^^^^^^^^^^^^^^^^
Networking
^^^^^^^^^^
Process namespacing
^^^^^^^^^^^^^^^^^^^
Resource limits
^^^^^^^^^^^^^^^
Process monitoring
^^^^^^^^^^^^^^^^^^
Logging
^^^^^^^
Signals
^^^^^^^
Pseudo-terminal allocation
^^^^^^^^^^^^^^^^^^^^^^^^^^
Security
^^^^^^^^

View File

@@ -20,6 +20,21 @@ import sys, os
# -- General configuration -----------------------------------------------------
# Additional templates that should be rendered to pages, maps page names to
# template names.
# the 'redirect_home.html' page redirects using an http meta refresh which, according
# to official sources, is more or less equivalent to a 301.
html_additional_pages = {
'concepts/containers': 'redirect_home.html',
'concepts/introduction': 'redirect_home.html',
'builder/basics': 'redirect_build.html',
}
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
@@ -41,7 +56,7 @@ html_add_permalinks = None
# The master toctree document.
master_doc = 'index'
master_doc = 'toctree'
# General information about the project.
project = u'Docker'
@@ -120,7 +135,11 @@ html_theme_path = ['../theme']
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# We use a png favicon. This is not compatible with internet explorer, but looks
# much better on all other browsers. However, Sphinx doesn't like it (it likes
# .ico better) so we have just put it in the template rather than use this setting
# html_favicon = 'favicon.png'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
@@ -138,10 +157,6 @@ html_static_path = ['static_files']
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True

Some files were not shown because too many files have changed in this diff.