Compare commits


7 Commits

Author                SHA1        Message                                                                   Date
Guillaume J. Charmes  38b8373434  Implement the COPY operator within the builder                            2013-04-24 14:28:51 -07:00
Guillaume J. Charmes  03b5f8a585  Make sure the destination directory exists when using docker insert      2013-04-24 13:51:28 -07:00
Guillaume J. Charmes  bc260f0225  Add insert command in order to insert external files within an image     2013-04-24 13:37:00 -07:00
Guillaume J. Charmes  45dcd1125b  Add a Builder.Commit method                                               2013-04-24 13:35:57 -07:00
Guillaume J. Charmes  d2e063d9e1  make builder.Run public it now runs only given arguments without sh -c   2013-04-24 12:31:20 -07:00
Guillaume J. Charmes  567a484b66  Clear the containers/images upon failure                                  2013-04-24 12:02:00 -07:00
Guillaume J. Charmes  5d4b886ad6  Add build command                                                         2013-04-24 11:03:01 -07:00
249 changed files with 5658 additions and 23321 deletions
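The series above adds two ways to get files into an image: a COPY instruction in the builder and a companion `docker insert` command for existing images. As a rough sketch of how they might be invoked, assuming the lowercase instruction syntax of the era's Dockerfile (shown later in this diff) and illustrative arguments:

```bash
# hypothetical Dockerfile fragment using the new builder instruction
from   ubuntu:12.04
copy   ./app.conf /etc/app.conf

# companion client command: insert an external file into an existing image
docker insert <image> http://example.com/app.conf /etc/app.conf
```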

.gitignore (1)

@@ -15,4 +15,3 @@ docs/_build
docs/_static
docs/_templates
.gopath/
.dotcloud

.mailmap

@@ -2,7 +2,7 @@
<charles.hooper@dotcloud.com> <chooper@plumata.com>
<daniel.mizyrycki@dotcloud.com> <daniel@dotcloud.com>
<daniel.mizyrycki@dotcloud.com> <mzdaniel@glidelink.net>
Guillaume J. Charmes <guillaume.charmes@dotcloud.com> <charmes.guillaume@gmail.com>
Guillaume J. Charmes <guillaume.charmes@dotcloud.com> creack <charmes.guillaume@gmail.com>
<guillaume.charmes@dotcloud.com> <guillaume@dotcloud.com>
<kencochrane@gmail.com> <KenCochrane@gmail.com>
<sridharr@activestate.com> <github@srid.name>
@@ -16,10 +16,4 @@ Tim Terhorst <mynamewastaken+git@gmail.com>
Andy Smith <github@anarkystic.com>
<kalessin@kalessin.fr> <louis@dotcloud.com>
<victor.vieux@dotcloud.com> <victor@dotcloud.com>
<victor.vieux@dotcloud.com> <dev@vvieux.com>
<dominik@honnef.co> <dominikh@fork-bomb.org>
Thatcher Peskens <thatcher@dotcloud.com>
<ehanchrow@ine.com> <eric.hanchrow@gmail.com>
Walter Stanish <walter@pratyeka.org>
<daniel@gasienica.ch> <dgasienica@zynga.com>
Roberto Hashioka <roberto_hashioka@hotmail.com>

AUTHORS (46)

@@ -1,89 +1,45 @@
# This file lists all individuals having contributed content to the repository.
# If you're submitting a patch, please add your name here in alphabetical order as part of the patch.
#
# For a list of active project maintainers, see the MAINTAINERS file.
#
Al Tobey <al@ooyala.com>
Alexey Shamrin <shamrin@gmail.com>
Andrea Luzzardi <aluzzardi@gmail.com>
Andreas Tiefenthaler <at@an-ti.eu>
Andrew Munsell <andrew@wizardapps.net>
Andy Rothfusz <github@metaliveblog.com>
Andy Smith <github@anarkystic.com>
Antony Messerli <amesserl@rackspace.com>
Barry Allard <barry.allard@gmail.com>
Brandon Liu <bdon@bdon.org>
Brian McCallister <brianm@skife.org>
Bruno Bigras <bigras.bruno@gmail.com>
Caleb Spare <cespare@gmail.com>
Calen Pennington <cale@edx.org>
Charles Hooper <charles.hooper@dotcloud.com>
Christopher Currie <codemonkey+github@gmail.com>
Daniel Gasienica <daniel@gasienica.ch>
Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com>
Daniel Robinson <gottagetmac@gmail.com>
Daniel Von Fange <daniel@leancoder.com>
Dominik Honnef <dominik@honnef.co>
Don Spaulding <donspauldingii@gmail.com>
Dr Nic Williams <drnicwilliams@gmail.com>
Elias Probst <mail@eliasprobst.eu>
Eric Hanchrow <ehanchrow@ine.com>
Evan Wies <evan@neomantra.net>
Eric Myhre <hash@exultant.us>
ezbercih <cem.ezberci@gmail.com>
Flavio Castelli <fcastelli@suse.com>
Francisco Souza <f@souza.cc>
Frederick F. Kautz IV <fkautz@alumni.cmu.edu>
Gareth Rushgrove <gareth@morethanseven.net>
Guillaume J. Charmes <guillaume.charmes@dotcloud.com>
Harley Laue <losinggeneration@gmail.com>
Hunter Blanks <hunter@twilio.com>
Jeff Lindsay <progrium@gmail.com>
Jeremy Grosser <jeremy@synack.me>
Joffrey F <joffrey@dotcloud.com>
John Costa <john.costa@gmail.com>
Jon Wedaman <jweede@gmail.com>
Jonas Pfenniger <jonas@pfenniger.name>
Jonathan Rudenberg <jonathan@titanous.com>
Joseph Anthony Pasquale Holsten <joseph@josephholsten.com>
Julien Barbier <write0@gmail.com>
Jérôme Petazzoni <jerome.petazzoni@dotcloud.com>
Ken Cochrane <kencochrane@gmail.com>
Kevin J. Lynagh <kevin@keminglabs.com>
kim0 <email.ahmedkamal@googlemail.com>
Kiran Gangadharan <kiran.daredevil@gmail.com>
Louis Opter <kalessin@kalessin.fr>
Marcus Farkas <toothlessgear@finitebox.com>
Mark McGranaghan <mmcgrana@gmail.com>
Maxim Treskin <zerthurd@gmail.com>
meejah <meejah@meejah.ca>
Michael Crosby <crosby.michael@gmail.com>
Mikhail Sobolev <mss@mawhrin.net>
Nate Jones <nate@endot.org>
Nelson Chen <crazysim@gmail.com>
Niall O'Higgins <niallo@unworkable.org>
odk- <github@odkurzacz.org>
Paul Bowsher <pbowsher@globalpersonals.co.uk>
Paul Hammond <paul@paulhammond.org>
Phil Spitler <pspitler@gmail.com>
Piotr Bogdan <ppbogdan@gmail.com>
Renato Riccieri Santos Zannon <renato.riccieri@gmail.com>
Robert Obryk <robryk@gmail.com>
Roberto Hashioka <roberto_hashioka@hotmail.com>
Sam Alba <sam.alba@gmail.com>
Sam J Sharpe <sam.sharpe@digital.cabinet-office.gov.uk>
Shawn Siefkas <shawn.siefkas@meredith.com>
Silas Sewell <silas@sewell.org>
Solomon Hykes <solomon@dotcloud.com>
Sridhar Ratnakumar <sridharr@activestate.com>
Thatcher Peskens <thatcher@dotcloud.com>
Thomas Bikeev <thomas.bikeev@mac.com>
Thomas Hansen <thomas.hansen@gmail.com>
Tianon Gravi <admwiggin@gmail.com>
Tim Terhorst <mynamewastaken+git@gmail.com>
Tobias Bieniek <Tobias.Bieniek@gmx.de>
Troy Howard <thoward37@gmail.com>
unclejack <unclejacksons@gmail.com>
Victor Vieux <victor.vieux@dotcloud.com>
Vivek Agarwal <me@vivek.im>
Walter Stanish <walter@pratyeka.org>
Will Dietz <w@wdtz.org>

CHANGELOG.md

@@ -1,165 +1,6 @@
# Changelog
## 0.5.0 (2013-07-17)
+ Runtime: List all processes running inside a container with 'docker top'
+ Runtime: Host directories can be mounted as volumes with 'docker run -v'
+ Runtime: Containers can expose public UDP ports (eg, '-p 123/udp')
+ Runtime: Optionally specify an exact public port (eg. '-p 80:4500')
+ Registry: New image naming scheme inspired by Go packaging convention allows arbitrary combinations of registries
+ Builder: ENTRYPOINT instruction sets a default binary entry point to a container
+ Builder: VOLUME instruction marks a part of the container as persistent data
* Builder: 'docker build' displays the full output of a build by default
* Runtime: 'docker login' supports additional options
- Runtime: Don't save a container's hostname when committing an image.
- Registry: Fix issues when uploading images to a private registry
## 0.4.8 (2013-07-01)
+ Builder: New build operation ENTRYPOINT adds an executable entry point to the container.
- Runtime: Fix a bug which caused 'docker run -d' to no longer print the container ID.
- Tests: Fix issues in the test suite
## 0.4.7 (2013-06-28)
* Registry: easier push/pull to a custom registry
* Remote API: the progress bar updates faster when downloading and uploading large files
- Remote API: fix a bug in the optional unix socket transport
* Runtime: improve detection of kernel version
+ Runtime: host directories can be mounted as volumes with 'docker run -b'
- Runtime: fix an issue when only attaching to stdin
* Runtime: use 'tar --numeric-owner' to avoid uid mismatch across multiple hosts
* Hack: improve test suite and dev environment
* Hack: remove dependency on unit tests on 'os/user'
+ Documentation: add terminology section
## 0.4.6 (2013-06-22)
- Runtime: fix a bug which caused creation of empty images (and volumes) to crash.
## 0.4.5 (2013-06-21)
+ Builder: 'docker build git://URL' fetches and builds a remote git repository
* Runtime: 'docker ps -s' optionally prints container size
* Tests: Improved and simplified
- Runtime: fix a regression introduced in 0.4.3 which caused the logs command to fail.
- Builder: fix a regression when using ADD with a single regular file.
## 0.4.4 (2013-06-19)
- Builder: fix a regression introduced in 0.4.3 which caused builds to fail on new clients.
## 0.4.3 (2013-06-19)
+ Builder: ADD of a local file will detect tar archives and unpack them
* Runtime: Remove bsdtar dependency
* Runtime: Add unix socket and multiple -H support
* Runtime: Prevent rm of running containers
* Runtime: Use go1.1 cookiejar
* Builder: ADD improvements: use tar for copy + automatically unpack local archives
* Builder: ADD uses tar/untar for copies instead of calling 'cp -ar'
* Builder: nicer output for 'docker build'
* Builder: fixed the behavior of ADD to be (mostly) reverse-compatible, predictable and well-documented.
* Client: HumanReadable ProgressBar sizes in pull
* Client: Fix docker version's git commit output
* API: Send all tags on History API call
* API: Add tag lookup to history command. Fixes #882
- Runtime: Fix issue detaching from running TTY container
- Runtime: Forbid parallel push/pull for a single image/repo. Fixes #311
- Runtime: Fix race condition within Run command when attaching.
- Builder: fix a bug which caused builds to fail if ADD was the first command
- Documentation: fix missing command in irc bouncer example
## 0.4.2 (2013-06-17)
- Packaging: Bumped version to work around an Ubuntu bug
## 0.4.1 (2013-06-17)
+ Remote API: Add flag to enable cross domain requests
+ Remote API/Client: Add images and containers sizes in docker ps and docker images
+ Runtime: Configure dns configuration host-wide with 'docker -d -dns'
+ Runtime: Detect faulty DNS configuration and replace it with a public default
+ Runtime: allow docker run <name>:<id>
+ Runtime: you can now specify public port (ex: -p 80:4500)
* Client: allow multiple params in inspect
* Client: Print the container id before the hijack in `docker run`
* Registry: add regexp check on repo's name
* Registry: Move auth to the client
* Runtime: improved image removal to garbage-collect unreferenced parents
* Vagrantfile: Add the rest api port to vagrantfile's port_forward
* Upgrade to Go 1.1
- Builder: don't ignore last line in Dockerfile when it doesn't end with \n
- Registry: Remove login check on pull
## 0.4.0 (2013-06-03)
+ Introducing Builder: 'docker build' builds a container, layer by layer, from a source repository containing a Dockerfile
+ Introducing Remote API: control Docker programmatically using a simple HTTP/json API
* Runtime: various reliability and usability improvements
## 0.3.4 (2013-05-30)
+ Builder: 'docker build' builds a container, layer by layer, from a source repository containing a Dockerfile
+ Builder: 'docker build -t FOO' applies the tag FOO to the newly built container.
+ Runtime: interactive TTYs correctly handle window resize
* Runtime: fix how configuration is merged between layers
+ Remote API: split stdout and stderr on 'docker run'
+ Remote API: optionally listen on a different IP and port (use at your own risk)
* Documentation: improved install instructions.
## 0.3.3 (2013-05-23)
- Registry: Fix push regression
- Various bugfixes
## 0.3.2 (2013-05-09)
* Runtime: Store the actual archive on commit
* Registry: Improve the checksum process
* Registry: Use the size to have a good progress bar while pushing
* Registry: Use the actual archive if it exists in order to speed up the push
- Registry: Fix error 400 on push
## 0.3.1 (2013-05-08)
+ Builder: Implement the autorun capability within docker builder
+ Builder: Add caching to docker builder
+ Builder: Add support for docker builder with native API as top level command
+ Runtime: Add go version to debug infos
+ Builder: Implement ENV within docker builder
+ Registry: Add docker search top level command in order to search a repository
+ Images: output graph of images to dot (graphviz)
+ Documentation: new introduction and high-level overview
+ Documentation: Add the documentation for docker builder
+ Website: new high-level overview
- Makefile: Swap "go get" for "go get -d", especially to compile on go1.1rc
- Images: fix ByParent function
- Builder: Check that the command exists before creating, and add unit tests for the case
- Registry: Fix pull for official images with specific tag
- Registry: Fix issue when logging in with a different user and trying to push
- Documentation: CSS fix for docker documentation to make REST API docs look better.
- Documentation: Fixed CouchDB example page header mistake
- Documentation: fixed README formatting
* Registry: Improve checksum - async calculation
* Runtime: kernel version - don't show the dash if flavor is empty
* Documentation: updated www.docker.io website.
* Builder: use any whitespaces instead of tabs
* Packaging: packaging ubuntu; issue #510: Use golang-stable PPA package to build docker
## 0.3.0 (2013-05-06)
+ Registry: Implement the new registry
+ Documentation: new example: sharing data between 2 couchdb databases
- Runtime: Fix the command existence check
- Runtime: strings.Split may return an empty string on no match
- Runtime: Fix an index out of range crash if cgroup memory is not
* Documentation: Various improvements
* Vagrant: Use only one deb line in /etc/apt
## 0.2.2 (2013-05-03)
+ Support for data volumes ('docker run -v=PATH')
+ Share data volumes between containers ('docker run -volumes-from')
+ Improved documentation
* Upgrade to Go 1.0.3
* Various upgrades to the dev environment for contributors
## 0.2.1 (2013-05-01)
+ 'docker commit -run' bundles a layer with default runtime options: command, ports etc.
* Improve install process on Vagrant
+ New Dockerfile operation: "maintainer"
+ New Dockerfile operation: "expose"
+ New Dockerfile operation: "cmd"
+ Contrib script to build a Debian base layer
+ 'docker -d -r': restart crashed containers at daemon startup
* Runtime: improve test coverage
## 0.2.0 (2013-04-23)
- Runtime: ghost containers can be killed and waited for
* Documentation: update install instructions
- Packaging: fix Vagrantfile
@@ -167,12 +8,13 @@
+ Add a changelog
- Various bugfixes
## 0.1.8 (2013-04-22)
- Dynamically detect cgroup capabilities
- Issue stability warning on kernels <3.8
- 'docker push' buffers on disk instead of memory
- Fix 'docker diff' for removed files
- Fix 'docker stop' for ghost containers
- Fix handling of pidfile
- Various bugfixes and stability improvements
@@ -193,7 +35,7 @@
- Improve diagnosis of missing system capabilities
- Allow disabling memory limits at compile time
- Add debian packaging
- Documentation: installing on Arch Linux
- Documentation: running Redis on docker
- Fixed lxc 0.9 compatibility
- Automatically load aufs module

CONTRIBUTING.md

@@ -1,6 +1,9 @@
# Contributing to Docker
Want to hack on Docker? Awesome! Here are instructions to get you started. They are probably not perfect; please let us know if anything feels
Want to hack on Docker? Awesome! There are instructions to get you
started on the website: http://docker.io/gettingstarted.html
They are probably not perfect; please let us know if anything feels
wrong or incomplete.
## Contribution guidelines
@@ -88,73 +91,3 @@ Add your name to the AUTHORS file, but make sure the list is sorted and your
name and email address match your git configuration. The AUTHORS file is
regenerated occasionally from the git commit history, so a mismatch may result
in your changes being overwritten.
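Since the AUTHORS file is regenerated from commit history, a sketch of typical tooling for producing such a list (not necessarily the project's exact script) is `git shortlog`, with the .mailmap entries earlier in this diff collapsing duplicate identities:

```bash
# one "Name <email>" per contributor, sorted by name; .mailmap normalizes identities
git shortlog --summary --email HEAD | cut -f2- > AUTHORS.candidate
```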
## Decision process
### How are decisions made?
Short answer: with pull requests to the docker repository.
Docker is an open-source project with an open design philosophy. This means that the repository is the source of truth for EVERY aspect of the project,
including its philosophy, design, roadmap and APIs. *If it's part of the project, it's in the repo. If it's in the repo, it's part of the project.*
As a result, all decisions can be expressed as changes to the repository. An implementation change is a change to the source code. An API change is a change to
the API specification. A philosophy change is a change to the philosophy manifesto. And so on.
All decisions affecting docker, big and small, follow the same 3 steps:
* Step 1: Open a pull request. Anyone can do this.
* Step 2: Discuss the pull request. Anyone can do this.
* Step 3: Accept or refuse a pull request. The relevant maintainer does this (see below "Who decides what?")
### Who decides what?
So all decisions are pull requests, and the relevant maintainer makes the decision by accepting or refusing the pull request.
But how do we identify the relevant maintainer for a given pull request?
Docker follows the timeless, highly efficient and totally unfair system known as [Benevolent dictator for life](http://en.wikipedia.org/wiki/Benevolent_Dictator_for_Life),
with yours truly, Solomon Hykes, in the role of BDFL.
This means that all decisions are made by default by me. Since making every decision myself would be highly unscalable, in practice decisions are spread across multiple maintainers.
The relevant maintainer for a pull request is assigned in 3 steps:
* Step 1: Determine the subdirectory affected by the pull request. This might be src/registry, docs/source/api, or any other part of the repo.
* Step 2: Find the MAINTAINERS file which affects this directory. If the directory itself does not have a MAINTAINERS file, work your way up the repo hierarchy until you find one.
* Step 3: The first maintainer listed is the primary maintainer. The pull request is assigned to him. He may assign it to other listed maintainers, at his discretion.
### I'm a maintainer, should I make pull requests too?
Primary maintainers are not required to create pull requests when changing their own subdirectory, but secondary maintainers are.
### Who assigns maintainers?
Solomon.
### How can I become a maintainer?
* Step 1: learn the component inside out
* Step 2: make yourself useful by contributing code, bugfixes, support etc.
* Step 3: volunteer on the irc channel (#docker@freenode)
Don't forget: being a maintainer is a time investment. Make sure you will have time to make yourself available.
You don't have to be a maintainer to make a difference on the project!
### What are a maintainer's responsibilities?
It is every maintainer's responsibility to:
* 1) Expose a clear roadmap for improving their component.
* 2) Deliver prompt feedback and decisions on pull requests.
* 3) Be available to anyone with questions, bug reports, criticism etc. on their component. This includes irc, github requests and the mailing list.
* 4) Make sure their component respects the philosophy, design and roadmap of the project.
### How is this process changed?
Just like everything else: by making a pull request :)

Dockerfile

@@ -1,30 +0,0 @@
# This file describes the standard way to build Docker, using docker
docker-version 0.4.2
from ubuntu:12.04
maintainer Solomon Hykes <solomon@dotcloud.com>
# Build dependencies
run apt-get install -y -q curl
run apt-get install -y -q git
# Install Go
run curl -s https://go.googlecode.com/files/go1.1.1.linux-amd64.tar.gz | tar -v -C /usr/local -xz
env PATH /usr/local/go/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin
env GOPATH /go
env CGO_ENABLED 0
run cd /tmp && echo 'package main' > t.go && go test -a -i -v
# Download dependencies
run PKG=github.com/kr/pty REV=27435c699; git clone http://$PKG /go/src/$PKG && cd /go/src/$PKG && git checkout -f $REV
run PKG=github.com/gorilla/context/ REV=708054d61e5; git clone http://$PKG /go/src/$PKG && cd /go/src/$PKG && git checkout -f $REV
run PKG=github.com/gorilla/mux/ REV=9b36453141c; git clone http://$PKG /go/src/$PKG && cd /go/src/$PKG && git checkout -f $REV
# Run dependencies
run apt-get install -y iptables
# lxc requires updating ubuntu sources
run echo 'deb http://archive.ubuntu.com/ubuntu precise main universe' > /etc/apt/sources.list
run apt-get update
run apt-get install -y lxc
run apt-get install -y aufs-tools
# Upload docker source
add . /go/src/github.com/dotcloud/docker
# Build the binary
run cd /go/src/github.com/dotcloud/docker/docker && go install -ldflags "-X main.GITCOMMIT '??' -d -w"
env PATH /usr/local/go/bin:/go/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin
cmd ["docker"]

FIXME (36)

@@ -1,36 +0,0 @@
## FIXME
This file is a loose collection of things to improve in the codebase, for the internal
use of the maintainers.
They are not big enough to be in the roadmap, not user-facing enough to be github issues,
and not important enough to be discussed in the mailing list.
They are just like FIXME comments in the source code, except we're not sure where in the source
to put them - so we put them here :)
* Merge Runtime, Server and Builder into Runtime
* Run linter on codebase
* Unify build commands and regular commands
* Move source code into src/ subdir for clarity
* Clean up the Makefile, it's a mess
* docker build: on non-existent local path for ADD, don't show full absolute path on the host
* mount into /dockerinit rather than /sbin/init
* docker tag foo REPO:TAG
* use size header for progress bar in pull
* Clean up context upload in build!!!
* Parallel pull
* Ensure /proc/sys/net/ipv4/ip_forward is 1
* Force DNS to public!
* Always generate a resolv.conf per container, to avoid changing resolv.conf under the container's feet
* Save metadata with import/export
* Upgrade dockerd without stopping containers
* bring back git revision info, looks like it was lost
* Simple command to remove all untagged images
* Simple command to clean up containers for disk space
* Caching after an ADD
* entry point config
* Clean up the ProgressReader api, it's a PITA to use

MAINTAINERS

@@ -1,5 +0,0 @@
Solomon Hykes <solomon@dotcloud.com>
Guillaume Charmes <guillaume@dotcloud.com>
Victor Vieux <victor@dotcloud.com>
api.go: Victor Vieux <victor@dotcloud.com>
Vagrantfile: Daniel Mizyrycki <daniel@dotcloud.com>

Makefile

@@ -2,8 +2,6 @@ DOCKER_PACKAGE := github.com/dotcloud/docker
RELEASE_VERSION := $(shell git tag | grep -E "v[0-9\.]+$$" | sort -nr | head -n 1)
SRCRELEASE := docker-$(RELEASE_VERSION)
BINRELEASE := docker-$(RELEASE_VERSION).tgz
BUILD_SRC := build_src
BUILD_PATH := ${BUILD_SRC}/src/${DOCKER_PACKAGE}
GIT_ROOT := $(shell git rev-parse --show-toplevel)
BUILD_DIR := $(CURDIR)/.gopath
@@ -19,7 +17,7 @@ endif
GIT_COMMIT = $(shell git rev-parse --short HEAD)
GIT_STATUS = $(shell test -n "`git status --porcelain`" && echo "+CHANGES")
BUILD_OPTIONS = -a -ldflags "-X main.GITCOMMIT $(GIT_COMMIT)$(GIT_STATUS) -d -w"
BUILD_OPTIONS = -ldflags "-X main.GIT_COMMIT $(GIT_COMMIT)$(GIT_STATUS)"
SRC_DIR := $(GOPATH)/src
@@ -35,22 +33,19 @@ all: $(DOCKER_BIN)
$(DOCKER_BIN): $(DOCKER_DIR)
@mkdir -p $(dir $@)
@(cd $(DOCKER_MAIN); CGO_ENABLED=0 go build $(GO_OPTIONS) $(BUILD_OPTIONS) -o $@)
@(cd $(DOCKER_MAIN); go build $(GO_OPTIONS) $(BUILD_OPTIONS) -o $@)
@echo $(DOCKER_BIN_RELATIVE) is created.
$(DOCKER_DIR):
@mkdir -p $(dir $@)
@if [ -h $@ ]; then rm -f $@; fi; ln -sf $(CURDIR)/ $@
@(cd $(DOCKER_MAIN); go get -d $(GO_OPTIONS))
@rm -f $@
@ln -sf $(CURDIR)/ $@
@(cd $(DOCKER_MAIN); go get $(GO_OPTIONS))
whichrelease:
echo $(RELEASE_VERSION)
release: $(BINRELEASE)
s3cmd -P put $(BINRELEASE) s3://get.docker.io/builds/`uname -s`/`uname -m`/docker-$(RELEASE_VERSION).tgz
s3cmd -P put docker-latest.tgz s3://get.docker.io/builds/`uname -s`/`uname -m`/docker-latest.tgz
s3cmd -P put $(SRCRELEASE)/bin/docker s3://get.docker.io/builds/`uname -s`/`uname -m`/docker
srcrelease: $(SRCRELEASE)
deps: $(DOCKER_DIR)
@@ -64,7 +59,6 @@ $(SRCRELEASE):
$(BINRELEASE): $(SRCRELEASE)
rm -f $(BINRELEASE)
cd $(SRCRELEASE); make; cp -R bin docker-$(RELEASE_VERSION); tar -f ../$(BINRELEASE) -zv -c docker-$(RELEASE_VERSION)
cd $(SRCRELEASE); cp -R bin docker-latest; tar -f ../docker-latest.tgz -zv -c docker-latest
clean:
@rm -rf $(dir $(DOCKER_BIN))
@@ -74,22 +68,11 @@ else ifneq ($(DOCKER_DIR), $(realpath $(DOCKER_DIR)))
@rm -f $(DOCKER_DIR)
endif
test:
# Copy docker source and dependencies for testing
rm -rf ${BUILD_SRC}; mkdir -p ${BUILD_PATH}
tar --exclude=${BUILD_SRC} -cz . | tar -xz -C ${BUILD_PATH}
GOPATH=${CURDIR}/${BUILD_SRC} go get -d
# Do the test
sudo -E GOPATH=${CURDIR}/${BUILD_SRC} go test ${GO_OPTIONS}
testall: all
@(cd $(DOCKER_DIR); sudo -E go test ./... $(GO_OPTIONS))
test: all
@(cd $(DOCKER_DIR); sudo -E go test $(GO_OPTIONS))
fmt:
@gofmt -s -l -w .
hack:
cd $(CURDIR)/hack && vagrant up
ssh-dev:
cd $(CURDIR)/hack && vagrant ssh
cd $(CURDIR)/buildbot && vagrant up
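The BUILD_OPTIONS lines above rely on the Go linker's -X flag, which stamps a string variable at link time so the binary can report the commit it was built from. A minimal sketch of the mechanism outside the Makefile (the space-separated `-X name value` form shown predates Go 1.5, which requires `-X name=value`):

```bash
# assumes package main declares: var GITCOMMIT string
go build -ldflags "-X main.GITCOMMIT $(git rev-parse --short HEAD)" -o bin/docker ./docker
```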

NOTICE (7)

@@ -3,9 +3,4 @@ Copyright 2012-2013 dotCloud, inc.
This product includes software developed at dotCloud, inc. (http://www.dotcloud.com).
This product contains software (https://github.com/kr/pty) developed by Keith Rarick, licensed under the MIT License.
Transfers of Docker shall be in accordance with applicable export controls of any country and all other applicable
legal requirements. Docker shall not be distributed or downloaded to or in Cuba, Iran, North Korea, Sudan or Syria
and shall not be distributed or downloaded to any person on the Denied Persons List administered by the U.S.
Department of Commerce.

README.md (118)

@@ -1,94 +1,37 @@
Docker: the Linux container engine
==================================
Docker: the Linux container runtime
===================================
Docker is an open-source engine which automates the deployment of applications as highly portable, self-sufficient containers.
Docker complements LXC with a high-level API which operates at the process level. It runs unix processes with strong guarantees of isolation and repeatability across servers.
Docker containers are both *hardware-agnostic* and *platform-agnostic*. This means that they can run anywhere, from your
laptop to the largest EC2 compute instance and everything in between - and they don't require that you use a particular
language, framework or packaging system. That makes them great building blocks for deploying and scaling web apps, databases
and backend services without depending on a particular stack or provider.
Docker is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.
Docker is an open-source implementation of the deployment engine which powers [dotCloud](http://dotcloud.com), a popular Platform-as-a-Service.
It benefits directly from the experience accumulated over several years of large-scale operation and support of hundreds of thousands
of applications and databases.
![Docker L](docs/sources/static_files/lego_docker.jpg "Docker")
![Docker L](docs/sources/concepts/images/lego_docker.jpg "Docker")
* *Heterogeneous payloads*: any combination of binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, you name it. No more juggling between domain-specific tools. Docker can deploy and run them all.
## Better than VMs
* *Any server*: docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments.
A common method for distributing applications and sandboxing their execution is to use virtual machines, or VMs. Typical VM formats
are VMware's vmdk, Oracle VirtualBox's vdi, and Amazon EC2's ami. In theory these formats should allow every developer to
automatically package their application into a "machine" for easy distribution and deployment. In practice, that almost never
happens, for a few reasons:
* *Isolation*: docker isolates processes from each other and from the underlying host, using lightweight containers.
* *Size*: VMs are very large, which makes them impractical to store and transfer.
* *Performance*: running VMs consumes significant CPU and memory, which makes them impractical in many scenarios, for example local development of multi-tier applications, and
large-scale deployment of cpu and memory-intensive applications on large numbers of machines.
* *Portability*: competing VM environments don't play well with each other. Although conversion tools do exist, they are limited and add even more overhead.
* *Hardware-centric*: VMs were designed with machine operators in mind, not software developers. As a result, they offer very limited tooling for what developers need most:
building, testing and running their software. For example, VMs offer no facilities for application versioning, monitoring, configuration, logging or service discovery.
By contrast, Docker relies on a different sandboxing method known as *containerization*. Unlike traditional virtualization,
containerization takes place at the kernel level. Most modern operating system kernels now support the primitives necessary
for containerization, including Linux with [openvz](http://openvz.org), [vserver](http://linux-vserver.org) and more recently [lxc](http://lxc.sourceforge.net),
Solaris with [zones](http://docs.oracle.com/cd/E26502_01/html/E29024/preface-1.html#scrolltoc) and FreeBSD with [Jails](http://www.freebsd.org/doc/handbook/jails.html).
Docker builds on top of these low-level primitives to offer developers a portable format and runtime environment that solves
all 4 problems. Docker containers are small (and their transfer can be optimized with layers), they have basically zero memory and cpu overhead,
they are completely portable and are designed from the ground up with an application-centric design.
The best part: because docker operates at the OS level, it can still be run inside a VM!
## Plays well with others
Docker does not require that you buy into a particular programming language, framework, packaging system or configuration language.
Is your application a unix process? Does it use files, tcp connections, environment variables, standard unix streams and command-line
arguments as inputs and outputs? Then docker can run it.
Can your application's build be expressed as a sequence of such commands? Then docker can build it.
* *Repeatability*: because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run.
## Escape dependency hell
Notable features
-----------------
A common problem for developers is the difficulty of managing all their application's dependencies in a simple and automated way.
* Filesystem isolation: each process container runs in a completely separate root filesystem.
This is usually difficult for several reasons:
* Resource isolation: system resources like cpu and memory can be allocated differently to each process container, using cgroups.
* *Cross-platform dependencies*. Modern applications often depend on a combination of system libraries and binaries, language-specific packages, framework-specific modules,
internal components developed for another project, etc. These dependencies live in different "worlds" and require different tools - these tools typically don't work
well with each other, requiring awkward custom integrations.
* Network isolation: each process container runs in its own network namespace, with a virtual interface and IP address of its own.
* Conflicting dependencies. Different applications may depend on different versions of the same dependency. Packaging tools handle these situations with various degrees of ease -
but they all handle them in different and incompatible ways, which again forces the developer to do extra work.
* Custom dependencies. A developer may need to prepare a custom version of their application's dependency. Some packaging systems can handle custom versions of a dependency,
others can't - and all of them handle it differently.
* Copy-on-write: root filesystems are created using copy-on-write, which makes deployment extremely fast, memory-cheap and disk-cheap.
* Logging: the standard streams (stdout/stderr/stdin) of each process container are collected and logged for real-time or batch retrieval.
Docker solves dependency hell by giving the developer a simple way to express *all* their application's dependencies in one place,
and streamline the process of assembling them. If this makes you think of [XKCD 927](http://xkcd.com/927/), don't worry. Docker doesn't
*replace* your favorite packaging systems. It simply orchestrates their use in a simple and repeatable way. How does it do that? With layers.
Docker defines a build as running a sequence of unix commands, one after the other, in the same container. Build commands modify the contents of the container
(usually by installing new files on the filesystem), the next command modifies it some more, etc. Since each build command inherits the result of the previous
commands, the *order* in which the commands are executed expresses *dependencies*.
Here's a typical docker build process:
```bash
from ubuntu:12.10
run apt-get update
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y python
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y python-pip
run pip install django
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y curl
run curl -L https://github.com/shykes/helloflask/archive/master.tar.gz | tar -xzv
run cd helloflask-master && pip install -r requirements.txt
```
Note that Docker doesn't care *how* dependencies are built - as long as they can be built by running a unix command in a container.
* Change management: changes to a container's filesystem can be committed into a new image and re-used to create more containers. No templating or manual configuration required.
* Interactive shell: docker can allocate a pseudo-tty and attach to the standard input of any container, for example to run a throwaway interactive shell.
Install instructions
==================
@@ -97,7 +40,7 @@ Quick install on Ubuntu 12.04 and 12.10
---------------------------------------
```bash
curl get.docker.io | sudo sh -x
curl get.docker.io | sh -x
```
Binary installs
@@ -108,7 +51,7 @@ Note that some methods are community contributions and not yet officially suppor
* [Ubuntu 12.04 and 12.10 (officially supported)](http://docs.docker.io/en/latest/installation/ubuntulinux/)
* [Arch Linux](http://docs.docker.io/en/latest/installation/archlinux/)
* [Mac OS X (with Vagrant)](http://docs.docker.io/en/latest/installation/vagrant/)
* [MacOS X (with Vagrant)](http://docs.docker.io/en/latest/installation/macos/)
* [Windows (with Vagrant)](http://docs.docker.io/en/latest/installation/windows/)
* [Amazon EC2 (with Vagrant)](http://docs.docker.io/en/latest/installation/amazon/)
@@ -181,7 +124,7 @@ Running an irc bouncer
----------------------
```bash
BOUNCER_ID=$(docker run -d -p 6667 -u irc shykes/znc zncrun $USER $PASSWORD)
BOUNCER_ID=$(docker run -d -p 6667 -u irc shykes/znc $USER $PASSWORD)
echo "Configure your irc client to connect to port $(docker port $BOUNCER_ID 6667) of this machine"
```
@@ -216,8 +159,7 @@ PORT=$(docker port $JOB 4444)
# Connect to the public port via the host's public address
# Please note that because of how routing works connecting to localhost or 127.0.0.1 $PORT will not work.
# Replace *eth0* according to your local interface name.
IP=$(ip -o -4 addr list eth0 | perl -n -e 'if (m{inet\s([\d\.]+)\/\d+\s}xms) { print $1 }')
IP=$(ifconfig eth0 | perl -n -e 'if (m/inet addr:([\d\.]+)/g) { print $1 }')
echo hello world | nc $IP $PORT
# Verify that the network connection worked
@@ -252,7 +194,7 @@ Note
----
We also keep the documentation in this repository. The website documentation is generated with Sphinx from these sources.
Please find it under docs/sources/ and read more about it https://github.com/dotcloud/docker/tree/master/docs/README.md
Please find it under docs/sources/ and read more about it https://github.com/dotcloud/docker/master/docs/README.md
Please feel free to fix / update the documentation and send us pull requests. More tutorials are also welcome.
@@ -263,14 +205,14 @@ Setting up a dev environment
Instructions that have been verified to work on Ubuntu 12.10,
```bash
sudo apt-get -y install lxc curl xz-utils golang git
sudo apt-get -y install lxc wget bsdtar curl golang git
export GOPATH=~/go/
export PATH=$GOPATH/bin:$PATH
mkdir -p $GOPATH/src/github.com/dotcloud
cd $GOPATH/src/github.com/dotcloud
git clone https://github.com/dotcloud/docker.git
git clone git@github.com:dotcloud/docker.git
cd docker
go get -v github.com/dotcloud/docker/...
@@ -294,7 +236,7 @@ a format that is self-describing and portable, so that any compliant runtime can
The spec for Standard Containers is currently a work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment.
A great analogy for this is the shipping container. Just like how Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.
A great analogy for this is the shipping container. Just like Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.
### 1. STANDARD OPERATIONS
@@ -322,7 +264,7 @@ Similarly, before Standard Containers, by the time a software component ran in p
### 5. INDUSTRIAL-GRADE DELIVERY
There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded onto the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.
There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded on the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.
With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality.
@@ -372,10 +314,4 @@ Standard Container Specification
#### Security
### Legal
Transfers of Docker shall be in accordance with applicable export controls of any country and all other applicable
legal requirements. Docker shall not be distributed or downloaded to or in Cuba, Iran, North Korea, Sudan or Syria
and shall not be distributed or downloaded to any person on the Denied Persons List administered by the U.S.
Department of Commerce.

SPECS/data-volumes.md (71, new file)

@@ -0,0 +1,71 @@
## Spec for data volumes
Spec owner: Solomon Hykes <solomon@dotcloud.com>
Data volumes (issue #111) are a much-requested feature which triggers much discussion and debate. Below is the current authoritative spec for implementing data volumes.
This spec will be deprecated once the feature is fully implemented.
Discussion, requests, trolls, demands, offerings, threats and other forms of supplications concerning this spec should be addressed to Solomon here: https://github.com/dotcloud/docker/issues/111
### 1. Creating data volumes
At container creation, parts of a container's filesystem can be mounted as separate data volumes. Volumes are defined with the -v flag.
For example:
```bash
$ docker run -v /var/lib/postgres -v /var/log postgres /usr/bin/postgres
```
In this example, a new container is created from the 'postgres' image. At the same time, docker creates 2 new data volumes: one will be mapped to the container at /var/lib/postgres, the other at /var/log.
2 important notes:
1) Volumes don't have top-level names. At no point does the user provide a name, nor is one given to them. Volumes are identified by the path at which they are mounted inside their container.
2) The user doesn't choose the source of the volume. Docker only mounts volumes it created itself, in the same way that it only runs containers that it created itself. That is by design.
### 2. Sharing data volumes
Instead of creating its own volumes, a container can share another container's volumes. For example:
```bash
$ docker run --volumes-from $OTHER_CONTAINER_ID postgres /usr/local/bin/postgres-backup
```
In this example, a new container is created from the 'postgres' image. At the same time, docker will *re-use* the 2 data volumes created in the previous example. One volume will be mounted on the /var/lib/postgres of *both* containers, and the other will be mounted on the /var/log of both containers.
### 3. Under the hood
Docker stores volumes in /var/lib/docker/volumes. Each volume receives a globally unique ID at creation, and is stored at /var/lib/docker/volumes/ID.
At creation, volumes are attached to a single container - the source of truth for this mapping will be the container's configuration.
Mounting a volume consists of calling "mount --bind" from the volume's directory to the appropriate sub-directory of the container mountpoint. This may be done by Docker itself, or farmed out to lxc (which supports mount-binding) if possible.
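A minimal sketch of that mechanism, with hypothetical IDs and paths standing in for the ones docker would generate:

```bash
VOLUME_ID=0123abcd                                 # hypothetical volume ID
ROOTFS=/var/lib/docker/containers/4567ef89/rootfs  # hypothetical container rootfs
mkdir -p $ROOTFS/var/lib/postgres
mount --bind /var/lib/docker/volumes/$VOLUME_ID $ROOTFS/var/lib/postgres
# bind-mounting the same volume directory into a second container's rootfs
# is all that sharing via --volumes-from requires
```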
### 4. Backups, transfers and other volume operations
Volumes sometimes need to be backed up, transferred between hosts, synchronized, etc. These operations typically are application-specific or site-specific, e.g. rsync vs. S3 upload vs. replication vs...
Rather than attempting to implement all these scenarios directly, Docker will allow for custom implementations using an extension mechanism.
### 5. Custom volume handlers
Docker allows for arbitrary code to be executed against a container's volumes, to implement any custom action: backup, transfer, synchronization across hosts, etc.
Here's an example:
```bash
$ DB=$(docker run -d -v /var/lib/postgres -v /var/log postgres /usr/bin/postgres)
$ BACKUP_JOB=$(docker run -d --volumes-from $DB shykes/backuper /usr/local/bin/backup-postgres --s3creds=$S3CREDS)
$ docker wait $BACKUP_JOB
```
Congratulations, you just implemented a custom volume handler, using Docker's built-in ability to 1) execute arbitrary code and 2) share volumes between containers.

Vagrantfile (122)

@@ -1,65 +1,40 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
BOX_NAME = ENV['BOX_NAME'] || "ubuntu"
BOX_URI = ENV['BOX_URI'] || "http://files.vagrantup.com/precise64.box"
VF_BOX_URI = ENV['BOX_URI'] || "http://files.vagrantup.com/precise64_vmware_fusion.box"
AWS_REGION = ENV['AWS_REGION'] || "us-east-1"
AWS_AMI = ENV['AWS_AMI'] || "ami-d0f89fb9"
FORWARD_DOCKER_PORTS = ENV['FORWARD_DOCKER_PORTS']
def v10(config)
config.vm.box = 'precise64'
config.vm.box_url = 'http://files.vagrantup.com/precise64.box'
Vagrant::Config.run do |config|
# Setup virtual machine box. This VM configuration code is always executed.
config.vm.box = BOX_NAME
config.vm.box_url = BOX_URI
config.vm.forward_port 4243, 4243
# Provision docker and new kernel if deployment was not done
if Dir.glob("#{File.dirname(__FILE__)}/.vagrant/machines/default/*/id").empty?
# Add lxc-docker package
pkg_cmd = "apt-get update -qq; apt-get install -q -y python-software-properties; " \
"add-apt-repository -y ppa:dotcloud/lxc-docker; apt-get update -qq; " \
"apt-get install -q -y lxc-docker; "
# Add X.org Ubuntu backported 3.8 kernel
pkg_cmd << "add-apt-repository -y ppa:ubuntu-x-swat/r-lts-backport; " \
"apt-get update -qq; apt-get install -q -y linux-image-3.8.0-19-generic; "
# Add guest additions if local vbox VM
is_vbox = true
ARGV.each do |arg| is_vbox &&= !arg.downcase.start_with?("--provider") end
if is_vbox
pkg_cmd << "apt-get install -q -y linux-headers-3.8.0-19-generic dkms; " \
"echo 'Downloading VBox Guest Additions...'; " \
"wget -q http://dlc.sun.com.edgesuite.net/virtualbox/4.2.12/VBoxGuestAdditions_4.2.12.iso; "
# Prepare the VM to add guest additions after reboot
pkg_cmd << "echo -e 'mount -o loop,ro /home/vagrant/VBoxGuestAdditions_4.2.12.iso /mnt\n" \
"echo yes | /mnt/VBoxLinuxAdditions.run\numount /mnt\n" \
"rm /root/guest_additions.sh; ' > /root/guest_additions.sh; " \
"chmod 700 /root/guest_additions.sh; " \
"sed -i -E 's#^exit 0#[ -x /root/guest_additions.sh ] \\&\\& /root/guest_additions.sh#' /etc/rc.local; " \
"echo 'Installation of VBox Guest Additions is proceeding in the background.'; " \
"echo '\"vagrant reload\" can be used in about 2 minutes to activate the new guest additions.'; "
end
# Activate new kernel
pkg_cmd << "shutdown -r +1; "
config.vm.provision :shell, :inline => pkg_cmd
end
# Install ubuntu packaging dependencies and create ubuntu packages
config.vm.provision :shell, :inline => "echo 'deb http://ppa.launchpad.net/dotcloud/lxc-docker/ubuntu precise main' >>/etc/apt/sources.list"
config.vm.provision :shell, :inline => 'export DEBIAN_FRONTEND=noninteractive; apt-get -qq update; apt-get install -qq -y --force-yes lxc-docker'
end
Vagrant::VERSION < "1.1.0" and Vagrant::Config.run do |config|
v10(config)
end
Vagrant::VERSION >= "1.1.0" and Vagrant.configure("1") do |config|
v10(config)
end
# Providers were added on Vagrant >= 1.1.0
Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
config.vm.provider :aws do |aws, override|
config.vm.provider :aws do |aws|
config.vm.box = "dummy"
config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
aws.access_key_id = ENV["AWS_ACCESS_KEY_ID"]
aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
aws.keypair_name = ENV["AWS_KEYPAIR_NAME"]
override.ssh.private_key_path = ENV["AWS_SSH_PRIVKEY"]
override.ssh.username = "ubuntu"
aws.region = AWS_REGION
aws.ami = AWS_AMI
aws.ssh_private_key_path = ENV["AWS_SSH_PRIVKEY"]
aws.region = "us-east-1"
aws.ami = "ami-d0f89fb9"
aws.ssh_username = "ubuntu"
aws.instance_type = "t1.micro"
end
config.vm.provider :rackspace do |rs|
config.vm.box = "dummy"
config.vm.box_url = "https://github.com/mitchellh/vagrant-rackspace/raw/master/dummy.box"
config.ssh.private_key_path = ENV["RS_PRIVATE_KEY"]
rs.username = ENV["RS_USERNAME"]
rs.api_key = ENV["RS_API_KEY"]
@@ -68,29 +43,40 @@ Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
rs.image = /Ubuntu/
end
config.vm.provider :vmware_fusion do |f, override|
override.vm.box = BOX_NAME
override.vm.box_url = VF_BOX_URI
override.vm.synced_folder ".", "/vagrant", disabled: true
f.vmx["displayName"] = "docker"
config.vm.provider :virtualbox do |vb|
config.vm.box = 'precise64'
config.vm.box_url = 'http://files.vagrantup.com/precise64.box'
end
end
Vagrant::VERSION >= "1.2.0" and Vagrant.configure("2") do |config|
config.vm.provider :aws do |aws, override|
config.vm.box = "dummy"
config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
aws.access_key_id = ENV["AWS_ACCESS_KEY_ID"]
aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
aws.keypair_name = ENV["AWS_KEYPAIR_NAME"]
override.ssh.private_key_path = ENV["AWS_SSH_PRIVKEY"]
override.ssh.username = "ubuntu"
aws.region = "us-east-1"
aws.ami = "ami-d0f89fb9"
aws.instance_type = "t1.micro"
end
config.vm.provider :rackspace do |rs|
config.vm.box = "dummy"
config.vm.box_url = "https://github.com/mitchellh/vagrant-rackspace/raw/master/dummy.box"
config.ssh.private_key_path = ENV["RS_PRIVATE_KEY"]
rs.username = ENV["RS_USERNAME"]
rs.api_key = ENV["RS_API_KEY"]
rs.public_key_path = ENV["RS_PUBLIC_KEY"]
rs.flavor = /512MB/
rs.image = /Ubuntu/
end
config.vm.provider :virtualbox do |vb|
config.vm.box = BOX_NAME
config.vm.box_url = BOX_URI
config.vm.box = 'precise64'
config.vm.box_url = 'http://files.vagrantup.com/precise64.box'
end
end
if !FORWARD_DOCKER_PORTS.nil?
Vagrant::VERSION < "1.1.0" and Vagrant::Config.run do |config|
(49000..49900).each do |port|
config.vm.forward_port port, port
end
end
Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
(49000..49900).each do |port|
config.vm.network :forwarded_port, :host => port, :guest => port
end
end
end

api.go (963)

@@ -1,963 +0,0 @@
package docker
import (
"encoding/json"
"fmt"
"github.com/dotcloud/docker/auth"
"github.com/dotcloud/docker/utils"
"github.com/gorilla/mux"
"io"
"io/ioutil"
"log"
"net"
"net/http"
"os"
"os/exec"
"strconv"
"strings"
)
const APIVERSION = 1.3
const DEFAULTHTTPHOST string = "127.0.0.1"
const DEFAULTHTTPPORT int = 4243
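// Every handler below shares a single signature:
//   func(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error
// "version" is the Remote API revision requested by the client, letting one
// handler vary its behavior across revisions (getAuth returns 404 above 1.1;
// the image endpoints switch to JSON streaming above 1.0), while "vars"
// carries mux route parameters such as {name}.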
func hijackServer(w http.ResponseWriter) (io.ReadCloser, io.Writer, error) {
conn, _, err := w.(http.Hijacker).Hijack()
if err != nil {
return nil, nil, err
}
// Flush the options to make sure the client sets the raw mode
conn.Write([]byte{})
return conn, conn, nil
}
// If we don't do this, a POST without Content-Type (even with an empty body) will fail
func parseForm(r *http.Request) error {
if err := r.ParseForm(); err != nil && !strings.HasPrefix(err.Error(), "mime:") {
return err
}
return nil
}
func parseMultipartForm(r *http.Request) error {
if err := r.ParseMultipartForm(4096); err != nil && !strings.HasPrefix(err.Error(), "mime:") {
return err
}
return nil
}
func httpError(w http.ResponseWriter, err error) {
statusCode := http.StatusInternalServerError
if strings.HasPrefix(err.Error(), "No such") {
statusCode = http.StatusNotFound
} else if strings.HasPrefix(err.Error(), "Bad parameter") {
statusCode = http.StatusBadRequest
} else if strings.HasPrefix(err.Error(), "Conflict") {
statusCode = http.StatusConflict
} else if strings.HasPrefix(err.Error(), "Impossible") {
statusCode = http.StatusNotAcceptable
} else if strings.HasPrefix(err.Error(), "Wrong login/password") {
statusCode = http.StatusUnauthorized
} else if strings.Contains(err.Error(), "hasn't been activated") {
statusCode = http.StatusForbidden
}
utils.Debugf("[error %d] %s", statusCode, err)
http.Error(w, err.Error(), statusCode)
}
func writeJSON(w http.ResponseWriter, b []byte) {
w.Header().Set("Content-Type", "application/json")
w.Write(b)
}
func getBoolParam(value string) (bool, error) {
if value == "" {
return false, nil
}
ret, err := strconv.ParseBool(value)
if err != nil {
return false, fmt.Errorf("Bad parameter")
}
return ret, nil
}
func getAuth(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if version > 1.1 {
w.WriteHeader(http.StatusNotFound)
return nil
}
authConfig, err := auth.LoadConfig(srv.runtime.root)
if err != nil {
if err != auth.ErrConfigFileMissing {
return err
}
authConfig = &auth.AuthConfig{}
}
b, err := json.Marshal(&auth.AuthConfig{Username: authConfig.Username, Email: authConfig.Email})
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postAuth(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
authConfig := &auth.AuthConfig{}
err := json.NewDecoder(r.Body).Decode(authConfig)
if err != nil {
return err
}
status := ""
if version > 1.1 {
status, err = auth.Login(authConfig, false)
if err != nil {
return err
}
} else {
localAuthConfig, err := auth.LoadConfig(srv.runtime.root)
if err != nil {
if err != auth.ErrConfigFileMissing {
return err
}
}
if authConfig.Username == localAuthConfig.Username {
authConfig.Password = localAuthConfig.Password
}
newAuthConfig := auth.NewAuthConfig(authConfig.Username, authConfig.Password, authConfig.Email, srv.runtime.root)
status, err = auth.Login(newAuthConfig, true)
if err != nil {
return err
}
}
if status != "" {
b, err := json.Marshal(&APIAuth{Status: status})
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func getVersion(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
m := srv.DockerVersion()
b, err := json.Marshal(m)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postContainersKill(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerKill(name); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func getContainersExport(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerExport(name, w); err != nil {
utils.Debugf("%s", err)
return err
}
return nil
}
func getImagesJSON(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
all, err := getBoolParam(r.Form.Get("all"))
if err != nil {
return err
}
filter := r.Form.Get("filter")
outs, err := srv.Images(all, filter)
if err != nil {
return err
}
b, err := json.Marshal(outs)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func getImagesViz(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := srv.ImagesViz(w); err != nil {
return err
}
return nil
}
func getInfo(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
out := srv.DockerInfo()
b, err := json.Marshal(out)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func getImagesHistory(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
outs, err := srv.ImageHistory(name)
if err != nil {
return err
}
b, err := json.Marshal(outs)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func getContainersChanges(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
changesStr, err := srv.ContainerChanges(name)
if err != nil {
return err
}
b, err := json.Marshal(changesStr)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func getContainersTop(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
procsStr, err := srv.ContainerTop(name)
if err != nil {
return err
}
b, err := json.Marshal(procsStr)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func getContainersJSON(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
all, err := getBoolParam(r.Form.Get("all"))
if err != nil {
return err
}
size, err := getBoolParam(r.Form.Get("size"))
if err != nil {
return err
}
since := r.Form.Get("since")
before := r.Form.Get("before")
n, err := strconv.Atoi(r.Form.Get("limit"))
if err != nil {
n = -1
}
outs := srv.Containers(all, size, n, since, before)
b, err := json.Marshal(outs)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postImagesTag(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
repo := r.Form.Get("repo")
tag := r.Form.Get("tag")
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
force, err := getBoolParam(r.Form.Get("force"))
if err != nil {
return err
}
if err := srv.ContainerTag(name, repo, tag, force); err != nil {
return err
}
w.WriteHeader(http.StatusCreated)
return nil
}
func postCommit(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
config := &Config{}
if err := json.NewDecoder(r.Body).Decode(config); err != nil {
utils.Debugf("%s", err)
}
repo := r.Form.Get("repo")
tag := r.Form.Get("tag")
container := r.Form.Get("container")
author := r.Form.Get("author")
comment := r.Form.Get("comment")
id, err := srv.ContainerCommit(container, repo, tag, author, comment, config)
if err != nil {
return err
}
b, err := json.Marshal(&APIID{id})
if err != nil {
return err
}
w.WriteHeader(http.StatusCreated)
writeJSON(w, b)
return nil
}
// Creates an image from Pull or from Import
func postImagesCreate(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
src := r.Form.Get("fromSrc")
image := r.Form.Get("fromImage")
tag := r.Form.Get("tag")
repo := r.Form.Get("repo")
if version > 1.0 {
w.Header().Set("Content-Type", "application/json")
}
sf := utils.NewStreamFormatter(version > 1.0)
if image != "" { //pull
if err := srv.ImagePull(image, tag, w, sf, &auth.AuthConfig{}); err != nil {
if sf.Used() {
w.Write(sf.FormatError(err))
return nil
}
return err
}
} else { //import
if err := srv.ImageImport(src, repo, tag, r.Body, w, sf); err != nil {
if sf.Used() {
w.Write(sf.FormatError(err))
return nil
}
return err
}
}
return nil
}
func getImagesSearch(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
term := r.Form.Get("term")
outs, err := srv.ImagesSearch(term)
if err != nil {
return err
}
b, err := json.Marshal(outs)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postImagesInsert(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
url := r.Form.Get("url")
path := r.Form.Get("path")
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if version > 1.0 {
w.Header().Set("Content-Type", "application/json")
}
sf := utils.NewStreamFormatter(version > 1.0)
imgID, err := srv.ImageInsert(name, url, path, w, sf)
if err != nil {
if sf.Used() {
w.Write(sf.FormatError(err))
return nil
}
return err
}
b, err := json.Marshal(&APIID{ID: imgID})
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postImagesPush(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
authConfig := &auth.AuthConfig{}
if version > 1.1 {
if err := json.NewDecoder(r.Body).Decode(authConfig); err != nil {
return err
}
} else {
localAuthConfig, err := auth.LoadConfig(srv.runtime.root)
if err != nil && err != auth.ErrConfigFileMissing {
return err
}
authConfig = localAuthConfig
}
if err := parseForm(r); err != nil {
return err
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if version > 1.0 {
w.Header().Set("Content-Type", "application/json")
}
sf := utils.NewStreamFormatter(version > 1.0)
if err := srv.ImagePush(name, w, sf, authConfig); err != nil {
if sf.Used() {
w.Write(sf.FormatError(err))
return nil
}
return err
}
return nil
}
func postContainersCreate(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
config := &Config{}
out := &APIRun{}
if err := json.NewDecoder(r.Body).Decode(config); err != nil {
return err
}
if len(config.Dns) == 0 && len(srv.runtime.Dns) == 0 && utils.CheckLocalDns() {
out.Warnings = append(out.Warnings, fmt.Sprintf("Docker detected a local DNS server in resolv.conf. Using default external servers: %v", defaultDns))
config.Dns = defaultDns
}
id, err := srv.ContainerCreate(config)
if err != nil {
return err
}
out.ID = id
if config.Memory > 0 && !srv.runtime.capabilities.MemoryLimit {
log.Println("WARNING: Your kernel does not support memory limit capabilities. Limitation discarded.")
out.Warnings = append(out.Warnings, "Your kernel does not support memory limit capabilities. Limitation discarded.")
}
if config.Memory > 0 && !srv.runtime.capabilities.SwapLimit {
log.Println("WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.")
out.Warnings = append(out.Warnings, "Your kernel does not support memory swap capabilities. Limitation discarded.")
}
b, err := json.Marshal(out)
if err != nil {
return err
}
w.WriteHeader(http.StatusCreated)
writeJSON(w, b)
return nil
}
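A sketch of the create call (all values hypothetical): the request body decodes straight into a Config, and the response carries the new ID plus any warnings:

package main

import (
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func main() {
	body := strings.NewReader(`{"Image": "base", "Cmd": ["echo", "hello"]}`)
	resp, err := http.Post("http://127.0.0.1:4243/v1.3/containers/create", "application/json", body)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // an APIRun document
}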
func postContainersRestart(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
t, err := strconv.Atoi(r.Form.Get("t"))
if err != nil || t < 0 {
t = 10
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerRestart(name, t); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func deleteContainers(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
removeVolume, err := getBoolParam(r.Form.Get("v"))
if err != nil {
return err
}
if err := srv.ContainerDestroy(name, removeVolume); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func deleteImages(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
imgs, err := srv.ImageDelete(name, version > 1.1)
if err != nil {
return err
}
if imgs != nil {
if len(imgs) != 0 {
b, err := json.Marshal(imgs)
if err != nil {
return err
}
writeJSON(w, b)
} else {
return fmt.Errorf("Conflict, %s wasn't deleted", name)
}
} else {
w.WriteHeader(http.StatusNoContent)
}
return nil
}
func postContainersStart(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
hostConfig := &HostConfig{}
// allow a nil body for backwards compatibility
if r.Body != nil {
if r.Header.Get("Content-Type") == "application/json" {
if err := json.NewDecoder(r.Body).Decode(hostConfig); err != nil {
return err
}
}
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerStart(name, hostConfig); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func postContainersStop(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
t, err := strconv.Atoi(r.Form.Get("t"))
if err != nil || t < 0 {
t = 10
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerStop(name, t); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func postContainersWait(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
status, err := srv.ContainerWait(name)
if err != nil {
return err
}
b, err := json.Marshal(&APIWait{StatusCode: status})
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postContainersResize(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
height, err := strconv.Atoi(r.Form.Get("h"))
if err != nil {
return err
}
width, err := strconv.Atoi(r.Form.Get("w"))
if err != nil {
return err
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerResize(name, height, width); err != nil {
return err
}
return nil
}
func postContainersAttach(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
logs, err := getBoolParam(r.Form.Get("logs"))
if err != nil {
return err
}
stream, err := getBoolParam(r.Form.Get("stream"))
if err != nil {
return err
}
stdin, err := getBoolParam(r.Form.Get("stdin"))
if err != nil {
return err
}
stdout, err := getBoolParam(r.Form.Get("stdout"))
if err != nil {
return err
}
stderr, err := getBoolParam(r.Form.Get("stderr"))
if err != nil {
return err
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if _, err := srv.ContainerInspect(name); err != nil {
return err
}
in, out, err := hijackServer(w)
if err != nil {
return err
}
defer func() {
if tcpc, ok := in.(*net.TCPConn); ok {
tcpc.CloseWrite()
} else {
in.Close()
}
}()
defer func() {
if tcpc, ok := out.(*net.TCPConn); ok {
tcpc.CloseWrite()
} else if closer, ok := out.(io.Closer); ok {
closer.Close()
}
}()
fmt.Fprintf(out, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n")
if err := srv.ContainerAttach(name, logs, stream, stdin, stdout, stderr, in, out); err != nil {
fmt.Fprintf(out, "Error: %s\n", err)
}
return nil
}
func getContainersByName(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
container, err := srv.ContainerInspect(name)
if err != nil {
return err
}
b, err := json.Marshal(container)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func getImagesByName(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
image, err := srv.ImageInspect(name)
if err != nil {
return err
}
b, err := json.Marshal(image)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postImagesGetCache(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
apiConfig := &APIImageConfig{}
if err := json.NewDecoder(r.Body).Decode(apiConfig); err != nil {
return err
}
image, err := srv.ImageGetCached(apiConfig.ID, apiConfig.Config)
if err != nil {
return err
}
if image == nil {
w.WriteHeader(http.StatusNotFound)
return nil
}
apiID := &APIID{ID: image.ID}
b, err := json.Marshal(apiID)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postBuild(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if version < 1.3 {
return fmt.Errorf("Multipart upload for build is no longer supported. Please upgrade your docker client.")
}
remoteURL := r.FormValue("remote")
repoName := r.FormValue("t")
rawSuppressOutput := r.FormValue("q")
tag := ""
if strings.Contains(repoName, ":") {
remoteParts := strings.Split(repoName, ":")
tag = remoteParts[1]
repoName = remoteParts[0]
}
var context io.Reader
if remoteURL == "" {
context = r.Body
} else if utils.IsGIT(remoteURL) {
if !strings.HasPrefix(remoteURL, "git://") {
remoteURL = "https://" + remoteURL
}
root, err := ioutil.TempDir("", "docker-build-git")
if err != nil {
return err
}
defer os.RemoveAll(root)
if output, err := exec.Command("git", "clone", remoteURL, root).CombinedOutput(); err != nil {
return fmt.Errorf("Error trying to use git: %s (%s)", err, output)
}
c, err := Tar(root, Bzip2)
if err != nil {
return err
}
context = c
} else if utils.IsURL(remoteURL) {
f, err := utils.Download(remoteURL, ioutil.Discard)
if err != nil {
return err
}
defer f.Body.Close()
dockerFile, err := ioutil.ReadAll(f.Body)
if err != nil {
return err
}
c, err := mkBuildContext(string(dockerFile), nil)
if err != nil {
return err
}
context = c
}
suppressOutput, err := getBoolParam(rawSuppressOutput)
if err != nil {
return err
}
b := NewBuildFile(srv, utils.NewWriteFlusher(w), !suppressOutput)
id, err := b.Build(context)
if err != nil {
fmt.Fprintf(w, "Error build: %s\n", err)
return err
}
if repoName != "" {
srv.runtime.repositories.Set(repoName, tag, id, false)
}
return nil
}
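A sketch of triggering a build from a local context (paths and tag hypothetical): the body is a tarball containing a Dockerfile, and t=... tags the resulting image:

package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	context, err := os.Open("context.tar") // hypothetical build context archive
	if err != nil {
		log.Fatal(err)
	}
	defer context.Close()
	resp, err := http.Post("http://127.0.0.1:4243/v1.3/build?t=myrepo", "application/tar", context)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // build output streamed back
}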
func optionsHandler(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
w.WriteHeader(http.StatusOK)
return nil
}
func writeCorsHeaders(w http.ResponseWriter, r *http.Request) {
w.Header().Add("Access-Control-Allow-Origin", "*")
w.Header().Add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept")
w.Header().Add("Access-Control-Allow-Methods", "GET, POST, DELETE, PUT, OPTIONS")
}
func createRouter(srv *Server, logging bool) (*mux.Router, error) {
r := mux.NewRouter()
m := map[string]map[string]func(*Server, float64, http.ResponseWriter, *http.Request, map[string]string) error{
"GET": {
"/auth": getAuth,
"/version": getVersion,
"/info": getInfo,
"/images/json": getImagesJSON,
"/images/viz": getImagesViz,
"/images/search": getImagesSearch,
"/images/{name:.*}/history": getImagesHistory,
"/images/{name:.*}/json": getImagesByName,
"/containers/ps": getContainersJSON,
"/containers/json": getContainersJSON,
"/containers/{name:.*}/export": getContainersExport,
"/containers/{name:.*}/changes": getContainersChanges,
"/containers/{name:.*}/json": getContainersByName,
"/containers/{name:.*}/top": getContainersTop,
},
"POST": {
"/auth": postAuth,
"/commit": postCommit,
"/build": postBuild,
"/images/create": postImagesCreate,
"/images/{name:.*}/insert": postImagesInsert,
"/images/{name:.*}/push": postImagesPush,
"/images/{name:.*}/tag": postImagesTag,
"/images/getCache": postImagesGetCache,
"/containers/create": postContainersCreate,
"/containers/{name:.*}/kill": postContainersKill,
"/containers/{name:.*}/restart": postContainersRestart,
"/containers/{name:.*}/start": postContainersStart,
"/containers/{name:.*}/stop": postContainersStop,
"/containers/{name:.*}/wait": postContainersWait,
"/containers/{name:.*}/resize": postContainersResize,
"/containers/{name:.*}/attach": postContainersAttach,
},
"DELETE": {
"/containers/{name:.*}": deleteContainers,
"/images/{name:.*}": deleteImages,
},
"OPTIONS": {
"": optionsHandler,
},
}
for method, routes := range m {
for route, fct := range routes {
utils.Debugf("Registering %s, %s", method, route)
// NOTE: copy the loop variables into locals so each closure captures its own route, method and handler
localRoute := route
localMethod := method
localFct := fct
f := func(w http.ResponseWriter, r *http.Request) {
utils.Debugf("Calling %s %s from %s", localMethod, localRoute, r.RemoteAddr)
if logging {
log.Println(r.Method, r.RequestURI)
}
if strings.Contains(r.Header.Get("User-Agent"), "Docker-Client/") {
userAgent := strings.Split(r.Header.Get("User-Agent"), "/")
if len(userAgent) == 2 && userAgent[1] != VERSION {
utils.Debugf("Warning: client and server don't have the same version (client: %s, server: %s)", userAgent[1], VERSION)
}
}
version, err := strconv.ParseFloat(mux.Vars(r)["version"], 64)
if err != nil {
version = APIVERSION
}
if srv.enableCors {
writeCorsHeaders(w, r)
}
if version == 0 || version > APIVERSION {
w.WriteHeader(http.StatusNotFound)
return
}
if err := localFct(srv, version, w, r, mux.Vars(r)); err != nil {
httpError(w, err)
}
}
if localRoute == "" {
r.Methods(localMethod).HandlerFunc(f)
} else {
r.Path("/v{version:[0-9.]+}" + localRoute).Methods(localMethod).HandlerFunc(f)
r.Path(localRoute).Methods(localMethod).HandlerFunc(f)
}
}
}
return r, nil
}
func ListenAndServe(proto, addr string, srv *Server, logging bool) error {
log.Printf("Listening for HTTP on %s (%s)\n", addr, proto)
r, err := createRouter(srv, logging)
if err != nil {
return err
}
l, e := net.Listen(proto, addr)
if e != nil {
return e
}
// as the daemon is launched as root, change the permission of the socket to allow non-root users to connect
if proto == "unix" {
os.Chmod(addr, 0777)
}
httpSrv := http.Server{Addr: addr, Handler: r}
return httpSrv.Serve(l)
}

View File

@@ -1,87 +0,0 @@
package docker
type APIHistory struct {
ID string `json:"Id"`
Tags []string `json:",omitempty"`
Created int64
CreatedBy string `json:",omitempty"`
}
type APIImages struct {
Repository string `json:",omitempty"`
Tag string `json:",omitempty"`
ID string `json:"Id"`
Created int64
Size int64
VirtualSize int64
}
type APIInfo struct {
Debug bool
Containers int
Images int
NFd int `json:",omitempty"`
NGoroutines int `json:",omitempty"`
MemoryLimit bool `json:",omitempty"`
SwapLimit bool `json:",omitempty"`
}
type APITop struct {
PID string
Tty string
Time string
Cmd string
}
type APIRmi struct {
Deleted string `json:",omitempty"`
Untagged string `json:",omitempty"`
}
type APIContainers struct {
ID string `json:"Id"`
Image string
Command string
Created int64
Status string
Ports string
SizeRw int64
SizeRootFs int64
}
type APISearch struct {
Name string
Description string
}
type APIID struct {
ID string `json:"Id"`
}
type APIRun struct {
ID string `json:"Id"`
Warnings []string `json:",omitempty"`
}
type APIPort struct {
Port string
}
type APIVersion struct {
Version string
GitCommit string `json:",omitempty"`
GoVersion string `json:",omitempty"`
}
type APIWait struct {
StatusCode int
}
type APIAuth struct {
Status string
}
type APIImageConfig struct {
ID string `json:"Id"`
*Config
}

File diff suppressed because it is too large

View File

@@ -1,16 +1,11 @@
package docker
import (
"archive/tar"
"bytes"
"fmt"
"github.com/dotcloud/docker/utils"
"errors"
"io"
"io/ioutil"
"os"
"os/exec"
"path"
"path/filepath"
)
type Archive io.Reader
@@ -24,33 +19,6 @@ const (
Xz
)
func DetectCompression(source []byte) Compression {
sourceLen := len(source)
for compression, m := range map[Compression][]byte{
Bzip2: {0x42, 0x5A, 0x68},
Gzip: {0x1F, 0x8B, 0x08},
Xz: {0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00},
} {
fail := false
if len(m) > sourceLen {
utils.Debugf("Len too short")
continue
}
i := 0
for _, b := range m {
if b != source[i] {
fail = true
break
}
i++
}
if !fail {
return compression
}
}
return Uncompressed
}
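A quick sketch of the detection above (a fragment within this package): only the leading magic bytes are compared, so a short prefix suffices:

header := []byte{0x1F, 0x8B, 0x08, 0x00, 0x00} // gzip magic plus padding
if c := DetectCompression(header); c != Gzip {
	panic("expected gzip, got " + c.Extension())
}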
func (compression *Compression) Flag() string {
switch *compression {
case Bzip2:
@@ -63,167 +31,21 @@ func (compression *Compression) Flag() string {
return ""
}
func (compression *Compression) Extension() string {
switch *compression {
case Uncompressed:
return "tar"
case Bzip2:
return "tar.bz2"
case Gzip:
return "tar.gz"
case Xz:
return "tar.xz"
}
return ""
}
// Tar creates an archive from the directory at `path`, and returns it as a
// stream of bytes.
func Tar(path string, compression Compression) (io.Reader, error) {
return TarFilter(path, compression, nil)
cmd := exec.Command("bsdtar", "-f", "-", "-C", path, "-c"+compression.Flag(), ".")
return CmdStream(cmd)
}
// TarFilter creates an archive from the directory at `path`, only including files whose relative
// paths are included in `filter`. If `filter` is nil, then all files are included.
func TarFilter(path string, compression Compression, filter []string) (io.Reader, error) {
args := []string{"tar", "--numeric-owner", "-f", "-", "-C", path}
if filter == nil {
filter = []string{"."}
}
for _, f := range filter {
args = append(args, "-c"+compression.Flag(), f)
}
return CmdStream(exec.Command(args[0], args[1:]...))
}
// Untar reads a stream of bytes from `archive`, parses it as a tar archive,
// and unpacks it into the directory at `path`.
// The archive may be compressed with one of the following algorithms:
// identity (uncompressed), gzip, bzip2, xz.
// FIXME: specify behavior when target path exists vs. doesn't exist.
func Untar(archive io.Reader, path string) error {
if archive == nil {
return fmt.Errorf("Empty archive")
}
buf := make([]byte, 10)
totalN := 0
for totalN < 10 {
if n, err := archive.Read(buf[totalN:]); err != nil {
if err == io.EOF {
return fmt.Errorf("Tarball too short")
}
return err
} else {
totalN += n
utils.Debugf("[tar autodetect] n: %d", n)
}
}
compression := DetectCompression(buf)
utils.Debugf("Archive compression detected: %s", compression.Extension())
cmd := exec.Command("tar", "--numeric-owner", "-f", "-", "-C", path, "-x"+compression.Flag())
cmd.Stdin = io.MultiReader(bytes.NewReader(buf), archive)
// Hardcode locale environment for predictable outcome regardless of host configuration.
// (see https://github.com/dotcloud/docker/issues/355)
cmd.Env = []string{"LANG=en_US.utf-8", "LC_ALL=en_US.utf-8"}
cmd := exec.Command("bsdtar", "-f", "-", "-C", path, "-x")
cmd.Stdin = archive
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("%s: %s", err, output)
return errors.New(err.Error() + ": " + string(output))
}
return nil
}
// TarUntar is a convenience function which calls Tar and Untar, with
// the output of one piped into the other. If either Tar or Untar fails,
// TarUntar aborts and returns the error.
func TarUntar(src string, filter []string, dst string) error {
utils.Debugf("TarUntar(%s %s %s)", src, filter, dst)
archive, err := TarFilter(src, Uncompressed, filter)
if err != nil {
return err
}
return Untar(archive, dst)
}
// UntarPath is a convenience function which looks for an archive
// at filesystem path `src`, and unpacks it at `dst`.
func UntarPath(src, dst string) error {
if archive, err := os.Open(src); err != nil {
return err
} else if err := Untar(archive, dst); err != nil {
return err
}
return nil
}
// CopyWithTar creates a tar archive of filesystem path `src`, and
// unpacks it at filesystem path `dst`.
// The archive is streamed directly with fixed buffering and no
// intermediary disk IO.
//
func CopyWithTar(src, dst string) error {
srcSt, err := os.Stat(src)
if err != nil {
return err
}
if !srcSt.IsDir() {
return CopyFileWithTar(src, dst)
}
// Create dst, copy src's content into it
utils.Debugf("Creating dest directory: %s", dst)
if err := os.MkdirAll(dst, 0700); err != nil && !os.IsExist(err) {
return err
}
utils.Debugf("Calling TarUntar(%s, %s)", src, dst)
return TarUntar(src, nil, dst)
}
// CopyFileWithTar emulates the behavior of the 'cp' command-line
// for a single file. It copies a regular file from path `src` to
// path `dst`, and preserves all its metadata.
//
// If `dst` ends with a trailing slash '/', the final destination path
// will be `dst/base(src)`.
func CopyFileWithTar(src, dst string) error {
utils.Debugf("CopyFileWithTar(%s, %s)", src, dst)
srcSt, err := os.Stat(src)
if err != nil {
return err
}
if srcSt.IsDir() {
return fmt.Errorf("Can't copy a directory")
}
// Clean up the trailing /
if dst[len(dst)-1] == '/' {
dst = path.Join(dst, filepath.Base(src))
}
// Create the holding directory if necessary
if err := os.MkdirAll(filepath.Dir(dst), 0700); err != nil && !os.IsExist(err) {
return err
}
buf := new(bytes.Buffer)
tw := tar.NewWriter(buf)
hdr, err := tar.FileInfoHeader(srcSt, "")
if err != nil {
return err
}
hdr.Name = filepath.Base(dst)
if err := tw.WriteHeader(hdr); err != nil {
return err
}
srcF, err := os.Open(src)
if err != nil {
return err
}
if _, err := io.Copy(tw, srcF); err != nil {
return err
}
tw.Close()
return Untar(buf, filepath.Dir(dst))
}
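A usage sketch (hypothetical paths, fragment within this package): the trailing '/' keeps the source's base name, so this writes /tmp/backup/hostname:

if err := CopyFileWithTar("/etc/hostname", "/tmp/backup/"); err != nil {
	panic(err)
}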
// CmdStream executes a command, and returns its stdout as a stream.
// If the command fails to run or doesn't complete successfully, an error
// will be returned, including anything written on stderr.
@@ -254,7 +76,7 @@ func CmdStream(cmd *exec.Cmd) (io.Reader, error) {
}
errText := <-errChan
if err := cmd.Wait(); err != nil {
pipeW.CloseWithError(fmt.Errorf("%s: %s", err, errText))
pipeW.CloseWithError(errors.New(err.Error() + ": " + string(errText)))
} else {
pipeW.Close()
}

View File

@@ -1,13 +1,10 @@
package docker
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"path"
"testing"
"time"
)
@@ -16,7 +13,7 @@ func TestCmdStreamLargeStderr(t *testing.T) {
cmd := exec.Command("/bin/sh", "-c", "dd if=/dev/zero bs=1k count=1000 of=/dev/stderr; echo hello")
out, err := CmdStream(cmd)
if err != nil {
t.Fatalf("Failed to start command: %s", err)
t.Fatalf("Failed to start command: " + err.Error())
}
errCh := make(chan error)
go func() {
@@ -26,7 +23,7 @@ func TestCmdStreamLargeStderr(t *testing.T) {
select {
case err := <-errCh:
if err != nil {
t.Fatalf("Command should not have failed (err=%.100s...)", err)
t.Fatalf("Command should not have failed (err=%s...)", err.Error()[:100])
}
case <-time.After(5 * time.Second):
t.Fatalf("Command did not complete in 5 seconds; probable deadlock")
@@ -37,12 +34,12 @@ func TestCmdStreamBad(t *testing.T) {
badCmd := exec.Command("/bin/sh", "-c", "echo hello; echo >&2 error couldn\\'t reverse the phase pulser; exit 1")
out, err := CmdStream(badCmd)
if err != nil {
t.Fatalf("Failed to start command: %s", err)
t.Fatalf("Failed to start command: " + err.Error())
}
if output, err := ioutil.ReadAll(out); err == nil {
t.Fatalf("Command should have failed")
} else if err.Error() != "exit status 1: error couldn't reverse the phase pulser\n" {
t.Fatalf("Wrong error value (%s)", err)
t.Fatalf("Wrong error value (%s)", err.Error())
} else if s := string(output); s != "hello\n" {
t.Fatalf("Command output should be '%s', not '%s'", "hello\\n", output)
}
@@ -61,58 +58,20 @@ func TestCmdStreamGood(t *testing.T) {
}
}
func tarUntar(t *testing.T, origin string, compression Compression) error {
archive, err := Tar(origin, compression)
func TestTarUntar(t *testing.T) {
archive, err := Tar(".", Uncompressed)
if err != nil {
t.Fatal(err)
}
buf := make([]byte, 10)
if _, err := archive.Read(buf); err != nil {
return err
}
archive = io.MultiReader(bytes.NewReader(buf), archive)
detectedCompression := DetectCompression(buf)
if detectedCompression.Extension() != compression.Extension() {
return fmt.Errorf("Wrong compression detected. Actual compression: %s, found %s", compression.Extension(), detectedCompression.Extension())
}
tmp, err := ioutil.TempDir("", "docker-test-untar")
if err != nil {
return err
t.Fatal(err)
}
defer os.RemoveAll(tmp)
if err := Untar(archive, tmp); err != nil {
return err
t.Fatal(err)
}
if _, err := os.Stat(tmp); err != nil {
return err
}
return nil
}
func TestTarUntar(t *testing.T) {
origin, err := ioutil.TempDir("", "docker-test-untar-origin")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(origin)
if err := ioutil.WriteFile(path.Join(origin, "1"), []byte("hello world"), 0700); err != nil {
t.Fatal(err)
}
if err := ioutil.WriteFile(path.Join(origin, "2"), []byte("welcome!"), 0700); err != nil {
t.Fatal(err)
}
for _, c := range []Compression{
Uncompressed,
Gzip,
Bzip2,
Xz,
} {
if err := tarUntar(t, origin, c); err != nil {
t.Fatalf("Error tar/untar for compression %s: %s", c.Extension(), err)
}
t.Fatalf("Error stating %s: %s", tmp, err.Error())
}
}

View File

@@ -1 +0,0 @@
../registry/MAINTAINERS

View File

@@ -15,20 +15,14 @@ import (
// Where we store the config file
const CONFIGFILE = ".dockercfg"
// Only used for user auth + account creation
const INDEXSERVER = "https://index.docker.io/v1/"
//const INDEXSERVER = "http://indexstaging-docker.dotcloud.com/"
var (
ErrConfigFileMissing = errors.New("The Auth config file is missing")
)
// the registry server we want to login against
const REGISTRY_SERVER = "https://registry.docker.io"
type AuthConfig struct {
Username string `json:"username"`
Password string `json:"password"`
Email string `json:"email"`
rootPath string
rootPath string `json:-`
}
func NewAuthConfig(username, password, email, rootPath string) *AuthConfig {
@@ -40,12 +34,8 @@ func NewAuthConfig(username, password, email, rootPath string) *AuthConfig {
}
}
func IndexServerAddress() string {
return INDEXSERVER
}
// create a base64 encoded auth string to store in config
func encodeAuth(authConfig *AuthConfig) string {
func EncodeAuth(authConfig *AuthConfig) string {
authStr := authConfig.Username + ":" + authConfig.Password
msg := []byte(authStr)
encoded := make([]byte, base64.StdEncoding.EncodedLen(len(msg)))
@@ -54,7 +44,7 @@ func encodeAuth(authConfig *AuthConfig) string {
}
// decode the auth string
func decodeAuth(authStr string) (*AuthConfig, error) {
func DecodeAuth(authStr string) (*AuthConfig, error) {
decLen := base64.StdEncoding.DecodedLen(len(authStr))
decoded := make([]byte, decLen)
authByte := []byte(authStr)
@@ -71,6 +61,7 @@ func decodeAuth(authStr string) (*AuthConfig, error) {
}
password := strings.Trim(arr[1], "\x00")
return &AuthConfig{Username: arr[0], Password: password}, nil
}
// load up the auth config information and return values
@@ -78,19 +69,16 @@ func decodeAuth(authStr string) (*AuthConfig, error) {
func LoadConfig(rootPath string) (*AuthConfig, error) {
confFile := path.Join(rootPath, CONFIGFILE)
if _, err := os.Stat(confFile); err != nil {
return &AuthConfig{rootPath: rootPath}, ErrConfigFileMissing
return &AuthConfig{}, fmt.Errorf("The Auth config file is missing")
}
b, err := ioutil.ReadFile(confFile)
if err != nil {
return nil, err
}
arr := strings.Split(string(b), "\n")
if len(arr) < 2 {
return nil, fmt.Errorf("The Auth config file is empty")
}
origAuth := strings.Split(arr[0], " = ")
origEmail := strings.Split(arr[1], " = ")
authConfig, err := decodeAuth(origAuth[1])
authConfig, err := DecodeAuth(origAuth[1])
if err != nil {
return nil, err
}
@@ -100,15 +88,10 @@ func LoadConfig(rootPath string) (*AuthConfig, error) {
}
// save the auth config
func SaveConfig(authConfig *AuthConfig) error {
confFile := path.Join(authConfig.rootPath, CONFIGFILE)
if len(authConfig.Email) == 0 {
os.Remove(confFile)
return nil
}
lines := "auth = " + encodeAuth(authConfig) + "\n" + "email = " + authConfig.Email + "\n"
func saveConfig(rootPath, authStr string, email string) error {
lines := "auth = " + authStr + "\n" + "email = " + email + "\n"
b := []byte(lines)
err := ioutil.WriteFile(confFile, b, 0600)
err := ioutil.WriteFile(path.Join(rootPath, CONFIGFILE), b, 0600)
if err != nil {
return err
}
@@ -116,40 +99,42 @@ func SaveConfig(authConfig *AuthConfig) error {
}
// try to register/login to the registry server
func Login(authConfig *AuthConfig, store bool) (string, error) {
func Login(authConfig *AuthConfig) (string, error) {
storeConfig := false
client := &http.Client{}
reqStatusCode := 0
var status string
var errMsg string
var reqBody []byte
jsonBody, err := json.Marshal(authConfig)
if err != nil {
return "", fmt.Errorf("Config Error: %s", err)
errMsg = fmt.Sprintf("Config Error: %s", err)
return "", errors.New(errMsg)
}
// using `bytes.NewReader(jsonBody)` here causes the server to respond with a 411 status.
b := strings.NewReader(string(jsonBody))
req1, err := http.Post(IndexServerAddress()+"users/", "application/json; charset=utf-8", b)
req1, err := http.Post(REGISTRY_SERVER+"/v1/users", "application/json; charset=utf-8", b)
if err != nil {
return "", fmt.Errorf("Server Error: %s", err)
errMsg = fmt.Sprintf("Server Error: %s", err)
return "", errors.New(errMsg)
}
reqStatusCode = req1.StatusCode
defer req1.Body.Close()
reqBody, err = ioutil.ReadAll(req1.Body)
if err != nil {
return "", fmt.Errorf("Server Error: [%#v] %s", reqStatusCode, err)
errMsg = fmt.Sprintf("Server Error: [%#v] %s", reqStatusCode, err)
return "", errors.New(errMsg)
}
if reqStatusCode == 201 {
status = "Account created. Please use the confirmation link we sent" +
" to your e-mail to activate it."
status = "Account Created\n"
storeConfig = true
} else if reqStatusCode == 403 {
return "", fmt.Errorf("Login: Your account hasn't been activated. " +
"Please check your e-mail for a confirmation link.")
} else if reqStatusCode == 400 {
if string(reqBody) == "\"Username or email already exists\"" {
req, err := http.NewRequest("GET", IndexServerAddress()+"users/", nil)
// FIXME: This should be 'exists', not 'exist'. Need to change on the server first.
if string(reqBody) == "Username or email already exist" {
client := &http.Client{}
req, err := http.NewRequest("GET", REGISTRY_SERVER+"/v1/users", nil)
req.SetBasicAuth(authConfig.Username, authConfig.Password)
resp, err := client.Do(req)
if err != nil {
@@ -161,30 +146,23 @@ func Login(authConfig *AuthConfig, store bool) (string, error) {
return "", err
}
if resp.StatusCode == 200 {
status = "Login Succeeded"
status = "Login Succeeded\n"
storeConfig = true
} else if resp.StatusCode == 401 {
if store {
authConfig.Email = ""
if err := SaveConfig(authConfig); err != nil {
return "", err
}
}
return "", fmt.Errorf("Wrong login/password, please try again")
} else {
return "", fmt.Errorf("Login: %s (Code: %d; Headers: %s)", body,
resp.StatusCode, resp.Header)
status = fmt.Sprintf("Login: %s", body)
return "", errors.New(status)
}
} else {
return "", fmt.Errorf("Registration: %s", reqBody)
status = fmt.Sprintf("Registration: %s", reqBody)
return "", errors.New(status)
}
} else {
return "", fmt.Errorf("Unexpected status code [%d] : %s", reqStatusCode, reqBody)
status = fmt.Sprintf("[%s] : %s", reqStatusCode, reqBody)
return "", errors.New(status)
}
if storeConfig && store {
if err := SaveConfig(authConfig); err != nil {
return "", err
}
if storeConfig {
authStr := EncodeAuth(authConfig)
saveConfig(authConfig.rootPath, authStr, authConfig.Email)
}
return status, nil
}
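A usage sketch against the newer signature (throwaway values, fragment within this package); passing store=false skips writing .dockercfg:

authConfig := NewAuthConfig("someuser", "s3cret", "someuser@example.com", "/tmp")
status, err := Login(authConfig, false)
if err != nil {
	panic(err)
}
fmt.Println(status) // "Login Succeeded" on the happy path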

View File

@@ -1,17 +1,13 @@
package auth
import (
"crypto/rand"
"encoding/hex"
"os"
"strings"
"testing"
)
func TestEncodeAuth(t *testing.T) {
newAuthConfig := &AuthConfig{Username: "ken", Password: "test", Email: "test@example.com"}
authStr := encodeAuth(newAuthConfig)
decAuthConfig, err := decodeAuth(authStr)
authStr := EncodeAuth(newAuthConfig)
decAuthConfig, err := DecodeAuth(authStr)
if err != nil {
t.Fatal(err)
}
@@ -25,49 +21,3 @@ func TestEncodeAuth(t *testing.T) {
t.Fatal("AuthString encoding isn't correct.")
}
}
func TestLogin(t *testing.T) {
os.Setenv("DOCKER_INDEX_URL", "https://indexstaging-docker.dotcloud.com")
defer os.Setenv("DOCKER_INDEX_URL", "")
authConfig := NewAuthConfig("unittester", "surlautrerivejetattendrai", "noise+unittester@dotcloud.com", "/tmp")
status, err := Login(authConfig, false)
if err != nil {
t.Fatal(err)
}
if status != "Login Succeeded" {
t.Fatalf("Expected status \"Login Succeeded\", found \"%s\" instead", status)
}
}
func TestCreateAccount(t *testing.T) {
os.Setenv("DOCKER_INDEX_URL", "https://indexstaging-docker.dotcloud.com")
defer os.Setenv("DOCKER_INDEX_URL", "")
tokenBuffer := make([]byte, 16)
_, err := rand.Read(tokenBuffer)
if err != nil {
t.Fatal(err)
}
token := hex.EncodeToString(tokenBuffer)[:12]
username := "ut" + token
authConfig := NewAuthConfig(username, "test42", "docker-ut+"+token+"@example.com", "/tmp")
status, err := Login(authConfig, false)
if err != nil {
t.Fatal(err)
}
expectedStatus := "Account created. Please use the confirmation link we sent" +
" to your e-mail to activate it."
if status != expectedStatus {
t.Fatalf("Expected status: \"%s\", found \"%s\" instead.", expectedStatus, status)
}
status, err = Login(authConfig, false)
if err == nil {
t.Fatalf("Expected error but found nil instead")
}
expectedError := "Login: Account is not Active"
if !strings.Contains(err.Error(), expectedError) {
t.Fatalf("Expected message \"%s\" but found \"%s\" instead", expectedError, err)
}
}

20
buildbot/README.rst Normal file
View File

@@ -0,0 +1,20 @@
Buildbot
========
Buildbot is a continuous integration system designed to automate the
build/test cycle. By automatically rebuilding and testing the tree each time
something has changed, build problems are pinpointed quickly, before other
developers are inconvenienced by the failure.
Running 'make hack' at the docker root directory spawns a virtual
machine in the background running a buildbot instance and adds a git
post-commit hook that automatically runs docker tests for you.
You can check your buildbot instance at http://192.168.33.21:8010/waterfall
Buildbot dependencies
---------------------
The vagrant and virtualbox packages, plus the python requests package.

28
buildbot/Vagrantfile vendored Normal file
View File

@@ -0,0 +1,28 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
$BUILDBOT_IP = '192.168.33.21'
def v10(config)
config.vm.box = "quantal64_3.5.0-25"
config.vm.box_url = "http://get.docker.io/vbox/ubuntu/12.10/quantal64_3.5.0-25.box"
config.vm.share_folder 'v-data', '/data/docker', File.dirname(__FILE__) + '/..'
config.vm.network :hostonly, $BUILDBOT_IP
# Ensure puppet is installed on the instance
config.vm.provision :shell, :inline => 'apt-get -qq update; apt-get install -y puppet'
config.vm.provision :puppet do |puppet|
puppet.manifests_path = '.'
puppet.manifest_file = 'buildbot.pp'
puppet.options = ['--templatedir','.']
end
end
Vagrant::VERSION < '1.1.0' and Vagrant::Config.run do |config|
v10(config)
end
Vagrant::VERSION >= '1.1.0' and Vagrant.configure('1') do |config|
v10(config)
end

View File

@@ -0,0 +1,43 @@
#!/bin/bash
# Auto setup of the buildbot configuration. Package installation is done
# in buildbot.pp
# Dependencies: buildbot, buildbot-slave, supervisor
SLAVE_NAME='buildworker'
SLAVE_SOCKET='localhost:9989'
BUILDBOT_PWD='pass-docker'
USER='vagrant'
ROOT_PATH='/data/buildbot'
DOCKER_PATH='/data/docker'
BUILDBOT_CFG="$DOCKER_PATH/buildbot/buildbot-cfg"
IP=$(grep BUILDBOT_IP /data/docker/buildbot/Vagrantfile | awk -F "'" '{ print $2; }')
function run { su $USER -c "$1"; }
export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin
# Exit if buildbot has already been installed
[ -d "$ROOT_PATH" ] && exit 0
# Setup buildbot
run "mkdir -p ${ROOT_PATH}"
cd ${ROOT_PATH}
run "buildbot create-master master"
run "cp $BUILDBOT_CFG/master.cfg master"
run "sed -i 's/localhost/$IP/' master/master.cfg"
run "buildslave create-slave slave $SLAVE_SOCKET $SLAVE_NAME $BUILDBOT_PWD"
# Allow buildbot subprocesses (docker tests) to properly run in containers,
# in particular with docker -u
run "sed -i 's/^umask = None/umask = 000/' ${ROOT_PATH}/slave/buildbot.tac"
# Setup supervisor
cp $BUILDBOT_CFG/buildbot.conf /etc/supervisor/conf.d/buildbot.conf
sed -i "s/^chmod=0700.*0700./chmod=0770\nchown=root:$USER/" /etc/supervisor/supervisord.conf
kill -HUP `pgrep -f "/usr/bin/python /usr/bin/supervisord"`
# Add git hook
cp $BUILDBOT_CFG/post-commit $DOCKER_PATH/.git/hooks
sed -i "s/localhost/$IP/" $DOCKER_PATH/.git/hooks/post-commit

View File

@@ -13,8 +13,8 @@ TEST_USER = 'buildbot' # Credential to authenticate build triggers
TEST_PWD = 'docker' # Credential to authenticate build triggers
BUILDER_NAME = 'docker'
BUILDPASSWORD = 'pass-docker' # Credential to authenticate buildworkers
GOPATH = '/data/docker'
DOCKER_PATH = '{0}/src/github.com/dotcloud/docker'.format(GOPATH)
DOCKER_PATH = '/data/docker'
c = BuildmasterConfig = {}
@@ -28,7 +28,10 @@ c['slavePortnum'] = PORT_MASTER
c['schedulers'] = [ForceScheduler(name='trigger',builderNames=[BUILDER_NAME])]
# Docker test command
test_cmd = "GOPATH={0} make -C {1} test".format(GOPATH,DOCKER_PATH)
test_cmd = """(
cd {0}/..; rm -rf docker-tmp; git clone docker docker-tmp;
cd docker-tmp; make test; exit_status=$?;
cd ..; rm -rf docker-tmp; exit $exit_status)""".format(DOCKER_PATH)
# Builder
factory = BuildFactory()

32
buildbot/buildbot.pp Normal file
View File

@@ -0,0 +1,32 @@
node default {
$USER = 'vagrant'
$ROOT_PATH = '/data/buildbot'
$DOCKER_PATH = '/data/docker'
exec {'apt_update': command => '/usr/bin/apt-get update' }
Package { require => Exec['apt_update'] }
group {'puppet': ensure => 'present'}
# Install dependencies
Package { ensure => 'installed' }
package { ['python-dev','python-pip','supervisor','lxc','bsdtar','git','golang']: }
file{[ '/data' ]:
owner => $USER, group => $USER, ensure => 'directory' }
file {'/var/tmp/requirements.txt':
content => template('requirements.txt') }
exec {'requirements':
require => [ Package['python-dev'], Package['python-pip'],
File['/var/tmp/requirements.txt'] ],
cwd => '/var/tmp',
command => "/bin/sh -c '(/usr/bin/pip install -r requirements.txt;
rm /var/tmp/requirements.txt)'" }
exec {'buildbot-cfg-sh':
require => [ Package['supervisor'], Exec['requirements']],
path => '/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin',
cwd => '/data',
command => "$DOCKER_PATH/buildbot/buildbot-cfg/buildbot-cfg.sh" }
}

View File

@@ -1,143 +1,169 @@
package docker
import (
"bufio"
"fmt"
"github.com/dotcloud/docker/utils"
"os"
"path"
"time"
"io"
"strings"
)
var defaultDns = []string{"8.8.8.8", "8.8.4.4"}
type Builder struct {
runtime *Runtime
repositories *TagStore
graph *Graph
config *Config
image *Image
runtime *Runtime
}
func NewBuilder(runtime *Runtime) *Builder {
return &Builder{
runtime: runtime,
graph: runtime.graph,
repositories: runtime.repositories,
runtime: runtime,
}
}
func (builder *Builder) Create(config *Config) (*Container, error) {
// Lookup image
img, err := builder.repositories.LookupImage(config.Image)
func (builder *Builder) Run(image *Image, cmd ...string) (*Container, error) {
// FIXME: pass a NopWriter instead of nil
config, err := ParseRun(append([]string{"-d", image.Id}, cmd...), nil, builder.runtime.capabilities)
if err != nil {
return nil, err
}
if config.Image == "" {
return nil, fmt.Errorf("Image not specified")
}
if len(config.Cmd) == 0 {
return nil, fmt.Errorf("Command not specified")
}
if config.Tty {
return nil, fmt.Errorf("The tty mode is not supported within the builder")
}
// Create new container
container, err := builder.runtime.Create(config)
if err != nil {
return nil, err
}
if img.Config != nil {
MergeConfig(config, img.Config)
}
if config.Cmd == nil || len(config.Cmd) == 0 {
return nil, fmt.Errorf("No command specified")
}
// Generate id
id := GenerateID()
// Generate default hostname
// FIXME: the lxc template no longer needs to set a default hostname
if config.Hostname == "" {
config.Hostname = id[:12]
}
var args []string
var entrypoint string
if len(config.Entrypoint) != 0 {
entrypoint = config.Entrypoint[0]
args = append(config.Entrypoint[1:], config.Cmd...)
} else {
entrypoint = config.Cmd[0]
args = config.Cmd[1:]
}
container := &Container{
// FIXME: we should generate the ID here instead of receiving it as an argument
ID: id,
Created: time.Now(),
Path: entrypoint,
Args: args, //FIXME: de-duplicate from config
Config: config,
Image: img.ID, // Always use the resolved image id
NetworkSettings: &NetworkSettings{},
// FIXME: do we need to store this in the container?
SysInitPath: sysInitPath,
}
container.root = builder.runtime.containerRoot(container.ID)
// Step 1: create the container directory.
// This doubles as a barrier to avoid race conditions.
if err := os.Mkdir(container.root, 0700); err != nil {
return nil, err
}
if len(config.Dns) == 0 && len(builder.runtime.Dns) == 0 && utils.CheckLocalDns() {
//"WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: %v", defaultDns
builder.runtime.Dns = defaultDns
}
// If custom dns exists, then create a resolv.conf for the container
if len(config.Dns) > 0 || len(builder.runtime.Dns) > 0 {
var dns []string
if len(config.Dns) > 0 {
dns = config.Dns
} else {
dns = builder.runtime.Dns
}
container.ResolvConfPath = path.Join(container.root, "resolv.conf")
f, err := os.Create(container.ResolvConfPath)
if err != nil {
return nil, err
}
defer f.Close()
for _, dns := range dns {
if _, err := f.Write([]byte("nameserver " + dns + "\n")); err != nil {
return nil, err
}
}
} else {
container.ResolvConfPath = "/etc/resolv.conf"
}
// Step 2: save the container json
if err := container.ToDisk(); err != nil {
return nil, err
}
// Step 3: register the container
if err := builder.runtime.Register(container); err != nil {
if err := container.Start(); err != nil {
return nil, err
}
return container, nil
}
// Commit creates a new filesystem image from the current state of a container.
// The image can optionally be tagged into a repository
func (builder *Builder) Commit(container *Container, repository, tag, comment, author string, config *Config) (*Image, error) {
// FIXME: freeze the container before copying it to avoid data corruption?
// FIXME: this shouldn't be in commands.
rwTar, err := container.ExportRw()
if err != nil {
return nil, err
func (builder *Builder) Commit(container *Container, repository, tag, comment, author string) (*Image, error) {
return builder.runtime.Commit(container.Id, repository, tag, comment, author)
}
func (builder *Builder) clearTmp(containers, images map[string]struct{}) {
for c := range containers {
tmp := builder.runtime.Get(c)
builder.runtime.Destroy(tmp)
Debugf("Removing container %s", c)
}
// Create a new image from the container's base layers + a new layer from container changes
img, err := builder.graph.Create(rwTar, container, comment, author, config)
if err != nil {
return nil, err
for i := range images {
builder.runtime.graph.Delete(i)
Debugf("Removing image %s", i)
}
// Register the image if needed
if repository != "" {
if err := builder.repositories.Set(repository, tag, img.ID, true); err != nil {
return img, err
}
func (builder *Builder) Build(dockerfile io.Reader, stdout io.Writer) error {
var (
image, base *Image
tmpContainers map[string]struct{} = make(map[string]struct{})
tmpImages map[string]struct{} = make(map[string]struct{})
)
defer builder.clearTmp(tmpContainers, tmpImages)
file := bufio.NewReader(dockerfile)
for {
line, err := file.ReadString('\n')
if err != nil {
if err == io.EOF {
break
}
return err
}
line = strings.TrimSpace(line)
// Skip comments and empty lines
if len(line) == 0 || line[0] == '#' {
continue
}
tmp := strings.SplitN(line, " ", 2)
if len(tmp) != 2 {
return fmt.Errorf("Invalid Dockerfile format")
}
switch tmp[0] {
case "from":
fmt.Fprintf(stdout, "FROM %s\n", tmp[1])
image, err = builder.runtime.repositories.LookupImage(tmp[1])
if err != nil {
return err
}
break
case "run":
fmt.Fprintf(stdout, "RUN %s\n", tmp[1])
if image == nil {
return fmt.Errorf("Please provide a source image with `from` prior to run")
}
// Create the container and start it
c, err := builder.Run(image, "/bin/sh", "-c", tmp[1])
if err != nil {
return err
}
tmpContainers[c.Id] = struct{}{}
// Wait for it to finish
if result := c.Wait(); result != 0 {
return fmt.Errorf("!!! '%s' return non-zero exit code '%d'. Aborting.", tmp[1], result)
}
// Commit the container
base, err = builder.Commit(c, "", "", "", "")
if err != nil {
return err
}
tmpImages[base.Id] = struct{}{}
fmt.Fprintf(stdout, "===> %s\n", base.ShortId())
break
case "copy":
if image == nil {
return fmt.Errorf("Please provide a source image with `from` prior to copy")
}
tmp2 := strings.SplitN(tmp[1], " ", 2)
if len(tmp2) != 2 {
return fmt.Errorf("Invalid COPY format")
}
fmt.Fprintf(stdout, "COPY %s to %s in %s\n", tmp2[0], tmp2[1], base.ShortId())
file, err := Download(tmp2[0], stdout)
if err != nil {
return err
}
defer file.Body.Close()
c, err := builder.Run(base, "echo", "insert", tmp2[0], tmp2[1])
if err != nil {
return err
}
if err := c.Inject(file.Body, tmp2[1]); err != nil {
return err
}
base, err = builder.Commit(c, "", "", "", "")
if err != nil {
return err
}
fmt.Fprintf(stdout, "===> %s\n", base.ShortId())
break
default:
fmt.Fprintf(stdout, "Skipping unknown op %s\n", tmp[0])
}
}
return img, nil
if base != nil {
// The build succeeded: empty the temp maps so the deferred clearTmp keeps the containers and images
for i := range tmpImages {
delete(tmpImages, i)
}
for i := range tmpContainers {
delete(tmpContainers, i)
}
fmt.Fprintf(stdout, "Build finished. image id: %s\n", base.ShortId())
} else {
fmt.Fprintf(stdout, "An error occured during the build\n")
}
return nil
}
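For reference, a minimal input the loop above accepts (image name and URL hypothetical); instructions must be lowercase since the switch matches them verbatim, and unknown ops are skipped with a notice:

from base
run echo hello world > /tmp/hello
copy http://example.com/hello.txt /tmp/hello.txt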

View File

@@ -1,444 +0,0 @@
package docker
import (
"bufio"
"encoding/json"
"fmt"
"github.com/dotcloud/docker/utils"
"io"
"io/ioutil"
"os"
"path"
"reflect"
"strings"
)
type BuildFile interface {
Build(io.Reader) (string, error)
CmdFrom(string) error
CmdRun(string) error
}
type buildFile struct {
runtime *Runtime
builder *Builder
srv *Server
image string
maintainer string
config *Config
context string
verbose bool
tmpContainers map[string]struct{}
tmpImages map[string]struct{}
out io.Writer
}
func (b *buildFile) clearTmp(containers, images map[string]struct{}) {
for c := range containers {
tmp := b.runtime.Get(c)
b.runtime.Destroy(tmp)
utils.Debugf("Removing container %s", c)
}
for i := range images {
b.runtime.graph.Delete(i)
utils.Debugf("Removing image %s", i)
}
}
func (b *buildFile) CmdFrom(name string) error {
image, err := b.runtime.repositories.LookupImage(name)
if err != nil {
if b.runtime.graph.IsNotExist(err) {
remote, tag := utils.ParseRepositoryTag(name)
if err := b.srv.ImagePull(remote, tag, b.out, utils.NewStreamFormatter(false), nil); err != nil {
return err
}
image, err = b.runtime.repositories.LookupImage(name)
if err != nil {
return err
}
} else {
return err
}
}
b.image = image.ID
b.config = &Config{}
return nil
}
func (b *buildFile) CmdMaintainer(name string) error {
b.maintainer = name
return b.commit("", b.config.Cmd, fmt.Sprintf("MAINTAINER %s", name))
}
func (b *buildFile) CmdRun(args string) error {
if b.image == "" {
return fmt.Errorf("Please provide a source image with `from` prior to run")
}
config, _, _, err := ParseRun([]string{b.image, "/bin/sh", "-c", args}, nil)
if err != nil {
return err
}
cmd := b.config.Cmd
b.config.Cmd = nil
MergeConfig(b.config, config)
utils.Debugf("Command to be executed: %v", b.config.Cmd)
if cache, err := b.srv.ImageGetCached(b.image, b.config); err != nil {
return err
} else if cache != nil {
fmt.Fprintf(b.out, " ---> Using cache\n")
utils.Debugf("[BUILDER] Use cached version")
b.image = cache.ID
return nil
} else {
utils.Debugf("[BUILDER] Cache miss")
}
cid, err := b.run()
if err != nil {
return err
}
if err := b.commit(cid, cmd, "run"); err != nil {
return err
}
b.config.Cmd = cmd
return nil
}
func (b *buildFile) CmdEnv(args string) error {
tmp := strings.SplitN(args, " ", 2)
if len(tmp) != 2 {
return fmt.Errorf("Invalid ENV format")
}
key := strings.Trim(tmp[0], " \t")
value := strings.Trim(tmp[1], " \t")
for i, elem := range b.config.Env {
if strings.HasPrefix(elem, key+"=") {
b.config.Env[i] = key + "=" + value
return nil
}
}
b.config.Env = append(b.config.Env, key+"="+value)
return b.commit("", b.config.Cmd, fmt.Sprintf("ENV %s=%s", key, value))
}
func (b *buildFile) CmdCmd(args string) error {
var cmd []string
if err := json.Unmarshal([]byte(args), &cmd); err != nil {
utils.Debugf("Error unmarshalling: %s, setting cmd to /bin/sh -c", err)
cmd = []string{"/bin/sh", "-c", args}
}
if err := b.commit("", cmd, fmt.Sprintf("CMD %v", cmd)); err != nil {
return err
}
b.config.Cmd = cmd
return nil
}
func (b *buildFile) CmdExpose(args string) error {
ports := strings.Split(args, " ")
b.config.PortSpecs = append(ports, b.config.PortSpecs...)
return b.commit("", b.config.Cmd, fmt.Sprintf("EXPOSE %v", ports))
}
func (b *buildFile) CmdInsert(args string) error {
return fmt.Errorf("INSERT has been deprecated. Please use ADD instead")
}
func (b *buildFile) CmdCopy(args string) error {
return fmt.Errorf("COPY has been deprecated. Please use ADD instead")
}
func (b *buildFile) CmdEntrypoint(args string) error {
if args == "" {
return fmt.Errorf("Entrypoint cannot be empty")
}
var entrypoint []string
if err := json.Unmarshal([]byte(args), &entrypoint); err != nil {
b.config.Entrypoint = []string{"/bin/sh", "-c", args}
} else {
b.config.Entrypoint = entrypoint
}
if err := b.commit("", b.config.Cmd, fmt.Sprintf("ENTRYPOINT %s", args)); err != nil {
return err
}
return nil
}
func (b *buildFile) CmdVolume(args string) error {
if args == "" {
return fmt.Errorf("Volume cannot be empty")
}
var volume []string
if err := json.Unmarshal([]byte(args), &volume); err != nil {
volume = []string{args}
}
if b.config.Volumes == nil {
b.config.Volumes = NewPathOpts()
}
for _, v := range volume {
b.config.Volumes[v] = struct{}{}
}
if err := b.commit("", b.config.Cmd, fmt.Sprintf("VOLUME %s", args)); err != nil {
return err
}
return nil
}
func (b *buildFile) addRemote(container *Container, orig, dest string) error {
file, err := utils.Download(orig, ioutil.Discard)
if err != nil {
return err
}
defer file.Body.Close()
return container.Inject(file.Body, dest)
}
func (b *buildFile) addContext(container *Container, orig, dest string) error {
origPath := path.Join(b.context, orig)
destPath := path.Join(container.RootfsPath(), dest)
// Preserve the trailing '/'
if dest[len(dest)-1] == '/' {
destPath = destPath + "/"
}
fi, err := os.Stat(origPath)
if err != nil {
return err
}
if fi.IsDir() {
if err := CopyWithTar(origPath, destPath); err != nil {
return err
}
// First try to unpack the source as an archive
} else if err := UntarPath(origPath, destPath); err != nil {
utils.Debugf("Couldn't untar %s to %s: %s", origPath, destPath, err)
// If that fails, just copy it as a regular file
if err := os.MkdirAll(path.Dir(destPath), 0700); err != nil {
return err
}
if err := CopyWithTar(origPath, destPath); err != nil {
return err
}
}
return nil
}
func (b *buildFile) CmdAdd(args string) error {
if b.context == "" {
return fmt.Errorf("No context given. Impossible to use ADD")
}
tmp := strings.SplitN(args, " ", 2)
if len(tmp) != 2 {
return fmt.Errorf("Invalid ADD format")
}
orig := strings.Trim(tmp[0], " \t")
dest := strings.Trim(tmp[1], " \t")
cmd := b.config.Cmd
b.config.Cmd = []string{"/bin/sh", "-c", fmt.Sprintf("#(nop) ADD %s in %s", orig, dest)}
b.config.Image = b.image
// Create the container and start it
container, err := b.builder.Create(b.config)
if err != nil {
return err
}
b.tmpContainers[container.ID] = struct{}{}
if err := container.EnsureMounted(); err != nil {
return err
}
defer container.Unmount()
if utils.IsURL(orig) {
if err := b.addRemote(container, orig, dest); err != nil {
return err
}
} else {
if err := b.addContext(container, orig, dest); err != nil {
return err
}
}
if err := b.commit(container.ID, cmd, fmt.Sprintf("ADD %s in %s", orig, dest)); err != nil {
return err
}
b.config.Cmd = cmd
return nil
}
func (b *buildFile) run() (string, error) {
if b.image == "" {
return "", fmt.Errorf("Please provide a source image with `from` prior to run")
}
b.config.Image = b.image
// Create the container and start it
c, err := b.builder.Create(b.config)
if err != nil {
return "", err
}
b.tmpContainers[c.ID] = struct{}{}
fmt.Fprintf(b.out, " ---> Running in %s\n", utils.TruncateID(c.ID))
// override the entry point that may have been picked up from the base image
c.Path = b.config.Cmd[0]
c.Args = b.config.Cmd[1:]
//start the container
hostConfig := &HostConfig{}
if err := c.Start(hostConfig); err != nil {
return "", err
}
if b.verbose {
err = <-c.Attach(nil, nil, b.out, b.out)
if err != nil {
return "", err
}
}
// Wait for it to finish
if ret := c.Wait(); ret != 0 {
return "", fmt.Errorf("The command %v returned a non-zero code: %d", b.config.Cmd, ret)
}
return c.ID, nil
}
// Commit the container <id> with the autorun command <autoCmd>
func (b *buildFile) commit(id string, autoCmd []string, comment string) error {
if b.image == "" {
return fmt.Errorf("Please provide a source image with `from` prior to commit")
}
b.config.Image = b.image
if id == "" {
cmd := b.config.Cmd
b.config.Cmd = []string{"/bin/sh", "-c", "#(nop) " + comment}
defer func(cmd []string) { b.config.Cmd = cmd }(cmd)
if cache, err := b.srv.ImageGetCached(b.image, b.config); err != nil {
return err
} else if cache != nil {
fmt.Fprintf(b.out, " ---> Using cache\n")
utils.Debugf("[BUILDER] Use cached version")
b.image = cache.ID
return nil
} else {
utils.Debugf("[BUILDER] Cache miss")
}
container, err := b.builder.Create(b.config)
if err != nil {
return err
}
b.tmpContainers[container.ID] = struct{}{}
fmt.Fprintf(b.out, " ---> Running in %s\n", utils.TruncateID(container.ID))
id = container.ID
if err := container.EnsureMounted(); err != nil {
return err
}
defer container.Unmount()
}
container := b.runtime.Get(id)
if container == nil {
return fmt.Errorf("An error occured while creating the container")
}
// Note: Actually copy the struct
autoConfig := *b.config
autoConfig.Cmd = autoCmd
// Commit the container
image, err := b.builder.Commit(container, "", "", "", b.maintainer, &autoConfig)
if err != nil {
return err
}
b.tmpImages[image.ID] = struct{}{}
b.image = image.ID
return nil
}
func (b *buildFile) Build(context io.Reader) (string, error) {
// FIXME: @creack any reason for using /tmp instead of ""?
// FIXME: @creack "name" is a terrible variable name
name, err := ioutil.TempDir("/tmp", "docker-build")
if err != nil {
return "", err
}
if err := Untar(context, name); err != nil {
return "", err
}
defer os.RemoveAll(name)
b.context = name
dockerfile, err := os.Open(path.Join(name, "Dockerfile"))
if err != nil {
return "", fmt.Errorf("Can't build a directory with no Dockerfile")
}
// FIXME: "file" is also a terrible variable name ;)
file := bufio.NewReader(dockerfile)
stepN := 0
for {
line, err := file.ReadString('\n')
if err != nil {
if err == io.EOF && line == "" {
break
} else if err != io.EOF {
return "", err
}
}
line = strings.Trim(strings.Replace(line, "\t", " ", -1), " \t\r\n")
// Skip comments and empty lines
if len(line) == 0 || line[0] == '#' {
continue
}
tmp := strings.SplitN(line, " ", 2)
if len(tmp) != 2 {
return "", fmt.Errorf("Invalid Dockerfile format")
}
instruction := strings.ToLower(strings.Trim(tmp[0], " "))
arguments := strings.Trim(tmp[1], " ")
stepN += 1
// FIXME: only count known instructions as build steps
fmt.Fprintf(b.out, "Step %d : %s %s\n", stepN, strings.ToUpper(instruction), arguments)
method, exists := reflect.TypeOf(b).MethodByName("Cmd" + strings.ToUpper(instruction[:1]) + strings.ToLower(instruction[1:]))
if !exists {
fmt.Fprintf(b.out, "# Skipping unknown instruction %s\n", strings.ToUpper(instruction))
continue
}
ret := method.Func.Call([]reflect.Value{reflect.ValueOf(b), reflect.ValueOf(arguments)})[0].Interface()
if ret != nil {
return "", ret.(error)
}
fmt.Fprintf(b.out, " ---> %v\n", utils.TruncateID(b.image))
}
if b.image != "" {
fmt.Fprintf(b.out, "Successfully built %s\n", utils.TruncateID(b.image))
return b.image, nil
}
return "", fmt.Errorf("An error occured during the build\n")
}
func NewBuildFile(srv *Server, out io.Writer, verbose bool) BuildFile {
return &buildFile{
builder: NewBuilder(srv.runtime),
runtime: srv.runtime,
srv: srv,
config: &Config{},
out: out,
tmpContainers: make(map[string]struct{}),
tmpImages: make(map[string]struct{}),
verbose: verbose,
}
}
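A driving sketch (fragment; srv is an assumed, already-initialized *Server) using the mkBuildContext helper referenced elsewhere in this diff:

context, err := mkBuildContext("from base\nrun touch /built\n", nil)
if err != nil {
	panic(err)
}
b := NewBuildFile(srv, os.Stdout, true) // verbose: attach container output
id, err := b.Build(context)
if err != nil {
	panic(err)
}
fmt.Println("Built image:", id)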

View File

@@ -1,216 +0,0 @@
package docker
import (
"fmt"
"io/ioutil"
"testing"
)
// mkTestContext generates a build context from the contents of the provided dockerfile.
// This context is suitable for use as an argument to BuildFile.Build()
func mkTestContext(dockerfile string, files [][2]string, t *testing.T) Archive {
context, err := mkBuildContext(fmt.Sprintf(dockerfile, unitTestImageID), files)
if err != nil {
t.Fatal(err)
}
return context
}
// A testContextTemplate describes a build context and how to test it
type testContextTemplate struct {
// Contents of the Dockerfile
dockerfile string
// Additional files in the context, eg [][2]string{"./passwd", "gordon"}
files [][2]string
}
// A table of all the contexts to build and test.
// A new docker runtime will be created and torn down for each context.
var testContexts = []testContextTemplate{
{
`
from %s
run sh -c 'echo root:testpass > /tmp/passwd'
run mkdir -p /var/run/sshd
run [ "$(cat /tmp/passwd)" = "root:testpass" ]
run [ "$(ls -d /var/run/sshd)" = "/var/run/sshd" ]
`,
nil,
},
{
`
from %s
add foo /usr/lib/bla/bar
run [ "$(cat /usr/lib/bla/bar)" = 'hello world!' ]
`,
[][2]string{{"foo", "hello world!"}},
},
{
`
from %s
add f /
run [ "$(cat /f)" = "hello" ]
add f /abc
run [ "$(cat /abc)" = "hello" ]
add f /x/y/z
run [ "$(cat /x/y/z)" = "hello" ]
add f /x/y/d/
run [ "$(cat /x/y/d/f)" = "hello" ]
add d /
run [ "$(cat /ga)" = "bu" ]
add d /somewhere
run [ "$(cat /somewhere/ga)" = "bu" ]
add d /anotherplace/
run [ "$(cat /anotherplace/ga)" = "bu" ]
add d /somewheeeere/over/the/rainbooow
run [ "$(cat /somewheeeere/over/the/rainbooow/ga)" = "bu" ]
`,
[][2]string{
{"f", "hello"},
{"d/ga", "bu"},
},
},
{
`
from %s
env FOO BAR
run [ "$FOO" = "BAR" ]
`,
nil,
},
{
`
from %s
ENTRYPOINT /bin/echo
CMD Hello world
`,
nil,
},
{
`
from %s
VOLUME /test
CMD Hello world
`,
nil,
},
}
// FIXME: test building with 2 successive overlapping ADD commands
func TestBuild(t *testing.T) {
for _, ctx := range testContexts {
buildImage(ctx, t)
}
}
func buildImage(context testContextTemplate, t *testing.T) *Image {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{
runtime: runtime,
pullingPool: make(map[string]struct{}),
pushingPool: make(map[string]struct{}),
}
buildfile := NewBuildFile(srv, ioutil.Discard, false)
id, err := buildfile.Build(mkTestContext(context.dockerfile, context.files, t))
if err != nil {
t.Fatal(err)
}
img, err := srv.ImageInspect(id)
if err != nil {
t.Fatal(err)
}
return img
}
func TestVolume(t *testing.T) {
img := buildImage(testContextTemplate{`
from %s
volume /test
cmd Hello world
`, nil}, t)
if len(img.Config.Volumes) == 0 {
t.Fail()
}
for key := range img.Config.Volumes {
if key != "/test" {
t.Fail()
}
}
}
func TestBuildMaintainer(t *testing.T) {
img := buildImage(testContextTemplate{`
from %s
maintainer dockerio
`, nil}, t)
if img.Author != "dockerio" {
t.Fail()
}
}
func TestBuildEnv(t *testing.T) {
img := buildImage(testContextTemplate{`
from %s
env port 4243
`,
nil}, t)
if img.Config.Env[0] != "port=4243" {
t.Fail()
}
}
func TestBuildCmd(t *testing.T) {
img := buildImage(testContextTemplate{`
from %s
cmd ["/bin/echo", "Hello World"]
`,
nil}, t)
if img.Config.Cmd[0] != "/bin/echo" {
t.Log(img.Config.Cmd[0])
t.Fail()
}
if img.Config.Cmd[1] != "Hello World" {
t.Log(img.Config.Cmd[1])
t.Fail()
}
}
func TestBuildExpose(t *testing.T) {
img := buildImage(testContextTemplate{`
from %s
expose 4243
`,
nil}, t)
if img.Config.PortSpecs[0] != "4243" {
t.Fail()
}
}
func TestBuildEntrypoint(t *testing.T) {
img := buildImage(testContextTemplate{`
from %s
entrypoint ["/bin/echo"]
`,
nil}, t)
if img.Config.Entrypoint[0] != "/bin/echo" {
t.Log(img.Config.Entrypoint[0])
t.Fail()
}
}

View File

@@ -65,7 +65,7 @@ func Changes(layers []string, rw string) ([]Change, error) {
file := filepath.Base(path)
// If there is a whiteout, then the file was removed
if strings.HasPrefix(file, ".wh.") {
originalFile := file[len(".wh."):]
originalFile := strings.TrimLeft(file, ".wh.")
change.Path = filepath.Join(filepath.Dir(path), originalFile)
change.Kind = ChangeDelete
} else {
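One subtlety in the hunk above: the two spellings are not equivalent. Slicing with file[len(".wh."):] removes exactly the ".wh." prefix, while strings.TrimLeft treats ".wh." as a cutset and strips any leading run of the characters '.', 'w', and 'h', which mangles whiteout names that themselves start with those letters (strings.TrimPrefix would be the safe spelling). A tiny runnable demonstration:
package main

import (
    "fmt"
    "strings"
)

func main() {
    file := ".wh.whiteout"
    fmt.Println(file[len(".wh."):])             // "whiteout": exact prefix strip
    fmt.Println(strings.TrimLeft(file, ".wh.")) // "iteout": cutset also eats 'w' and 'h'
}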

File diff suppressed because it is too large

View File

@@ -3,7 +3,7 @@ package docker
import (
"bufio"
"fmt"
"github.com/dotcloud/docker/utils"
"github.com/dotcloud/docker/rcli"
"io"
"io/ioutil"
"strings"
@@ -59,58 +59,233 @@ func assertPipe(input, output string, r io.Reader, w io.Writer, count int) error
return nil
}
func cmdWait(srv *Server, container *Container) error {
stdout, stdoutPipe := io.Pipe()
go func() {
srv.CmdWait(nil, stdoutPipe, container.Id)
}()
if _, err := bufio.NewReader(stdout).ReadString('\n'); err != nil {
return err
}
// Cleanup pipes
return closeWrap(stdout, stdoutPipe)
}
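cmdWait and the tests below lean on a setTimeout helper that is defined elsewhere in the test utilities; here is a minimal sketch consistent with the call sites. The name setTimeoutSketch is illustrative, and the real helper may abort differently:
// Run fn and fail the test with msg if it has not returned within d.
func setTimeoutSketch(t *testing.T, msg string, d time.Duration, fn func()) {
    done := make(chan struct{})
    go func() {
        fn()
        close(done)
    }()
    select {
    case <-done:
    case <-time.After(d):
        t.Fatal(msg)
    }
}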
// TestRunHostname checks that 'docker run -h' correctly sets a custom hostname
func TestRunHostname(t *testing.T) {
stdout, stdoutPipe := io.Pipe()
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
cli := NewDockerCli(nil, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
srv := &Server{runtime: runtime}
stdin, _ := io.Pipe()
stdout, stdoutPipe := io.Pipe()
c := make(chan struct{})
go func() {
defer close(c)
if err := cli.CmdRun("-h", "foobar", unitTestImageID, "hostname"); err != nil {
if err := srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-h", "foobar", GetTestImage(runtime).Id, "hostname"); err != nil {
t.Fatal(err)
}
close(c)
}()
utils.Debugf("--")
setTimeout(t, "Reading command output time out", 2*time.Second, func() {
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != "foobar\n" {
t.Fatalf("'hostname' should display '%s', not '%s'", "foobar\n", cmdOutput)
}
})
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != "foobar\n" {
t.Fatalf("'hostname' should display '%s', not '%s'", "foobar\n", cmdOutput)
}
setTimeout(t, "CmdRun timed out", 5*time.Second, func() {
setTimeout(t, "CmdRun timed out", 2*time.Second, func() {
<-c
cmdWait(srv, srv.runtime.List()[0])
})
}
func TestRunExit(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
c1 := make(chan struct{})
go func() {
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", GetTestImage(runtime).Id, "/bin/cat")
close(c1)
}()
setTimeout(t, "Read/Write assertion timed out", 2*time.Second, func() {
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
t.Fatal(err)
}
})
container := runtime.List()[0]
// Closing /bin/cat stdin, expect it to exit
p, err := container.StdinPipe()
if err != nil {
t.Fatal(err)
}
if err := p.Close(); err != nil {
t.Fatal(err)
}
// As the process exited, CmdRun must finish and unblock. Wait for it
setTimeout(t, "Waiting for CmdRun timed out", 2*time.Second, func() {
<-c1
cmdWait(srv, container)
})
// Make sure that the client has been disconnected
setTimeout(t, "The client should have been disconnected once the remote process exited.", 2*time.Second, func() {
// Expecting pipe i/o error, just check that read does not block
stdin.Read([]byte{})
})
// Cleanup pipes
if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
t.Fatal(err)
}
}
// Expected behaviour: the process dies when the client disconnects
func TestRunDisconnect(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
c1 := make(chan struct{})
go func() {
// We're simulating a disconnect so the return value doesn't matter. What matters is the
// fact that CmdRun returns.
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", GetTestImage(runtime).Id, "/bin/cat")
close(c1)
}()
setTimeout(t, "Read/Write assertion timed out", 2*time.Second, func() {
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
t.Fatal(err)
}
})
// Close pipes (simulate disconnect)
if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
t.Fatal(err)
}
// As the pipes are closed, we expect the process to die,
// and therefore CmdRun to unblock. Wait for CmdRun
setTimeout(t, "Waiting for CmdRun timed out", 2*time.Second, func() {
<-c1
})
// Client disconnect after run -i should cause stdin to be closed, which should
// cause /bin/cat to exit.
setTimeout(t, "Waiting for /bin/cat to exit timed out", 2*time.Second, func() {
container := runtime.List()[0]
container.Wait()
if container.State.Running {
t.Fatalf("/bin/cat is still running after closing stdin")
}
})
}
// Expected behaviour: the process dies when the client disconnects
func TestRunDisconnectTty(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
c1 := make(chan struct{})
go func() {
// We're simulating a disconnect so the return value doesn't matter. What matters is the
// fact that CmdRun returns.
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", "-t", GetTestImage(runtime).Id, "/bin/cat")
close(c1)
}()
setTimeout(t, "Waiting for the container to be started timed out", 2*time.Second, func() {
for {
// Client disconnect after run -i should keep stdin out in TTY mode
l := runtime.List()
if len(l) == 1 && l[0].State.Running {
break
}
time.Sleep(10 * time.Millisecond)
}
})
// Client disconnect after run -i should keep stdin out in TTY mode
container := runtime.List()[0]
setTimeout(t, "Read/Write assertion timed out", 2*time.Second, func() {
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
t.Fatal(err)
}
})
// Close pipes (simulate disconnect)
if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
t.Fatal(err)
}
// In tty mode, we expect the process to stay alive even after client's stdin closes.
// Do not wait for run to finish
// Give the monitor some time to do its thing
container.WaitTimeout(500 * time.Millisecond)
if !container.State.Running {
t.Fatalf("/bin/cat should still be running after closing stdin (tty mode)")
}
}
// TestAttachStdin checks attaching to stdin without stdout and stderr.
// 'docker run -i -a stdin' should send the client's stdin to the command,
// then detach from it and print the container id.
func TestRunAttachStdin(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
ch := make(chan struct{})
go func() {
defer close(ch)
cli.CmdRun("-i", "-a", "stdin", unitTestImageID, "sh", "-c", "echo hello && cat")
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", "-a", "stdin", GetTestImage(runtime).Id, "sh", "-c", "echo hello; cat")
close(ch)
}()
// Send input to the command, close stdin
setTimeout(t, "Write timed out", 10*time.Second, func() {
setTimeout(t, "Write timed out", 2*time.Second, func() {
if _, err := stdinPipe.Write([]byte("hi there\n")); err != nil {
t.Fatal(err)
}
@@ -119,27 +294,23 @@ func TestRunAttachStdin(t *testing.T) {
}
})
container := globalRuntime.List()[0]
container := runtime.List()[0]
// Check output
setTimeout(t, "Reading command output time out", 10*time.Second, func() {
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != container.ShortID()+"\n" {
t.Fatalf("Wrong output: should be '%s', not '%s'\n", container.ShortID()+"\n", cmdOutput)
}
})
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != container.ShortId()+"\n" {
t.Fatalf("Wrong output: should be '%s', not '%s'\n", container.ShortId()+"\n", cmdOutput)
}
// wait for CmdRun to return
setTimeout(t, "Waiting for CmdRun timed out", 5*time.Second, func() {
// Unblock hijack end
stdout.Read([]byte{})
setTimeout(t, "Waiting for CmdRun timed out", 2*time.Second, func() {
<-ch
})
setTimeout(t, "Waiting for command to exit timed out", 5*time.Second, func() {
setTimeout(t, "Waiting for command to exit timed out", 2*time.Second, func() {
container.Wait()
})
@@ -157,3 +328,70 @@ func TestRunAttachStdin(t *testing.T) {
}
}
}
// Expected behaviour: the process stays alive when the client disconnects
func TestAttachDisconnect(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
container, err := runtime.Create(
&Config{
Image: GetTestImage(runtime).Id,
Memory: 33554432,
Cmd: []string{"/bin/cat"},
OpenStdin: true,
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container)
// Start the process
if err := container.Start(); err != nil {
t.Fatal(err)
}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
// Attach to it
c1 := make(chan struct{})
go func() {
// We're simulating a disconnect so the return value doesn't matter. What matters is the
// fact that CmdAttach returns.
srv.CmdAttach(stdin, rcli.NewDockerLocalConn(stdoutPipe), container.Id)
close(c1)
}()
setTimeout(t, "First read/write assertion timed out", 2*time.Second, func() {
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
t.Fatal(err)
}
})
// Close pipes (client disconnects)
if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
t.Fatal(err)
}
// Wait for attach to finish; the client disconnected, therefore Attach finished its job
setTimeout(t, "Waiting for CmdAttach timed out", 2*time.Second, func() {
<-c1
})
// We closed stdin, expect /bin/cat to still be running
// Wait a little bit to make sure container.monitor() did its thing
err = container.WaitTimeout(500 * time.Millisecond)
if err == nil || !container.State.Running {
t.Fatalf("/bin/cat is not running after closing stdin")
}
// Try to avoid the timeout in destroy. Best effort, don't check error
cStdin, _ := container.StdinPipe()
cStdin.Close()
}

View File

@@ -2,10 +2,8 @@ package docker
import (
"encoding/json"
"flag"
"fmt"
"github.com/dotcloud/docker/term"
"github.com/dotcloud/docker/utils"
"github.com/dotcloud/docker/rcli"
"github.com/kr/pty"
"io"
"io/ioutil"
@@ -13,7 +11,6 @@ import (
"os"
"os/exec"
"path"
"path/filepath"
"sort"
"strconv"
"strings"
@@ -24,7 +21,7 @@ import (
type Container struct {
root string
ID string
Id string
Created time.Time
@@ -42,8 +39,8 @@ type Container struct {
ResolvConfPath string
cmd *exec.Cmd
stdout *utils.WriteBroadcaster
stderr *utils.WriteBroadcaster
stdout *writeBroadcaster
stderr *writeBroadcaster
stdin io.ReadCloser
stdinPipe io.WriteCloser
ptyMaster io.Closer
@@ -51,10 +48,6 @@ type Container struct {
runtime *Runtime
waitLock chan struct{}
Volumes map[string]string
// Store rw/ro in a separate structure to preserve backward compatibility on-disk.
// Easier than migrating older container configs :)
VolumesRW map[string]bool
}
type Config struct {
@@ -62,7 +55,6 @@ type Config struct {
User string
Memory int64 // Memory limit (in bytes)
MemorySwap int64 // Total memory usage (memory + swap); set `-1' to disable swap
CpuShares int64 // CPU shares (relative weight vs. other containers)
AttachStdin bool
AttachStdout bool
AttachStderr bool
@@ -74,23 +66,10 @@ type Config struct {
Cmd []string
Dns []string
Image string // Name of the image as it was passed by the operator (eg. could be symbolic)
Volumes map[string]struct{}
VolumesFrom string
Entrypoint []string
}
type HostConfig struct {
Binds []string
}
type BindMap struct {
SrcPath string
DstPath string
Mode string
}
func ParseRun(args []string, capabilities *Capabilities) (*Config, *HostConfig, *flag.FlagSet, error) {
cmd := Subcmd("run", "[OPTIONS] IMAGE [COMMAND] [ARG...]", "Run a command in a new container")
func ParseRun(args []string, stdout io.Writer, capabilities *Capabilities) (*Config, error) {
cmd := rcli.Subcmd(stdout, "run", "[OPTIONS] IMAGE COMMAND [ARG...]", "Run a command in a new container")
if len(args) > 0 && args[0] != "--help" {
cmd.SetOutput(ioutil.Discard)
}
@@ -104,13 +83,11 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *HostConfig,
flTty := cmd.Bool("t", false, "Allocate a pseudo-tty")
flMemory := cmd.Int64("m", 0, "Memory limit (in bytes)")
if capabilities != nil && *flMemory > 0 && !capabilities.MemoryLimit {
//fmt.Fprintf(stdout, "WARNING: Your kernel does not support memory limit capabilities. Limitation discarded.\n")
if *flMemory > 0 && !capabilities.MemoryLimit {
fmt.Fprintf(stdout, "WARNING: Your kernel does not support memory limit capabilities. Limitation discarded.\n")
*flMemory = 0
}
flCpuShares := cmd.Int64("c", 0, "CPU shares (relative weight)")
var flPorts ListOpts
cmd.Var(&flPorts, "p", "Expose a container's port to the host (use 'docker port' to see the actual mapping)")
@@ -120,17 +97,11 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *HostConfig,
var flDns ListOpts
cmd.Var(&flDns, "dns", "Set custom dns servers")
flVolumes := NewPathOpts()
cmd.Var(flVolumes, "v", "Bind mount a volume (e.g. from the host: -v /host:/container, from docker: -v /container)")
flVolumesFrom := cmd.String("volumes-from", "", "Mount volumes from the specified container")
flEntrypoint := cmd.String("entrypoint", "", "Overwrite the default entrypoint of the image")
if err := cmd.Parse(args); err != nil {
return nil, nil, cmd, err
return nil, err
}
if *flDetach && len(flAttach) > 0 {
return nil, nil, cmd, fmt.Errorf("Conflicting options: -a and -d")
return nil, fmt.Errorf("Conflicting options: -a and -d")
}
// If neither -d or -a are set, attach to everything by default
if len(flAttach) == 0 && !*flDetach {
@@ -142,23 +113,8 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *HostConfig,
}
}
}
var binds []string
// add any bind targets to the list of container volumes
for bind := range flVolumes {
arr := strings.Split(bind, ":")
if len(arr) > 1 {
dstDir := arr[1]
flVolumes[dstDir] = struct{}{}
binds = append(binds, bind)
delete(flVolumes, bind)
}
}
parsedArgs := cmd.Args()
runCmd := []string{}
entrypoint := []string{}
image := ""
if len(parsedArgs) >= 1 {
image = cmd.Arg(0)
@@ -166,10 +122,6 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *HostConfig,
if len(parsedArgs) > 1 {
runCmd = parsedArgs[1:]
}
if *flEntrypoint != "" {
entrypoint = []string{*flEntrypoint}
}
config := &Config{
Hostname: *flHostname,
PortSpecs: flPorts,
@@ -177,7 +129,6 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *HostConfig,
Tty: *flTty,
OpenStdin: *flStdin,
Memory: *flMemory,
CpuShares: *flCpuShares,
AttachStdin: flAttach.Get("stdin"),
AttachStdout: flAttach.Get("stdout"),
AttachStderr: flAttach.Get("stderr"),
@@ -185,16 +136,10 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *HostConfig,
Cmd: runCmd,
Dns: flDns,
Image: image,
Volumes: flVolumes,
VolumesFrom: *flVolumesFrom,
Entrypoint: entrypoint,
}
hostConfig := &HostConfig{
Binds: binds,
}
if capabilities != nil && *flMemory > 0 && !capabilities.SwapLimit {
//fmt.Fprintf(stdout, "WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.\n")
if *flMemory > 0 && !capabilities.SwapLimit {
fmt.Fprintf(stdout, "WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.\n")
config.MemorySwap = -1
}
@@ -202,28 +147,23 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *HostConfig,
if config.OpenStdin && config.AttachStdin {
config.StdinOnce = true
}
return config, hostConfig, cmd, nil
return config, nil
}
type PortMapping map[string]string
type NetworkSettings struct {
IPAddress string
IPPrefixLen int
IpAddress string
IpPrefixLen int
Gateway string
Bridge string
PortMapping map[string]PortMapping
PortMapping map[string]string
}
// PortMappingHuman returns a human-readable description of the port mapping defined in the settings
func (settings *NetworkSettings) PortMappingHuman() string {
var mapping []string
for private, public := range settings.PortMapping["Tcp"] {
for private, public := range settings.PortMapping {
mapping = append(mapping, fmt.Sprintf("%s->%s", public, private))
}
for private, public := range settings.PortMapping["Udp"] {
mapping = append(mapping, fmt.Sprintf("%s->%s/udp", public, private))
}
sort.Strings(mapping)
return strings.Join(mapping, ", ")
}
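For example, with one TCP and one UDP mapping, the per-protocol form shown in this hunk renders public->private pairs, suffixes UDP entries with /udp, and sorts the result. A hypothetical snippet, assuming the map[string]PortMapping variant:
settings := &NetworkSettings{PortMapping: map[string]PortMapping{
    "Tcp": {"80": "49153"},
    "Udp": {"53": "49154"},
}}
fmt.Println(settings.PortMappingHuman()) // prints: 49153->80, 49154->53/udp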
@@ -273,26 +213,6 @@ func (container *Container) ToDisk() (err error) {
return ioutil.WriteFile(container.jsonPath(), data, 0666)
}
func (container *Container) ReadHostConfig() (*HostConfig, error) {
data, err := ioutil.ReadFile(container.hostConfigPath())
if err != nil {
return &HostConfig{}, err
}
hostConfig := &HostConfig{}
if err := json.Unmarshal(data, hostConfig); err != nil {
return &HostConfig{}, err
}
return hostConfig, nil
}
func (container *Container) SaveHostConfig(hostConfig *HostConfig) (err error) {
data, err := json.Marshal(hostConfig)
if err != nil {
return
}
return ioutil.WriteFile(container.hostConfigPath(), data, 0666)
}
func (container *Container) generateLXCConfig() error {
fo, err := os.Create(container.lxcConfigPath())
if err != nil {
@@ -317,9 +237,9 @@ func (container *Container) startPty() error {
// Copy the PTYs to our broadcasters
go func() {
defer container.stdout.CloseWriters()
utils.Debugf("[startPty] Begin of stdout pipe")
Debugf("[startPty] Begin of stdout pipe")
io.Copy(container.stdout, ptyMaster)
utils.Debugf("[startPty] End of stdout pipe")
Debugf("[startPty] End of stdout pipe")
}()
// stdin
@@ -328,9 +248,9 @@ func (container *Container) startPty() error {
container.cmd.SysProcAttr = &syscall.SysProcAttr{Setctty: true, Setsid: true}
go func() {
defer container.stdin.Close()
utils.Debugf("[startPty] Begin of stdin pipe")
Debugf("[startPty] Begin of stdin pipe")
io.Copy(ptyMaster, container.stdin)
utils.Debugf("[startPty] End of stdin pipe")
Debugf("[startPty] End of stdin pipe")
}()
}
if err := container.cmd.Start(); err != nil {
@@ -350,9 +270,9 @@ func (container *Container) start() error {
}
go func() {
defer stdin.Close()
utils.Debugf("Begin of stdin pipe [start]")
Debugf("Begin of stdin pipe [start]")
io.Copy(stdin, container.stdin)
utils.Debugf("End of stdin pipe [start]")
Debugf("End of stdin pipe [start]")
}()
}
return container.cmd.Start()
@@ -369,8 +289,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
errors <- err
} else {
go func() {
utils.Debugf("[start] attach stdin\n")
defer utils.Debugf("[end] attach stdin\n")
Debugf("[start] attach stdin\n")
defer Debugf("[end] attach stdin\n")
// No matter what, when stdin is closed (io.Copy unblocks), close stdout and stderr
if cStdout != nil {
defer cStdout.Close()
@@ -382,12 +302,12 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
defer cStdin.Close()
}
if container.Config.Tty {
_, err = utils.CopyEscapable(cStdin, stdin)
_, err = CopyEscapable(cStdin, stdin)
} else {
_, err = io.Copy(cStdin, stdin)
}
if err != nil {
utils.Debugf("[error] attach stdin: %s\n", err)
Debugf("[error] attach stdin: %s\n", err)
}
// Discard error, expecting pipe error
errors <- nil
@@ -401,8 +321,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
} else {
cStdout = p
go func() {
utils.Debugf("[start] attach stdout\n")
defer utils.Debugf("[end] attach stdout\n")
Debugf("[start] attach stdout\n")
defer Debugf("[end] attach stdout\n")
// If we are in StdinOnce mode, then close stdin
if container.Config.StdinOnce {
if stdin != nil {
@@ -414,23 +334,11 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
}
_, err := io.Copy(stdout, cStdout)
if err != nil {
utils.Debugf("[error] attach stdout: %s\n", err)
Debugf("[error] attach stdout: %s\n", err)
}
errors <- err
}()
}
} else {
go func() {
if stdinCloser != nil {
defer stdinCloser.Close()
}
if cStdout, err := container.StdoutPipe(); err != nil {
utils.Debugf("Error stdout pipe")
} else {
io.Copy(&utils.NopWriter{}, cStdout)
}
}()
}
if stderr != nil {
nJobs += 1
@@ -439,8 +347,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
} else {
cStderr = p
go func() {
utils.Debugf("[start] attach stderr\n")
defer utils.Debugf("[end] attach stderr\n")
Debugf("[start] attach stderr\n")
defer Debugf("[end] attach stderr\n")
// If we are in StdinOnce mode, then close stdin
if container.Config.StdinOnce {
if stdin != nil {
@@ -452,26 +360,13 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
}
_, err := io.Copy(stderr, cStderr)
if err != nil {
utils.Debugf("[error] attach stderr: %s\n", err)
Debugf("[error] attach stderr: %s\n", err)
}
errors <- err
}()
}
} else {
go func() {
if stdinCloser != nil {
defer stdinCloser.Close()
}
if cStderr, err := container.StderrPipe(); err != nil {
utils.Debugf("Error stdout pipe")
} else {
io.Copy(&utils.NopWriter{}, cStderr)
}
}()
}
return utils.Go(func() error {
return Go(func() error {
if cStdout != nil {
defer cStdout.Close()
}
@@ -481,28 +376,24 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
// FIXME: how do we clean up the stdin goroutine without the unwanted side effect
// of closing the passed stdin? Add an intermediary io.Pipe?
for i := 0; i < nJobs; i += 1 {
utils.Debugf("Waiting for job %d/%d\n", i+1, nJobs)
Debugf("Waiting for job %d/%d\n", i+1, nJobs)
if err := <-errors; err != nil {
utils.Debugf("Job %d returned error %s. Aborting all jobs\n", i+1, err)
Debugf("Job %d returned error %s. Aborting all jobs\n", i+1, err)
return err
}
utils.Debugf("Job %d completed successfully\n", i+1)
Debugf("Job %d completed successfully\n", i+1)
}
utils.Debugf("All jobs completed successfully\n")
Debugf("All jobs completed successfully\n")
return nil
})
}
func (container *Container) Start(hostConfig *HostConfig) error {
container.State.Lock()
defer container.State.Unlock()
if len(hostConfig.Binds) == 0 {
hostConfig, _ = container.ReadHostConfig()
}
func (container *Container) Start() error {
container.State.lock()
defer container.State.unlock()
if container.State.Running {
return fmt.Errorf("The container %s is already running.", container.ID)
return fmt.Errorf("The container %s is already running.", container.Id)
}
if err := container.EnsureMounted(); err != nil {
return err
@@ -520,99 +411,12 @@ func (container *Container) Start(hostConfig *HostConfig) error {
log.Printf("WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.\n")
container.Config.MemorySwap = -1
}
container.Volumes = make(map[string]string)
container.VolumesRW = make(map[string]bool)
// Create the requested bind mounts
binds := make(map[string]BindMap)
// Define illegal container destinations
illegalDsts := []string{"/", "."}
for _, bind := range hostConfig.Binds {
// FIXME: factorize bind parsing in parseBind
var src, dst, mode string
arr := strings.Split(bind, ":")
if len(arr) == 2 {
src = arr[0]
dst = arr[1]
mode = "rw"
} else if len(arr) == 3 {
src = arr[0]
dst = arr[1]
mode = arr[2]
} else {
return fmt.Errorf("Invalid bind specification: %s", bind)
}
// Bail if trying to mount to an illegal destination
for _, illegal := range illegalDsts {
if dst == illegal {
return fmt.Errorf("Illegal bind destination: %s", dst)
}
}
bindMap := BindMap{
SrcPath: src,
DstPath: dst,
Mode: mode,
}
binds[path.Clean(dst)] = bindMap
}
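Per the FIXME above, the bind parsing could be factored into a small helper. A sketch consistent with the loop just shown; parseBind is illustrative only and does not exist in this change:
// parseBind splits a host bind spec of the form src:dst[:mode],
// defaulting the mode to read-write.
func parseBind(spec string) (src, dst, mode string, err error) {
    arr := strings.Split(spec, ":")
    switch len(arr) {
    case 2:
        return arr[0], arr[1], "rw", nil
    case 3:
        return arr[0], arr[1], arr[2], nil
    }
    return "", "", "", fmt.Errorf("Invalid bind specification: %s", spec)
}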
// FIXME: evaluate volumes-from before individual volumes, so that the latter can override the former.
// Create the requested volumes
for volPath := range container.Config.Volumes {
volPath = path.Clean(volPath)
// If an external bind is defined for this volume, use that as a source
if bindMap, exists := binds[volPath]; exists {
container.Volumes[volPath] = bindMap.SrcPath
if strings.ToLower(bindMap.Mode) == "rw" {
container.VolumesRW[volPath] = true
}
// Otherwise create a directory in $ROOT/volumes/ and use that
} else {
c, err := container.runtime.volumes.Create(nil, container, "", "", nil)
if err != nil {
return err
}
srcPath, err := c.layer()
if err != nil {
return err
}
container.Volumes[volPath] = srcPath
container.VolumesRW[volPath] = true // RW by default
}
// Create the mountpoint
if err := os.MkdirAll(path.Join(container.RootfsPath(), volPath), 0755); err != nil {
return err
}
}
if container.Config.VolumesFrom != "" {
c := container.runtime.Get(container.Config.VolumesFrom)
if c == nil {
return fmt.Errorf("Container %s not found. Impossible to mount its volumes", container.ID)
}
for volPath, id := range c.Volumes {
if _, exists := container.Volumes[volPath]; exists {
return fmt.Errorf("The requested volume %s overlap one of the volume of the container %s", volPath, c.ID)
}
if err := os.MkdirAll(path.Join(container.RootfsPath(), volPath), 0755); err != nil {
return err
}
container.Volumes[volPath] = id
if isRW, exists := c.VolumesRW[volPath]; exists {
container.VolumesRW[volPath] = isRW
}
}
}
if err := container.generateLXCConfig(); err != nil {
return err
}
params := []string{
"-n", container.ID,
"-n", container.Id,
"-f", container.lxcConfigPath(),
"--",
"/sbin/init",
@@ -669,16 +473,13 @@ func (container *Container) Start(hostConfig *HostConfig) error {
// Init the lock
container.waitLock = make(chan struct{})
container.ToDisk()
container.SaveHostConfig(hostConfig)
go container.monitor()
return nil
}
func (container *Container) Run() error {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
if err := container.Start(); err != nil {
return err
}
container.Wait()
@@ -691,8 +492,7 @@ func (container *Container) Output() (output []byte, err error) {
return nil, err
}
defer pipe.Close()
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
if err := container.Start(); err != nil {
return nil, err
}
output, err = ioutil.ReadAll(pipe)
@@ -710,13 +510,13 @@ func (container *Container) StdinPipe() (io.WriteCloser, error) {
func (container *Container) StdoutPipe() (io.ReadCloser, error) {
reader, writer := io.Pipe()
container.stdout.AddWriter(writer)
return utils.NewBufReader(reader), nil
return newBufReader(reader), nil
}
func (container *Container) StderrPipe() (io.ReadCloser, error) {
reader, writer := io.Pipe()
container.stderr.AddWriter(writer)
return utils.NewBufReader(reader), nil
return newBufReader(reader), nil
}
func (container *Container) allocateNetwork() error {
@@ -724,23 +524,19 @@ func (container *Container) allocateNetwork() error {
if err != nil {
return err
}
container.NetworkSettings.PortMapping = make(map[string]PortMapping)
container.NetworkSettings.PortMapping["Tcp"] = make(PortMapping)
container.NetworkSettings.PortMapping["Udp"] = make(PortMapping)
container.NetworkSettings.PortMapping = make(map[string]string)
for _, spec := range container.Config.PortSpecs {
nat, err := iface.AllocatePort(spec)
if err != nil {
if nat, err := iface.AllocatePort(spec); err != nil {
iface.Release()
return err
} else {
container.NetworkSettings.PortMapping[strconv.Itoa(nat.Backend)] = strconv.Itoa(nat.Frontend)
}
proto := strings.Title(nat.Proto)
backend, frontend := strconv.Itoa(nat.Backend), strconv.Itoa(nat.Frontend)
container.NetworkSettings.PortMapping[proto][backend] = frontend
}
container.network = iface
container.NetworkSettings.Bridge = container.runtime.networkManager.bridgeIface
container.NetworkSettings.IPAddress = iface.IPNet.IP.String()
container.NetworkSettings.IPPrefixLen, _ = iface.IPNet.Mask.Size()
container.NetworkSettings.IpAddress = iface.IPNet.IP.String()
container.NetworkSettings.IpPrefixLen, _ = iface.IPNet.Mask.Size()
container.NetworkSettings.Gateway = iface.Gateway.String()
return nil
}
@@ -754,35 +550,36 @@ func (container *Container) releaseNetwork() {
// FIXME: replace this with a control socket within docker-init
func (container *Container) waitLxc() error {
for {
output, err := exec.Command("lxc-info", "-n", container.ID).CombinedOutput()
if err != nil {
if output, err := exec.Command("lxc-info", "-n", container.Id).CombinedOutput(); err != nil {
return err
}
if !strings.Contains(string(output), "RUNNING") {
return nil
} else {
if !strings.Contains(string(output), "RUNNING") {
return nil
}
}
time.Sleep(500 * time.Millisecond)
}
return nil
}
func (container *Container) monitor() {
// Wait for the program to exit
utils.Debugf("Waiting for process")
Debugf("Waiting for process")
// If the command does not exists, try to wait via lxc
if container.cmd == nil {
if err := container.waitLxc(); err != nil {
utils.Debugf("%s: Process: %s", container.ID, err)
Debugf("%s: Process: %s", container.Id, err)
}
} else {
if err := container.cmd.Wait(); err != nil {
// Discard the error as any signals or non 0 returns will generate an error
utils.Debugf("%s: Process: %s", container.ID, err)
Debugf("%s: Process: %s", container.Id, err)
}
}
utils.Debugf("Process finished")
Debugf("Process finished")
exitCode := -1
var exitCode int = -1
if container.cmd != nil {
exitCode = container.cmd.ProcessState.Sys().(syscall.WaitStatus).ExitStatus()
}
@@ -791,24 +588,24 @@ func (container *Container) monitor() {
container.releaseNetwork()
if container.Config.OpenStdin {
if err := container.stdin.Close(); err != nil {
utils.Debugf("%s: Error close stdin: %s", container.ID, err)
Debugf("%s: Error close stdin: %s", container.Id, err)
}
}
if err := container.stdout.CloseWriters(); err != nil {
utils.Debugf("%s: Error close stdout: %s", container.ID, err)
Debugf("%s: Error close stdout: %s", container.Id, err)
}
if err := container.stderr.CloseWriters(); err != nil {
utils.Debugf("%s: Error close stderr: %s", container.ID, err)
Debugf("%s: Error close stderr: %s", container.Id, err)
}
if container.ptyMaster != nil {
if err := container.ptyMaster.Close(); err != nil {
utils.Debugf("%s: Error closing Pty master: %s", container.ID, err)
Debugf("%s: Error closing Pty master: %s", container.Id, err)
}
}
if err := container.Unmount(); err != nil {
log.Printf("%v: Failed to umount filesystem: %v", container.ID, err)
log.Printf("%v: Failed to umount filesystem: %v", container.Id, err)
}
// Re-create a brand new stdin pipe once the container exited
@@ -829,7 +626,7 @@ func (container *Container) monitor() {
// This is because State.setStopped() has already been called, and has caused Wait()
// to return.
// FIXME: why are we serializing running state to disk in the first place?
//log.Printf("%s: Failed to dump configuration to the disk: %s", container.ID, err)
//log.Printf("%s: Failed to dump configuration to the disk: %s", container.Id, err)
}
}
@@ -839,17 +636,17 @@ func (container *Container) kill() error {
}
// Sending SIGKILL to the process via lxc
output, err := exec.Command("lxc-kill", "-n", container.ID, "9").CombinedOutput()
output, err := exec.Command("lxc-kill", "-n", container.Id, "9").CombinedOutput()
if err != nil {
log.Printf("error killing container %s (%s, %s)", container.ID, output, err)
log.Printf("error killing container %s (%s, %s)", container.Id, output, err)
}
// 2. Wait for the process to die, in last resort, try to kill the process directly
if err := container.WaitTimeout(10 * time.Second); err != nil {
if container.cmd == nil {
return fmt.Errorf("lxc-kill failed, impossible to kill the container %s", container.ID)
return fmt.Errorf("lxc-kill failed, impossible to kill the container %s", container.Id)
}
log.Printf("Container %s failed to exit within 10 seconds of lxc SIGKILL - trying direct SIGKILL", container.ID)
log.Printf("Container %s failed to exit within 10 seconds of lxc SIGKILL - trying direct SIGKILL", container.Id)
if err := container.cmd.Process.Kill(); err != nil {
return err
}
@@ -861,8 +658,8 @@ func (container *Container) kill() error {
}
func (container *Container) Kill() error {
container.State.Lock()
defer container.State.Unlock()
container.State.lock()
defer container.State.unlock()
if !container.State.Running {
return nil
}
@@ -870,14 +667,14 @@ func (container *Container) Kill() error {
}
func (container *Container) Stop(seconds int) error {
container.State.Lock()
defer container.State.Unlock()
container.State.lock()
defer container.State.unlock()
if !container.State.Running {
return nil
}
// 1. Send a SIGTERM
if output, err := exec.Command("lxc-kill", "-n", container.ID, "15").CombinedOutput(); err != nil {
if output, err := exec.Command("lxc-kill", "-n", container.Id, "15").CombinedOutput(); err != nil {
log.Print(string(output))
log.Print("Failed to send SIGTERM to the process, force killing")
if err := container.kill(); err != nil {
@@ -887,7 +684,7 @@ func (container *Container) Stop(seconds int) error {
// 2. Wait for the process to exit on its own
if err := container.WaitTimeout(time.Duration(seconds) * time.Second); err != nil {
log.Printf("Container %v failed to exit within %d seconds of SIGTERM - using the force", container.ID, seconds)
log.Printf("Container %v failed to exit within %d seconds of SIGTERM - using the force", container.Id, seconds)
if err := container.kill(); err != nil {
return err
}
@@ -899,8 +696,7 @@ func (container *Container) Restart(seconds int) error {
if err := container.Stop(seconds); err != nil {
return err
}
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
if err := container.Start(); err != nil {
return err
}
return nil
@@ -912,26 +708,10 @@ func (container *Container) Wait() int {
return container.State.ExitCode
}
func (container *Container) Resize(h, w int) error {
pty, ok := container.ptyMaster.(*os.File)
if !ok {
return fmt.Errorf("ptyMaster does not have Fd() method")
}
return term.SetWinsize(pty.Fd(), &term.Winsize{Height: uint16(h), Width: uint16(w)})
}
func (container *Container) ExportRw() (Archive, error) {
return Tar(container.rwPath(), Uncompressed)
}
func (container *Container) RwChecksum() (string, error) {
rwData, err := Tar(container.rwPath(), Xz)
if err != nil {
return "", err
}
return utils.HashData(rwData)
}
func (container *Container) Export() (Archive, error) {
if err := container.EnsureMounted(); err != nil {
return nil, err
@@ -952,6 +732,7 @@ func (container *Container) WaitTimeout(timeout time.Duration) error {
case <-done:
return nil
}
panic("unreachable")
}
func (container *Container) EnsureMounted() error {
@@ -994,26 +775,22 @@ func (container *Container) Unmount() error {
return Unmount(container.RootfsPath())
}
// ShortID returns a shorthand version of the container's id for convenience.
// ShortId returns a shorthand version of the container's id for convenience.
// A collision with other container shorthands is very unlikely, but possible.
// In case of a collision a lookup with Runtime.Get() will fail, and the caller
// will need to use a longer prefix, or the full-length container Id.
func (container *Container) ShortID() string {
return utils.TruncateID(container.ID)
func (container *Container) ShortId() string {
return TruncateId(container.Id)
}
func (container *Container) logPath(name string) string {
return path.Join(container.root, fmt.Sprintf("%s-%s.log", container.ID, name))
return path.Join(container.root, fmt.Sprintf("%s-%s.log", container.Id, name))
}
func (container *Container) ReadLog(name string) (io.Reader, error) {
return os.Open(container.logPath(name))
}
func (container *Container) hostConfigPath() string {
return path.Join(container.root, "hostconfig.json")
}
func (container *Container) jsonPath() string {
return path.Join(container.root, "config.json")
}
@@ -1031,32 +808,9 @@ func (container *Container) rwPath() string {
return path.Join(container.root, "rw")
}
func validateID(id string) error {
func validateId(id string) error {
if id == "" {
return fmt.Errorf("Invalid empty id")
}
return nil
}
// GetSize returns the real size and the virtual size
func (container *Container) GetSize() (int64, int64) {
var sizeRw, sizeRootfs int64
filepath.Walk(container.rwPath(), func(path string, fileInfo os.FileInfo, err error) error {
if fileInfo != nil {
sizeRw += fileInfo.Size()
}
return nil
})
_, err := os.Stat(container.RootfsPath())
if err == nil {
filepath.Walk(container.RootfsPath(), func(path string, fileInfo os.FileInfo, err error) error {
if fileInfo != nil {
sizeRootfs += fileInfo.Size()
}
return nil
})
}
return sizeRw, sizeRootfs
}

View File

@@ -7,7 +7,6 @@ import (
"io/ioutil"
"math/rand"
"os"
"path"
"regexp"
"sort"
"strings"
@@ -15,35 +14,43 @@ import (
"time"
)
func TestIDFormat(t *testing.T) {
runtime := mkRuntime(t)
func TestIdFormat(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
container1, err := NewBuilder(runtime).Create(
container1, err := runtime.Create(
&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/sh", "-c", "echo hello world"},
Image: GetTestImage(runtime).Id,
Cmd: []string{"/bin/sh", "-c", "echo hello world"},
Memory: 33554432,
},
)
if err != nil {
t.Fatal(err)
}
match, err := regexp.Match("^[0-9a-f]{64}$", []byte(container1.ID))
match, err := regexp.Match("^[0-9a-f]{64}$", []byte(container1.Id))
if err != nil {
t.Fatal(err)
}
if !match {
t.Fatalf("Invalid container ID: %s", container1.ID)
t.Fatalf("Invalid container ID: %s", container1.Id)
}
}
func TestMultipleAttachRestart(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
container, err := runtime.Create(
&Config{
Image: GetTestImage(runtime).ID,
Image: GetTestImage(runtime).Id,
Cmd: []string{"/bin/sh", "-c",
"i=1; while [ $i -le 5 ]; do i=`expr $i + 1`; echo hello; done"},
Memory: 33554432,
},
)
if err != nil {
@@ -65,8 +72,7 @@ func TestMultipleAttachRestart(t *testing.T) {
if err != nil {
t.Fatal(err)
}
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
if err := container.Start(); err != nil {
t.Fatal(err)
}
l1, err := bufio.NewReader(stdout1).ReadString('\n')
@@ -107,11 +113,11 @@ func TestMultipleAttachRestart(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if err := container.Start(hostConfig); err != nil {
if err := container.Start(); err != nil {
t.Fatal(err)
}
setTimeout(t, "Timeout reading from the process", 3*time.Second, func() {
timeout := make(chan bool)
go func() {
l1, err = bufio.NewReader(stdout1).ReadString('\n')
if err != nil {
t.Fatal(err)
@@ -133,20 +139,28 @@ func TestMultipleAttachRestart(t *testing.T) {
if strings.Trim(l3, " \r\n") != "hello" {
t.Fatalf("Unexpected output. Expected [%s], received [%s]", "hello", l3)
}
})
container.Wait()
timeout <- false
}()
go func() {
time.Sleep(3 * time.Second)
timeout <- true
}()
if <-timeout {
t.Fatalf("Timeout reading from the process")
}
}
func TestDiff(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
builder := NewBuilder(runtime)
// Create a container and remove a file
container1, err := builder.Create(
container1, err := runtime.Create(
&Config{
Image: GetTestImage(runtime).ID,
Image: GetTestImage(runtime).Id,
Cmd: []string{"/bin/rm", "/etc/passwd"},
},
)
@@ -179,15 +193,15 @@ func TestDiff(t *testing.T) {
if err != nil {
t.Error(err)
}
img, err := runtime.graph.Create(rwTar, container1, "unit test committed image - diff", "", nil)
img, err := runtime.graph.Create(rwTar, container1, "unit test committed image - diff", "")
if err != nil {
t.Error(err)
}
// Create a new container from the commited image
container2, err := builder.Create(
container2, err := runtime.Create(
&Config{
Image: img.ID,
Image: img.Id,
Cmd: []string{"cat", "/etc/passwd"},
},
)
@@ -210,126 +224,19 @@ func TestDiff(t *testing.T) {
t.Fatalf("/etc/passwd should not be present in the diff after commit.")
}
}
// Create a new container
container3, err := builder.Create(
&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"rm", "/bin/httpd"},
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container3)
if err := container3.Run(); err != nil {
t.Fatal(err)
}
// Check the changelog
c, err = container3.Changes()
if err != nil {
t.Fatal(err)
}
success = false
for _, elem := range c {
if elem.Path == "/bin/httpd" && elem.Kind == 2 {
success = true
}
}
if !success {
t.Fatalf("/bin/httpd should be present in the diff after commit.")
}
}
func TestCommitAutoRun(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
builder := NewBuilder(runtime)
container1, err := builder.Create(
&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/sh", "-c", "echo hello > /world"},
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container1)
if container1.State.Running {
t.Errorf("Container shouldn't be running")
}
if err := container1.Run(); err != nil {
t.Fatal(err)
}
if container1.State.Running {
t.Errorf("Container shouldn't be running")
}
rwTar, err := container1.ExportRw()
if err != nil {
t.Error(err)
}
img, err := runtime.graph.Create(rwTar, container1, "unit test committed image", "", &Config{Cmd: []string{"cat", "/world"}})
if err != nil {
t.Error(err)
}
// FIXME: Make a TestCommit that stops here and check docker.root/layers/img.id/world
container2, err := builder.Create(
&Config{
Image: img.ID,
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container2)
stdout, err := container2.StdoutPipe()
if err != nil {
t.Fatal(err)
}
stderr, err := container2.StderrPipe()
if err != nil {
t.Fatal(err)
}
hostConfig := &HostConfig{}
if err := container2.Start(hostConfig); err != nil {
t.Fatal(err)
}
container2.Wait()
output, err := ioutil.ReadAll(stdout)
if err != nil {
t.Fatal(err)
}
output2, err := ioutil.ReadAll(stderr)
if err != nil {
t.Fatal(err)
}
if err := stdout.Close(); err != nil {
t.Fatal(err)
}
if err := stderr.Close(); err != nil {
t.Fatal(err)
}
if string(output) != "hello\n" {
t.Fatalf("Unexpected output. Expected %s, received: %s (err: %s)", "hello\n", output, output2)
}
}
func TestCommitRun(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
builder := NewBuilder(runtime)
container1, err := builder.Create(
container1, err := runtime.Create(
&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/sh", "-c", "echo hello > /world"},
Image: GetTestImage(runtime).Id,
Cmd: []string{"/bin/sh", "-c", "echo hello > /world"},
Memory: 33554432,
},
)
if err != nil {
@@ -351,17 +258,18 @@ func TestCommitRun(t *testing.T) {
if err != nil {
t.Error(err)
}
img, err := runtime.graph.Create(rwTar, container1, "unit test committed image", "", nil)
img, err := runtime.graph.Create(rwTar, container1, "unit test committed image", "")
if err != nil {
t.Error(err)
}
// FIXME: Make a TestCommit that stops here and check docker.root/layers/img.id/world
container2, err := builder.Create(
container2, err := runtime.Create(
&Config{
Image: img.ID,
Cmd: []string{"cat", "/world"},
Image: img.Id,
Memory: 33554432,
Cmd: []string{"cat", "/world"},
},
)
if err != nil {
@@ -376,8 +284,7 @@ func TestCommitRun(t *testing.T) {
if err != nil {
t.Fatal(err)
}
hostConfig := &HostConfig{}
if err := container2.Start(hostConfig); err != nil {
if err := container2.Start(); err != nil {
t.Fatal(err)
}
container2.Wait()
@@ -401,13 +308,15 @@ func TestCommitRun(t *testing.T) {
}
func TestStart(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
container, err := runtime.Create(
&Config{
Image: GetTestImage(runtime).ID,
Image: GetTestImage(runtime).Id,
Memory: 33554432,
CpuShares: 1000,
Cmd: []string{"/bin/cat"},
OpenStdin: true,
},
@@ -417,13 +326,7 @@ func TestStart(t *testing.T) {
}
defer runtime.Destroy(container)
cStdin, err := container.StdinPipe()
if err != nil {
t.Fatal(err)
}
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
if err := container.Start(); err != nil {
t.Fatal(err)
}
@@ -433,22 +336,27 @@ func TestStart(t *testing.T) {
if !container.State.Running {
t.Errorf("Container should be running")
}
if err := container.Start(hostConfig); err == nil {
if err := container.Start(); err == nil {
t.Fatalf("A running containter should be able to be started")
}
// Try to avoid the timeout in destroy. Best effort, don't check error
cStdin, _ := container.StdinPipe()
cStdin.Close()
container.WaitTimeout(2 * time.Second)
}
func TestRun(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
container, err := runtime.Create(
&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"ls", "-al"},
Image: GetTestImage(runtime).Id,
Memory: 33554432,
Cmd: []string{"ls", "-al"},
},
)
if err != nil {
@@ -468,11 +376,14 @@ func TestRun(t *testing.T) {
}
func TestOutput(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
container, err := runtime.Create(
&Config{
Image: GetTestImage(runtime).ID,
Image: GetTestImage(runtime).Id,
Cmd: []string{"echo", "-n", "foobar"},
},
)
@@ -490,10 +401,13 @@ func TestOutput(t *testing.T) {
}
func TestKillDifferentUser(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
container, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"tail", "-f", "/etc/resolv.conf"},
User: "daemon",
},
@@ -506,20 +420,17 @@ func TestKillDifferentUser(t *testing.T) {
if container.State.Running {
t.Errorf("Container shouldn't be running")
}
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
if err := container.Start(); err != nil {
t.Fatal(err)
}
setTimeout(t, "Waiting for the container to be started timed out", 2*time.Second, func() {
for !container.State.Running {
time.Sleep(10 * time.Millisecond)
}
})
// Even if the state is running, let's give lxc some time to spawn the process
// Give some time to lxc to spawn the process (setuid might take some time)
container.WaitTimeout(500 * time.Millisecond)
if !container.State.Running {
t.Errorf("Container should be running")
}
if err := container.Kill(); err != nil {
t.Fatal(err)
}
@@ -537,33 +448,15 @@ func TestKillDifferentUser(t *testing.T) {
}
}
// Test that creating a container with a volume doesn't crash. Regression test for #995.
func TestCreateVolume(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
config, hc, _, err := ParseRun([]string{"-v", "/var/lib/data", GetTestImage(runtime).ID, "echo", "hello", "world"}, nil)
if err != nil {
t.Fatal(err)
}
c, err := NewBuilder(runtime).Create(config)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(c)
if err := c.Start(hc); err != nil {
t.Fatal(err)
}
c.WaitTimeout(500 * time.Millisecond)
c.Wait()
}
func TestKill(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"sleep", "2"},
container, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"cat", "/dev/zero"},
},
)
if err != nil {
@@ -574,8 +467,7 @@ func TestKill(t *testing.T) {
if container.State.Running {
t.Errorf("Container shouldn't be running")
}
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
if err := container.Start(); err != nil {
t.Fatal(err)
}
@@ -602,13 +494,14 @@ func TestKill(t *testing.T) {
}
func TestExitCode(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
builder := NewBuilder(runtime)
trueContainer, err := builder.Create(&Config{
Image: GetTestImage(runtime).ID,
trueContainer, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"/bin/true", ""},
})
if err != nil {
@@ -622,8 +515,8 @@ func TestExitCode(t *testing.T) {
t.Errorf("Unexpected exit code %d (expected 0)", trueContainer.State.ExitCode)
}
falseContainer, err := builder.Create(&Config{
Image: GetTestImage(runtime).ID,
falseContainer, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"/bin/false", ""},
})
if err != nil {
@@ -639,10 +532,13 @@ func TestExitCode(t *testing.T) {
}
func TestRestart(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
container, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"echo", "-n", "foobar"},
},
)
@@ -669,10 +565,13 @@ func TestRestart(t *testing.T) {
}
func TestRestartStdin(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
container, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"cat"},
OpenStdin: true,
@@ -691,8 +590,7 @@ func TestRestartStdin(t *testing.T) {
if err != nil {
t.Fatal(err)
}
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
if err := container.Start(); err != nil {
t.Fatal(err)
}
if _, err := io.WriteString(stdin, "hello world"); err != nil {
@@ -722,7 +620,7 @@ func TestRestartStdin(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if err := container.Start(hostConfig); err != nil {
if err := container.Start(); err != nil {
t.Fatal(err)
}
if _, err := io.WriteString(stdin, "hello world #2"); err != nil {
@@ -745,14 +643,15 @@ func TestRestartStdin(t *testing.T) {
}
func TestUser(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
builder := NewBuilder(runtime)
// Default user must be root
container, err := builder.Create(&Config{
Image: GetTestImage(runtime).ID,
container, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"id"},
},
)
@@ -769,8 +668,8 @@ func TestUser(t *testing.T) {
}
// Set a username
container, err = builder.Create(&Config{
Image: GetTestImage(runtime).ID,
container, err = runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"id"},
User: "root",
@@ -789,8 +688,8 @@ func TestUser(t *testing.T) {
}
// Set a UID
container, err = builder.Create(&Config{
Image: GetTestImage(runtime).ID,
container, err = runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"id"},
User: "0",
@@ -809,8 +708,8 @@ func TestUser(t *testing.T) {
}
// Set a different user by uid
container, err = builder.Create(&Config{
Image: GetTestImage(runtime).ID,
container, err = runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"id"},
User: "1",
@@ -831,8 +730,8 @@ func TestUser(t *testing.T) {
}
// Set a different user by username
container, err = builder.Create(&Config{
Image: GetTestImage(runtime).ID,
container, err = runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"id"},
User: "daemon",
@@ -849,34 +748,18 @@ func TestUser(t *testing.T) {
if !strings.Contains(string(output), "uid=1(daemon) gid=1(daemon)") {
t.Error(string(output))
}
// Test a wrong username
container, err = builder.Create(&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"id"},
User: "unkownuser",
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container)
output, err = container.Output()
if container.State.ExitCode == 0 {
t.Fatal("Starting container with wrong uid should fail but it passed.")
}
}
func TestMultipleContainers(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
builder := NewBuilder(runtime)
container1, err := builder.Create(&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"sleep", "2"},
container1, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"cat", "/dev/zero"},
},
)
if err != nil {
@@ -884,9 +767,9 @@ func TestMultipleContainers(t *testing.T) {
}
defer runtime.Destroy(container1)
container2, err := builder.Create(&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"sleep", "2"},
container2, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"cat", "/dev/zero"},
},
)
if err != nil {
@@ -895,11 +778,10 @@ func TestMultipleContainers(t *testing.T) {
defer runtime.Destroy(container2)
// Start both containers
hostConfig := &HostConfig{}
if err := container1.Start(hostConfig); err != nil {
if err := container1.Start(); err != nil {
t.Fatal(err)
}
if err := container2.Start(hostConfig); err != nil {
if err := container2.Start(); err != nil {
t.Fatal(err)
}
@@ -926,10 +808,13 @@ func TestMultipleContainers(t *testing.T) {
}
func TestStdin(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
container, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"cat"},
OpenStdin: true,
@@ -948,8 +833,7 @@ func TestStdin(t *testing.T) {
if err != nil {
t.Fatal(err)
}
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
if err := container.Start(); err != nil {
t.Fatal(err)
}
defer stdin.Close()
@@ -971,10 +855,13 @@ func TestStdin(t *testing.T) {
}
func TestTty(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
container, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"cat"},
OpenStdin: true,
@@ -993,8 +880,7 @@ func TestTty(t *testing.T) {
if err != nil {
t.Fatal(err)
}
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
if err := container.Start(); err != nil {
t.Fatal(err)
}
defer stdin.Close()
@@ -1016,11 +902,14 @@ func TestTty(t *testing.T) {
}
func TestEnv(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"env"},
container, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"/usr/bin/env"},
},
)
if err != nil {
@@ -1033,8 +922,7 @@ func TestEnv(t *testing.T) {
t.Fatal(err)
}
defer stdout.Close()
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
if err := container.Start(); err != nil {
t.Fatal(err)
}
container.Wait()
@@ -1062,29 +950,6 @@ func TestEnv(t *testing.T) {
}
}
func TestEntrypoint(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
&Config{
Image: GetTestImage(runtime).ID,
Entrypoint: []string{"/bin/echo"},
Cmd: []string{"-n", "foobar"},
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container)
output, err := container.Output()
if err != nil {
t.Fatal(err)
}
if string(output) != "foobar" {
t.Error(string(output))
}
}
func grepFile(t *testing.T, path string, pattern string) {
f, err := os.Open(path)
if err != nil {
@@ -1106,24 +971,22 @@ func grepFile(t *testing.T, path string, pattern string) {
}
func TestLXCConfig(t *testing.T) {
runtime := mkRuntime(t)
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
// Memory is allocated randomly for testing
rand.Seed(time.Now().UTC().UnixNano())
memMin := 33554432
memMax := 536870912
mem := memMin + rand.Intn(memMax-memMin)
// CPU shares as well
cpuMin := 100
cpuMax := 10000
cpu := cpuMin + rand.Intn(cpuMax-cpuMin)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
container, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"/bin/true"},
Hostname: "foobar",
Memory: int64(mem),
CpuShares: int64(cpu),
Hostname: "foobar",
Memory: int64(mem),
},
)
if err != nil {
@@ -1139,11 +1002,14 @@ func TestLXCConfig(t *testing.T) {
}
func BenchmarkRunSequencial(b *testing.B) {
runtime := mkRuntime(b)
runtime, err := newTestRuntime()
if err != nil {
b.Fatal(err)
}
defer nuke(runtime)
for i := 0; i < b.N; i++ {
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
container, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"echo", "-n", "foo"},
},
)
@@ -1165,7 +1031,10 @@ func BenchmarkRunSequencial(b *testing.B) {
}
func BenchmarkRunParallel(b *testing.B) {
runtime := mkRuntime(b)
runtime, err := newTestRuntime()
if err != nil {
b.Fatal(err)
}
defer nuke(runtime)
var tasks []chan error
@@ -1174,8 +1043,8 @@ func BenchmarkRunParallel(b *testing.B) {
complete := make(chan error)
tasks = append(tasks, complete)
go func(i int, complete chan error) {
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
container, err := runtime.Create(&Config{
Image: GetTestImage(runtime).Id,
Cmd: []string{"echo", "-n", "foo"},
},
)
@@ -1184,8 +1053,7 @@ func BenchmarkRunParallel(b *testing.B) {
return
}
defer runtime.Destroy(container)
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
if err := container.Start(); err != nil {
complete <- err
return
}
@@ -1214,88 +1082,3 @@ func BenchmarkRunParallel(b *testing.B) {
b.Fatal(errors)
}
}
func tempDir(t *testing.T) string {
tmpDir, err := ioutil.TempDir("", "docker-test")
if err != nil {
t.Fatal(err)
}
return tmpDir
}
func TestBindMounts(t *testing.T) {
r := mkRuntime(t)
defer nuke(r)
tmpDir := tempDir(t)
defer os.RemoveAll(tmpDir)
writeFile(path.Join(tmpDir, "touch-me"), "", t)
// Test reading from a read-only bind mount
stdout, _ := runContainer(r, []string{"-v", fmt.Sprintf("%s:/tmp:ro", tmpDir), "_", "ls", "/tmp"}, t)
if !strings.Contains(stdout, "touch-me") {
t.Fatal("Container failed to read from bind mount")
}
// test writing to bind mount
runContainer(r, []string{"-v", fmt.Sprintf("%s:/tmp:rw", tmpDir), "_", "touch", "/tmp/holla"}, t)
readFile(path.Join(tmpDir, "holla"), t) // Will fail if the file doesn't exist
// test mounting to an illegal destination directory
if _, err := runContainer(r, []string{"-v", fmt.Sprintf("%s:.", tmpDir), "ls", "."}, nil); err == nil {
t.Fatal("Container bind mounted illegal directory")
}
}
// Test that VolumesRW values are copied to the new container. Regression test for #1201
func TestVolumesFromReadonlyMount(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/echo", "-n", "foobar"},
Volumes: map[string]struct{}{"/test": {}},
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container)
_, err = container.Output()
if err != nil {
t.Fatal(err)
}
if !container.VolumesRW["/test"] {
t.Fail()
}
container2, err := NewBuilder(runtime).Create(
&Config{
Image: GetTestImage(runtime).ID,
Cmd: []string{"/bin/echo", "-n", "foobar"},
VolumesFrom: container.ID,
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container2)
_, err = container2.Output()
if err != nil {
t.Fatal(err)
}
if container.Volumes["/test"] != container2.Volumes["/test"] {
t.Fail()
}
actual, exists := container2.VolumesRW["/test"]
if !exists {
t.Fail()
}
if container.VolumesRW["/test"] != actual {
t.Fail()
}
}

View File

@@ -1 +0,0 @@
# Maintainer wanted! Enroll on #docker@freenode

View File

@@ -1,23 +1,18 @@
package main
import (
"fmt"
"io"
"log"
"net"
"os"
"os/exec"
"path"
"time"
)
var DOCKERPATH = path.Join(os.Getenv("DOCKERPATH"), "docker")
const DOCKER_PATH = "/home/creack/dotcloud/docker/docker/docker"
// WARNING: this crashTest will 1) crash your host, 2) remove all containers
func runDaemon() (*exec.Cmd, error) {
os.Remove("/var/run/docker.pid")
exec.Command("rm", "-rf", "/var/lib/docker/containers").Run()
cmd := exec.Command(DOCKERPATH, "-d")
cmd := exec.Command(DOCKER_PATH, "-d")
outPipe, err := cmd.StdoutPipe()
if err != nil {
return nil, err
@@ -43,43 +38,19 @@ func crashTest() error {
return err
}
var endpoint string
if ep := os.Getenv("TEST_ENDPOINT"); ep == "" {
endpoint = "192.168.56.1:7979"
} else {
endpoint = ep
}
c := make(chan bool)
var conn io.Writer
go func() {
conn, _ = net.Dial("tcp", endpoint)
c <- false
}()
go func() {
time.Sleep(2 * time.Second)
c <- true
}()
<-c
restartCount := 0
totalTestCount := 1
for {
daemon, err := runDaemon()
if err != nil {
return err
}
restartCount++
// time.Sleep(5000 * time.Millisecond)
var stop bool
go func() error {
stop = false
for i := 0; i < 100 && !stop; {
for i := 0; i < 100 && !stop; i++ {
func() error {
cmd := exec.Command(DOCKERPATH, "run", "base", "echo", fmt.Sprintf("%d", totalTestCount))
i++
totalTestCount++
cmd := exec.Command(DOCKER_PATH, "run", "base", "echo", "hello", "world")
log.Printf("%d", i)
outPipe, err := cmd.StdoutPipe()
if err != nil {
return err
@@ -91,10 +62,9 @@ func crashTest() error {
if err := cmd.Start(); err != nil {
return err
}
if conn != nil {
go io.Copy(conn, outPipe)
}
go func() {
io.Copy(os.Stdout, outPipe)
}()
// Expecting error, do not check
inPipe.Write([]byte("hello world!!!!!\n"))
go inPipe.Write([]byte("hello world!!!!!\n"))
@@ -116,6 +86,7 @@ func crashTest() error {
return err
}
}
return nil
}
func main() {

View File

@@ -0,0 +1,68 @@
# docker-build: build your software with docker
## Description
docker-build is a script to build docker images from source. It will be deprecated once the 'build' feature is incorporated into docker itself (See https://github.com/dotcloud/docker/issues/278)
Author: Solomon Hykes <solomon@dotcloud.com>
## Install
docker-builder requires:
1) A reasonably recent Python setup (tested on 2.7.2).
2) A running docker daemon at version 0.1.4 or more recent (http://www.docker.io/gettingstarted)
## Usage
First create a valid Changefile, which defines a sequence of changes to apply to a base image.
$ cat Changefile
# Start build from a known base image
from base:ubuntu-12.10
# Update ubuntu sources
run echo 'deb http://archive.ubuntu.com/ubuntu quantal main universe multiverse' > /etc/apt/sources.list
run apt-get update
# Install system packages
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q git
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q golang
# Insert files from the host (./myscript must be present in the current directory)
copy myscript /usr/local/bin/myscript
Run docker-build, and pass the contents of your Changefile as standard input.
$ IMG=$(./docker-build < Changefile)
This will take a while: for each line of the changefile, docker-build will:
1. Create a new container to execute the given command or insert the given file
2. Wait for the container to complete execution
3. Commit the resulting changes as a new image
4. Use the resulting image as the input of the next step
If all the steps succeed, the result will be an image containing the combined results of each build step.
You can trace back those build steps by inspecting the image's history:
$ docker history $IMG
ID CREATED CREATED BY
1e9e2045de86 A few seconds ago /bin/sh -c cat > /usr/local/bin/myscript; chmod +x /usr/local/bin/myscript
77db140aa62a A few seconds ago /bin/sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -q golang
77db140aa62a A few seconds ago /bin/sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl
77db140aa62a A few seconds ago /bin/sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -q git
83e85d155451 A few seconds ago /bin/sh -c apt-get update
bfd53b36d9d3 A few seconds ago /bin/sh -c echo 'deb http://archive.ubuntu.com/ubuntu quantal main universe multiverse' > /etc/apt/sources.list
base 2 weeks ago /bin/bash
27cf78414709 2 weeks ago
Note that your build started from 'base', as instructed by your Changefile. But that base image itself seems to have been built in 2 steps - hence the extra step in the history.
You can use this build technique to create any image you want: a database, a web application, or anything else that can be built by a sequence of unix commands.
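For Go-minded readers, here is a minimal sketch of the same run-and-commit loop in the language of the main codebase, shelling out to the docker CLI just as the script below does. The runAndCommit helper and the overall structure are illustrative only, not part of docker itself:

package main

import (
	"bufio"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// runAndCommit is a hypothetical helper: run cmd in a container based on img,
// wait for a clean exit, then commit the result as a new image.
func runAndCommit(img, cmd string) (string, error) {
	out, err := exec.Command("docker", "run", "-d", img, "/bin/sh", "-c", cmd).Output()
	if err != nil {
		return "", err
	}
	runID := strings.TrimSpace(string(out))
	status, err := exec.Command("docker", "wait", runID).Output()
	if err != nil {
		return "", err
	}
	if s := strings.TrimSpace(string(status)); s != "0" {
		return "", fmt.Errorf("%q returned non-zero exit code %s", cmd, s)
	}
	committed, err := exec.Command("docker", "commit", runID).Output()
	return strings.TrimSpace(string(committed)), err
}

func main() {
	base := ""
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip comments and empty lines
		}
		parts := strings.SplitN(line, " ", 2)
		if len(parts) != 2 {
			continue
		}
		switch op, param := parts[0], parts[1]; op {
		case "from":
			base = param // start from the given base image
		case "run":
			img, err := runAndCommit(base, param)
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			base = img // each step builds on the previous result
		}
	}
	fmt.Println(base) // final image ID
}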

104
contrib/docker-build/docker-build Executable file
View File

@@ -0,0 +1,104 @@
#!/usr/bin/env python
# docker-build is a script to build docker images from source.
# It will be deprecated once the 'build' feature is incorporated into docker itself.
# (See https://github.com/dotcloud/docker/issues/278)
#
# Author: Solomon Hykes <solomon@dotcloud.com>
# First create a valid Changefile, which defines a sequence of changes to apply to a base image.
#
# $ cat Changefile
# # Start build from a known base image
# from base:ubuntu-12.10
# # Update ubuntu sources
# run echo 'deb http://archive.ubuntu.com/ubuntu quantal main universe multiverse' > /etc/apt/sources.list
# run apt-get update
# # Install system packages
# run DEBIAN_FRONTEND=noninteractive apt-get install -y -q git
# run DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl
# run DEBIAN_FRONTEND=noninteractive apt-get install -y -q golang
# # Insert files from the host (./myscript must be present in the current directory)
# copy myscript /usr/local/bin/myscript
#
#
# Run docker-build, and pass the contents of your Changefile as standard input.
#
# $ IMG=$(./docker-build < Changefile)
#
# This will take a while: for each line of the changefile, docker-build will:
#
# 1. Create a new container to execute the given command or insert the given file
# 2. Wait for the container to complete execution
# 3. Commit the resulting changes as a new image
# 4. Use the resulting image as the input of the next step
import sys
import subprocess
import json
import hashlib
def docker(args, stdin=None):
print "# docker " + " ".join(args)
p = subprocess.Popen(["docker"] + list(args), stdin=stdin, stdout=subprocess.PIPE)
return p.stdout
def image_exists(img):
return docker(["inspect", img]).read().strip() != ""
def run_and_commit(img_in, cmd, stdin=None):
run_id = docker(["run"] + (["-i", "-a", "stdin"] if stdin else ["-d"]) + [img_in, "/bin/sh", "-c", cmd], stdin=stdin).read().rstrip()
print "---> Waiting for " + run_id
result=int(docker(["wait", run_id]).read().rstrip())
if result != 0:
print "!!! '{}' return non-zero exit code '{}'. Aborting.".format(cmd, result)
sys.exit(1)
return docker(["commit", run_id]).read().rstrip()
def insert(base, src, dst):
print "COPY {} to {} in {}".format(src, dst, base)
if dst == "":
raise Exception("Missing destination path")
stdin = file(src)
stdin.seek(0)
return run_and_commit(base, "cat > {0}; chmod +x {0}".format(dst), stdin=stdin)
def main():
base=""
steps = []
try:
for line in sys.stdin.readlines():
line = line.strip()
# Skip comments and empty lines
if line == "" or line[0] == "#":
continue
op, param = line.split(" ", 1)
if op == "from":
print "FROM " + param
base = param
steps.append(base)
elif op == "run":
print "RUN " + param
result = run_and_commit(base, param)
steps.append(result)
base = result
print "===> " + base
elif op == "copy":
src, dst = param.split(" ", 1)
result = insert(base, src, dst)
steps.append(result)
base = result
print "===> " + base
else:
print "Skipping uknown op " + op
except:
docker(["rmi"] + steps[1:])
raise
print base
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,11 @@
# Start build from a known base image
from base:ubuntu-12.10
# Update ubuntu sources
run echo 'deb http://archive.ubuntu.com/ubuntu quantal main universe multiverse' > /etc/apt/sources.list
run apt-get update
# Install system packages
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q git
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q golang
# Insert files from the host (./myscript must be present in the current directory)
copy myscript /usr/local/bin/myscript

View File

@@ -8,7 +8,7 @@
echo "Ensuring basic dependencies are installed..."
apt-get -qq update
apt-get -qq install lxc wget
apt-get -qq install lxc wget bsdtar
echo "Looking in /proc/filesystems to see if we have AUFS support..."
if grep -q aufs /proc/filesystems
@@ -36,9 +36,9 @@ else
fi
echo "Downloading docker binary and uncompressing into /usr/local/bin..."
curl -s http://get.docker.io/builds/$(uname -s)/$(uname -m)/docker-latest.tgz |
curl -s http://get.docker.io/builds/$(uname -s)/$(uname -m)/docker-master.tgz |
tar -C /usr/local/bin --strip-components=1 -zxf- \
docker-latest/docker
docker-master/docker
if [ -f /etc/init/dockerd.conf ]
then

View File

@@ -1,55 +0,0 @@
#!/bin/bash
set -e
# these should match the names found at http://www.debian.org/releases/
stableSuite='wheezy'
testingSuite='jessie'
unstableSuite='sid'
variant='minbase'
include='iproute,iputils-ping'
repo="$1"
suite="${2:-$stableSuite}"
mirror="${3:-}" # stick to the default debootstrap mirror if one is not provided
if [ ! "$repo" ]; then
echo >&2 "usage: $0 repo [suite [mirror]]"
echo >&2 " ie: $0 tianon/debian squeeze"
exit 1
fi
target="/tmp/docker-rootfs-debian-$suite-$$-$RANDOM"
cd "$(dirname "$(readlink -f "$BASH_SOURCE")")"
returnTo="$(pwd -P)"
set -x
# bootstrap
mkdir -p "$target"
sudo debootstrap --verbose --variant="$variant" --include="$include" "$suite" "$target" "$mirror"
cd "$target"
# create the image
img=$(sudo tar -c . | docker import -)
# tag suite
docker tag $img $repo $suite
# test the image
docker run -i -t $repo:$suite echo success
if [ "$suite" = "$stableSuite" -o "$suite" = 'stable' ]; then
# tag latest
docker tag $img $repo latest
# tag the specific debian release version
ver=$(docker run $repo:$suite cat /etc/debian_version)
docker tag $img $repo $ver
fi
# cleanup
cd "$returnTo"
sudo rm -rf "$target"

View File

@@ -1,49 +0,0 @@
#!/bin/bash
# Generate a very minimal filesystem based on busybox-static,
# and load it into the local docker under the name "docker-ut".
missing_pkg() {
echo "Sorry, I could not locate $1"
echo "Try 'apt-get install ${2:-$1}'?"
exit 1
}
BUSYBOX=$(which busybox)
[ "$BUSYBOX" ] || missing_pkg busybox busybox-static
SOCAT=$(which socat)
[ "$SOCAT" ] || missing_pkg socat
shopt -s extglob
set -ex
ROOTFS=`mktemp -d /tmp/rootfs-busybox.XXXXXXXXXX`
trap "rm -rf $ROOTFS" INT QUIT TERM
cd $ROOTFS
mkdir bin etc dev dev/pts lib proc sys tmp
touch etc/resolv.conf
cp /etc/nsswitch.conf etc/nsswitch.conf
echo root:x:0:0:root:/:/bin/sh > etc/passwd
echo daemon:x:1:1:daemon:/usr/sbin:/bin/sh >> etc/passwd
echo root:x:0: > etc/group
echo daemon:x:1: >> etc/group
ln -s lib lib64
ln -s bin sbin
cp $BUSYBOX $SOCAT bin
for X in $(busybox --list)
do
ln -s busybox bin/$X
done
rm bin/init
ln bin/busybox bin/init
cp -P /lib/x86_64-linux-gnu/lib{pthread*,c*(-*),dl*(-*),nsl*(-*),nss_*,util*(-*),wrap,z}.so* lib
cp /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 lib
cp -P /usr/lib/x86_64-linux-gnu/lib{crypto,ssl}.so* lib
for X in console null ptmx random stdin stdout stderr tty urandom zero
do
cp -a /dev/$X dev
done
chmod 0755 $ROOTFS # See #486
tar -cf- . | docker import - docker-ut
docker run -i -u root docker-ut /bin/echo Success.
rm -rf $ROOTFS

View File

@@ -4,22 +4,23 @@ import (
"flag"
"fmt"
"github.com/dotcloud/docker"
"github.com/dotcloud/docker/utils"
"github.com/dotcloud/docker/rcli"
"github.com/dotcloud/docker/term"
"io"
"io/ioutil"
"log"
"os"
"os/signal"
"strconv"
"strings"
"syscall"
)
var (
GITCOMMIT string
GIT_COMMIT string
)
func main() {
if utils.SelfPath() == "/sbin/init" {
if docker.SelfPath() == "/sbin/init" {
// Running in init mode
docker.SysInit()
return
@@ -27,22 +28,9 @@ func main() {
// FIXME: Switch d and D ? (to be more sshd like)
flDaemon := flag.Bool("d", false, "Daemon mode")
flDebug := flag.Bool("D", false, "Debug mode")
flAutoRestart := flag.Bool("r", false, "Restart previously running containers")
bridgeName := flag.String("b", "", "Attach containers to a pre-existing network bridge")
pidfile := flag.String("p", "/var/run/docker.pid", "File containing process PID")
flGraphPath := flag.String("g", "/var/lib/docker", "Path to graph storage base dir.")
flEnableCors := flag.Bool("api-enable-cors", false, "Enable CORS requests in the remote api.")
flDns := flag.String("dns", "", "Set custom dns servers")
flHosts := docker.ListOpts{fmt.Sprintf("tcp://%s:%d", docker.DEFAULTHTTPHOST, docker.DEFAULTHTTPPORT)}
flag.Var(&flHosts, "H", "tcp://host:port to bind/connect to or unix://path/to/socket to use")
flag.Parse()
if len(flHosts) > 1 {
flHosts = flHosts[1:] //trick to display a nice defaul value in the usage
}
for i, flHost := range flHosts {
flHosts[i] = utils.ParseHost(docker.DEFAULTHTTPHOST, docker.DEFAULTHTTPPORT, flHost)
}
if *bridgeName != "" {
docker.NetworkBridgeIface = *bridgeName
} else {
@@ -51,25 +39,18 @@ func main() {
if *flDebug {
os.Setenv("DEBUG", "1")
}
docker.GITCOMMIT = GITCOMMIT
docker.GIT_COMMIT = GIT_COMMIT
if *flDaemon {
if flag.NArg() != 0 {
flag.Usage()
return
}
if err := daemon(*pidfile, *flGraphPath, flHosts, *flAutoRestart, *flEnableCors, *flDns); err != nil {
if err := daemon(*pidfile); err != nil {
log.Fatal(err)
os.Exit(-1)
}
} else {
if len(flHosts) > 1 {
log.Fatal("Please specify only one -H")
return
}
protoAddrParts := strings.SplitN(flHosts[0], "://", 2)
if err := docker.ParseCommands(protoAddrParts[0], protoAddrParts[1], flag.Args()...); err != nil {
if err := runCommand(flag.Args()); err != nil {
log.Fatal(err)
os.Exit(-1)
}
}
}
@@ -101,7 +82,7 @@ func removePidFile(pidfile string) {
}
}
func daemon(pidfile string, flGraphPath string, protoAddrs []string, autoRestart, enableCors bool, flDns string) error {
func daemon(pidfile string) error {
if err := createPidFile(pidfile); err != nil {
log.Fatal(err)
}
@@ -115,36 +96,51 @@ func daemon(pidfile string, flGraphPath string, protoAddrs []string, autoRestart
removePidFile(pidfile)
os.Exit(0)
}()
var dns []string
if flDns != "" {
dns = []string{flDns}
}
server, err := docker.NewServer(flGraphPath, autoRestart, enableCors, dns)
service, err := docker.NewServer()
if err != nil {
return err
}
chErrors := make(chan error, len(protoAddrs))
for _, protoAddr := range protoAddrs {
protoAddrParts := strings.SplitN(protoAddr, "://", 2)
if protoAddrParts[0] == "unix" {
syscall.Unlink(protoAddrParts[1])
} else if protoAddrParts[0] == "tcp" {
if !strings.HasPrefix(protoAddrParts[1], "127.0.0.1") {
log.Println("/!\\ DON'T BIND ON ANOTHER IP ADDRESS THAN 127.0.0.1 IF YOU DON'T KNOW WHAT YOU'RE DOING /!\\")
return rcli.ListenAndServe("tcp", "127.0.0.1:4242", service)
}
func runCommand(args []string) error {
// FIXME: we want to use unix sockets here, but net.UnixConn doesn't expose
// CloseWrite(), which we need to cleanly signal that stdin is closed without
// closing the connection.
// See http://code.google.com/p/go/issues/detail?id=3345
if conn, err := rcli.Call("tcp", "127.0.0.1:4242", args...); err == nil {
options := conn.GetOptions()
if options.RawTerminal &&
term.IsTerminal(int(os.Stdin.Fd())) &&
os.Getenv("NORAW") == "" {
if oldState, err := rcli.SetRawTerminal(); err != nil {
return err
} else {
defer rcli.RestoreTerminal(oldState)
}
} else {
log.Fatal("Invalid protocol format.")
os.Exit(-1)
}
go func() {
chErrors <- docker.ListenAndServe(protoAddrParts[0], protoAddrParts[1], server, true)
}()
}
for i := 0; i < len(protoAddrs); i += 1 {
err := <-chErrors
if err != nil {
receiveStdout := docker.Go(func() error {
_, err := io.Copy(os.Stdout, conn)
return err
})
sendStdin := docker.Go(func() error {
_, err := io.Copy(conn, os.Stdin)
if err := conn.CloseWrite(); err != nil {
log.Printf("Couldn't send EOF: " + err.Error())
}
return err
})
if err := <-receiveStdout; err != nil {
return err
}
if !term.IsTerminal(int(os.Stdin.Fd())) {
if err := <-sendStdin; err != nil {
return err
}
}
} else {
return fmt.Errorf("Can't connect to docker daemon. Is 'docker -d' running on this host?")
}
return nil
}

View File

@@ -1,2 +0,0 @@
Andy Rothfusz <andy@dotcloud.com>
Ken Cochrane <ken@dotcloud.com>

View File

@@ -6,7 +6,6 @@ SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
PYTHON = python
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
@@ -39,37 +38,38 @@ help:
# @echo " linkcheck to check all external links for integrity"
# @echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " docs to build the docs and copy the static files to the outputdir"
@echo " server to serve the docs in your browser under \`http://localhost:8000\`"
@echo " publish to publish the app to dotcloud"
clean:
-rm -rf $(BUILDDIR)/*
docs:
-rm -rf $(BUILDDIR)/*
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/html
cp sources/index.html $(BUILDDIR)/html/
cp -r sources/gettingstarted $(BUILDDIR)/html/
cp sources/dotcloud.yml $(BUILDDIR)/html/
cp sources/CNAME $(BUILDDIR)/html/
cp sources/.nojekyll $(BUILDDIR)/html/
cp sources/nginx.conf $(BUILDDIR)/html/
@echo
@echo "Build finished. The documentation pages are now in $(BUILDDIR)/html."
server: docs
@cd $(BUILDDIR)/html; $(PYTHON) -m SimpleHTTPServer 8000
site:
cp -r website $(BUILDDIR)/
cp -r theme/docker/static/ $(BUILDDIR)/website/
@echo
@echo "The Website pages are in $(BUILDDIR)/site."
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
connect:
@echo connecting dotcloud to www.docker.io website, make sure to use user 1
@echo or create your own "dockerwebsite" app
@cd $(BUILDDIR)/website/ ; \
dotcloud connect dockerwebsite ; \
dotcloud list
@echo pushing changes to staging site
@cd _build/html/ ; \
@dotcloud list ; \
dotcloud connect dockerwebsite
push:
@cd $(BUILDDIR)/website/ ; \
@cd _build/html/ ; \
dotcloud push
github-deploy: docs
rm -fr github-deploy
git clone ssh://git@github.com/dotcloud/docker github-deploy
cd github-deploy && git checkout -f gh-pages && git rm -r * && rsync -avH ../_build/html/ ./ && touch .nojekyll && echo "docker.io" > CNAME && git add * && git commit -m "Updating docs"
$(VERSIONS):
@echo "Hello world"

View File

@@ -14,22 +14,19 @@ Installation
------------
* Work in your own fork of the code, we accept pull requests.
* Install sphinx: `pip install sphinx`
* Mac OS X: `[sudo] pip-2.7 install sphinx`
* Install sphinx httpdomain contrib package: `pip install sphinxcontrib-httpdomain`
* Mac OS X: `[sudo] pip-2.7 install sphinxcontrib-httpdomain`
* Install sphinx: ``pip install sphinx``
* If pip is not available you can probably install it using your favorite package manager as **python-pip**
Usage
-----
* Change the `.rst` files with your favorite editor to your liking.
* Run `make docs` to clean up old files and generate new ones.
* Your static website can now be found in the `_build` directory.
* To preview what you have generated run `make server` and open <http://localhost:8000/> in your favorite browser.
* change the .rst files with your favorite editor to your liking
* run *make docs* to clean up old files and generate new ones
* your static website can now be found in the _build dir
* to preview what you have generated, cd into _build/html and then run 'python -m SimpleHTTPServer 8000'
Working using GitHub's file editor
Working using github's file editor
----------------------------------
Alternatively, for small changes and typos you might want to use GitHub's built-in file editor. It allows
Alternatively, for small changes and typos you might want to use github's built-in file editor. It allows
you to preview your changes right online. Just be careful not to create many commits.
Images
@@ -74,4 +71,4 @@ Guides on using sphinx
* Code examples
Start without $, so it's easy to copy and paste.
Start without $, so it's easy to copy and paste.

View File

@@ -1,2 +0,0 @@
Sphinx==1.1.3
sphinxcontrib-httpdomain==1.1.8

0
docs/sources/.nojekyll Normal file
View File

1
docs/sources/CNAME Normal file
View File

@@ -0,0 +1 @@
docker.io

View File

@@ -1 +0,0 @@
Solomon Hykes <solomon@dotcloud.com>

View File

@@ -1,5 +0,0 @@
This directory holds the authoritative specifications of APIs defined and implemented by Docker. Currently this includes:
* The remote API by which a docker node can be queried over HTTP
* The registry API by which a docker node can download and upload container images for storage and sharing
* The index search API by which a docker node can search the public index for images to download

View File

@@ -1,140 +0,0 @@
:title: Remote API
:description: API Documentation for Docker
:keywords: API, Docker, rcli, REST, documentation
=================
Docker Remote API
=================
.. contents:: Table of Contents
1. Brief introduction
=====================
- The Remote API is replacing rcli
- Default port in the docker daemon is 4243
- The API tends to be REST, but for some complex commands, like attach or pull, the HTTP connection is hijacked to transport stdout, stdin and stderr
- Since API version 1.2, the auth configuration is handled client-side, so the client has to send the authConfig as POST in /images/(name)/push
2. Versions
===========
The current version of the API is 1.3
Calling /images/<name>/insert is the same as calling /v1.3/images/<name>/insert
You can still call an old version of the API using /v1.0/images/<name>/insert
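As a quick illustration (not part of the original document), a Go client pins a version simply by prefixing the path; here against the documented /containers/json endpoint on the default port:

.. sourcecode:: go

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// The same endpoint, unversioned (current API) and pinned to v1.3.
	for _, url := range []string{
		"http://127.0.0.1:4243/containers/json",
		"http://127.0.0.1:4243/v1.3/containers/json",
	} {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println(err)
			continue
		}
		body, _ := ioutil.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %s\n", url, body)
	}
}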
:doc:`docker_remote_api_v1.3`
*****************************
What's new
----------
Listing processes (/top):
- List the processes inside a container
Builder (/build):
- Simplify the upload of the build context
- Simply stream a tarball instead of multipart upload with 4 intermediary buffers
- Simpler, less memory usage, less disk usage and faster
.. Note::
The /build improvements are not backward-compatible. Pre-1.3 clients will break on /build.
List containers (/containers/json):
- You can use size=1 to get the size of the containers
Start containers (/containers/<id>/start):
- You can now pass host-specific configuration (e.g. bind mounts) in the POST body for start calls
:doc:`docker_remote_api_v1.2`
*****************************
docker v0.4.2 2e7649b_
What's new
----------
The auth configuration is now handled by the client.
The client should send its authConfig as POST on each call of /images/(name)/push
.. http:get:: /auth is now deprecated
.. http:post:: /auth only checks the configuration but doesn't store it on the server
Deleting an image is now improved: it will only untag the image if it has children, and remove all the untagged parents if it has any.
.. http:post:: /images/<name>/delete now returns a JSON with the list of images deleted/untagged
:doc:`docker_remote_api_v1.1`
*****************************
docker v0.4.0 a8ae398_
What's new
----------
.. http:post:: /images/create
.. http:post:: /images/(name)/insert
.. http:post:: /images/(name)/push
Uses a JSON stream instead of an HTTP hijack; it looks like this:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
{"status":"Pushing..."}
{"status":"Pushing", "progress":"1/? (n/a)"}
{"error":"Invalid..."}
...
:doc:`docker_remote_api_v1.0`
*****************************
docker v0.3.4 8d73740_
What's new
----------
Initial version
.. _a8ae398: https://github.com/dotcloud/docker/commit/a8ae398bf52e97148ee7bd0d5868de2e15bd297f
.. _8d73740: https://github.com/dotcloud/docker/commit/8d73740343778651c09160cde9661f5f387b36f4
.. _2e7649b: https://github.com/dotcloud/docker/commit/2e7649beda7c820793bd46766cbc2cfeace7b168
==================================
Docker Remote API Client Libraries
==================================
These libraries have not been tested by the Docker Maintainers for
compatibility. Please file issues with the library owners. If you
find more library implementations, please list them in Docker doc bugs
and we will add the libraries here.
+----------------------+----------------+--------------------------------------------+
| Language/Framework | Name | Repository |
+======================+================+============================================+
| Python | docker-py | https://github.com/dotcloud/docker-py |
+----------------------+----------------+--------------------------------------------+
| Ruby | docker-ruby | https://github.com/ActiveState/docker-ruby |
+----------------------+----------------+--------------------------------------------+
| Ruby | docker-client | https://github.com/geku/docker-client |
+----------------------+----------------+--------------------------------------------+
| Ruby | docker-api | https://github.com/swipely/docker-api |
+----------------------+----------------+--------------------------------------------+
| Javascript | docker-js | https://github.com/dgoujard/docker-js |
+----------------------+----------------+--------------------------------------------+
| Javascript (Angular) | dockerui | https://github.com/crosbymichael/dockerui |
| **WebUI** | | |
+----------------------+----------------+--------------------------------------------+
| Java | docker-java | https://github.com/kpelykh/docker-java |
+----------------------+----------------+--------------------------------------------+

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,18 +0,0 @@
:title: API Documentation
:description: docker documentation
:keywords: docker, api, documentation
APIs
====
Your programs and scripts can access Docker's functionality via these interfaces:
.. toctree::
:maxdepth: 3
registry_index_spec
registry_api
index_api
docker_remote_api

View File

@@ -1,553 +0,0 @@
:title: Index API
:description: API Documentation for Docker Index
:keywords: API, Docker, index, REST, documentation
=================
Docker Index API
=================
.. contents:: Table of Contents
1. Brief introduction
=====================
- This is the REST API for the Docker index
- Authorization is done with basic auth over SSL
- Not all commands require authentication, only those noted as such.
2. Endpoints
============
2.1 Repository
^^^^^^^^^^^^^^
Repositories
*************
User Repo
~~~~~~~~~
.. http:put:: /v1/repositories/(namespace)/(repo_name)/
Create a user repository with the given ``namespace`` and ``repo_name``.
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foo/bar/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
X-Docker-Token: true
[{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”}]
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
WWW-Authenticate: Token signature=123abc,repository=”foo/bar”,access=write
X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
""
:statuscode 200: Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
.. http:delete:: /v1/repositories/(namespace)/(repo_name)/
Delete a user repository with the given ``namespace`` and ``repo_name``.
**Example Request**:
.. sourcecode:: http
DELETE /v1/repositories/foo/bar/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
X-Docker-Token: true
""
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 202
Vary: Accept
Content-Type: application/json
WWW-Authenticate: Token signature=123abc,repository=”foo/bar”,access=delete
X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
""
:statuscode 200: Deleted
:statuscode 202: Accepted
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
Library Repo
~~~~~~~~~~~~
.. http:put:: /v1/repositories/(repo_name)/
Create a library repository with the given ``repo_name``.
This is a restricted feature only available to docker admins.
When namespace is missing, it is assumed to be ``library``
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foobar/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
X-Docker-Token: true
[{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”}]
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
WWW-Authenticate: Token signature=123abc,repository=”library/foobar”,access=write
X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
""
:statuscode 200: Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
.. http:delete:: /v1/repositories/(repo_name)/
Delete a library repository with the given ``repo_name``.
This is a restricted feature only available to docker admins.
When namespace is missing, it is assumed to be ``library``
**Example Request**:
.. sourcecode:: http
DELETE /v1/repositories/foobar/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
X-Docker-Token: true
""
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 202
Vary: Accept
Content-Type: application/json
WWW-Authenticate: Token signature=123abc,repository=”library/foobar”,access=delete
X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
""
:statuscode 200: Deleted
:statuscode 202: Accepted
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
Repository Images
*****************
User Repo Images
~~~~~~~~~~~~~~~~
.. http:put:: /v1/repositories/(namespace)/(repo_name)/images
Update the images for a user repo.
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foo/bar/images HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
[{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”,
“checksum”: “b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”}]
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 204
Vary: Accept
Content-Type: application/json
""
:statuscode 204: Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active or permission denied
.. http:get:: /v1/repositories/(namespace)/(repo_name)/images
get the images for a user repo.
**Example Request**:
.. sourcecode:: http
GET /v1/repositories/foo/bar/images HTTP/1.1
Host: index.docker.io
Accept: application/json
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
[{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”,
“checksum”: “b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”},
{“id”: “ertwetewtwe38722009fe6857087b486531f9a779a0c1dfddgfgsdgdsgds”,
“checksum”: “34t23f23fc17e3ed29dae8f12c4f9e89cc6f0bsdfgfsdgdsgdsgerwgew”}]
:statuscode 200: OK
:statuscode 404: Not found
Library Repo Images
~~~~~~~~~~~~~~~~~~~
.. http:put:: /v1/repositories/(repo_name)/images
Update the images for a library repo.
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foobar/images HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
[{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”,
“checksum”: “b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”}]
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 204
Vary: Accept
Content-Type: application/json
""
:statuscode 204: Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active or permission denied
.. http:get:: /v1/repositories/(repo_name)/images
get the images for a library repo.
**Example Request**:
.. sourcecode:: http
GET /v1/repositories/foobar/images HTTP/1.1
Host: index.docker.io
Accept: application/json
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
[{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”,
“checksum”: “b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”},
{“id”: “ertwetewtwe38722009fe6857087b486531f9a779a0c1dfddgfgsdgdsgds”,
“checksum”: “34t23f23fc17e3ed29dae8f12c4f9e89cc6f0bsdfgfsdgdsgdsgerwgew”}]
:statuscode 200: OK
:statuscode 404: Not found
Repository Authorization
************************
Library Repo
~~~~~~~~~~~~
.. http:put:: /v1/repositories/(repo_name)/auth
authorize a token for a library repo
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foobar/auth HTTP/1.1
Host: index.docker.io
Accept: application/json
Authorization: Token signature=123abc,repository="library/foobar",access=write
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
"OK"
:statuscode 200: OK
:statuscode 403: Permission denied
:statuscode 404: Not found
User Repo
~~~~~~~~~
.. http:put:: /v1/repositories/(namespace)/(repo_name)/auth
authorize a token for a user repo
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foo/bar/auth HTTP/1.1
Host: index.docker.io
Accept: application/json
Authorization: Token signature=123abc,repository="foo/bar",access=write
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
"OK"
:statuscode 200: OK
:statuscode 403: Permission denied
:statuscode 404: Not found
2.2 Users
^^^^^^^^^
User Login
**********
.. http:get:: /v1/users
If you want to check your login, you can try this endpoint
**Example Request**:
.. sourcecode:: http
GET /v1/users HTTP/1.1
Host: index.docker.io
Accept: application/json
Authorization: Basic akmklmasadalkm==
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Vary: Accept
Content-Type: application/json
OK
:statuscode 200: no error
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
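A minimal sketch of exercising this endpoint from Go (the credentials are the sample values used elsewhere in this document):

.. sourcecode:: go

package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "https://index.docker.io/v1/users", nil)
	if err != nil {
		panic(err)
	}
	// Authorization is basic auth over SSL, as noted in the introduction.
	req.SetBasicAuth("foobar", "toto42")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // 200 on success, 401/403 otherwise
}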
User Register
*************
.. http:post:: /v1/users
Registering a new account.
**Example request**:
.. sourcecode:: http
POST /v1/users HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
{"email": "sam@dotcloud.com",
"password": "toto42",
"username": "foobar"'}
:jsonparameter email: valid email address that needs to be confirmed
:jsonparameter username: min 4 characters, max 30 characters, must match the regular expression [a-z0-9_].
:jsonparameter password: min 5 characters
**Example Response**:
.. sourcecode:: http
HTTP/1.1 201 OK
Vary: Accept
Content-Type: application/json
"User Created"
:statuscode 201: User Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
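The same registration as a self-contained Go sketch, using the sample values from the request above:

.. sourcecode:: go

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	payload := []byte(`{"email": "sam@dotcloud.com", "password": "toto42", "username": "foobar"}`)
	resp, err := http.Post("https://index.docker.io/v1/users",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // expect 201 "User Created"
}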
Update User
***********
.. http:put:: /v1/users/(username)/
Change a password or email address for a given user. If you pass in an email,
it will be added to your account; it will not remove the old one. Passwords will
be updated.
It is up to the client to verify that the password that is sent is the one the
user wants. A common approach is to have them type it twice.
**Example Request**:
.. sourcecode:: http
PUT /v1/users/fakeuser/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
{"email": "sam@dotcloud.com",
"password": "toto42"}
:parameter username: username for the person you want to update
**Example Response**:
.. sourcecode:: http
HTTP/1.1 204
Vary: Accept
Content-Type: application/json
""
:statuscode 204: User Updated
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
:statuscode 404: User not found
2.3 Search
^^^^^^^^^^
If you need to search the index, this is the endpoint you would use.
Search
******
.. http:get:: /v1/search
Search the Index given a search term. It accepts :http:method:`get` only.
**Example request**:
.. sourcecode:: http
GET /v1/search?q=search_term HTTP/1.1
Host: example.com
Accept: application/json
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Vary: Accept
Content-Type: application/json
{"query":"search_term",
"num_results": 2,
"results" : [
{"name": "dotcloud/base", "description": "A base ubuntu64 image..."},
{"name": "base2", "description": "A base ubuntu64 image..."},
]
}
:query q: what you want to search for
:statuscode 200: no error
:statuscode 500: server error
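For example, the documented response shape decodes directly into a Go struct (a sketch; the type names here are invented):

.. sourcecode:: go

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// searchResponse mirrors the JSON documented above.
type searchResponse struct {
	Query      string `json:"query"`
	NumResults int    `json:"num_results"`
	Results    []struct {
		Name        string `json:"name"`
		Description string `json:"description"`
	} `json:"results"`
}

func main() {
	resp, err := http.Get("https://index.docker.io/v1/search?q=busybox")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var sr searchResponse
	if err := json.NewDecoder(resp.Body).Decode(&sr); err != nil {
		panic(err)
	}
	fmt.Printf("%d results for %q\n", sr.NumResults, sr.Query)
	for _, r := range sr.Results {
		fmt.Println(r.Name, "-", r.Description)
	}
}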

View File

@@ -1,463 +0,0 @@
:title: Registry API
:description: API Documentation for Docker Registry
:keywords: API, Docker, index, registry, REST, documentation
===================
Docker Registry API
===================
.. contents:: Table of Contents
1. Brief introduction
=====================
- This is the REST API for the Docker Registry
- It stores the images and the graph for a set of repositories
- It does not have user accounts data
- It has no notion of user accounts or authorization
- It delegates authentication and authorization to the Index Auth service using tokens
- It supports different storage backends (S3, cloud files, local FS)
- It doesn't have a local database
- It will be open-sourced at some point
We expect that there will be multiple registries out there. To help grasp the context, here are some examples of registries:
- **sponsor registry**: such a registry is provided by a third-party hosting infrastructure as a convenience for their customers and the docker community as a whole. Its costs are supported by the third party, but the management and operation of the registry are supported by dotCloud. It features read/write access, and delegates authentication and authorization to the Index.
- **mirror registry**: such a registry is provided by a third-party hosting infrastructure but is targeted at their customers only. Some mechanism (unspecified to date) ensures that public images are pulled from a sponsor registry to the mirror registry, to make sure that the customers of the third-party provider can “docker pull” those images locally.
- **vendor registry**: such a registry is provided by a software vendor, who wants to distribute docker images. It would be operated and managed by the vendor. Only users authorized by the vendor would be able to get write access. Some images would be public (accessible for anyone), others private (accessible only for authorized users). Authentication and authorization would be delegated to the Index. The goal of vendor registries is to let someone do “docker pull basho/riak1.3” and automatically push from the vendor registry (instead of a sponsor registry); i.e. get all the convenience of a sponsor registry, while retaining control on the asset distribution.
- **private registry**: such a registry is located behind a firewall, or protected by an additional security layer (HTTP authorization, SSL client-side certificates, IP address authorization...). The registry is operated by a private entity, outside of dotCloud's control. It can optionally delegate additional authorization to the Index, but it is not mandatory.
.. note::
Mirror registries and private registries which do not use the Index don't even need to run the registry code. They can be implemented by any kind of transport implementing HTTP GET and PUT. Read-only registries can be powered by a simple static HTTP server.
.. note::
The latter implies that while HTTP is the protocol of choice for a registry, multiple schemes are possible (and in some cases, trivial):
- HTTP with GET (and PUT for read-write registries);
- local mount point;
- remote docker addressed through SSH.
The latter would only require two new commands in docker, e.g. “registryget” and “registryput”, wrapping access to the local filesystem (and optionally doing consistency checks). Authentication and authorization are then delegated to SSH (e.g. with public keys).
2. Endpoints
============
2.1 Images
----------
Layer
*****
.. http:get:: /v1/images/(image_id)/layer
get image layer for a given ``image_id``
**Example Request**:
.. sourcecode:: http
GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Token akmklmasadalkmsdfgsdgdge33
:parameter image_id: the id for the layer you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
{
id: "088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c",
parent: "aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f",
created: "2013-04-30T17:46:10.843673+03:00",
container: "8305672a76cc5e3d168f97221106ced35a76ec7ddbb03209b0f0d96bf74f6ef7",
container_config: {
Hostname: "host-test",
User: "",
Memory: 0,
MemorySwap: 0,
AttachStdin: false,
AttachStdout: false,
AttachStderr: false,
PortSpecs: null,
Tty: false,
OpenStdin: false,
StdinOnce: false,
Env: null,
Cmd: [
"/bin/bash",
"-c",
"apt-get -q -yy -f install libevent-dev"
],
Dns: null,
Image: "imagename/blah",
Volumes: { },
VolumesFrom: ""
},
docker_version: "0.1.7"
}
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Image not found
.. http:put:: /v1/images/(image_id)/layer
put image layer for a given ``image_id``
**Example Request**:
.. sourcecode:: http
PUT /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Token akmklmasadalkmsdfgsdgdge33
{
id: "088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c",
parent: "aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f",
created: "2013-04-30T17:46:10.843673+03:00",
container: "8305672a76cc5e3d168f97221106ced35a76ec7ddbb03209b0f0d96bf74f6ef7",
container_config: {
Hostname: "host-test",
User: "",
Memory: 0,
MemorySwap: 0,
AttachStdin: false,
AttachStdout: false,
AttachStderr: false,
PortSpecs: null,
Tty: false,
OpenStdin: false,
StdinOnce: false,
Env: null,
Cmd: [
"/bin/bash",
"-c",
"apt-get -q -yy -f install libevent-dev"
],
Dns: null,
Image: "imagename/blah",
Volumes: { },
VolumesFrom: ""
},
docker_version: "0.1.7"
}
:parameter image_id: the id for the layer you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
""
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Image not found
Image
*****
.. http:put:: /v1/images/(image_id)/json
put image for a given ``image_id``
**Example Request**:
.. sourcecode:: http
PUT /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
{
“id”: “088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c”,
“checksum”: “sha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”
}
:parameter image_id: the id for the layer you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
""
:statuscode 200: OK
:statuscode 401: Requires authorization
.. http:get:: /v1/images/(image_id)/json
get image for a given ``image_id``
**Example Request**:
.. sourcecode:: http
GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
:parameter image_id: the id for the layer you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
{
“id”: “088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c”,
“checksum”: “sha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”
}
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Image not found
Ancestry
********
.. http:get:: /v1/images/(image_id)/ancestry
get ancestry for an image given an ``image_id``
**Example Request**:
.. sourcecode:: http
GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/ancestry HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
:parameter image_id: the id for the layer you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
["088b4502f51920fbd9b7c503e87c7a2c05aa3adc3d35e79c031fa126b403200f",
"aeee63968d87c7da4a5cf5d2be6bee4e21bc226fd62273d180a49c96c62e4543",
"bfa4c5326bc764280b0863b46a4b20d940bc1897ef9c1dfec060604bdc383280",
"6ab5893c6927c15a15665191f2c6cf751f5056d8b95ceee32e43c5e8a3648544"]
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Image not found
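A sketch of walking an image's ancestry from Go (the image ID and token are placeholders taken from the examples above):

.. sourcecode:: go

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	imageID := "088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c"
	token := "akmklmasadalkmsdfgsdgdge33" // placeholder token from the index
	req, err := http.NewRequest("GET",
		"https://registry-1.docker.io/v1/images/"+imageID+"/ancestry", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Token "+token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// The response is a JSON array of layer IDs.
	var ancestry []string
	if err := json.NewDecoder(resp.Body).Decode(&ancestry); err != nil {
		panic(err)
	}
	for _, id := range ancestry {
		fmt.Println(id)
	}
}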
2.2 Tags
--------
.. http:get:: /v1/repositories/(namespace)/(repository)/tags
get all of the tags for the given repo.
**Example Request**:
.. sourcecode:: http
GET /v1/repositories/foo/bar/tags HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
{
"latest": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
“0.1.1”: “b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”
}
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Repository not found
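As a brief sketch, the tag listing decodes naturally into a Go map (the registry cookie shown in the examples above is omitted here):

.. sourcecode:: go

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Get("https://registry-1.docker.io/v1/repositories/foo/bar/tags")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// The documented response is an object mapping tag names to image IDs.
	var tags map[string]string
	if err := json.NewDecoder(resp.Body).Decode(&tags); err != nil {
		panic(err)
	}
	for name, id := range tags {
		fmt.Println(name, "->", id)
	}
}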
.. http:get:: /v1/repositories/(namespace)/(repository)/tags/(tag)
get a tag for the given repo.
**Example Request**:
.. sourcecode:: http
GET /v1/repositories/foo/bar/tags/latest HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
:parameter tag: name of tag you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Tag not found
.. http:delete:: /v1/repositories/(namespace)/(repository)/tags/(tag)
delete the tag for the repo
**Example Request**:
.. sourcecode:: http
DELETE /v1/repositories/foo/bar/tags/latest HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
:parameter tag: name of tag you want to delete
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
""
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Tag not found
.. http:put:: /v1/repositories/(namespace)/(repository)/tags/(tag)
put a tag for the given repo.
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foo/bar/tags/latest HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
“9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
:parameter tag: name of tag you want to add
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
""
:statuscode 200: OK
:statuscode 400: Invalid data
:statuscode 401: Requires authorization
:statuscode 404: Image not found
2.3 Repositories
----------------
.. http:delete:: /v1/repositories/(namespace)/(repository)/
delete a repository
**Example Request**:
.. sourcecode:: http
DELETE /v1/repositories/foo/bar/ HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
""
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
""
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Repository not found
3.0 Authorization
=================
This is where we describe the authorization process, including the tokens and cookies.
TODO: add more info.

View File

@@ -1,569 +0,0 @@
:title: Registry Documentation
:description: Documentation for docker Registry and Registry API
:keywords: docker, registry, api, index
=====================
Registry & index Spec
=====================
.. contents:: Table of Contents
1. The 3 roles
===============
1.1 Index
---------
The Index is responsible for centralizing information about:
- User accounts
- Checksums of the images
- Public namespaces
The Index has different components:
- Web UI
- Meta-data store (comments, stars, list public repositories)
- Authentication service
- Tokenization
The index is authoritative for this information.
We expect that there will be only one instance of the index, run and managed by dotCloud.
1.2 Registry
------------
- It stores the images and the graph for a set of repositories
- It does not have user accounts data
- It has no notion of user accounts or authorization
- It delegates authentication and authorization to the Index Auth service using tokens
- It supports different storage backends (S3, cloud files, local FS)
- It doesn't have a local database
- It will be open-sourced at some point
We expect that there will be multiple registries out there. To help grasp the context, here are some examples of registries:
- **sponsor registry**: such a registry is provided by a third-party hosting infrastructure as a convenience for their customers and the docker community as a whole. Its costs are supported by the third party, but the management and operation of the registry are supported by dotCloud. It features read/write access, and delegates authentication and authorization to the Index.
- **mirror registry**: such a registry is provided by a third-party hosting infrastructure but is targeted at their customers only. Some mechanism (unspecified to date) ensures that public images are pulled from a sponsor registry to the mirror registry, to make sure that the customers of the third-party provider can “docker pull” those images locally.
- **vendor registry**: such a registry is provided by a software vendor, who wants to distribute docker images. It would be operated and managed by the vendor. Only users authorized by the vendor would be able to get write access. Some images would be public (accessible for anyone), others private (accessible only for authorized users). Authentication and authorization would be delegated to the Index. The goal of vendor registries is to let someone do “docker pull basho/riak1.3” and have the image pulled automatically from the vendor registry (instead of a sponsor registry); i.e. get all the convenience of a sponsor registry, while retaining control over the asset distribution.
- **private registry**: such a registry is located behind a firewall, or protected by an additional security layer (HTTP authorization, SSL client-side certificates, IP address authorization...). The registry is operated by a private entity, outside of dotCloud's control. It can optionally delegate additional authorization to the Index, but it is not mandatory.
.. note::
Mirror registries and private registries which do not use the Index don't even need to run the registry code. They can be implemented by any kind of transport implementing HTTP GET and PUT. Read-only registries can be powered by a simple static HTTP server.
.. note::
The latter implies that while HTTP is the protocol of choice for a registry, multiple schemes are possible (and in some cases, trivial):
- HTTP with GET (and PUT for read-write registries);
- local mount point;
- remote docker addressed through SSH.
The latter would only require two new commands in docker, e.g. “registryget” and “registryput”, wrapping access to the local filesystem (and optionally doing consistency checks). Authentication and authorization are then delegated to SSH (e.g. with public keys).
1.3 Docker
----------
On top of being a runtime for LXC, Docker is the Registry client. It supports:
- Push / Pull on the registry
- Client authentication on the Index
2. Workflow
===========
2.1 Pull
--------
.. image:: /static_files/docker_pull_chart.png
1. Contact the Index to know where I should download “samalba/busybox”
2. Index replies:
a. “samalba/busybox” is on Registry A
b. here are the checksums for “samalba/busybox” (for all layers)
c. token
3. Contact Registry A to receive the layers for “samalba/busybox” (all of them, down to the base image). Registry A is authoritative for “samalba/busybox” but keeps a copy of all inherited layers and serves them all from the same location.
4. The Registry contacts the Index to verify whether the token/user is allowed to download the images
5. The Index returns true/false, letting the Registry know whether it should proceed or error out
6. Get the payload for all layers
It's possible to run docker pull \https://<registry>/repositories/samalba/busybox. In this case, docker bypasses the Index. However, the security is not guaranteed (in case Registry A is corrupted) because there won't be any checksum checks.
Currently the Registry redirects to S3 URLs for downloads; going forward, all downloads need to be streamed through the Registry. The Registry will then abstract the calls to S3 behind a top-level class which implements sub-classes for S3 and local storage.
A token is only returned when the ``X-Docker-Token`` header is sent with the request.
Basic Auth is required to pull private repos. Basic Auth isn't required for pulling public repos, but if credentials are provided, they must be valid and belong to an active account.
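As an illustration, here is a minimal sketch of that first round-trip using curl; the Index host, the credentials and the repository name are placeholder values:

.. code-block:: bash

    # Ask the Index for the image list and a pull token (placeholder credentials)
    curl -i \
         -u "myuser:mypassword" \
         -H "X-Docker-Token: true" \
         https://index.docker.io/v1/repositories/foo/bar/images
    # The token comes back in the X-Docker-Token response header and the
    # registry endpoints in the X-Docker-Endpoints header.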
API (pulling repository foo/bar):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. (Docker -> Index) GET /v1/repositories/foo/bar/images
**Headers**:
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
X-Docker-Token: true
**Action**:
(Look up foo/bar in the db and get the images and checksums for that repo: all of them if no tag is specified; if a tag is given, only the checksums for that tag. See part 4.4.1.)
2. (Index -> Docker) HTTP 200 OK
**Headers**:
- Authorization: Token signature=123abc,repository="foo/bar",access=write
- X-Docker-Endpoints: registry.docker.io [, registry2.docker.io]
**Body**:
Jsonified checksums (see part 4.4.1)
3. (Docker -> Registry) GET /v1/repositories/foo/bar/tags/latest
**Headers**:
Authorization: Token signature=123abc,repository="foo/bar",access=write
4. (Registry -> Index) GET /v1/repositories/foo/bar/images
**Headers**:
Authorization: Token signature=123abc,repository="foo/bar",access=read
**Body**:
<ids and checksums in payload>
**Action**:
(Look up the token and see if they have access to pull.)
If good:
HTTP 200 OK
Index will invalidate the token
If bad:
HTTP 401 Unauthorized
5. (Docker -> Registry) GET /v1/images/928374982374/ancestry
**Action**:
(for each image id returned in the registry, fetch /json + /layer)
.. note::
If someone makes a second request, then we will always give a new token, never reuse tokens.
2.2 Push
--------
.. image:: /static_files/docker_push_chart.png
1. Contact the index to allocate the repository name “samalba/busybox” (authentication required with user credentials)
2. If authentication works and the namespace is available, “samalba/busybox” is allocated and a temporary token is returned (the namespace is marked as initialized in the Index)
3. Push the image on the registry (along with the token)
4. Registry A contacts the Index to verify the token (the token must correspond to the repository name)
5. Index validates the token. Registry A starts reading the stream pushed by docker and stores the repository (with its images)
6. Docker contacts the Index to give the checksums for the uploaded images
.. note::
**It's possible not to use the Index at all!** In this case, a standalone Registry is deployed to store and serve images. Those images are not authenticated and the security is not guaranteed.
.. note::
**The Index can be replaced!** For a privately deployed Registry, a custom Index can be used to issue and validate tokens according to different policies.
Docker computes the checksums and submits them to the Index at the end of the push. When a repository name does not have checksums on the Index, it means that the push is in progress (since checksums are submitted at the end).
API (pushing repos foo/bar):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. (Docker -> Index) PUT /v1/repositories/foo/bar/
**Headers**:
Authorization: Basic sdkjfskdjfhsdkjfh==
X-Docker-Token: true
**Action**::
- in the Index, we allocate a new repository, and set it to initialized
**Body**::
(The body contains the list of images that are going to be pushed, with empty checksums. The checksums will be set at the end of the push)::
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"}]
2. (Index -> Docker) HTTP 200 OK
**Headers**:
- WWW-Authenticate: Token signature=123abc,repository="foo/bar",access=write
- X-Docker-Endpoints: registry.docker.io [, registry2.docker.io]
3. (Docker -> Registry) PUT /v1/images/98765432_parent/json
**Headers**:
Authorization: Token signature=123abc,repository="foo/bar",access=write
4. (Registry->Index) GET /v1/repositories/foo/bar/images
**Headers**:
Authorization: Token signature=123abc,repository="foo/bar",access=write
**Action**::
- Index:
will invalidate the token.
- Registry:
grants a session (if the token is approved) and fetches the image ids
5. (Docker -> Registry) PUT /v1/images/98765432_parent/json
**Headers**::
- Authorization: Token signature=123abc,repository="foo/bar",access=write
- Cookie: (Cookie provided by the Registry)
6. (Docker -> Registry) PUT /v1/images/98765432/json
**Headers**:
Cookie: (Cookie provided by the Registry)
7. (Docker -> Registry) PUT /v1/images/98765432_parent/layer
**Headers**:
Cookie: (Cookie provided by the Registry)
8. (Docker -> Registry) PUT /v1/images/98765432/layer
**Headers**:
X-Docker-Checksum: sha256:436745873465fdjkhdfjkgh
9. (Docker -> Registry) PUT /v1/repositories/foo/bar/tags/latest
**Headers**:
Cookie: (Cookie provided by the Registry)
**Body**:
"98765432"
10. (Docker -> Index) PUT /v1/repositories/foo/bar/images
**Headers**:
Authorization: Basic 123oislifjsldfj==
X-Docker-Endpoints: registry1.docker.io (no validation on this right now)
**Body**:
(The image, ids, tags and checksums)
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
"checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}]
**Return** HTTP 204
.. note::
If a push fails and the client needs to start again, there will already be a record for the namespace/name in the Index, but it will be marked as initialized. Should we allow the retry, or report the name as already used? One edge case could be someone pushing the same thing at the same time from two different shells.
If it's a retry on the Registry, Docker has a cookie (provided by the registry after token validation), so the Index won't have to provide a new token.
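As an illustration, step 1 of this sequence can be sketched with curl (hosts, credentials and the image id are placeholders reusing the example above):

.. code-block:: bash

    # Allocate the repository on the Index and request a temporary push token
    curl -i -X PUT \
         -u "myuser:mypassword" \
         -H "X-Docker-Token: true" \
         -d '[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"}]' \
         https://index.docker.io/v1/repositories/foo/bar/
    # On success, WWW-Authenticate carries the temporary token and
    # X-Docker-Endpoints names the registry to push the layers to.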
2.3 Delete
----------
If you need to delete something from the index or registry, we need a nice clean way to do that. Here is the workflow.
1. Docker contacts the index to request a delete of a repository “samalba/busybox” (authentication required with user credentials)
2. If authentication works and the repository is valid, “samalba/busybox” is marked as deleted and a temporary token is returned
3. Send a delete request to the registry for the repository (along with the token)
4. Registry A contacts the Index to verify the token (the token must correspond to the repository name)
5. Index validates the token. Registry A deletes the repository and everything associated with it.
6. Docker contacts the Index to let it know the repository was removed from the registry; the Index then removes all records from the database.
.. note::
The Docker client should present an "Are you sure?" prompt to confirm the deletion before starting the process. Once it starts it can't be undone.
API (deleting repository foo/bar):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. (Docker -> Index) DELETE /v1/repositories/foo/bar/
**Headers**:
Authorization: Basic sdkjfskdjfhsdkjfh==
X-Docker-Token: true
**Action**::
- in the Index, we make sure it is a valid repository, and set it to deleted (logically)
**Body**::
Empty
2. (Index -> Docker) 202 Accepted
**Headers**:
- WWW-Authenticate: Token signature=123abc,repository="foo/bar",access=delete
- X-Docker-Endpoints: registry.docker.io [, registry2.docker.io] # list of endpoints where this repo lives.
3. (Docker -> Registry) DELETE /v1/repositories/foo/bar/
**Headers**:
Authorization: Token signature=123abc,repository="foo/bar",access=delete
4. (Registry->Index) PUT /v1/repositories/foo/bar/auth
**Headers**:
Authorization: Token signature=123abc,repository="foo/bar",access=delete
**Action**::
- Index:
will invalidate the token.
- Registry:
deletes the repository (if token is approved)
5. (Registry -> Docker) 200 OK
200 if success
403 if forbidden
400 if bad request
404 if repository isn't found
6. (Docker -> Index) DELETE /v1/repositories/foo/bar/
**Headers**:
Authorization: Basic 123oislifjsldfj==
X-Docker-Endpoints: registry-1.docker.io (no validation on this right now)
**Body**:
Empty
**Return** HTTP 200
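The same sequence, sketched with curl (hosts, credentials and token values are placeholders):

.. code-block:: bash

    # 1. Mark the repository as deleted on the Index and get a delete token
    curl -i -X DELETE \
         -u "myuser:mypassword" \
         -H "X-Docker-Token: true" \
         https://index.docker.io/v1/repositories/foo/bar/
    # 3. Ask the Registry to delete the repository, presenting the token
    curl -i -X DELETE \
         -H 'Authorization: Token signature=123abc,repository="foo/bar",access=delete' \
         https://registry-1.docker.io/v1/repositories/foo/bar/
    # 6. Tell the Index the registry copy is gone so it can purge its records
    curl -i -X DELETE \
         -u "myuser:mypassword" \
         https://index.docker.io/v1/repositories/foo/bar/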
3. How to use the Registry in standalone mode
=============================================
The Index has two main purposes (along with its fancy social features):
- Resolve short names (to avoid passing absolute URLs all the time)
- username/projectname -> \https://registry.docker.io/users/<username>/repositories/<projectname>/
- team/projectname -> \https://registry.docker.io/team/<team>/repositories/<projectname>/
- Authenticate a user as a repos owner (for a central referenced repository)
3.1 Without an Index
--------------------
Using the Registry without the Index can be useful to store the images on a private network without having to rely on an external entity controlled by dotCloud.
In this case, the registry will be launched in a special mode (--standalone? --no-index?). In this mode, the only thing that changes is that the Registry will never contact the Index to verify a token. It will be the Registry owner's responsibility to authenticate the user who pushes (or even pulls) an image, using any mechanism (HTTP auth, IP-based, etc...).
In this scenario, the Registry is responsible for the security in case of data corruption since the checksums are not delivered by a trusted entity.
As hinted previously, a standalone registry can also be implemented by any HTTP server handling GET/PUT requests (or even only GET requests if no write access is necessary).
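As a sketch of that read-only case: lay the images out under the paths the API expects and serve the tree with any static HTTP server (the layout below is an assumption derived from the endpoints in section 4.1):

.. code-block:: bash

    # Hypothetical directory layout for a read-only standalone registry:
    #   ./v1/images/<image_id>/json      image metadata
    #   ./v1/images/<image_id>/layer     layer tarball
    #   ./v1/images/<image_id>/ancestry  line-separated list of parent ids
    cd /srv/registry
    python -m SimpleHTTPServer 5000    # serves GETs only; no tokens, no Index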
3.2 With an Index
-----------------
The Index data needed by the Registry are simple:
- Serve the checksums
- Provide and authorize a Token
In the scenario of a Registry running on a private network with the need for centralized authorization, it's easy to use a custom Index.
The only challenge will be to tell Docker to contact (and trust) this custom Index. Docker will be configurable at some point to use a specific Index; it'll be the private entity's responsibility (basically the organization using Docker in a private environment) to maintain the Index and the Docker configuration among its consumers.
4. The API
==========
The first version of the api is available here: https://github.com/jpetazzo/docker/blob/acd51ecea8f5d3c02b00a08176171c59442df8b3/docs/images-repositories-push-pull.md
4.1 Images
----------
The format returned in the images is not defined here (for layer and json), basically because the Registry stores exactly the same kind of information as Docker uses to manage them.
The format of ancestry is a line-separated list of image ids, in age order, i.e. the image's parent is on the last line, the parent of the parent on the next-to-last line, etc.; if the image has no parent, the file is empty.
GET /v1/images/<image_id>/layer
PUT /v1/images/<image_id>/layer
GET /v1/images/<image_id>/json
PUT /v1/images/<image_id>/json
GET /v1/images/<image_id>/ancestry
PUT /v1/images/<image_id>/ancestry
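For example, a client could walk an image's ancestry like this (a sketch only; the registry host and image id are placeholders, and cookie/token handling is omitted):

.. code-block:: bash

    REG=https://registry-1.docker.io
    # The ancestry lists one image id per line, oldest parent last;
    # fetch the metadata and the layer for each id.
    curl -s $REG/v1/images/928374982374/ancestry |
    while read id; do
        curl -s "$REG/v1/images/$id/json"  > "$id.json"
        curl -s "$REG/v1/images/$id/layer" > "$id.layer"
    done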
4.2 Users
---------
4.2.1 Create a user (Index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
POST /v1/users
**Body**:
{"email": "sam@dotcloud.com", "password": "toto42", "username": "foobar"'}
**Validation**:
- **username**: min 4 characters, max 30 characters, must match the regular expression [a-z0-9_].
- **password**: min 5 characters
**Valid**: return HTTP 200
Errors: HTTP 400 (we should create error codes for possible errors)
- invalid json
- missing field
- wrong format (username, password, email, etc)
- forbidden name
- name already exists
.. note::
A user account will be valid only if the email has been validated (a validation link is sent to the email address).
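A sketch of the request with curl, reusing the placeholder values from the example body above:

.. code-block:: bash

    # Create an account on the Index; expect HTTP 200, or 400 with an error
    curl -i -X POST \
         -H "Content-Type: application/json" \
         -d '{"email": "sam@dotcloud.com", "password": "toto42", "username": "foobar"}' \
         https://index.docker.io/v1/users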
4.2.2 Update a user (Index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
PUT /v1/users/<username>
**Body**:
{"password": "toto"}
.. note::
The email address can also be updated; if it is, the user will need to re-verify the new email address.
4.2.3 Login (Index)
^^^^^^^^^^^^^^^^^^^
Does nothing but ask for user authentication. Can be used to validate credentials. HTTP Basic Auth for now; this may change in the future.
GET /v1/users
**Return**:
- Valid: HTTP 200
- Invalid login: HTTP 401
- Account inactive: HTTP 403 Account is not Active
4.3 Tags (Registry)
-------------------
The Registry does not know anything about users. Even though repositories are under usernames, it's just a namespace for the registry, allowing us to implement organizations or different namespaces per user later without modifying the Registry's API.
The following naming restrictions apply:
- Namespaces must match the same regular expression as usernames (See 4.2.1.)
- Repository names must match the regular expression [a-zA-Z0-9-_.]
4.3.1 Get all tags
^^^^^^^^^^^^^^^^^^
GET /v1/repositories/<namespace>/<repository_name>/tags
**Return**: HTTP 200
{
"latest": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
"0.1.1": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"
}
4.3.2 Read the content of a tag (resolve the image id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
GET /v1/repositories/<namespace>/<repo_name>/tags/<tag>
**Return**:
"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"
4.3.3 Delete a tag (registry)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
DELETE /v1/repositories/<namespace>/<repo_name>/tags/<tag>
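Sketched with curl (the registry host, repository and tag are placeholders, and the session cookie obtained after token validation is assumed to be stored in cookies.txt):

.. code-block:: bash

    REG=https://registry-1.docker.io
    # List all tags for foo/bar
    curl -s -b cookies.txt $REG/v1/repositories/foo/bar/tags
    # Resolve one tag to an image id
    curl -s -b cookies.txt $REG/v1/repositories/foo/bar/tags/latest
    # Delete a tag
    curl -s -b cookies.txt -X DELETE $REG/v1/repositories/foo/bar/tags/latest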
4.4 Images (Index)
------------------
For the Index to “resolve” the repository name to a Registry location, it uses the X-Docker-Endpoints header. In other words, these requests always add an ``X-Docker-Endpoints`` header to indicate the location of the registry which hosts this repository.
4.4.1 Get the images
^^^^^^^^^^^^^^^^^^^^^
GET /v1/repositories/<namespace>/<repo_name>/images
**Return**: HTTP 200
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f", "checksum": "md5:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}]
4.4.2 Add/update the images
^^^^^^^^^^^^^^^^^^^^^^^^^^^
You always add images, you never remove them.
PUT /v1/repositories/<namespace>/<repo_name>/images
**Body**:
[ {"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f", "checksum": "sha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"} ]
**Return** 204
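A sketch of the update with curl (credentials are placeholders; the ids and checksums are the example values above). The list is additive, per the rule that images are never removed:

.. code-block:: bash

    # Submit the image ids and checksums to the Index at the end of a push
    curl -i -X PUT \
         -u "myuser:mypassword" \
         -H "Content-Type: application/json" \
         -d '[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f", "checksum": "sha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}]' \
         https://index.docker.io/v1/repositories/foo/bar/images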
4.5 Repositories
----------------
4.5.1 Remove a Repository (Registry)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
DELETE /v1/repositories/<namespace>/<repo_name>
Return 200 OK
4.5.2 Remove a Repository (Index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This starts the delete process. see 2.3 for more details.
DELETE /v1/repositories/<namespace>/<repo_name>
Return 202 Accepted
5. Chaining Registries
======================
It's possible to chain Registry servers for several reasons:
- Load balancing
- Delegate the next request to another server
When a Registry is a reference for a repository, it should host the entire image chain in order to avoid breaking the chain during the download.
The Index and Registry use this mechanism to redirect to one or the other.
Example with an image download:
On every request, a special header can be returned:
X-Docker-Endpoints: server1,server2
On the next request, the client will always pick a server from this list.
6. Authentication & Authorization
=================================
6.1 On the Index
-----------------
The Index supports both “Basic” and “Token” challenges. Usually when there is a “401 Unauthorized”, the Index replies this::
401 Unauthorized
WWW-Authenticate: Basic realm="auth required",Token
You have 3 options:
1. Provide user credentials and ask for a token
**Header**:
- Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
- X-Docker-Token: true
In this case, along with the 200 response, you'll get a new token (if user auth is ok):
If the authorization isn't correct, you get a 401 response.
If the account isn't active, you will get a 403 response.
**Response**:
- 200 OK
- X-Docker-Token: Token signature=123abc,repository="foo/bar",access=read
2. Provide user credentials only
**Header**:
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
3. Provide Token
**Header**:
Authorization: Token signature=123abc,repository="foo/bar",access=read
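Option 3, for example, looks like this with curl (the token values are placeholders):

.. code-block:: bash

    # Replay a previously issued token against the Index
    curl -i \
         -H 'Authorization: Token signature=123abc,repository="foo/bar",access=read' \
         https://index.docker.io/v1/repositories/foo/bar/images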
6.2 On the Registry
-------------------
The Registry only supports the Token challenge::
401 Unauthorized
WWW-Authenticate: Token
The only way is to provide a token on “401 Unauthorized” responses::
Authorization: Token signature=123abc,repository="foo/bar",access=read
Usually, the Registry provides a Cookie when Token verification succeeds. Every time the Registry passes a Cookie, you have to pass that same Cookie back::
200 OK
Set-Cookie: session="wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4=&timestamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4="; Path=/; HttpOnly
Next request::
GET /(...)
Cookie: session="wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4=&timestamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4="
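With curl, that cookie round-trip can be sketched with a cookie jar (the host, token and paths are placeholders):

.. code-block:: bash

    REG=https://registry-1.docker.io
    # First request: present the token and store the session cookie in a jar
    curl -s -c cookies.txt \
         -H 'Authorization: Token signature=123abc,repository="foo/bar",access=read' \
         $REG/v1/repositories/foo/bar/tags/latest
    # Later requests: replay the stored cookie instead of the token
    curl -s -b cookies.txt $REG/v1/images/928374982374/ancestry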
7.0 Document Version
---------------------
- 1.0 : May 6th 2013 : initial release
- 1.1 : June 1st 2013 : Added Delete Repository and way to handle new source namespace.
View File
@@ -1,10 +1,10 @@
:title: Basic Commands
:title: Base commands
:description: Common usage and commands
:keywords: Examples, Usage, basic commands, docker, documentation, examples
:keywords: Examples, Usage
The Basics
==========
The basics
=============
Starting Docker
---------------
@@ -33,39 +33,6 @@ Running an interactive shell
# allocate a tty, attach stdin and stdout
docker run -i -t base /bin/bash
Bind Docker to another host/port or a unix socket
-------------------------------------------------
With -H it is possible to make the Docker daemon listen on a specific IP and port. By default, it will listen on 127.0.0.1:4243 to allow only local connections, but you can set it to 0.0.0.0:4243 or a specific host IP to give access to everybody.
Similarly, the Docker client can use -H to connect to a custom port.
-H accepts host and port assignment in the following format: tcp://[host][:port] or unix://path
For example:
* tcp://host -> tcp connection on host:4243
* tcp://host:port -> tcp connection on host:port
* tcp://:port -> tcp connection on 127.0.0.1:port
* unix://path/to/socket -> unix socket located at path/to/socket
.. code-block:: bash
# Run docker in daemon mode
sudo <path to>/docker -H 0.0.0.0:5555 &
# Download a base image
docker -H :5555 pull base
You can use multiple -H flags, for example, if you want to listen
on both TCP and a Unix socket
.. code-block:: bash
# Run docker in daemon mode
sudo <path to>/docker -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock
# Download a base image
docker pull base
# OR
docker -H unix:///var/run/docker.sock pull base
Starting a long-running worker process
--------------------------------------
@@ -102,16 +69,15 @@ Expose a service on a TCP port
# Connect to the public port via the host's public address
# Please note that because of how routing works connecting to localhost or 127.0.0.1 $PORT will not work.
# Replace *eth0* according to your local interface name.
IP=$(ip -o -4 addr list eth0 | perl -n -e 'if (m{inet\s([\d\.]+)\/\d+\s}xms) { print $1 }')
IP=$(ifconfig eth0 | perl -n -e 'if (m/inet addr:([\d\.]+)/g) { print $1 }')
echo hello world | nc $IP $PORT
# Verify that the network connection worked
echo "Daemon received: $(docker logs $JOB)"
Committing (saving) a container state
-------------------------------------
Committing (saving) an image
-----------------------------
Save your container's state to a container image, so the state can be re-used.
View File
@@ -4,7 +4,7 @@
.. _cli:
Overview
Command Line Interface
======================
Docker Usage
@@ -14,8 +14,7 @@ To list available commands, either run ``docker`` with no parameters or execute
``docker help``::
$ docker
Usage: docker [OPTIONS] COMMAND [arg...]
-H=[tcp://127.0.0.1:4243]: tcp://host:port to bind/connect to or unix://path/to/socket to use
Usage: docker COMMAND [arg...]
A self-sufficient runtime for linux containers.
@@ -25,10 +24,9 @@ Available Commands
~~~~~~~~~~~~~~~~~~
.. toctree::
:maxdepth: 2
:maxdepth: 1
command/attach
command/build
command/commit
command/diff
command/export
@@ -48,10 +46,8 @@ Available Commands
command/rm
command/rmi
command/run
command/search
command/start
command/stop
command/tag
command/top
command/version
command/wait
View File
@@ -1,7 +1,3 @@
:title: Attach Command
:description: Attach to a running container
:keywords: attach, container, docker, documentation
===========================================
``attach`` -- Attach to a running container
===========================================
View File
@@ -1,44 +0,0 @@
:title: Build Command
:description: Build a new image from the Dockerfile passed via stdin
:keywords: build, docker, container, documentation
================================================
``build`` -- Build a container from a Dockerfile
================================================
::
Usage: docker build [OPTIONS] PATH | URL | -
Build a new container image from the source code at PATH
-t="": Tag to be applied to the resulting image in case of success.
-q=false: Suppress verbose build output.
When a single Dockerfile is given as the URL, no context is set. When a git repository is set as the URL, the repository is used as the context.
Examples
--------
.. code-block:: bash
docker build .
| This will read the Dockerfile from the current directory. It will also send any other files and directories found in the current directory to the docker daemon.
| The contents of this directory would be used by ADD commands found within the Dockerfile.
| This will send a lot of data to the docker daemon if the current directory contains a lot of data.
| If the absolute path is provided instead of '.', only the files and directories required by the ADD commands from the Dockerfile will be added to the context and transferred to the docker daemon.
|
.. code-block:: bash
docker build - < Dockerfile
| This will read a Dockerfile from Stdin without context. Due to the lack of a context, no contents of any local directory will be sent to the docker daemon.
| ADD doesn't work when running in this mode due to the absence of the context, thus having no source files to copy to the container.
.. code-block:: bash
docker build github.com/creack/docker-firefox
| This will clone the github repository and use it as context. The Dockerfile at the root of the repository is used as Dockerfile.
| Note that you can specify an arbitrary git repository by using the 'git://' scheme.
View File
@@ -1,7 +1,3 @@
:title: Commit Command
:description: Create a new image from a container's changes
:keywords: commit, docker, container, documentation
===========================================================
``commit`` -- Create a new image from a container's changes
===========================================================
@@ -13,20 +9,3 @@
Create a new image from a container's changes
-m="": Commit message
-author="": Author (eg. "John Hannibal Smith <hannibal@a-team.com>"
-run="": Config automatically applied when the image is run. "+`(ex: {"Cmd": ["cat", "/world"], "PortSpecs": ["22"]}')
Full -run example::
{"Hostname": "",
"User": "",
"CpuShares": 0,
"Memory": 0,
"MemorySwap": 0,
"PortSpecs": ["22", "80", "443"],
"Tty": true,
"OpenStdin": true,
"StdinOnce": true,
"Env": ["FOO=BAR", "FOO2=BAR2"],
"Cmd": ["cat", "-e", "/etc/resolv.conf"],
"Dns": ["8.8.8.8", "8.8.4.4"]}
View File
@@ -1,7 +1,3 @@
:title: Diff Command
:description: Inspect changes on a container's filesystem
:keywords: diff, docker, container, documentation
=======================================================
``diff`` -- Inspect changes on a container's filesystem
=======================================================
View File
@@ -1,7 +1,3 @@
:title: Export Command
:description: Export the contents of a filesystem as a tar archive
:keywords: export, docker, container, documentation
=================================================================
``export`` -- Stream the contents of a container as a tar archive
=================================================================
View File
@@ -1,7 +1,3 @@
:title: History Command
:description: Show the history of an image
:keywords: history, docker, container, documentation
===========================================
``history`` -- Show the history of an image
===========================================
View File
@@ -1,7 +1,3 @@
:title: Images Command
:description: List images
:keywords: images, docker, container, documentation
=========================
``images`` -- List images
=========================
@@ -14,13 +10,3 @@
-a=false: show all images
-q=false: only show numeric IDs
-viz=false: output in graphviz format
Displaying images visually
--------------------------
::
docker images -viz | dot -Tpng -o docker.png
.. image:: images/docker_images.gif
Binary file not shown.
View File
@@ -1,40 +1,9 @@
:title: Import Command
:description: Create a new filesystem image from the contents of a tarball
:keywords: import, tarball, docker, url, documentation
==========================================================================
``import`` -- Create a new filesystem image from the contents of a tarball
==========================================================================
::
Usage: docker import URL|- [REPOSITORY [TAG]]
Usage: docker import [OPTIONS] URL|- [REPOSITORY [TAG]]
Create a new filesystem image from the contents of a tarball
At this time, the URL must start with ``http`` and point to a single file archive (.tar, .tar.gz, .bzip)
containing a root filesystem. If you would like to import from a local directory or archive,
you can use the ``-`` parameter to take the data from standard in.
Examples
--------
Import from a remote location
.............................
``$ docker import http://example.com/exampleimage.tgz exampleimagerepo``
Import from a local file
........................
Import to docker via pipe and standard in
``$ cat exampleimage.tgz | docker import - exampleimagelocal``
Import from a local directory
.............................
``$ sudo tar -c . | docker import - exampleimagedir``
Note the ``sudo`` in this example -- you must preserve the ownership of the files (especially root ownership)
during the archiving with tar. If you are not root (or sudo) when you tar, then the ownerships might not get preserved.
View File
@@ -1,7 +1,3 @@
:title: Info Command
:description: Display system-wide information.
:keywords: info, docker, information, documentation
===========================================
``info`` -- Display system-wide information
===========================================
View File
@@ -1,7 +1,3 @@
:title: Inspect Command
:description: Return low-level information on a container
:keywords: inspect, container, docker, documentation
==========================================================
``inspect`` -- Return low-level information on a container
==========================================================

View File
:title: Kill Command
:description: Kill a running container
:keywords: kill, container, docker, documentation
====================================
``kill`` -- Kill a running container
====================================
View File
@@ -1,17 +1,9 @@
:title: Login Command
:description: Register or Login to the docker registry server
:keywords: login, docker, documentation
============================================================
``login`` -- Register or Login to the docker registry server
============================================================
::
Usage: docker login [OPTIONS]
Usage: docker login
Register or Login to the docker registry server
-e="": email
-p="": password
-u="": username
View File
@@ -1,7 +1,3 @@
:title: Logs Command
:description: Fetch the logs of a container
:keywords: logs, container, docker, documentation
=========================================
``logs`` -- Fetch the logs of a container
=========================================
View File
@@ -1,7 +1,3 @@
:title: Port Command
:description: Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
:keywords: port, docker, container, documentation
=========================================================================
``port`` -- Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
=========================================================================
View File
@@ -1,7 +1,3 @@
:title: Ps Command
:description: List containers
:keywords: ps, docker, documentation, container
=========================
``ps`` -- List containers
=========================
View File
@@ -1,7 +1,3 @@
:title: Pull Command
:description: Pull an image or a repository from the registry
:keywords: pull, image, repo, repository, documentation, docker
=========================================================================
``pull`` -- Pull an image or a repository from the docker registry server
=========================================================================
View File
@@ -1,7 +1,3 @@
:title: Push Command
:description: Push an image or a repository to the registry
:keywords: push, docker, image, repository, documentation, repo
=======================================================================
``push`` -- Push an image or a repository to the docker registry server
=======================================================================
View File
@@ -1,7 +1,3 @@
:title: Restart Command
:description: Restart a running container
:keywords: restart, container, docker, documentation
==========================================
``restart`` -- Restart a running container
==========================================
View File
@@ -1,7 +1,3 @@
:title: Rm Command
:description: Remove a container
:keywords: remove, container, docker, documentation, rm
============================
``rm`` -- Remove a container
============================
@@ -10,4 +6,4 @@
Usage: docker rm [OPTIONS] CONTAINER
Remove one or more containers
Remove a container
View File
@@ -1,13 +1,9 @@
:title: Rmi Command
:description: Remove an image
:keywords: rmi, remove, image, docker, documentation
==========================
``rmi`` -- Remove an image
==========================
::
Usage: docker rmi IMAGE [IMAGE...]
Usage: docker rmimage [OPTIONS] IMAGE
Remove one or more images
Remove an image
View File
@@ -1,19 +1,14 @@
:title: Run Command
:description: Run a command in a new container
:keywords: run, container, docker, documentation
===========================================
``run`` -- Run a command in a new container
===========================================
::
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Usage: docker run [OPTIONS] IMAGE COMMAND [ARG...]
Run a command in a new container
-a=map[]: Attach to stdin, stdout or stderr.
-c=0: CPU shares (relative weight)
-d=false: Detached mode: leave the container running in the background
-e=[]: Set environment variables
-h="": Container host name
@@ -22,7 +17,3 @@
-p=[]: Map a network port to the container
-t=false: Allocate a pseudo-tty
-u="": Username or UID
-d=[]: Set custom dns servers for the container
-v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro]. If "host-dir" is missing, then docker creates a new volume.
-volumes-from="": Mount all volumes from the given container.
-entrypoint="": Overwrite the default entrypoint set by the image.
View File
@@ -1,14 +0,0 @@
:title: Search Command
:description: Searches for the TERM parameter on the Docker index and prints out a list of repositories that match.
:keywords: search, docker, image, documentation
===================================================================
``search`` -- Search for an image in the docker index
===================================================================
::
Usage: docker search TERM
Searches for the TERM parameter on the Docker index and prints out a list of repositories
that match.
View File
@@ -1,7 +1,3 @@
:title: Start Command
:description: Start a stopped container
:keywords: start, docker, container, documentation
======================================
``start`` -- Start a stopped container
======================================
View File
@@ -1,7 +1,3 @@
:title: Stop Command
:description: Stop a running container
:keywords: stop, container, docker, documentation
====================================
``stop`` -- Stop a running container
====================================
View File
@@ -1,7 +1,3 @@
:title: Tag Command
:description: Tag an image into a repository
:keywords: tag, docker, image, repository, documentation, repo
=========================================
``tag`` -- Tag an image into a repository
=========================================
View File
@@ -1,13 +0,0 @@
:title: Top Command
:description: Lookup the running processes of a container
:keywords: top, docker, container, documentation
=======================================================
``top`` -- Lookup the running processes of a container
=======================================================
::
Usage: docker top CONTAINER
Lookup the running processes of a container
View File
@@ -1,7 +1,3 @@
:title: Version Command
:description:
:keywords: version, docker, documentation
==================================================
``version`` -- Show the docker version information
==================================================
View File
@@ -1,7 +1,3 @@
:title: Wait Command
:description: Block until a container stops, then print its exit code.
:keywords: wait, docker, container, documentation
===================================================================
``wait`` -- Block until a container stops, then print its exit code
===================================================================
View File
@@ -1,6 +1,6 @@
:title: Commands
:title: docker documentation
:description: -- todo: change me
:keywords: todo, commands, command line, help, docker, documentation
:keywords: todo: change me
Commands
@@ -9,33 +9,8 @@ Commands
Contents:
.. toctree::
:maxdepth: 1
:maxdepth: 2
cli
attach <command/attach>
build <command/build>
commit <command/commit>
diff <command/diff>
export <command/export>
history <command/history>
images <command/images>
import <command/import>
info <command/info>
inspect <command/inspect>
kill <command/kill>
login <command/login>
logs <command/logs>
port <command/port>
ps <command/ps>
pull <command/pull>
push <command/push>
restart <command/restart>
rm <command/rm>
rmi <command/rmi>
run <command/run>
search <command/search>
start <command/start>
stop <command/stop>
tag <command/tag>
version <command/version>
wait <command/wait>
basics
workingwithrepository
cli
View File
@@ -0,0 +1,42 @@
.. _working_with_the_repository:
Working with the repository
============================
Connecting to the repository
----------------------------
You create a user on the central docker repository by running
.. code-block:: bash
docker login
If your username does not exist, it will prompt you to also enter a password and your e-mail address. It will then
automatically log you in.
Committing a container to a named image
---------------------------------------
In order to commit to the repository, you need to have committed your container to an image within your namespace.
.. code-block:: bash
# for example docker commit $CONTAINER_ID dhrp/kickassapp
docker commit <container_id> <your username>/<some_name>
Pushing a container to the repository
-----------------------------------------
In order to push an image to the repository, you need to have committed your container to a named image (see above).
Now you can push this image to the repository:
.. code-block:: bash
# for example docker push dhrp/kickassapp
docker push <image-name>
View File
@@ -0,0 +1,25 @@
:title: Building blocks
:description: An introduction to docker and standard containers?
:keywords: containers, lxc, concepts, explanation
Building blocks
===============
.. _images:
Images
------
An original container image. These are stored on disk and are comparable with what you normally expect from a stopped virtual machine image. Images are stored in (and retrieved from) a repository.
Images are stored on your local file system under /var/lib/docker/images
.. _containers:
Containers
----------
A container is a local version of an image. It can be running or stopped; the equivalent would be a virtual machine instance.
Containers are stored on your local file system under /var/lib/docker/containers
View File
@@ -0,0 +1,128 @@
:title: Introduction
:description: An introduction to docker and standard containers?
:keywords: containers, lxc, concepts, explanation
:note: This version of the introduction is temporary, just to make sure we don't break the links from the website when the documentation is updated
Introduction
============
Docker - The Linux container runtime
------------------------------------
Docker complements LXC with a high-level API which operates at the process level. It runs unix processes with strong guarantees of isolation and repeatability across servers.
Docker is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.
- **Heterogeneous payloads** Any combination of binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, you name it. No more juggling between domain-specific tools. Docker can deploy and run them all.
- **Any server** Docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments.
- **Isolation** docker isolates processes from each other and from the underlying host, using lightweight containers.
- **Repeatability** Because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run.
What is a Standard Container?
-----------------------------
Docker defines a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in
a format that is self-describing and portable, so that any compliant runtime can run it without extra dependencies, regardless of the underlying machine and the contents of the container.
The spec for Standard Containers is currently work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment.
A great analogy for this is the shipping container. Just like Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.
Standard operations
~~~~~~~~~~~~~~~~~~~
Just like shipping containers, Standard Containers define a set of STANDARD OPERATIONS. Shipping containers can be lifted, stacked, locked, loaded, unloaded and labelled. Similarly, standard containers can be started, stopped, copied, snapshotted, downloaded, uploaded and tagged.
Content-agnostic
~~~~~~~~~~~~~~~~~~~
Just like shipping containers, Standard Containers are CONTENT-AGNOSTIC: all standard operations have the same effect regardless of the contents. A shipping container will be stacked in exactly the same way whether it contains Vietnamese powder coffee or spare Maserati parts. Similarly, Standard Containers are started or uploaded in the same way whether they contain a postgres database, a php application with its dependencies and application server, or Java build artifacts.
Infrastructure-agnostic
~~~~~~~~~~~~~~~~~~~~~~~~~~
Both types of containers are INFRASTRUCTURE-AGNOSTIC: they can be transported to thousands of facilities around the world, and manipulated by a wide variety of equipment. A shipping container can be packed in a factory in Ukraine, transported by truck to the nearest routing center, stacked onto a train, loaded into a German boat by an Australian-built crane, stored in a warehouse at a US facility, etc. Similarly, a standard container can be bundled on my laptop, uploaded to S3, downloaded, run and snapshotted by a build server at Equinix in Virginia, uploaded to 10 staging servers in a home-made Openstack cluster, then sent to 30 production instances across 3 EC2 regions.
Designed for automation
~~~~~~~~~~~~~~~~~~~~~~~~~~
Because they offer the same standard operations regardless of content and infrastructure, Standard Containers, just like their physical counterpart, are extremely well-suited for automation. In fact, you could say automation is their secret weapon.
Many things that once required time-consuming and error-prone human effort can now be programmed. Before shipping containers, a bag of powder coffee was hauled, dragged, dropped, rolled and stacked by 10 different people in 10 different locations by the time it reached its destination. 1 out of 50 disappeared. 1 out of 20 was damaged. The process was slow, inefficient and cost a fortune - and was entirely different depending on the facility and the type of goods.
Similarly, before Standard Containers, by the time a software component ran in production, it had been individually built, configured, bundled, documented, patched, vendored, templated, tweaked and instrumented by 10 different people on 10 different computers. Builds failed, libraries conflicted, mirrors crashed, post-it notes were lost, logs were misplaced, cluster updates were half-broken. The process was slow, inefficient and cost a fortune - and was entirely different depending on the language and infrastructure provider.
Industrial-grade delivery
~~~~~~~~~~~~~~~~~~~~~~~~~~
There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded on the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.
With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality.
Standard Container Specification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(TODO)
Image format
~~~~~~~~~~~~
Standard operations
~~~~~~~~~~~~~~~~~~~
- Copy
- Run
- Stop
- Wait
- Commit
- Attach standard streams
- List filesystem changes
- ...
Execution environment
~~~~~~~~~~~~~~~~~~~~~
Root filesystem
^^^^^^^^^^^^^^^
Environment variables
^^^^^^^^^^^^^^^^^^^^^
Process arguments
^^^^^^^^^^^^^^^^^
Networking
^^^^^^^^^^
Process namespacing
^^^^^^^^^^^^^^^^^^^
Resource limits
^^^^^^^^^^^^^^^
Process monitoring
^^^^^^^^^^^^^^^^^^
Logging
^^^^^^^
Signals
^^^^^^^
Pseudo-terminal allocation
^^^^^^^^^^^^^^^^^^^^^^^^^^
Security
^^^^^^^^
Binary file not shown.
Binary file not shown.
Some files were not shown because too many files have changed in this diff.