Mirror of https://github.com/moby/moby.git (synced 2026-01-14 17:36:01 +00:00)
Compare commits
7 Commits
| Author | SHA1 | Date |
|---|---|---|
|  | 38b8373434 |  |
|  | 03b5f8a585 |  |
|  | bc260f0225 |  |
|  | 45dcd1125b |  |
|  | d2e063d9e1 |  |
|  | 567a484b66 |  |
|  | 5d4b886ad6 |  |
4 .mailmap

@@ -2,7 +2,7 @@
<charles.hooper@dotcloud.com> <chooper@plumata.com>
<daniel.mizyrycki@dotcloud.com> <daniel@dotcloud.com>
<daniel.mizyrycki@dotcloud.com> <mzdaniel@glidelink.net>
Guillaume J. Charmes <guillaume.charmes@dotcloud.com> <charmes.guillaume@gmail.com>
Guillaume J. Charmes <guillaume.charmes@dotcloud.com> creack <charmes.guillaume@gmail.com>
<guillaume.charmes@dotcloud.com> <guillaume@dotcloud.com>
<kencochrane@gmail.com> <KenCochrane@gmail.com>
<sridharr@activestate.com> <github@srid.name>
@@ -16,6 +16,4 @@ Tim Terhorst <mynamewastaken+git@gmail.com>
Andy Smith <github@anarkystic.com>
<kalessin@kalessin.fr> <louis@dotcloud.com>
<victor.vieux@dotcloud.com> <victor@dotcloud.com>
<victor.vieux@dotcloud.com> <dev@vvieux.com>
<dominik@honnef.co> <dominikh@fork-bomb.org>
Thatcher Peskens <thatcher@dotcloud.com>
15 AUTHORS

@@ -1,34 +1,24 @@
Al Tobey <al@ooyala.com>
Alexey Shamrin <shamrin@gmail.com>
Andrea Luzzardi <aluzzardi@gmail.com>
Andy Rothfusz <github@metaliveblog.com>
Andy Smith <github@anarkystic.com>
Antony Messerli <amesserl@rackspace.com>
Barry Allard <barry.allard@gmail.com>
Brandon Liu <bdon@bdon.org>
Brian McCallister <brianm@skife.org>
Bruno Bigras <bigras.bruno@gmail.com>
Caleb Spare <cespare@gmail.com>
Charles Hooper <charles.hooper@dotcloud.com>
Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com>
Daniel Robinson <gottagetmac@gmail.com>
Daniel Von Fange <daniel@leancoder.com>
Dominik Honnef <dominik@honnef.co>
Don Spaulding <donspauldingii@gmail.com>
Dr Nic Williams <drnicwilliams@gmail.com>
Evan Wies <evan@neomantra.net>
ezbercih <cem.ezberci@gmail.com>
Flavio Castelli <fcastelli@suse.com>
Francisco Souza <f@souza.cc>
Frederick F. Kautz IV <fkautz@alumni.cmu.edu>
Guillaume J. Charmes <guillaume.charmes@dotcloud.com>
Harley Laue <losinggeneration@gmail.com>
Hunter Blanks <hunter@twilio.com>
Jeff Lindsay <progrium@gmail.com>
Jeremy Grosser <jeremy@synack.me>
Joffrey F <joffrey@dotcloud.com>
John Costa <john.costa@gmail.com>
Jonas Pfenniger <jonas@pfenniger.name>
Jonathan Rudenberg <jonathan@titanous.com>
Julien Barbier <write0@gmail.com>
Jérôme Petazzoni <jerome.petazzoni@dotcloud.com>
@@ -37,11 +27,8 @@ Kevin J. Lynagh <kevin@keminglabs.com>
Louis Opter <kalessin@kalessin.fr>
Maxim Treskin <zerthurd@gmail.com>
Mikhail Sobolev <mss@mawhrin.net>
Nate Jones <nate@endot.org>
Nelson Chen <crazysim@gmail.com>
Niall O'Higgins <niallo@unworkable.org>
odk- <github@odkurzacz.org>
Paul Bowsher <pbowsher@globalpersonals.co.uk>
Paul Hammond <paul@paulhammond.org>
Piotr Bogdan <ppbogdan@gmail.com>
Robert Obryk <robryk@gmail.com>
@@ -51,8 +38,6 @@ Silas Sewell <silas@sewell.org>
Solomon Hykes <solomon@dotcloud.com>
Sridhar Ratnakumar <sridharr@activestate.com>
Thatcher Peskens <thatcher@dotcloud.com>
Thomas Bikeev <thomas.bikeev@mac.com>
Tianon Gravi <admwiggin@gmail.com>
Tim Terhorst <mynamewastaken+git@gmail.com>
Troy Howard <thoward37@gmail.com>
unclejack <unclejacksons@gmail.com>
69 CHANGELOG.md

@@ -1,68 +1,6 @@
# Changelog

## 0.3.3 (2013-05-23)
- Registry: Fix push regression
- Various bugfixes

## 0.3.2 (2013-05-09)
* Runtime: Store the actual archive on commit
* Registry: Improve the checksum process
* Registry: Use the size to have a good progress bar while pushing
* Registry: Use the actual archive if it exists in order to speed up the push
- Registry: Fix error 400 on push

## 0.3.1 (2013-05-08)
+ Builder: Implement the autorun capability within docker builder
+ Builder: Add caching to docker builder
+ Builder: Add support for docker builder with native API as top level command
+ Runtime: Add go version to debug infos
+ Builder: Implement ENV within docker builder
+ Registry: Add docker search top level command in order to search a repository
+ Images: output graph of images to dot (graphviz)
+ Documentation: new introduction and high-level overview
+ Documentation: Add the documentation for docker builder
+ Website: new high-level overview
- Makefile: Swap "go get" for "go get -d", especially to compile on go1.1rc
- Images: fix ByParent function
- Builder: Check the command existence prior to create and add unit tests for the case
- Registry: Fix pull for official images with specific tag
- Registry: Fix issue when logging in with a different user and trying to push
- Documentation: CSS fix for docker documentation to make REST API docs look better.
- Documentation: Fixed CouchDB example page header mistake
- Documentation: fixed README formatting
* Registry: Improve checksum - async calculation
* Runtime: kernel version - don't show the dash if flavor is empty
* Documentation: updated www.docker.io website.
* Builder: use any whitespace instead of tabs
* Packaging: packaging ubuntu; issue #510: Use golang-stable PPA package to build docker

## 0.3.0 (2013-05-06)
+ Registry: Implement the new registry
+ Documentation: new example: sharing data between 2 couchdb databases
- Runtime: Fix the command existence check
- Runtime: strings.Split may return an empty string on no match
- Runtime: Fix an index out of range crash if cgroup memory is not
* Documentation: Various improvements
* Vagrant: Use only one deb line in /etc/apt

## 0.2.2 (2013-05-03)
+ Support for data volumes ('docker run -v=PATH')
+ Share data volumes between containers ('docker run -volumes-from')
+ Improved documentation
* Upgrade to Go 1.0.3
* Various upgrades to the dev environment for contributors

## 0.2.1 (2013-05-01)
+ 'docker commit -run' bundles a layer with default runtime options: command, ports etc.
* Improve install process on Vagrant
+ New Dockerfile operation: "maintainer"
+ New Dockerfile operation: "expose"
+ New Dockerfile operation: "cmd"
+ Contrib script to build a Debian base layer
+ 'docker -d -r': restart crashed containers at daemon startup
* Runtime: improve test coverage

## 0.2.0 (2013-04-23)
## 0.2.0 (2012-04-23)
- Runtime: ghost containers can be killed and waited for
* Documentation: update install instructions
- Packaging: fix Vagrantfile
@@ -70,12 +8,13 @@
+ Add a changelog
- Various bugfixes

## 0.1.8 (2013-04-22)
- Dynamically detect cgroup capabilities
- Issue stability warning on kernels <3.8
- 'docker push' buffers on disk instead of memory
- Fix 'docker diff' for removed files
- Fix 'docker stop' for ghost containers
- Fix handling of pidfile
- Various bugfixes and stability improvements

@@ -96,7 +35,7 @@
- Improve diagnosis of missing system capabilities
- Allow disabling memory limits at compile time
- Add debian packaging
- Documentation: installing on Arch Linux
- Documentation: running Redis on docker
- Fixed lxc 0.9 compatibility
- Automatically load aufs module
12 Makefile

@@ -38,15 +38,14 @@ $(DOCKER_BIN): $(DOCKER_DIR)

$(DOCKER_DIR):
	@mkdir -p $(dir $@)
	@if [ -h $@ ]; then rm -f $@; fi; ln -sf $(CURDIR)/ $@
	@(cd $(DOCKER_MAIN); go get -d $(GO_OPTIONS))
	@rm -f $@
	@ln -sf $(CURDIR)/ $@
	@(cd $(DOCKER_MAIN); go get $(GO_OPTIONS))

whichrelease:
	echo $(RELEASE_VERSION)

release: $(BINRELEASE)
	s3cmd -P put $(BINRELEASE) s3://get.docker.io/builds/`uname -s`/`uname -m`/docker-$(RELEASE_VERSION).tgz

srcrelease: $(SRCRELEASE)
deps: $(DOCKER_DIR)

@@ -76,7 +75,4 @@ fmt:
	@gofmt -s -l -w .

hack:
	cd $(CURDIR)/hack && vagrant up

ssh-dev:
	cd $(CURDIR)/hack && vagrant ssh
	cd $(CURDIR)/buildbot && vagrant up
97 README.md

@@ -1,94 +1,37 @@
Docker: the Linux container engine
==================================
Docker: the Linux container runtime
===================================

Docker is an open-source engine which automates the deployment of applications as highly portable, self-sufficient containers.
Docker complements LXC with a high-level API which operates at the process level. It runs unix processes with strong guarantees of isolation and repeatability across servers.

Docker containers are both *hardware-agnostic* and *platform-agnostic*. This means that they can run anywhere, from your laptop to the largest EC2 compute instance and everything in between - and they don't require that you use a particular language, framework or packaging system. That makes them great building blocks for deploying and scaling web apps, databases and backend services without depending on a particular stack or provider.
Docker is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.

Docker is an open-source implementation of the deployment engine which powers [dotCloud](http://dotcloud.com), a popular Platform-as-a-Service. It benefits directly from the experience accumulated over several years of large-scale operation and support of hundreds of thousands of applications and databases.



* *Heterogeneous payloads*: any combination of binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, you name it. No more juggling between domain-specific tools. Docker can deploy and run them all.

## Better than VMs
* *Any server*: docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments.

A common method for distributing applications and sandboxing their execution is to use virtual machines, or VMs. Typical VM formats are VMWare's vmdk, Oracle Virtualbox's vdi, and Amazon EC2's ami. In theory these formats should allow every developer to automatically package their application into a "machine" for easy distribution and deployment. In practice, that almost never happens, for a few reasons:
* *Isolation*: docker isolates processes from each other and from the underlying host, using lightweight containers.

* *Size*: VMs are very large, which makes them impractical to store and transfer.
* *Performance*: running VMs consumes significant CPU and memory, which makes them impractical in many scenarios, for example local development of multi-tier applications, and large-scale deployment of cpu and memory-intensive applications on large numbers of machines.
* *Portability*: competing VM environments don't play well with each other. Although conversion tools do exist, they are limited and add even more overhead.
* *Hardware-centric*: VMs were designed with machine operators in mind, not software developers. As a result, they offer very limited tooling for what developers need most: building, testing and running their software. For example, VMs offer no facilities for application versioning, monitoring, configuration, logging or service discovery.

By contrast, Docker relies on a different sandboxing method known as *containerization*. Unlike traditional virtualization, containerization takes place at the kernel level. Most modern operating system kernels now support the primitives necessary for containerization, including Linux with [openvz](http://openvz.org), [vserver](http://linux-vserver.org) and more recently [lxc](http://lxc.sourceforge.net), Solaris with [zones](http://docs.oracle.com/cd/E26502_01/html/E29024/preface-1.html#scrolltoc) and FreeBSD with [Jails](http://www.freebsd.org/doc/handbook/jails.html).

Docker builds on top of these low-level primitives to offer developers a portable format and runtime environment that solves all 4 problems. Docker containers are small (and their transfer can be optimized with layers), they have basically zero memory and cpu overhead, they are completely portable and are designed from the ground up with an application-centric design.

The best part: because docker operates at the OS level, it can still be run inside a VM!

## Plays well with others

Docker does not require that you buy into a particular programming language, framework, packaging system or configuration language.

Is your application a unix process? Does it use files, tcp connections, environment variables, standard unix streams and command-line arguments as inputs and outputs? Then docker can run it.

Can your application's build be expressed as a sequence of such commands? Then docker can build it.
* *Repeatability*: because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run.


## Escape dependency hell
Notable features
----------------

A common problem for developers is the difficulty of managing all their application's dependencies in a simple and automated way.
* Filesystem isolation: each process container runs in a completely separate root filesystem.

This is usually difficult for several reasons:
* Resource isolation: system resources like cpu and memory can be allocated differently to each process container, using cgroups.

* *Cross-platform dependencies*. Modern applications often depend on a combination of system libraries and binaries, language-specific packages, framework-specific modules, internal components developed for another project, etc. These dependencies live in different "worlds" and require different tools - these tools typically don't work well with each other, requiring awkward custom integrations.
* Network isolation: each process container runs in its own network namespace, with a virtual interface and IP address of its own.

* Conflicting dependencies. Different applications may depend on different versions of the same dependency. Packaging tools handle these situations with various degrees of ease - but they all handle them in different and incompatible ways, which again forces the developer to do extra work.

* Custom dependencies. A developer may need to prepare a custom version of his application's dependency. Some packaging systems can handle custom versions of a dependency, others can't - and all of them handle it differently.
* Copy-on-write: root filesystems are created using copy-on-write, which makes deployment extremely fast, memory-cheap and disk-cheap.

* Logging: the standard streams (stdout/stderr/stdin) of each process container are collected and logged for real-time or batch retrieval.

Docker solves dependency hell by giving the developer a simple way to express *all* his application's dependencies in one place, and streamline the process of assembling them. If this makes you think of [XKCD 927](http://xkcd.com/927/), don't worry. Docker doesn't *replace* your favorite packaging systems. It simply orchestrates their use in a simple and repeatable way. How does it do that? With layers.

Docker defines a build as running a sequence of unix commands, one after the other, in the same container. Build commands modify the contents of the container (usually by installing new files on the filesystem), the next command modifies it some more, etc. Since each build command inherits the result of the previous commands, the *order* in which the commands are executed expresses *dependencies*.

Here's a typical docker build process:

```bash
from ubuntu:12.10
run apt-get update
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y python
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y python-pip
run pip install django
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y curl
run curl -L https://github.com/shykes/helloflask/archive/master.tar.gz | tar -xzv
run cd helloflask-master && pip install -r requirements.txt
```

Note that Docker doesn't care *how* dependencies are built - as long as they can be built by running a unix command in a container.
* Change management: changes to a container's filesystem can be committed into a new image and re-used to create more containers. No templating or manual configuration required.

* Interactive shell: docker can allocate a pseudo-tty and attach to the standard input of any container, for example to run a throwaway interactive shell.

Install instructions
====================
@@ -293,7 +236,7 @@ a format that is self-describing and portable, so that any compliant runtime can

The spec for Standard Containers is currently a work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment.

A great analogy for this is the shipping container. Just like how Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.
A great analogy for this is the shipping container. Just like Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.

### 1. STANDARD OPERATIONS

@@ -321,7 +264,7 @@ Similarly, before Standard Containers, by the time a software component ran in p

### 5. INDUSTRIAL-GRADE DELIVERY

There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded onto the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.
There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded on the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.

With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality.
104 Vagrantfile (vendored)

@@ -1,62 +1,40 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

BOX_NAME = ENV['BOX_NAME'] || "ubuntu"
BOX_URI = ENV['BOX_URI'] || "http://files.vagrantup.com/precise64.box"
AWS_REGION = ENV['AWS_REGION'] || "us-east-1"
AWS_AMI = ENV['AWS_AMI'] || "ami-d0f89fb9"
def v10(config)
  config.vm.box = 'precise64'
  config.vm.box_url = 'http://files.vagrantup.com/precise64.box'

Vagrant::Config.run do |config|
  # Setup virtual machine box. This VM configuration code is always executed.
  config.vm.box = BOX_NAME
  config.vm.box_url = BOX_URI

  # Provision docker and new kernel if deployment was not done
  if Dir.glob("#{File.dirname(__FILE__)}/.vagrant/machines/default/*/id").empty?
    # Add lxc-docker package
    pkg_cmd = "apt-get update -qq; apt-get install -q -y python-software-properties; " \
      "add-apt-repository -y ppa:dotcloud/lxc-docker; apt-get update -qq; " \
      "apt-get install -q -y lxc-docker; "
    # Add X.org Ubuntu backported 3.8 kernel
    pkg_cmd << "add-apt-repository -y ppa:ubuntu-x-swat/r-lts-backport; " \
      "apt-get update -qq; apt-get install -q -y linux-image-3.8.0-19-generic; "
    # Add guest additions if local vbox VM
    is_vbox = true
    ARGV.each do |arg| is_vbox &&= !arg.downcase.start_with?("--provider") end
    if is_vbox
      pkg_cmd << "apt-get install -q -y linux-headers-3.8.0-19-generic dkms; " \
        "echo 'Downloading VBox Guest Additions...'; " \
        "wget -q http://dlc.sun.com.edgesuite.net/virtualbox/4.2.12/VBoxGuestAdditions_4.2.12.iso; "
      # Prepare the VM to add guest additions after reboot
      pkg_cmd << "echo -e 'mount -o loop,ro /home/vagrant/VBoxGuestAdditions_4.2.12.iso /mnt\n" \
        "echo yes | /mnt/VBoxLinuxAdditions.run\numount /mnt\n" \
        "rm /root/guest_additions.sh; ' > /root/guest_additions.sh; " \
        "chmod 700 /root/guest_additions.sh; " \
        "sed -i -E 's#^exit 0#[ -x /root/guest_additions.sh ] \\&\\& /root/guest_additions.sh#' /etc/rc.local; " \
        "echo 'Installation of VBox Guest Additions is proceeding in the background.'; " \
        "echo '\"vagrant reload\" can be used in about 2 minutes to activate the new guest additions.'; "
    end
    # Activate new kernel
    pkg_cmd << "shutdown -r +1; "
    config.vm.provision :shell, :inline => pkg_cmd
  end
  # Install ubuntu packaging dependencies and create ubuntu packages
  config.vm.provision :shell, :inline => "echo 'deb http://ppa.launchpad.net/dotcloud/lxc-docker/ubuntu precise main' >>/etc/apt/sources.list"
  config.vm.provision :shell, :inline => 'export DEBIAN_FRONTEND=noninteractive; apt-get -qq update; apt-get install -qq -y --force-yes lxc-docker'
end

Vagrant::VERSION < "1.1.0" and Vagrant::Config.run do |config|
  v10(config)
end

Vagrant::VERSION >= "1.1.0" and Vagrant.configure("1") do |config|
  v10(config)
end

# Providers were added on Vagrant >= 1.1.0
Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
  config.vm.provider :aws do |aws, override|
  config.vm.provider :aws do |aws|
    config.vm.box = "dummy"
    config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
    aws.access_key_id = ENV["AWS_ACCESS_KEY_ID"]
    aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
    aws.keypair_name = ENV["AWS_KEYPAIR_NAME"]
    override.ssh.private_key_path = ENV["AWS_SSH_PRIVKEY"]
    override.ssh.username = "ubuntu"
    aws.region = AWS_REGION
    aws.ami = AWS_AMI
    aws.ssh_private_key_path = ENV["AWS_SSH_PRIVKEY"]
    aws.region = "us-east-1"
    aws.ami = "ami-d0f89fb9"
    aws.ssh_username = "ubuntu"
    aws.instance_type = "t1.micro"
  end

  config.vm.provider :rackspace do |rs|
    config.vm.box = "dummy"
    config.vm.box_url = "https://github.com/mitchellh/vagrant-rackspace/raw/master/dummy.box"
    config.ssh.private_key_path = ENV["RS_PRIVATE_KEY"]
    rs.username = ENV["RS_USERNAME"]
    rs.api_key = ENV["RS_API_KEY"]
@@ -66,7 +44,39 @@ Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
  end

  config.vm.provider :virtualbox do |vb|
    config.vm.box = BOX_NAME
    config.vm.box_url = BOX_URI
    config.vm.box = 'precise64'
    config.vm.box_url = 'http://files.vagrantup.com/precise64.box'
  end
end

Vagrant::VERSION >= "1.2.0" and Vagrant.configure("2") do |config|
  config.vm.provider :aws do |aws, override|
    config.vm.box = "dummy"
    config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
    aws.access_key_id = ENV["AWS_ACCESS_KEY_ID"]
    aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
    aws.keypair_name = ENV["AWS_KEYPAIR_NAME"]
    override.ssh.private_key_path = ENV["AWS_SSH_PRIVKEY"]
    override.ssh.username = "ubuntu"
    aws.region = "us-east-1"
    aws.ami = "ami-d0f89fb9"
    aws.instance_type = "t1.micro"
  end

  config.vm.provider :rackspace do |rs|
    config.vm.box = "dummy"
    config.vm.box_url = "https://github.com/mitchellh/vagrant-rackspace/raw/master/dummy.box"
    config.ssh.private_key_path = ENV["RS_PRIVATE_KEY"]
    rs.username = ENV["RS_USERNAME"]
    rs.api_key = ENV["RS_API_KEY"]
    rs.public_key_path = ENV["RS_PUBLIC_KEY"]
    rs.flavor = /512MB/
    rs.image = /Ubuntu/
  end

  config.vm.provider :virtualbox do |vb|
    config.vm.box = 'precise64'
    config.vm.box_url = 'http://files.vagrantup.com/precise64.box'
  end

end
667 api.go

@@ -1,667 +0,0 @@
package docker

import (
	"encoding/json"
	"fmt"
	"github.com/dotcloud/docker/auth"
	"github.com/dotcloud/docker/utils"
	"github.com/gorilla/mux"
	"io"
	"log"
	"net/http"
	"strconv"
	"strings"
)

const API_VERSION = 1.0

func hijackServer(w http.ResponseWriter) (io.ReadCloser, io.Writer, error) {
	conn, _, err := w.(http.Hijacker).Hijack()
	if err != nil {
		return nil, nil, err
	}
	// Flush the options to make sure the client sets the raw mode
	conn.Write([]byte{})
	return conn, conn, nil
}

//If we don't do this, POST method without Content-type (even with empty body) will fail
func parseForm(r *http.Request) error {
	if err := r.ParseForm(); err != nil && !strings.HasPrefix(err.Error(), "mime:") {
		return err
	}
	return nil
}

func httpError(w http.ResponseWriter, err error) {
	if strings.HasPrefix(err.Error(), "No such") {
		http.Error(w, err.Error(), http.StatusNotFound)
	} else if strings.HasPrefix(err.Error(), "Bad parameter") {
		http.Error(w, err.Error(), http.StatusBadRequest)
	} else {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

func writeJson(w http.ResponseWriter, b []byte) {
	w.Header().Set("Content-Type", "application/json")
	w.Write(b)
}

func getBoolParam(value string) (bool, error) {
	if value == "1" || strings.ToLower(value) == "true" {
		return true, nil
	}
	if value == "" || value == "0" || strings.ToLower(value) == "false" {
		return false, nil
	}
	return false, fmt.Errorf("Bad parameter")
}

func getAuth(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	b, err := json.Marshal(srv.registry.GetAuthConfig())
	if err != nil {
		return err
	}
	writeJson(w, b)
	return nil
}

func postAuth(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	config := &auth.AuthConfig{}
	if err := json.NewDecoder(r.Body).Decode(config); err != nil {
		return err
	}

	if config.Username == srv.registry.GetAuthConfig().Username {
		config.Password = srv.registry.GetAuthConfig().Password
	}

	newAuthConfig := auth.NewAuthConfig(config.Username, config.Password, config.Email, srv.runtime.root)
	status, err := auth.Login(newAuthConfig)
	if err != nil {
		return err
	}
	srv.registry.ResetClient(newAuthConfig)

	if status != "" {
		b, err := json.Marshal(&ApiAuth{Status: status})
		if err != nil {
			return err
		}
		writeJson(w, b)
		return nil
	}
	w.WriteHeader(http.StatusNoContent)
	return nil
}

func getVersion(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	m := srv.DockerVersion()
	b, err := json.Marshal(m)
	if err != nil {
		return err
	}
	writeJson(w, b)
	return nil
}

func postContainersKill(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]
	if err := srv.ContainerKill(name); err != nil {
		return err
	}
	w.WriteHeader(http.StatusNoContent)
	return nil
}

func getContainersExport(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]

	if err := srv.ContainerExport(name, w); err != nil {
		utils.Debugf("%s", err.Error())
		return err
	}
	return nil
}

func getImagesJson(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if err := parseForm(r); err != nil {
		return err
	}

	all, err := getBoolParam(r.Form.Get("all"))
	if err != nil {
		return err
	}
	filter := r.Form.Get("filter")

	outs, err := srv.Images(all, filter)
	if err != nil {
		return err
	}
	b, err := json.Marshal(outs)
	if err != nil {
		return err
	}
	writeJson(w, b)
	return nil
}

func getImagesViz(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if err := srv.ImagesViz(w); err != nil {
		return err
	}
	return nil
}

func getInfo(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	out := srv.DockerInfo()
	b, err := json.Marshal(out)
	if err != nil {
		return err
	}
	writeJson(w, b)
	return nil
}

func getImagesHistory(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]
	outs, err := srv.ImageHistory(name)
	if err != nil {
		return err
	}
	b, err := json.Marshal(outs)
	if err != nil {
		return err
	}
	writeJson(w, b)
	return nil
}

func getContainersChanges(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]
	changesStr, err := srv.ContainerChanges(name)
	if err != nil {
		return err
	}
	b, err := json.Marshal(changesStr)
	if err != nil {
		return err
	}
	writeJson(w, b)
	return nil
}

func getContainersPs(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if err := parseForm(r); err != nil {
		return err
	}
	all, err := getBoolParam(r.Form.Get("all"))
	if err != nil {
		return err
	}
	since := r.Form.Get("since")
	before := r.Form.Get("before")
	n, err := strconv.Atoi(r.Form.Get("limit"))
	if err != nil {
		n = -1
	}

	outs := srv.Containers(all, n, since, before)
	b, err := json.Marshal(outs)
	if err != nil {
		return err
	}
	writeJson(w, b)
	return nil
}

func postImagesTag(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if err := parseForm(r); err != nil {
		return err
	}
	repo := r.Form.Get("repo")
	tag := r.Form.Get("tag")
	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]
	force, err := getBoolParam(r.Form.Get("force"))
	if err != nil {
		return err
	}

	if err := srv.ContainerTag(name, repo, tag, force); err != nil {
		return err
	}
	w.WriteHeader(http.StatusCreated)
	return nil
}

func postCommit(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if err := parseForm(r); err != nil {
		return err
	}
	config := &Config{}
	if err := json.NewDecoder(r.Body).Decode(config); err != nil {
		utils.Debugf("%s", err.Error())
	}
	repo := r.Form.Get("repo")
	tag := r.Form.Get("tag")
	container := r.Form.Get("container")
	author := r.Form.Get("author")
	comment := r.Form.Get("comment")
	id, err := srv.ContainerCommit(container, repo, tag, author, comment, config)
	if err != nil {
		return err
	}
	b, err := json.Marshal(&ApiId{id})
	if err != nil {
		return err
	}
	w.WriteHeader(http.StatusCreated)
	writeJson(w, b)
	return nil
}

// Creates an image from Pull or from Import
func postImagesCreate(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if err := parseForm(r); err != nil {
		return err
	}

	src := r.Form.Get("fromSrc")
	image := r.Form.Get("fromImage")
	tag := r.Form.Get("tag")
	repo := r.Form.Get("repo")

	if image != "" { //pull
		registry := r.Form.Get("registry")
		if err := srv.ImagePull(image, tag, registry, w); err != nil {
			return err
		}
	} else { //import
		if err := srv.ImageImport(src, repo, tag, r.Body, w); err != nil {
			return err
		}
	}
	return nil
}

func getImagesSearch(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if err := parseForm(r); err != nil {
		return err
	}

	term := r.Form.Get("term")
	outs, err := srv.ImagesSearch(term)
	if err != nil {
		return err
	}
	b, err := json.Marshal(outs)
	if err != nil {
		return err
	}
	writeJson(w, b)
	return nil
}

func postImagesInsert(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if err := parseForm(r); err != nil {
		return err
	}

	url := r.Form.Get("url")
	path := r.Form.Get("path")
	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]

	if err := srv.ImageInsert(name, url, path, w); err != nil {
		return err
	}
	return nil
}

func postImagesPush(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if err := parseForm(r); err != nil {
		return err
	}
	registry := r.Form.Get("registry")

	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]

	if err := srv.ImagePush(name, registry, w); err != nil {
		return err
	}
	return nil
}

func postContainersCreate(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	config := &Config{}
	if err := json.NewDecoder(r.Body).Decode(config); err != nil {
		return err
	}
	id, err := srv.ContainerCreate(config)
	if err != nil {
		return err
	}

	out := &ApiRun{
		Id: id,
	}
	if config.Memory > 0 && !srv.runtime.capabilities.MemoryLimit {
		log.Println("WARNING: Your kernel does not support memory limit capabilities. Limitation discarded.")
		out.Warnings = append(out.Warnings, "Your kernel does not support memory limit capabilities. Limitation discarded.")
	}
	if config.Memory > 0 && !srv.runtime.capabilities.SwapLimit {
		log.Println("WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.")
		out.Warnings = append(out.Warnings, "Your kernel does not support memory swap capabilities. Limitation discarded.")
	}
	b, err := json.Marshal(out)
	if err != nil {
		return err
	}
	w.WriteHeader(http.StatusCreated)
	writeJson(w, b)
	return nil
}

func postContainersRestart(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if err := parseForm(r); err != nil {
		return err
	}
	t, err := strconv.Atoi(r.Form.Get("t"))
	if err != nil || t < 0 {
		t = 10
	}
	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]
	if err := srv.ContainerRestart(name, t); err != nil {
		return err
	}
	w.WriteHeader(http.StatusNoContent)
	return nil
}

func deleteContainers(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if err := parseForm(r); err != nil {
		return err
	}
	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]
	removeVolume, err := getBoolParam(r.Form.Get("v"))
	if err != nil {
		return err
	}

	if err := srv.ContainerDestroy(name, removeVolume); err != nil {
		return err
	}
	w.WriteHeader(http.StatusNoContent)
	return nil
}

func deleteImages(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]
	if err := srv.ImageDelete(name); err != nil {
		return err
	}
	w.WriteHeader(http.StatusNoContent)
	return nil
}

func postContainersStart(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]
	if err := srv.ContainerStart(name); err != nil {
		return err
	}
	w.WriteHeader(http.StatusNoContent)
	return nil
}

func postContainersStop(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if err := parseForm(r); err != nil {
		return err
	}
	t, err := strconv.Atoi(r.Form.Get("t"))
	if err != nil || t < 0 {
		t = 10
	}

	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]

	if err := srv.ContainerStop(name, t); err != nil {
		return err
	}
	w.WriteHeader(http.StatusNoContent)
	return nil
}

func postContainersWait(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]
	status, err := srv.ContainerWait(name)
	if err != nil {
		return err
	}
	b, err := json.Marshal(&ApiWait{StatusCode: status})
	if err != nil {
		return err
	}
	writeJson(w, b)
	return nil
}

func postContainersAttach(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if err := parseForm(r); err != nil {
		return err
	}
	logs, err := getBoolParam(r.Form.Get("logs"))
	if err != nil {
		return err
	}
	stream, err := getBoolParam(r.Form.Get("stream"))
	if err != nil {
		return err
	}
	stdin, err := getBoolParam(r.Form.Get("stdin"))
	if err != nil {
		return err
	}
	stdout, err := getBoolParam(r.Form.Get("stdout"))
	if err != nil {
		return err
	}
	stderr, err := getBoolParam(r.Form.Get("stderr"))
	if err != nil {
		return err
	}

	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]

	in, out, err := hijackServer(w)
	if err != nil {
		return err
	}
	defer in.Close()

	fmt.Fprintf(out, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n")
	if err := srv.ContainerAttach(name, logs, stream, stdin, stdout, stderr, in, out); err != nil {
		fmt.Fprintf(out, "Error: %s\n", err)
	}
	return nil
}

func getContainersByName(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]

	container, err := srv.ContainerInspect(name)
	if err != nil {
		return err
	}
	b, err := json.Marshal(container)
	if err != nil {
		return err
	}
	writeJson(w, b)
	return nil
}

func getImagesByName(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	if vars == nil {
		return fmt.Errorf("Missing parameter")
	}
	name := vars["name"]

	image, err := srv.ImageInspect(name)
	if err != nil {
		return err
	}
	b, err := json.Marshal(image)
	if err != nil {
		return err
	}
	writeJson(w, b)
	return nil
}

func postImagesGetCache(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
	apiConfig := &ApiImageConfig{}
	if err := json.NewDecoder(r.Body).Decode(apiConfig); err != nil {
		return err
	}

	image, err := srv.ImageGetCached(apiConfig.Id, apiConfig.Config)
	if err != nil {
		return err
	}
	if image == nil {
		w.WriteHeader(http.StatusNotFound)
		return nil
	}
	apiId := &ApiId{Id: image.Id}
	b, err := json.Marshal(apiId)
	if err != nil {
		return err
	}
	writeJson(w, b)
	return nil
}

func ListenAndServe(addr string, srv *Server, logging bool) error {
	r := mux.NewRouter()
	log.Printf("Listening for HTTP on %s\n", addr)

	m := map[string]map[string]func(*Server, float64, http.ResponseWriter, *http.Request, map[string]string) error{
		"GET": {
			"/auth":                         getAuth,
			"/version":                      getVersion,
			"/info":                         getInfo,
			"/images/json":                  getImagesJson,
			"/images/viz":                   getImagesViz,
			"/images/search":                getImagesSearch,
			"/images/{name:.*}/history":     getImagesHistory,
			"/images/{name:.*}/json":        getImagesByName,
			"/containers/ps":                getContainersPs,
			"/containers/{name:.*}/export":  getContainersExport,
			"/containers/{name:.*}/changes": getContainersChanges,
			"/containers/{name:.*}/json":    getContainersByName,
		},
		"POST": {
			"/auth":                         postAuth,
			"/commit":                       postCommit,
			"/images/create":                postImagesCreate,
			"/images/{name:.*}/insert":      postImagesInsert,
			"/images/{name:.*}/push":        postImagesPush,
			"/images/{name:.*}/tag":         postImagesTag,
			"/images/getCache":              postImagesGetCache,
			"/containers/create":            postContainersCreate,
			"/containers/{name:.*}/kill":    postContainersKill,
			"/containers/{name:.*}/restart": postContainersRestart,
			"/containers/{name:.*}/start":   postContainersStart,
			"/containers/{name:.*}/stop":    postContainersStop,
			"/containers/{name:.*}/wait":    postContainersWait,
			"/containers/{name:.*}/attach":  postContainersAttach,
		},
		"DELETE": {
			"/containers/{name:.*}": deleteContainers,
			"/images/{name:.*}":     deleteImages,
		},
	}

	for method, routes := range m {
		for route, fct := range routes {
			utils.Debugf("Registering %s, %s", method, route)
			// NOTE: scope issue, make sure the variables are local and won't be changed
			localRoute := route
			localMethod := method
			localFct := fct
			f := func(w http.ResponseWriter, r *http.Request) {
				utils.Debugf("Calling %s %s", localMethod, localRoute)
				if logging {
					log.Println(r.Method, r.RequestURI)
				}
				if strings.Contains(r.Header.Get("User-Agent"), "Docker-Client/") {
					userAgent := strings.Split(r.Header.Get("User-Agent"), "/")
					if len(userAgent) == 2 && userAgent[1] != VERSION {
						utils.Debugf("Warning: client and server don't have the same version (client: %s, server: %s)", userAgent[1], VERSION)
					}
				}
				version, err := strconv.ParseFloat(mux.Vars(r)["version"], 64)
				if err != nil {
					version = API_VERSION
				}
				if version == 0 || version > API_VERSION {
					w.WriteHeader(http.StatusNotFound)
					return
				}
				if err := localFct(srv, version, w, r, mux.Vars(r)); err != nil {
					httpError(w, err)
				}
			}
			r.Path("/v{version:[0-9.]+}" + localRoute).Methods(localMethod).HandlerFunc(f)
			r.Path(localRoute).Methods(localMethod).HandlerFunc(f)
		}
	}

	return http.ListenAndServe(addr, r)
}
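The ListenAndServe function above registers every route both under a versioned prefix (`/v{version}`) and bare, falling back to `API_VERSION` when no version is present in the path. A minimal, hypothetical client sketch for that HTTP API follows; the address `127.0.0.1:4243` is an assumption (use whatever `addr` the daemon's ListenAndServe was given), and the `/v1.0` prefix simply matches the `API_VERSION` constant.

```go
// Hypothetical client for the API served by the (removed) api.go above.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// GET /version is routed to getVersion and returns a JSON document.
	// The host:port is an assumption, not something this diff specifies.
	resp, err := http.Get("http://127.0.0.1:4243/v1.0/version")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", body) // e.g. {"Version":"0.3.3",...}
}
```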
@@ -1,71 +0,0 @@
package docker

type ApiHistory struct {
	Id        string
	Created   int64
	CreatedBy string
}

type ApiImages struct {
	Repository string `json:",omitempty"`
	Tag        string `json:",omitempty"`
	Id         string
	Created    int64
}

type ApiInfo struct {
	Containers  int
	Version     string
	Images      int
	Debug       bool
	GoVersion   string
	NFd         int `json:",omitempty"`
	NGoroutines int `json:",omitempty"`
}

type ApiContainers struct {
	Id      string
	Image   string
	Command string
	Created int64
	Status  string
	Ports   string
}

type ApiSearch struct {
	Name        string
	Description string
}

type ApiId struct {
	Id string
}

type ApiRun struct {
	Id       string
	Warnings []string
}

type ApiPort struct {
	Port string
}

type ApiVersion struct {
	Version     string
	GitCommit   string
	MemoryLimit bool
	SwapLimit   bool
}

type ApiWait struct {
	StatusCode int
}

type ApiAuth struct {
	Status string
}

type ApiImageConfig struct {
	Id string
	*Config
}
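These Api* structs are plain JSON payloads: fields tagged `json:",omitempty"` (such as Repository and Tag on ApiImages) are dropped from the serialized output when they hold their zero value. A self-contained sketch below demonstrates that behavior with a standalone copy of ApiImages and made-up values.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Standalone copy of ApiImages for illustration only.
type ApiImages struct {
	Repository string `json:",omitempty"`
	Tag        string `json:",omitempty"`
	Id         string
	Created    int64
}

func main() {
	// An "untagged" image: Repository and Tag are empty strings,
	// so omitempty removes them from the JSON. Values are hypothetical.
	b, _ := json.Marshal(ApiImages{Id: "38b8373434", Created: 1369000000})
	fmt.Println(string(b)) // {"Id":"38b8373434","Created":1369000000}
}
```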
1273 api_test.go

File diff suppressed because it is too large.
61 auth/auth.go

@@ -3,6 +3,7 @@ package auth
import (
	"encoding/base64"
	"encoding/json"
	"errors"
	"fmt"
	"io/ioutil"
	"net/http"
@@ -15,13 +16,13 @@ import (
const CONFIGFILE = ".dockercfg"

// the registry server we want to login against
const INDEX_SERVER = "https://index.docker.io/v1"
const REGISTRY_SERVER = "https://registry.docker.io"

type AuthConfig struct {
	Username string `json:"username"`
	Password string `json:"password"`
	Email    string `json:"email"`
	rootPath string
	rootPath string `json:-`
}

func NewAuthConfig(username, password, email, rootPath string) *AuthConfig {
@@ -33,13 +34,6 @@ func NewAuthConfig(username, password, email, rootPath string) *AuthConfig {
	}
}

func IndexServerAddress() string {
	if os.Getenv("DOCKER_INDEX_URL") != "" {
		return os.Getenv("DOCKER_INDEX_URL") + "/v1"
	}
	return INDEX_SERVER
}

// create a base64 encoded auth string to store in config
func EncodeAuth(authConfig *AuthConfig) string {
	authStr := authConfig.Username + ":" + authConfig.Password
@@ -82,9 +76,6 @@ func LoadConfig(rootPath string) (*AuthConfig, error) {
		return nil, err
	}
	arr := strings.Split(string(b), "\n")
	if len(arr) < 2 {
		return nil, fmt.Errorf("The Auth config file is empty")
	}
	origAuth := strings.Split(arr[0], " = ")
	origEmail := strings.Split(arr[1], " = ")
	authConfig, err := DecodeAuth(origAuth[1])
@@ -98,14 +89,9 @@ func LoadConfig(rootPath string) (*AuthConfig, error) {

// save the auth config
func saveConfig(rootPath, authStr string, email string) error {
	confFile := path.Join(rootPath, CONFIGFILE)
	if len(email) == 0 {
		os.Remove(confFile)
		return nil
	}
	lines := "auth = " + authStr + "\n" + "email = " + email + "\n"
	b := []byte(lines)
	err := ioutil.WriteFile(confFile, b, 0600)
	err := ioutil.WriteFile(path.Join(rootPath, CONFIGFILE), b, 0600)
	if err != nil {
		return err
	}
@@ -115,38 +101,40 @@ func saveConfig(rootPath, authStr string, email string) error {
// try to register/login to the registry server
func Login(authConfig *AuthConfig) (string, error) {
	storeConfig := false
	client := &http.Client{}
	reqStatusCode := 0
	var status string
	var errMsg string
	var reqBody []byte
	jsonBody, err := json.Marshal(authConfig)
	if err != nil {
		return "", fmt.Errorf("Config Error: %s", err)
		errMsg = fmt.Sprintf("Config Error: %s", err)
		return "", errors.New(errMsg)
	}

	// using `bytes.NewReader(jsonBody)` here causes the server to respond with a 411 status.
	b := strings.NewReader(string(jsonBody))
	req1, err := http.Post(IndexServerAddress()+"/users/", "application/json; charset=utf-8", b)
	req1, err := http.Post(REGISTRY_SERVER+"/v1/users", "application/json; charset=utf-8", b)
	if err != nil {
		return "", fmt.Errorf("Server Error: %s", err)
		errMsg = fmt.Sprintf("Server Error: %s", err)
		return "", errors.New(errMsg)
	}

	reqStatusCode = req1.StatusCode
	defer req1.Body.Close()
	reqBody, err = ioutil.ReadAll(req1.Body)
	if err != nil {
		return "", fmt.Errorf("Server Error: [%#v] %s", reqStatusCode, err)
		errMsg = fmt.Sprintf("Server Error: [%#v] %s", reqStatusCode, err)
		return "", errors.New(errMsg)
	}

	if reqStatusCode == 201 {
		status = "Account created. Please use the confirmation link we sent" +
			" to your e-mail to activate it.\n"
		status = "Account Created\n"
		storeConfig = true
	} else if reqStatusCode == 403 {
		return "", fmt.Errorf("Login: Your account hasn't been activated. " +
			"Please check your e-mail for a confirmation link.")
	} else if reqStatusCode == 400 {
		if string(reqBody) == "\"Username or email already exists\"" {
			req, err := http.NewRequest("GET", IndexServerAddress()+"/users/", nil)
		// FIXME: This should be 'exists', not 'exist'. Need to change on the server first.
		if string(reqBody) == "Username or email already exist" {
			client := &http.Client{}
			req, err := http.NewRequest("GET", REGISTRY_SERVER+"/v1/users", nil)
			req.SetBasicAuth(authConfig.Username, authConfig.Password)
			resp, err := client.Do(req)
			if err != nil {
@@ -160,18 +148,17 @@ func Login(authConfig *AuthConfig) (string, error) {
			if resp.StatusCode == 200 {
				status = "Login Succeeded\n"
				storeConfig = true
			} else if resp.StatusCode == 401 {
				saveConfig(authConfig.rootPath, "", "")
				return "", fmt.Errorf("Wrong login/password, please try again")
			} else {
				return "", fmt.Errorf("Login: %s (Code: %d; Headers: %s)", body,
					resp.StatusCode, resp.Header)
				status = fmt.Sprintf("Login: %s", body)
				return "", errors.New(status)
			}
		} else {
			return "", fmt.Errorf("Registration: %s", reqBody)
			status = fmt.Sprintf("Registration: %s", reqBody)
			return "", errors.New(status)
		}
	} else {
		return "", fmt.Errorf("Unexpected status code [%d] : %s", reqStatusCode, reqBody)
		status = fmt.Sprintf("[%s] : %s", reqStatusCode, reqBody)
		return "", errors.New(status)
	}
	if storeConfig {
		authStr := EncodeAuth(authConfig)
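As the LoadConfig/saveConfig pair above shows, the `.dockercfg` file is just two `key = value` lines, where the auth value is EncodeAuth's base64 encoding of `username:password`. A minimal sketch of that round trip, using hypothetical credentials:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// EncodeAuth above base64-encodes "username:password"; these
	// credentials are made up for illustration.
	authStr := base64.StdEncoding.EncodeToString([]byte("janedoe:secret"))

	// saveConfig writes a two-line ".dockercfg" in exactly this shape,
	// which LoadConfig later splits on "\n" and " = ".
	fmt.Printf("auth = %s\nemail = %s\n", authStr, "janedoe@example.com")
}
```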
@@ -1,10 +1,6 @@
package auth

import (
	"crypto/rand"
	"encoding/hex"
	"os"
	"strings"
	"testing"
)

@@ -25,49 +21,3 @@ func TestEncodeAuth(t *testing.T) {
		t.Fatal("AuthString encoding isn't correct.")
	}
}

func TestLogin(t *testing.T) {
	os.Setenv("DOCKER_INDEX_URL", "https://indexstaging-docker.dotcloud.com")
	defer os.Setenv("DOCKER_INDEX_URL", "")
	authConfig := NewAuthConfig("unittester", "surlautrerivejetattendrai", "noise+unittester@dotcloud.com", "/tmp")
	status, err := Login(authConfig)
	if err != nil {
		t.Fatal(err)
	}
	if status != "Login Succeeded\n" {
		t.Fatalf("Expected status \"Login Succeeded\", found \"%s\" instead", status)
	}
}

func TestCreateAccount(t *testing.T) {
	os.Setenv("DOCKER_INDEX_URL", "https://indexstaging-docker.dotcloud.com")
	defer os.Setenv("DOCKER_INDEX_URL", "")
	tokenBuffer := make([]byte, 16)
	_, err := rand.Read(tokenBuffer)
	if err != nil {
		t.Fatal(err)
	}
	token := hex.EncodeToString(tokenBuffer)[:12]
	username := "ut" + token
	authConfig := NewAuthConfig(username, "test42", "docker-ut+"+token+"@example.com", "/tmp")
	status, err := Login(authConfig)
	if err != nil {
		t.Fatal(err)
	}
	expectedStatus := "Account created. Please use the confirmation link we sent" +
		" to your e-mail to activate it.\n"
	if status != expectedStatus {
		t.Fatalf("Expected status: \"%s\", found \"%s\" instead.", expectedStatus, status)
	}

	status, err = Login(authConfig)
	if err == nil {
		t.Fatalf("Expected error but found nil instead")
	}

	expectedError := "Login: Account is not Active"

	if !strings.Contains(err.Error(), expectedError) {
		t.Fatalf("Expected message \"%s\" but found \"%s\" instead", expectedError, err.Error())
	}
}
@@ -1,19 +1,5 @@
This directory contains material helpful for hacking on docker.

make hack
=========

Set up an Ubuntu 13.04 virtual machine for developers, including kernel 3.8 and buildbot. The environment is set up in a way that can be used through the usual go workflow and/or the root Makefile. You can either edit on your host, or inside the VM (using make ssh-dev), and run and test docker inside the VM.

dependencies: vagrant, virtualbox packages and python package requests

Buildbot
~~~~~~~~
========

Buildbot is a continuous integration system designed to automate the build/test cycle. By automatically rebuilding and testing the tree each time
@@ -25,3 +11,10 @@ machine in the background running a buildbot instance and adds a git
post-commit hook that automatically runs docker tests for you.

You can check your buildbot instance at http://192.168.33.21:8010/waterfall

Buildbot dependencies
---------------------

vagrant, virtualbox packages and python package requests
28 buildbot/Vagrantfile (vendored, new file)
@@ -0,0 +1,28 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

$BUILDBOT_IP = '192.168.33.21'

def v10(config)
  config.vm.box = "quantal64_3.5.0-25"
  config.vm.box_url = "http://get.docker.io/vbox/ubuntu/12.10/quantal64_3.5.0-25.box"
  config.vm.share_folder 'v-data', '/data/docker', File.dirname(__FILE__) + '/..'
  config.vm.network :hostonly, $BUILDBOT_IP

  # Ensure puppet is installed on the instance
  config.vm.provision :shell, :inline => 'apt-get -qq update; apt-get install -y puppet'

  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = '.'
    puppet.manifest_file = 'buildbot.pp'
    puppet.options = ['--templatedir','.']
  end
end

Vagrant::VERSION < '1.1.0' and Vagrant::Config.run do |config|
  v10(config)
end

Vagrant::VERSION >= '1.1.0' and Vagrant.configure('1') do |config|
  v10(config)
end
43 buildbot/buildbot-cfg/buildbot-cfg.sh (new executable file)
|
||||
#!/bin/bash
|
||||
|
||||
# Auto setup of buildbot configuration. Package installation is being done
|
||||
# on buildbot.pp
|
||||
# Dependencies: buildbot, buildbot-slave, supervisor
|
||||
|
||||
SLAVE_NAME='buildworker'
|
||||
SLAVE_SOCKET='localhost:9989'
|
||||
BUILDBOT_PWD='pass-docker'
|
||||
USER='vagrant'
|
||||
ROOT_PATH='/data/buildbot'
|
||||
DOCKER_PATH='/data/docker'
|
||||
BUILDBOT_CFG="$DOCKER_PATH/buildbot/buildbot-cfg"
|
||||
IP=$(grep BUILDBOT_IP /data/docker/buildbot/Vagrantfile | awk -F "'" '{ print $2; }')
|
||||
|
||||
function run { su $USER -c "$1"; }
|
||||
|
||||
export PATH=/bin:sbin:/usr/bin:/usr/sbin:/usr/local/bin
|
||||
|
||||
# Exit if buildbot has already been installed
|
||||
[ -d "$ROOT_PATH" ] && exit 0
|
||||
|
||||
# Setup buildbot
|
||||
run "mkdir -p ${ROOT_PATH}"
|
||||
cd ${ROOT_PATH}
|
||||
run "buildbot create-master master"
|
||||
run "cp $BUILDBOT_CFG/master.cfg master"
|
||||
run "sed -i 's/localhost/$IP/' master/master.cfg"
|
||||
run "buildslave create-slave slave $SLAVE_SOCKET $SLAVE_NAME $BUILDBOT_PWD"
|
||||
|
||||
# Allow buildbot subprocesses (docker tests) to properly run in containers,
|
||||
# in particular with docker -u
|
||||
run "sed -i 's/^umask = None/umask = 000/' ${ROOT_PATH}/slave/buildbot.tac"
|
||||
|
||||
# Setup supervisor
|
||||
cp $BUILDBOT_CFG/buildbot.conf /etc/supervisor/conf.d/buildbot.conf
|
||||
sed -i "s/^chmod=0700.*0700./chmod=0770\nchown=root:$USER/" /etc/supervisor/supervisord.conf
|
||||
kill -HUP `pgrep -f "/usr/bin/python /usr/bin/supervisord"`
|
||||
|
||||
# Add git hook
|
||||
cp $BUILDBOT_CFG/post-commit $DOCKER_PATH/.git/hooks
|
||||
sed -i "s/localhost/$IP/" $DOCKER_PATH/.git/hooks/post-commit
|
||||
|
||||
@@ -13,8 +13,8 @@ TEST_USER = 'buildbot' # Credential to authenticate build triggers
TEST_PWD = 'docker' # Credential to authenticate build triggers
BUILDER_NAME = 'docker'
BUILDPASSWORD = 'pass-docker' # Credential to authenticate buildworkers
GOPATH = '/data/docker'
DOCKER_PATH = '{0}/src/github.com/dotcloud/docker'.format(GOPATH)
DOCKER_PATH = '/data/docker'


c = BuildmasterConfig = {}

@@ -28,7 +28,10 @@ c['slavePortnum'] = PORT_MASTER
c['schedulers'] = [ForceScheduler(name='trigger',builderNames=[BUILDER_NAME])]

# Docker test command
test_cmd = "GOPATH={0} make -C {1} test".format(GOPATH,DOCKER_PATH)
test_cmd = """(
    cd {0}/..; rm -rf docker-tmp; git clone docker docker-tmp;
    cd docker-tmp; make test; exit_status=$?;
    cd ..; rm -rf docker-tmp; exit $exit_status)""".format(DOCKER_PATH)

# Builder
factory = BuildFactory()
32 buildbot/buildbot.pp (new file)
|
||||
node default {
|
||||
$USER = 'vagrant'
|
||||
$ROOT_PATH = '/data/buildbot'
|
||||
$DOCKER_PATH = '/data/docker'
|
||||
|
||||
exec {'apt_update': command => '/usr/bin/apt-get update' }
|
||||
Package { require => Exec['apt_update'] }
|
||||
group {'puppet': ensure => 'present'}
|
||||
|
||||
# Install dependencies
|
||||
Package { ensure => 'installed' }
|
||||
package { ['python-dev','python-pip','supervisor','lxc','bsdtar','git','golang']: }
|
||||
|
||||
file{[ '/data' ]:
|
||||
owner => $USER, group => $USER, ensure => 'directory' }
|
||||
|
||||
file {'/var/tmp/requirements.txt':
|
||||
content => template('requirements.txt') }
|
||||
|
||||
exec {'requirements':
|
||||
require => [ Package['python-dev'], Package['python-pip'],
|
||||
File['/var/tmp/requirements.txt'] ],
|
||||
cwd => '/var/tmp',
|
||||
command => "/bin/sh -c '(/usr/bin/pip install -r requirements.txt;
|
||||
rm /var/tmp/requirements.txt)'" }
|
||||
|
||||
exec {'buildbot-cfg-sh':
|
||||
require => [ Package['supervisor'], Exec['requirements']],
|
||||
path => '/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin',
|
||||
cwd => '/data',
|
||||
command => "$DOCKER_PATH/buildbot/buildbot-cfg/buildbot-cfg.sh" }
|
||||
}
|
||||
233 builder.go
@@ -1,118 +1,169 @@
package docker

import (
    "bufio"
    "fmt"
    "os"
    "path"
    "time"
    "io"
    "strings"
)

type Builder struct {
    runtime      *Runtime
    repositories *TagStore
    graph        *Graph

    config  *Config
    image   *Image
    runtime *Runtime
}

func NewBuilder(runtime *Runtime) *Builder {
    return &Builder{
        runtime:      runtime,
        graph:        runtime.graph,
        repositories: runtime.repositories,
        runtime:      runtime,
    }
}

func (builder *Builder) Create(config *Config) (*Container, error) {
    // Lookup image
    img, err := builder.repositories.LookupImage(config.Image)
func (builder *Builder) Run(image *Image, cmd ...string) (*Container, error) {
    // FIXME: pass a NopWriter instead of nil
    config, err := ParseRun(append([]string{"-d", image.Id}, cmd...), nil, builder.runtime.capabilities)
    if config.Image == "" {
        return nil, fmt.Errorf("Image not specified")
    }
    if len(config.Cmd) == 0 {
        return nil, fmt.Errorf("Command not specified")
    }
    if config.Tty {
        return nil, fmt.Errorf("The tty mode is not supported within the builder")
    }

    // Create new container
    container, err := builder.runtime.Create(config)
    if err != nil {
        return nil, err
    }

    if img.Config != nil {
        MergeConfig(config, img.Config)
    }

    if config.Cmd == nil || len(config.Cmd) == 0 {
        return nil, fmt.Errorf("No command specified")
    }

    // Generate id
    id := GenerateId()
    // Generate default hostname
    // FIXME: the lxc template no longer needs to set a default hostname
    if config.Hostname == "" {
        config.Hostname = id[:12]
    }

    container := &Container{
        // FIXME: we should generate the ID here instead of receiving it as an argument
        Id:              id,
        Created:         time.Now(),
        Path:            config.Cmd[0],
        Args:            config.Cmd[1:], //FIXME: de-duplicate from config
        Config:          config,
        Image:           img.Id, // Always use the resolved image id
        NetworkSettings: &NetworkSettings{},
        // FIXME: do we need to store this in the container?
        SysInitPath: sysInitPath,
    }
    container.root = builder.runtime.containerRoot(container.Id)
    // Step 1: create the container directory.
    // This doubles as a barrier to avoid race conditions.
    if err := os.Mkdir(container.root, 0700); err != nil {
        return nil, err
    }

    // If custom dns exists, then create a resolv.conf for the container
    if len(config.Dns) > 0 {
        container.ResolvConfPath = path.Join(container.root, "resolv.conf")
        f, err := os.Create(container.ResolvConfPath)
        if err != nil {
            return nil, err
        }
        defer f.Close()
        for _, dns := range config.Dns {
            if _, err := f.Write([]byte("nameserver " + dns + "\n")); err != nil {
                return nil, err
            }
        }
    } else {
        container.ResolvConfPath = "/etc/resolv.conf"
    }

    // Step 2: save the container json
    if err := container.ToDisk(); err != nil {
        return nil, err
    }
    // Step 3: register the container
    if err := builder.runtime.Register(container); err != nil {
    if err := container.Start(); err != nil {
        return nil, err
    }
    return container, nil
}

// Commit creates a new filesystem image from the current state of a container.
// The image can optionally be tagged into a repository
func (builder *Builder) Commit(container *Container, repository, tag, comment, author string, config *Config) (*Image, error) {
    // FIXME: freeze the container before copying it to avoid data corruption?
    // FIXME: this shouldn't be in commands.
    rwTar, err := container.ExportRw()
    if err != nil {
        return nil, err
func (builder *Builder) Commit(container *Container, repository, tag, comment, author string) (*Image, error) {
    return builder.runtime.Commit(container.Id, repository, tag, comment, author)
}

func (builder *Builder) clearTmp(containers, images map[string]struct{}) {
    for c := range containers {
        tmp := builder.runtime.Get(c)
        builder.runtime.Destroy(tmp)
        Debugf("Removing container %s", c)
    }
    // Create a new image from the container's base layers + a new layer from container changes
    img, err := builder.graph.Create(rwTar, container, comment, author, config)
    if err != nil {
        return nil, err
    for i := range images {
        builder.runtime.graph.Delete(i)
        Debugf("Removing image %s", i)
    }
    // Register the image if needed
    if repository != "" {
        if err := builder.repositories.Set(repository, tag, img.Id, true); err != nil {
            return img, err
        }

func (builder *Builder) Build(dockerfile io.Reader, stdout io.Writer) error {
    var (
        image, base   *Image
        tmpContainers map[string]struct{} = make(map[string]struct{})
        tmpImages     map[string]struct{} = make(map[string]struct{})
    )
    defer builder.clearTmp(tmpContainers, tmpImages)

    file := bufio.NewReader(dockerfile)
    for {
        line, err := file.ReadString('\n')
        if err != nil {
            if err == io.EOF {
                break
            }
            return err
        }
        line = strings.TrimSpace(line)
        // Skip comments and empty lines
        if len(line) == 0 || line[0] == '#' {
            continue
        }
        tmp := strings.SplitN(line, " ", 2)
        if len(tmp) != 2 {
            return fmt.Errorf("Invalid Dockerfile format")
        }
        switch tmp[0] {
        case "from":
            fmt.Fprintf(stdout, "FROM %s\n", tmp[1])
            image, err = builder.runtime.repositories.LookupImage(tmp[1])
            if err != nil {
                return err
            }
            break
        case "run":
            fmt.Fprintf(stdout, "RUN %s\n", tmp[1])
            if image == nil {
                return fmt.Errorf("Please provide a source image with `from` prior to run")
            }

            // Create the container and start it
            c, err := builder.Run(image, "/bin/sh", "-c", tmp[1])
            if err != nil {
                return err
            }
            tmpContainers[c.Id] = struct{}{}

            // Wait for it to finish
            if result := c.Wait(); result != 0 {
                return fmt.Errorf("!!! '%s' returned a non-zero exit code '%d'. Aborting.", tmp[1], result)
            }

            // Commit the container
            base, err = builder.Commit(c, "", "", "", "")
            if err != nil {
                return err
            }
            tmpImages[base.Id] = struct{}{}

            fmt.Fprintf(stdout, "===> %s\n", base.ShortId())
            break
        case "copy":
            if image == nil {
                return fmt.Errorf("Please provide a source image with `from` prior to copy")
            }
            tmp2 := strings.SplitN(tmp[1], " ", 2)
            if len(tmp2) != 2 {
                return fmt.Errorf("Invalid COPY format")
            }
            fmt.Fprintf(stdout, "COPY %s to %s in %s\n", tmp2[0], tmp2[1], base.ShortId())

            file, err := Download(tmp2[0], stdout)
            if err != nil {
                return err
            }
            defer file.Body.Close()

            c, err := builder.Run(base, "echo", "insert", tmp2[0], tmp2[1])
            if err != nil {
                return err
            }

            if err := c.Inject(file.Body, tmp2[1]); err != nil {
                return err
            }

            base, err = builder.Commit(c, "", "", "", "")
            if err != nil {
                return err
            }
            fmt.Fprintf(stdout, "===> %s\n", base.ShortId())
            break
        default:
            fmt.Fprintf(stdout, "Skipping unknown op %s\n", tmp[0])
        }
    }
    return img, nil
    if base != nil {
        // The build is successful, keep the temporary containers and images
        for i := range tmpImages {
            delete(tmpImages, i)
        }
        for i := range tmpContainers {
            delete(tmpContainers, i)
        }
        fmt.Fprintf(stdout, "Build finished. image id: %s\n", base.ShortId())
    } else {
        fmt.Fprintf(stdout, "An error occurred during the build\n")
    }
    return nil
}
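To make the entry point concrete, here is a minimal, hypothetical driver for the Build method shown above. It assumes a NewRuntime constructor like the one used by the test helpers elsewhere in this tree; the inlined Dockerfile text is illustrative.

package main

import (
    "os"
    "strings"

    "github.com/dotcloud/docker" // import path as used elsewhere in this tree
)

func main() {
    // Assumption: NewRuntime constructs a *docker.Runtime as in the test helpers.
    runtime, err := docker.NewRuntime()
    if err != nil {
        panic(err)
    }
    // A two-instruction Dockerfile: lower-case `from` and `run`,
    // matching the switch cases in Build above.
    dockerfile := "from base\nrun echo hello > /world\n"
    if err := docker.NewBuilder(runtime).Build(strings.NewReader(dockerfile), os.Stdout); err != nil {
        panic(err)
    }
}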
@@ -1,305 +0,0 @@
package docker

import (
    "bufio"
    "encoding/json"
    "fmt"
    "github.com/dotcloud/docker/utils"
    "io"
    "net/url"
    "os"
    "reflect"
    "strings"
)

type BuilderClient interface {
    Build(io.Reader) (string, error)
    CmdFrom(string) error
    CmdRun(string) error
}

type builderClient struct {
    cli *DockerCli

    image      string
    maintainer string
    config     *Config

    tmpContainers map[string]struct{}
    tmpImages     map[string]struct{}

    needCommit bool
}

func (b *builderClient) clearTmp(containers, images map[string]struct{}) {
    for i := range images {
        if _, _, err := b.cli.call("DELETE", "/images/"+i, nil); err != nil {
            utils.Debugf("%s", err)
        }
        utils.Debugf("Removing image %s", i)
    }
}

func (b *builderClient) CmdFrom(name string) error {
    obj, statusCode, err := b.cli.call("GET", "/images/"+name+"/json", nil)
    if statusCode == 404 {

        remote := name
        var tag string
        if strings.Contains(remote, ":") {
            remoteParts := strings.Split(remote, ":")
            tag = remoteParts[1]
            remote = remoteParts[0]
        }
        var out io.Writer
        if os.Getenv("DEBUG") != "" {
            out = os.Stdout
        } else {
            out = &utils.NopWriter{}
        }
        if err := b.cli.stream("POST", "/images/create?fromImage="+remote+"&tag="+tag, nil, out); err != nil {
            return err
        }
        obj, _, err = b.cli.call("GET", "/images/"+name+"/json", nil)
        if err != nil {
            return err
        }
    }
    if err != nil {
        return err
    }

    img := &ApiId{}
    if err := json.Unmarshal(obj, img); err != nil {
        return err
    }
    b.image = img.Id
    utils.Debugf("Using image %s", b.image)
    return nil
}

func (b *builderClient) CmdMaintainer(name string) error {
    b.needCommit = true
    b.maintainer = name
    return nil
}

func (b *builderClient) CmdRun(args string) error {
    if b.image == "" {
        return fmt.Errorf("Please provide a source image with `from` prior to run")
    }
    config, _, err := ParseRun([]string{b.image, "/bin/sh", "-c", args}, nil)
    if err != nil {
        return err
    }

    cmd, env := b.config.Cmd, b.config.Env
    b.config.Cmd = nil
    MergeConfig(b.config, config)

    body, statusCode, err := b.cli.call("POST", "/images/getCache", &ApiImageConfig{Id: b.image, Config: b.config})
    if err != nil {
        if statusCode != 404 {
            return err
        }
    }
    if statusCode != 404 {
        apiId := &ApiId{}
        if err := json.Unmarshal(body, apiId); err != nil {
            return err
        }
        utils.Debugf("Use cached version")
        b.image = apiId.Id
        return nil
    }
    cid, err := b.run()
    if err != nil {
        return err
    }
    b.config.Cmd, b.config.Env = cmd, env
    return b.commit(cid)
}

func (b *builderClient) CmdEnv(args string) error {
    b.needCommit = true
    tmp := strings.SplitN(args, " ", 2)
    if len(tmp) != 2 {
        return fmt.Errorf("Invalid ENV format")
    }
    key := strings.Trim(tmp[0], " ")
    value := strings.Trim(tmp[1], " ")

    for i, elem := range b.config.Env {
        if strings.HasPrefix(elem, key+"=") {
            b.config.Env[i] = key + "=" + value
            return nil
        }
    }
    b.config.Env = append(b.config.Env, key+"="+value)
    return nil
}

func (b *builderClient) CmdCmd(args string) error {
    b.needCommit = true
    var cmd []string
    if err := json.Unmarshal([]byte(args), &cmd); err != nil {
        utils.Debugf("Error unmarshalling: %s, using /bin/sh -c", err)
        b.config.Cmd = []string{"/bin/sh", "-c", args}
    } else {
        b.config.Cmd = cmd
    }
    return nil
}

func (b *builderClient) CmdExpose(args string) error {
    ports := strings.Split(args, " ")
    b.config.PortSpecs = append(ports, b.config.PortSpecs...)
    return nil
}

func (b *builderClient) CmdInsert(args string) error {
    // FIXME: Reimplement this once the remove_hijack branch gets merged.
    // We need to retrieve the resulting Id
    return fmt.Errorf("INSERT not implemented")
}

func (b *builderClient) run() (string, error) {
    if b.image == "" {
        return "", fmt.Errorf("Please provide a source image with `from` prior to run")
    }
    b.config.Image = b.image
    body, _, err := b.cli.call("POST", "/containers/create", b.config)
    if err != nil {
        return "", err
    }

    apiRun := &ApiRun{}
    if err := json.Unmarshal(body, apiRun); err != nil {
        return "", err
    }
    for _, warning := range apiRun.Warnings {
        fmt.Fprintln(os.Stderr, "WARNING: ", warning)
    }

    // start the container
    _, _, err = b.cli.call("POST", "/containers/"+apiRun.Id+"/start", nil)
    if err != nil {
        return "", err
    }
    b.tmpContainers[apiRun.Id] = struct{}{}

    // Wait for it to finish
    body, _, err = b.cli.call("POST", "/containers/"+apiRun.Id+"/wait", nil)
    if err != nil {
        return "", err
    }
    apiWait := &ApiWait{}
    if err := json.Unmarshal(body, apiWait); err != nil {
        return "", err
    }
    if apiWait.StatusCode != 0 {
        return "", fmt.Errorf("The command %v returned a non-zero code: %d", b.config.Cmd, apiWait.StatusCode)
    }

    return apiRun.Id, nil
}

func (b *builderClient) commit(id string) error {
    if b.image == "" {
        return fmt.Errorf("Please provide a source image with `from` prior to run")
    }
    b.config.Image = b.image

    if id == "" {
        cmd := b.config.Cmd
        b.config.Cmd = []string{"true"}
        if cid, err := b.run(); err != nil {
            return err
        } else {
            id = cid
        }
        b.config.Cmd = cmd
    }

    // Commit the container
    v := url.Values{}
    v.Set("container", id)
    v.Set("author", b.maintainer)

    body, _, err := b.cli.call("POST", "/commit?"+v.Encode(), b.config)
    if err != nil {
        return err
    }
    apiId := &ApiId{}
    if err := json.Unmarshal(body, apiId); err != nil {
        return err
    }
    b.tmpImages[apiId.Id] = struct{}{}
    b.image = apiId.Id
    b.needCommit = false
    return nil
}

func (b *builderClient) Build(dockerfile io.Reader) (string, error) {
    defer b.clearTmp(b.tmpContainers, b.tmpImages)
    file := bufio.NewReader(dockerfile)
    for {
        line, err := file.ReadString('\n')
        if err != nil {
            if err == io.EOF {
                break
            }
            return "", err
        }
        line = strings.Replace(strings.TrimSpace(line), "\t", " ", 1)
        // Skip comments and empty lines
        if len(line) == 0 || line[0] == '#' {
            continue
        }
        tmp := strings.SplitN(line, " ", 2)
        if len(tmp) != 2 {
            return "", fmt.Errorf("Invalid Dockerfile format")
        }
        instruction := strings.ToLower(strings.Trim(tmp[0], " "))
        arguments := strings.Trim(tmp[1], " ")

        fmt.Printf("%s %s (%s)\n", strings.ToUpper(instruction), arguments, b.image)

        method, exists := reflect.TypeOf(b).MethodByName("Cmd" + strings.ToUpper(instruction[:1]) + strings.ToLower(instruction[1:]))
        if !exists {
            fmt.Printf("Skipping unknown instruction %s\n", strings.ToUpper(instruction))
        }
        ret := method.Func.Call([]reflect.Value{reflect.ValueOf(b), reflect.ValueOf(arguments)})[0].Interface()
        if ret != nil {
            return "", ret.(error)
        }

        fmt.Printf("===> %v\n", b.image)
    }
    if b.needCommit {
        if err := b.commit(""); err != nil {
            return "", err
        }
    }
    if b.image != "" {
        // The build is successful, keep the temporary containers and images
        for i := range b.tmpImages {
            delete(b.tmpImages, i)
        }
        for i := range b.tmpContainers {
            delete(b.tmpContainers, i)
        }
        fmt.Printf("Build finished. image id: %s\n", b.image)
        return b.image, nil
    }
    return "", fmt.Errorf("An error occurred during the build\n")
}

func NewBuilderClient(addr string, port int) BuilderClient {
    return &builderClient{
        cli:           NewDockerCli(addr, port),
        config:        &Config{},
        tmpContainers: make(map[string]struct{}),
        tmpImages:     make(map[string]struct{}),
    }
}
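The Build loop above routes each Dockerfile instruction to a method named "Cmd" plus the capitalized instruction via reflection. A self-contained sketch of that dispatch pattern (the demo type and function names are hypothetical; note that the sketch returns an error for unknown instructions, whereas the removed code only printed a message and then fell through to an invalid method value):

package main

import (
    "fmt"
    "reflect"
    "strings"
)

// demo mirrors the reflection dispatch in builderClient.Build above:
// the instruction "run" is routed to a method named "CmdRun".
type demo struct{}

func (d *demo) CmdRun(args string) error  { fmt.Println("run:", args); return nil }
func (d *demo) CmdFrom(args string) error { fmt.Println("from:", args); return nil }

func dispatch(d *demo, instruction, arguments string) error {
    name := "Cmd" + strings.ToUpper(instruction[:1]) + strings.ToLower(instruction[1:])
    method, exists := reflect.TypeOf(d).MethodByName(name)
    if !exists {
        // Returning here avoids calling a zero method value.
        return fmt.Errorf("unknown instruction %s", strings.ToUpper(instruction))
    }
    ret := method.Func.Call([]reflect.Value{reflect.ValueOf(d), reflect.ValueOf(arguments)})[0].Interface()
    if ret != nil {
        return ret.(error)
    }
    return nil
}

func main() {
    _ = dispatch(&demo{}, "from", "base")
    _ = dispatch(&demo{}, "run", "echo hello")
}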
1426 commands.go (file diff suppressed because it is too large)
@@ -3,8 +3,9 @@ package docker
import (
    "bufio"
    "fmt"
    "github.com/dotcloud/docker/rcli"
    "io"
    _ "io/ioutil"
    "io/ioutil"
    "strings"
    "testing"
    "time"
@@ -58,7 +59,6 @@ func assertPipe(input, output string, r io.Reader, w io.Writer, count int) error
    return nil
}

/*TODO
func cmdWait(srv *Server, container *Container) error {
    stdout, stdoutPipe := io.Pipe()

@@ -73,77 +73,6 @@ func cmdWait(srv *Server, container *Container) error {
    return closeWrap(stdout, stdoutPipe)
}

func cmdImages(srv *Server, args ...string) (string, error) {
    stdout, stdoutPipe := io.Pipe()

    go func() {
        if err := srv.CmdImages(nil, stdoutPipe, args...); err != nil {
            return
        }

        // force the pipe closed, so that the code below gets an EOF
        stdoutPipe.Close()
    }()

    output, err := ioutil.ReadAll(stdout)
    if err != nil {
        return "", err
    }

    // Cleanup pipes
    return string(output), closeWrap(stdout, stdoutPipe)
}

// TestImages checks that 'docker images' displays information correctly
func TestImages(t *testing.T) {

    runtime, err := newTestRuntime()
    if err != nil {
        t.Fatal(err)
    }
    defer nuke(runtime)

    srv := &Server{runtime: runtime}

    output, err := cmdImages(srv)

    if !strings.Contains(output, "REPOSITORY") {
        t.Fatal("'images' should have a header")
    }
    if !strings.Contains(output, "docker-ut") {
        t.Fatal("'images' should show the docker-ut image")
    }
    if !strings.Contains(output, "e9aa60c60128") {
        t.Fatal("'images' should show the docker-ut image id")
    }

    output, err = cmdImages(srv, "-q")

    if strings.Contains(output, "REPOSITORY") {
        t.Fatal("'images -q' should not have a header")
    }
    if strings.Contains(output, "docker-ut") {
        t.Fatal("'images' should not show the docker-ut image name")
    }
    if !strings.Contains(output, "e9aa60c60128") {
        t.Fatal("'images' should show the docker-ut image id")
    }

    output, err = cmdImages(srv, "-viz")

    if !strings.HasPrefix(output, "digraph docker {") {
        t.Fatal("'images -v' should start with the dot header")
    }
    if !strings.HasSuffix(output, "}\n") {
        t.Fatal("'images -v' should end with a '}'")
    }
    if !strings.Contains(output, "base -> \"e9aa60c60128\" [style=invis]") {
        t.Fatal("'images -v' should have the docker-ut image id node")
    }

    // todo: add checks for -a
}

// TestRunHostname checks that 'docker run -h' correctly sets a custom hostname
func TestRunHostname(t *testing.T) {
    runtime, err := newTestRuntime()
@@ -410,10 +339,9 @@ func TestAttachDisconnect(t *testing.T) {

    srv := &Server{runtime: runtime}

    container, err := NewBuilder(runtime).Create(
    container, err := runtime.Create(
        &Config{
            Image:     GetTestImage(runtime).Id,
            CpuShares: 1000,
            Memory:    33554432,
            Cmd:       []string{"/bin/cat"},
            OpenStdin: true,
@@ -466,6 +394,4 @@ func TestAttachDisconnect(t *testing.T) {
    // Try to avoid the timeout in destroy. Best effort, don't check error
    cStdin, _ := container.StdinPipe()
    cStdin.Close()
    container.Wait()
}
*/
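The removed cmdImages helper above captures the output of a writer-based command through an io.Pipe. A generalized, self-contained sketch of that capture pattern (the helper name is made up for illustration):

package main

import (
    "fmt"
    "io"
    "io/ioutil"
)

// captureOutput runs a function that writes to an io.Writer in a goroutine,
// closes the pipe when it is done, and reads everything from the other end,
// just as cmdImages does above.
func captureOutput(f func(w io.Writer) error) (string, error) {
    stdout, stdoutPipe := io.Pipe()
    go func() {
        // Closing the write end forces ReadAll below to see EOF.
        defer stdoutPipe.Close()
        f(stdoutPipe)
    }()
    output, err := ioutil.ReadAll(stdout)
    if err != nil {
        return "", err
    }
    return string(output), nil
}

func main() {
    out, err := captureOutput(func(w io.Writer) error {
        _, err := fmt.Fprintln(w, "REPOSITORY TAG ID")
        return err
    })
    fmt.Printf("%q %v\n", out, err)
}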
158 container.go
@@ -2,9 +2,8 @@ package docker

import (
    "encoding/json"
    "flag"
    "fmt"
    "github.com/dotcloud/docker/utils"
    "github.com/dotcloud/docker/rcli"
    "github.com/kr/pty"
    "io"
    "io/ioutil"
@@ -40,8 +39,8 @@ type Container struct {
    ResolvConfPath string

    cmd    *exec.Cmd
    stdout *utils.WriteBroadcaster
    stderr *utils.WriteBroadcaster
    stdout *writeBroadcaster
    stderr *writeBroadcaster
    stdin     io.ReadCloser
    stdinPipe io.WriteCloser
    ptyMaster io.Closer
@@ -49,7 +48,6 @@ type Container struct {
    runtime *Runtime

    waitLock chan struct{}
    Volumes  map[string]string
}

type Config struct {
@@ -57,7 +55,6 @@ type Config struct {
    User       string
    Memory     int64 // Memory limit (in bytes)
    MemorySwap int64 // Total memory usage (memory + swap); set `-1' to disable swap
    CpuShares  int64 // CPU shares (relative weight vs. other containers)
    AttachStdin  bool
    AttachStdout bool
    AttachStderr bool
@@ -69,12 +66,10 @@ type Config struct {
    Cmd   []string
    Dns   []string
    Image string // Name of the image as it was passed by the operator (eg. could be symbolic)
    Volumes     map[string]struct{}
    VolumesFrom string
}

func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet, error) {
    cmd := Subcmd("run", "[OPTIONS] IMAGE COMMAND [ARG...]", "Run a command in a new container")
func ParseRun(args []string, stdout io.Writer, capabilities *Capabilities) (*Config, error) {
    cmd := rcli.Subcmd(stdout, "run", "[OPTIONS] IMAGE COMMAND [ARG...]", "Run a command in a new container")
    if len(args) > 0 && args[0] != "--help" {
        cmd.SetOutput(ioutil.Discard)
    }
@@ -88,13 +83,11 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet
    flTty := cmd.Bool("t", false, "Allocate a pseudo-tty")
    flMemory := cmd.Int64("m", 0, "Memory limit (in bytes)")

    if capabilities != nil && *flMemory > 0 && !capabilities.MemoryLimit {
        //fmt.Fprintf(stdout, "WARNING: Your kernel does not support memory limit capabilities. Limitation discarded.\n")
    if *flMemory > 0 && !capabilities.MemoryLimit {
        fmt.Fprintf(stdout, "WARNING: Your kernel does not support memory limit capabilities. Limitation discarded.\n")
        *flMemory = 0
    }

    flCpuShares := cmd.Int64("c", 0, "CPU shares (relative weight)")

    var flPorts ListOpts
    cmd.Var(&flPorts, "p", "Expose a container's port to the host (use 'docker port' to see the actual mapping)")

@@ -104,16 +97,11 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet
    var flDns ListOpts
    cmd.Var(&flDns, "dns", "Set custom dns servers")

    flVolumes := NewPathOpts()
    cmd.Var(flVolumes, "v", "Attach a data volume")

    flVolumesFrom := cmd.String("volumes-from", "", "Mount volumes from the specified container")

    if err := cmd.Parse(args); err != nil {
        return nil, cmd, err
        return nil, err
    }
    if *flDetach && len(flAttach) > 0 {
        return nil, cmd, fmt.Errorf("Conflicting options: -a and -d")
        return nil, fmt.Errorf("Conflicting options: -a and -d")
    }
    // If neither -d nor -a are set, attach to everything by default
    if len(flAttach) == 0 && !*flDetach {
@@ -141,7 +129,6 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet
    Tty:       *flTty,
    OpenStdin: *flStdin,
    Memory:    *flMemory,
    CpuShares:    *flCpuShares,
    AttachStdin:  flAttach.Get("stdin"),
    AttachStdout: flAttach.Get("stdout"),
    AttachStderr: flAttach.Get("stderr"),
@@ -149,12 +136,10 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet
    Cmd:   runCmd,
    Dns:   flDns,
    Image: image,
    Volumes:     flVolumes,
    VolumesFrom: *flVolumesFrom,
    }

    if capabilities != nil && *flMemory > 0 && !capabilities.SwapLimit {
        //fmt.Fprintf(stdout, "WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.\n")
    if *flMemory > 0 && !capabilities.SwapLimit {
        fmt.Fprintf(stdout, "WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.\n")
        config.MemorySwap = -1
    }

@@ -162,7 +147,7 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet
    if config.OpenStdin && config.AttachStdin {
        config.StdinOnce = true
    }
    return config, cmd, nil
    return config, nil
}
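The hunk above shows two ParseRun signatures; a hedged sketch of calling the three-argument variant, where kernel-capability warnings go to the supplied writer instead of being silently dropped. The Capabilities values are made up for illustration; only the field names that appear in the diff are used.

package main

import (
    "fmt"
    "os"

    "github.com/dotcloud/docker" // import path used elsewhere in this tree
)

func main() {
    // Assumption: both limits are available, as probed at daemon startup.
    caps := &docker.Capabilities{MemoryLimit: true, SwapLimit: true}
    config, err := docker.ParseRun([]string{"-t", "-m", "33554432", "base", "/bin/sh"}, os.Stdout, caps)
    if err != nil {
        panic(err)
    }
    fmt.Printf("image=%s cmd=%v tty=%v\n", config.Image, config.Cmd, config.Tty)
}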

type NetworkSettings struct {
@@ -252,9 +237,9 @@ func (container *Container) startPty() error {
    // Copy the PTYs to our broadcasters
    go func() {
        defer container.stdout.CloseWriters()
        utils.Debugf("[startPty] Begin of stdout pipe")
        Debugf("[startPty] Begin of stdout pipe")
        io.Copy(container.stdout, ptyMaster)
        utils.Debugf("[startPty] End of stdout pipe")
        Debugf("[startPty] End of stdout pipe")
    }()

    // stdin
@@ -263,9 +248,9 @@ func (container *Container) startPty() error {
    container.cmd.SysProcAttr = &syscall.SysProcAttr{Setctty: true, Setsid: true}
    go func() {
        defer container.stdin.Close()
        utils.Debugf("[startPty] Begin of stdin pipe")
        Debugf("[startPty] Begin of stdin pipe")
        io.Copy(ptyMaster, container.stdin)
        utils.Debugf("[startPty] End of stdin pipe")
        Debugf("[startPty] End of stdin pipe")
    }()
}
if err := container.cmd.Start(); err != nil {
@@ -285,9 +270,9 @@ func (container *Container) start() error {
    }
    go func() {
        defer stdin.Close()
        utils.Debugf("Begin of stdin pipe [start]")
        Debugf("Begin of stdin pipe [start]")
        io.Copy(stdin, container.stdin)
        utils.Debugf("End of stdin pipe [start]")
        Debugf("End of stdin pipe [start]")
    }()
}
return container.cmd.Start()
@@ -304,8 +289,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
    errors <- err
} else {
    go func() {
        utils.Debugf("[start] attach stdin\n")
        defer utils.Debugf("[end] attach stdin\n")
        Debugf("[start] attach stdin\n")
        defer Debugf("[end] attach stdin\n")
        // No matter what, when stdin is closed (io.Copy unblocks), close stdout and stderr
        if cStdout != nil {
            defer cStdout.Close()
@@ -317,12 +302,12 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
            defer cStdin.Close()
        }
        if container.Config.Tty {
            _, err = utils.CopyEscapable(cStdin, stdin)
            _, err = CopyEscapable(cStdin, stdin)
        } else {
            _, err = io.Copy(cStdin, stdin)
        }
        if err != nil {
            utils.Debugf("[error] attach stdin: %s\n", err)
            Debugf("[error] attach stdin: %s\n", err)
        }
        // Discard error, expecting pipe error
        errors <- nil
@@ -336,8 +321,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
    } else {
        cStdout = p
        go func() {
            utils.Debugf("[start] attach stdout\n")
            defer utils.Debugf("[end] attach stdout\n")
            Debugf("[start] attach stdout\n")
            defer Debugf("[end] attach stdout\n")
            // If we are in StdinOnce mode, then close stdin
            if container.Config.StdinOnce {
                if stdin != nil {
@@ -349,7 +334,7 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
            }
            _, err := io.Copy(stdout, cStdout)
            if err != nil {
                utils.Debugf("[error] attach stdout: %s\n", err)
                Debugf("[error] attach stdout: %s\n", err)
            }
            errors <- err
        }()
@@ -362,8 +347,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
    } else {
        cStderr = p
        go func() {
            utils.Debugf("[start] attach stderr\n")
            defer utils.Debugf("[end] attach stderr\n")
            Debugf("[start] attach stderr\n")
            defer Debugf("[end] attach stderr\n")
            // If we are in StdinOnce mode, then close stdin
            if container.Config.StdinOnce {
                if stdin != nil {
@@ -375,13 +360,13 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
            }
            _, err := io.Copy(stderr, cStderr)
            if err != nil {
                utils.Debugf("[error] attach stderr: %s\n", err)
                Debugf("[error] attach stderr: %s\n", err)
            }
            errors <- err
        }()
    }
}
return utils.Go(func() error {
return Go(func() error {
    if cStdout != nil {
        defer cStdout.Close()
    }
@@ -391,14 +376,14 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
    // FIXME: how do we clean up the stdin goroutine without the unwanted side effect
    // of closing the passed stdin? Add an intermediary io.Pipe?
    for i := 0; i < nJobs; i += 1 {
        utils.Debugf("Waiting for job %d/%d\n", i+1, nJobs)
        Debugf("Waiting for job %d/%d\n", i+1, nJobs)
        if err := <-errors; err != nil {
            utils.Debugf("Job %d returned error %s. Aborting all jobs\n", i+1, err)
            Debugf("Job %d returned error %s. Aborting all jobs\n", i+1, err)
            return err
        }
        utils.Debugf("Job %d completed successfully\n", i+1)
        Debugf("Job %d completed successfully\n", i+1)
    }
    utils.Debugf("All jobs completed successfully\n")
    Debugf("All jobs completed successfully\n")
    return nil
})
}
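Attach returns through Go(...) (utils.Go in the other version of this hunk), a helper that runs a function concurrently and exposes its error on a channel. Its body is not part of this diff, so the following is a plausible sketch rather than the project's implementation:

package main

import (
    "errors"
    "fmt"
)

// Go runs f concurrently and delivers its error (or nil) on the returned
// channel. The buffered channel lets the goroutine exit even if nobody
// ever reads the result.
func Go(f func() error) chan error {
    ch := make(chan error, 1)
    go func() {
        ch <- f()
    }()
    return ch
}

func main() {
    errs := Go(func() error { return errors.New("boom") })
    fmt.Println(<-errs)
}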
@@ -426,40 +411,10 @@ func (container *Container) Start() error {
    log.Printf("WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.\n")
    container.Config.MemorySwap = -1
}
container.Volumes = make(map[string]string)

// Create the requested volumes
for volPath := range container.Config.Volumes {
    if c, err := container.runtime.volumes.Create(nil, container, "", "", nil); err != nil {
        return err
    } else {
        if err := os.MkdirAll(path.Join(container.RootfsPath(), volPath), 0755); err != nil {
            return err
        }
        container.Volumes[volPath] = c.Id
    }
}

if container.Config.VolumesFrom != "" {
    c := container.runtime.Get(container.Config.VolumesFrom)
    if c == nil {
        return fmt.Errorf("Container %s not found. Impossible to mount its volumes", container.Config.VolumesFrom)
    }
    for volPath, id := range c.Volumes {
        if _, exists := container.Volumes[volPath]; exists {
            return fmt.Errorf("The requested volume %s overlaps one of the volumes of the container %s", volPath, c.Id)
        }
        if err := os.MkdirAll(path.Join(container.RootfsPath(), volPath), 0755); err != nil {
            return err
        }
        container.Volumes[volPath] = id
    }
}

if err := container.generateLXCConfig(); err != nil {
    return err
}

params := []string{
    "-n", container.Id,
    "-f", container.lxcConfigPath(),
@@ -518,7 +473,6 @@ func (container *Container) Start() error {

    // Init the lock
    container.waitLock = make(chan struct{})

    container.ToDisk()
    go container.monitor()
    return nil
@@ -556,13 +510,13 @@ func (container *Container) StdinPipe() (io.WriteCloser, error) {
func (container *Container) StdoutPipe() (io.ReadCloser, error) {
    reader, writer := io.Pipe()
    container.stdout.AddWriter(writer)
    return utils.NewBufReader(reader), nil
    return newBufReader(reader), nil
}

func (container *Container) StderrPipe() (io.ReadCloser, error) {
    reader, writer := io.Pipe()
    container.stderr.AddWriter(writer)
    return utils.NewBufReader(reader), nil
    return newBufReader(reader), nil
}

func (container *Container) allocateNetwork() error {
@@ -610,20 +564,20 @@ func (container *Container) waitLxc() error {

func (container *Container) monitor() {
    // Wait for the program to exit
    utils.Debugf("Waiting for process")
    Debugf("Waiting for process")

    // If the command does not exist, try to wait via lxc
    if container.cmd == nil {
        if err := container.waitLxc(); err != nil {
            utils.Debugf("%s: Process: %s", container.Id, err)
            Debugf("%s: Process: %s", container.Id, err)
        }
    } else {
        if err := container.cmd.Wait(); err != nil {
            // Discard the error as any signals or non 0 returns will generate an error
            utils.Debugf("%s: Process: %s", container.Id, err)
            Debugf("%s: Process: %s", container.Id, err)
        }
    }
    utils.Debugf("Process finished")
    Debugf("Process finished")

    var exitCode int = -1
    if container.cmd != nil {
@@ -634,19 +588,19 @@ func (container *Container) monitor() {
    container.releaseNetwork()
    if container.Config.OpenStdin {
        if err := container.stdin.Close(); err != nil {
            utils.Debugf("%s: Error close stdin: %s", container.Id, err)
            Debugf("%s: Error close stdin: %s", container.Id, err)
        }
    }
    if err := container.stdout.CloseWriters(); err != nil {
        utils.Debugf("%s: Error close stdout: %s", container.Id, err)
        Debugf("%s: Error close stdout: %s", container.Id, err)
    }
    if err := container.stderr.CloseWriters(); err != nil {
        utils.Debugf("%s: Error close stderr: %s", container.Id, err)
        Debugf("%s: Error close stderr: %s", container.Id, err)
    }

    if container.ptyMaster != nil {
        if err := container.ptyMaster.Close(); err != nil {
            utils.Debugf("%s: Error closing Pty master: %s", container.Id, err)
            Debugf("%s: Error closing Pty master: %s", container.Id, err)
        }
    }

@@ -758,14 +712,6 @@ func (container *Container) ExportRw() (Archive, error) {
    return Tar(container.rwPath(), Uncompressed)
}

func (container *Container) RwChecksum() (string, error) {
    rwData, err := Tar(container.rwPath(), Xz)
    if err != nil {
        return "", err
    }
    return utils.HashData(rwData)
}

func (container *Container) Export() (Archive, error) {
    if err := container.EnsureMounted(); err != nil {
        return nil, err
@@ -834,7 +780,7 @@ func (container *Container) Unmount() error {
// In case of a collision a lookup with Runtime.Get() will fail, and the caller
// will need to use a longer prefix, or the full-length container Id.
func (container *Container) ShortId() string {
    return utils.TruncateId(container.Id)
    return TruncateId(container.Id)
}

func (container *Container) logPath(name string) string {
@@ -858,22 +804,6 @@ func (container *Container) RootfsPath() string {
    return path.Join(container.root, "rootfs")
}

func (container *Container) GetVolumes() (map[string]string, error) {
    ret := make(map[string]string)
    for volPath, id := range container.Volumes {
        volume, err := container.runtime.volumes.Get(id)
        if err != nil {
            return nil, err
        }
        root, err := volume.root()
        if err != nil {
            return nil, err
        }
        ret[volPath] = path.Join(root, "layer")
    }
    return ret, nil
}

func (container *Container) rwPath() string {
    return path.Join(container.root, "rw")
}
@@ -20,10 +20,11 @@ func TestIdFormat(t *testing.T) {
    t.Fatal(err)
}
defer nuke(runtime)
container1, err := NewBuilder(runtime).Create(
container1, err := runtime.Create(
    &Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"/bin/sh", "-c", "echo hello world"},
        Image:  GetTestImage(runtime).Id,
        Cmd:    []string{"/bin/sh", "-c", "echo hello world"},
        Memory: 33554432,
    },
)
if err != nil {
@@ -44,11 +45,12 @@ func TestMultipleAttachRestart(t *testing.T) {
    t.Fatal(err)
}
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
container, err := runtime.Create(
    &Config{
        Image: GetTestImage(runtime).Id,
        Cmd: []string{"/bin/sh", "-c",
            "i=1; while [ $i -le 5 ]; do i=`expr $i + 1`; echo hello; done"},
        Memory: 33554432,
    },
)
if err != nil {
@@ -114,8 +116,8 @@ func TestMultipleAttachRestart(t *testing.T) {
if err := container.Start(); err != nil {
    t.Fatal(err)
}

setTimeout(t, "Timeout reading from the process", 3*time.Second, func() {
timeout := make(chan bool)
go func() {
    l1, err = bufio.NewReader(stdout1).ReadString('\n')
    if err != nil {
        t.Fatal(err)
@@ -137,8 +139,15 @@ func TestMultipleAttachRestart(t *testing.T) {
    if strings.Trim(l3, " \r\n") != "hello" {
        t.Fatalf("Unexpected output. Expected [%s], received [%s]", "hello", l3)
    }
})
    container.Wait()
    timeout <- false
}()
go func() {
    time.Sleep(3 * time.Second)
    timeout <- true
}()
if <-timeout {
    t.Fatalf("Timeout reading from the process")
}
}
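The hunk above trades a setTimeout test helper for an inline channel-based timeout. A self-contained sketch of that idiom (the wrapper name is illustrative; note the buffered channel, which keeps the losing goroutine from blocking forever, a leak the unbuffered original has):

package main

import (
    "fmt"
    "time"
)

// withTimeout mirrors the inline pattern above: one goroutine runs the work
// and reports false, another reports true after the deadline; whichever
// sends first wins.
func withTimeout(d time.Duration, work func()) (timedOut bool) {
    ch := make(chan bool, 2)
    go func() {
        work()
        ch <- false
    }()
    go func() {
        time.Sleep(d)
        ch <- true
    }()
    return <-ch
}

func main() {
    if withTimeout(100*time.Millisecond, func() { time.Sleep(time.Second) }) {
        fmt.Println("timed out")
    }
}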
func TestDiff(t *testing.T) {
@@ -148,10 +157,8 @@ func TestDiff(t *testing.T) {
    }
    defer nuke(runtime)

    builder := NewBuilder(runtime)

    // Create a container and remove a file
    container1, err := builder.Create(
    container1, err := runtime.Create(
        &Config{
            Image: GetTestImage(runtime).Id,
            Cmd:   []string{"/bin/rm", "/etc/passwd"},
@@ -186,13 +193,13 @@ func TestDiff(t *testing.T) {
    if err != nil {
        t.Error(err)
    }
    img, err := runtime.graph.Create(rwTar, container1, "unit test committed image - diff", "", nil)
    img, err := runtime.graph.Create(rwTar, container1, "unit test committed image - diff", "")
    if err != nil {
        t.Error(err)
    }

    // Create a new container from the committed image
    container2, err := builder.Create(
    container2, err := runtime.Create(
        &Config{
            Image: img.Id,
            Cmd:   []string{"cat", "/etc/passwd"},
@@ -219,98 +226,17 @@ func TestDiff(t *testing.T) {
    }
}

func TestCommitAutoRun(t *testing.T) {
    runtime, err := newTestRuntime()
    if err != nil {
        t.Fatal(err)
    }
    defer nuke(runtime)

    builder := NewBuilder(runtime)
    container1, err := builder.Create(
        &Config{
            Image: GetTestImage(runtime).Id,
            Cmd:   []string{"/bin/sh", "-c", "echo hello > /world"},
        },
    )
    if err != nil {
        t.Fatal(err)
    }
    defer runtime.Destroy(container1)

    if container1.State.Running {
        t.Errorf("Container shouldn't be running")
    }
    if err := container1.Run(); err != nil {
        t.Fatal(err)
    }
    if container1.State.Running {
        t.Errorf("Container shouldn't be running")
    }

    rwTar, err := container1.ExportRw()
    if err != nil {
        t.Error(err)
    }
    img, err := runtime.graph.Create(rwTar, container1, "unit test committed image", "", &Config{Cmd: []string{"cat", "/world"}})
    if err != nil {
        t.Error(err)
    }

    // FIXME: Make a TestCommit that stops here and checks docker.root/layers/img.id/world
    container2, err := builder.Create(
        &Config{
            Image: img.Id,
        },
    )
    if err != nil {
        t.Fatal(err)
    }
    defer runtime.Destroy(container2)
    stdout, err := container2.StdoutPipe()
    if err != nil {
        t.Fatal(err)
    }
    stderr, err := container2.StderrPipe()
    if err != nil {
        t.Fatal(err)
    }
    if err := container2.Start(); err != nil {
        t.Fatal(err)
    }
    container2.Wait()
    output, err := ioutil.ReadAll(stdout)
    if err != nil {
        t.Fatal(err)
    }
    output2, err := ioutil.ReadAll(stderr)
    if err != nil {
        t.Fatal(err)
    }
    if err := stdout.Close(); err != nil {
        t.Fatal(err)
    }
    if err := stderr.Close(); err != nil {
        t.Fatal(err)
    }
    if string(output) != "hello\n" {
        t.Fatalf("Unexpected output. Expected %s, received: %s (err: %s)", "hello\n", output, output2)
    }
}

func TestCommitRun(t *testing.T) {
    runtime, err := newTestRuntime()
    if err != nil {
        t.Fatal(err)
    }
    defer nuke(runtime)

    builder := NewBuilder(runtime)

    container1, err := builder.Create(
    container1, err := runtime.Create(
        &Config{
            Image: GetTestImage(runtime).Id,
            Cmd:   []string{"/bin/sh", "-c", "echo hello > /world"},
            Image:  GetTestImage(runtime).Id,
            Cmd:    []string{"/bin/sh", "-c", "echo hello > /world"},
            Memory: 33554432,
        },
    )
    if err != nil {
@@ -332,17 +258,18 @@ func TestCommitRun(t *testing.T) {
    if err != nil {
        t.Error(err)
    }
    img, err := runtime.graph.Create(rwTar, container1, "unit test committed image", "", nil)
    img, err := runtime.graph.Create(rwTar, container1, "unit test committed image", "")
    if err != nil {
        t.Error(err)
    }

    // FIXME: Make a TestCommit that stops here and checks docker.root/layers/img.id/world

    container2, err := builder.Create(
    container2, err := runtime.Create(
        &Config{
            Image: img.Id,
            Cmd:   []string{"cat", "/world"},
            Image:  img.Id,
            Memory: 33554432,
            Cmd:    []string{"cat", "/world"},
        },
    )
    if err != nil {
@@ -386,11 +313,10 @@ func TestStart(t *testing.T) {
        t.Fatal(err)
    }
    defer nuke(runtime)
    container, err := NewBuilder(runtime).Create(
    container, err := runtime.Create(
        &Config{
            Image:     GetTestImage(runtime).Id,
            Memory:    33554432,
            CpuShares: 1000,
            Cmd:       []string{"/bin/cat"},
            OpenStdin: true,
        },
@@ -400,11 +326,6 @@ func TestStart(t *testing.T) {
    }
    defer runtime.Destroy(container)

    cStdin, err := container.StdinPipe()
    if err != nil {
        t.Fatal(err)
    }

    if err := container.Start(); err != nil {
        t.Fatal(err)
    }
@@ -420,6 +341,7 @@ func TestStart(t *testing.T) {
    }

    // Try to avoid the timeout in destroy. Best effort, don't check error
    cStdin, _ := container.StdinPipe()
    cStdin.Close()
    container.WaitTimeout(2 * time.Second)
}
@@ -430,10 +352,11 @@ func TestRun(t *testing.T) {
        t.Fatal(err)
    }
    defer nuke(runtime)
    container, err := NewBuilder(runtime).Create(
    container, err := runtime.Create(
        &Config{
            Image: GetTestImage(runtime).Id,
            Cmd:   []string{"ls", "-al"},
            Image:  GetTestImage(runtime).Id,
            Memory: 33554432,
            Cmd:    []string{"ls", "-al"},
        },
    )
    if err != nil {
@@ -458,7 +381,7 @@ func TestOutput(t *testing.T) {
        t.Fatal(err)
    }
    defer nuke(runtime)
    container, err := NewBuilder(runtime).Create(
    container, err := runtime.Create(
        &Config{
            Image: GetTestImage(runtime).Id,
            Cmd:   []string{"echo", "-n", "foobar"},
@@ -483,7 +406,7 @@ func TestKillDifferentUser(t *testing.T) {
        t.Fatal(err)
    }
    defer nuke(runtime)
    container, err := NewBuilder(runtime).Create(&Config{
    container, err := runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"tail", "-f", "/etc/resolv.conf"},
        User:  "daemon",
@@ -531,7 +454,7 @@ func TestKill(t *testing.T) {
        t.Fatal(err)
    }
    defer nuke(runtime)
    container, err := NewBuilder(runtime).Create(&Config{
    container, err := runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"cat", "/dev/zero"},
    },
@@ -577,9 +500,7 @@ func TestExitCode(t *testing.T) {
    }
    defer nuke(runtime)

    builder := NewBuilder(runtime)

    trueContainer, err := builder.Create(&Config{
    trueContainer, err := runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"/bin/true", ""},
    })
@@ -594,7 +515,7 @@ func TestExitCode(t *testing.T) {
        t.Errorf("Unexpected exit code %d (expected 0)", trueContainer.State.ExitCode)
    }

    falseContainer, err := builder.Create(&Config{
    falseContainer, err := runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"/bin/false", ""},
    })
@@ -616,7 +537,7 @@ func TestRestart(t *testing.T) {
        t.Fatal(err)
    }
    defer nuke(runtime)
    container, err := NewBuilder(runtime).Create(&Config{
    container, err := runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"echo", "-n", "foobar"},
    },
@@ -649,7 +570,7 @@ func TestRestartStdin(t *testing.T) {
        t.Fatal(err)
    }
    defer nuke(runtime)
    container, err := NewBuilder(runtime).Create(&Config{
    container, err := runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"cat"},

@@ -728,10 +649,8 @@ func TestUser(t *testing.T) {
    }
    defer nuke(runtime)

    builder := NewBuilder(runtime)

    // Default user must be root
    container, err := builder.Create(&Config{
    container, err := runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"id"},
    },
@@ -749,7 +668,7 @@ func TestUser(t *testing.T) {
    }

    // Set a username
    container, err = builder.Create(&Config{
    container, err = runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"id"},

@@ -769,7 +688,7 @@ func TestUser(t *testing.T) {
    }

    // Set a UID
    container, err = builder.Create(&Config{
    container, err = runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"id"},

@@ -789,7 +708,7 @@ func TestUser(t *testing.T) {
    }

    // Set a different user by uid
    container, err = builder.Create(&Config{
    container, err = runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"id"},

@@ -811,7 +730,7 @@ func TestUser(t *testing.T) {
    }

    // Set a different user by username
    container, err = builder.Create(&Config{
    container, err = runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"id"},

@@ -838,9 +757,7 @@ func TestMultipleContainers(t *testing.T) {
    }
    defer nuke(runtime)

    builder := NewBuilder(runtime)

    container1, err := builder.Create(&Config{
    container1, err := runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"cat", "/dev/zero"},
    },
@@ -850,7 +767,7 @@ func TestMultipleContainers(t *testing.T) {
    }
    defer runtime.Destroy(container1)

    container2, err := builder.Create(&Config{
    container2, err := runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"cat", "/dev/zero"},
    },
@@ -896,7 +813,7 @@ func TestStdin(t *testing.T) {
        t.Fatal(err)
    }
    defer nuke(runtime)
    container, err := NewBuilder(runtime).Create(&Config{
    container, err := runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"cat"},

@@ -943,7 +860,7 @@ func TestTty(t *testing.T) {
        t.Fatal(err)
    }
    defer nuke(runtime)
    container, err := NewBuilder(runtime).Create(&Config{
    container, err := runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"cat"},

@@ -990,7 +907,7 @@ func TestEnv(t *testing.T) {
        t.Fatal(err)
    }
    defer nuke(runtime)
    container, err := NewBuilder(runtime).Create(&Config{
    container, err := runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"/usr/bin/env"},
    },
@@ -1064,17 +981,12 @@ func TestLXCConfig(t *testing.T) {
    memMin := 33554432
    memMax := 536870912
    mem := memMin + rand.Intn(memMax-memMin)
    // CPU shares as well
    cpuMin := 100
    cpuMax := 10000
    cpu := cpuMin + rand.Intn(cpuMax-cpuMin)
    container, err := NewBuilder(runtime).Create(&Config{
    container, err := runtime.Create(&Config{
        Image: GetTestImage(runtime).Id,
        Cmd:   []string{"/bin/true"},

        Hostname:  "foobar",
        Memory:    int64(mem),
        CpuShares: int64(cpu),
        Hostname: "foobar",
        Memory:   int64(mem),
    },
    )
    if err != nil {
@@ -1096,7 +1008,7 @@ func BenchmarkRunSequencial(b *testing.B) {
    }
    defer nuke(runtime)
    for i := 0; i < b.N; i++ {
        container, err := NewBuilder(runtime).Create(&Config{
        container, err := runtime.Create(&Config{
            Image: GetTestImage(runtime).Id,
            Cmd:   []string{"echo", "-n", "foo"},
        },
@@ -1131,7 +1043,7 @@ func BenchmarkRunParallel(b *testing.B) {
    complete := make(chan error)
    tasks = append(tasks, complete)
    go func(i int, complete chan error) {
        container, err := NewBuilder(runtime).Create(&Config{
        container, err := runtime.Create(&Config{
            Image: GetTestImage(runtime).Id,
            Cmd:   []string{"echo", "-n", "foo"},
        },
@@ -1,22 +1,17 @@
package main

import (
	"fmt"
	"io"
	"log"
	"net"
	"os"
	"os/exec"
	"path"
	"time"
)

var DOCKER_PATH string = path.Join(os.Getenv("DOCKERPATH"), "docker")
const DOCKER_PATH = "/home/creack/dotcloud/docker/docker/docker"

// WARNING: this crashTest will 1) crash your host, 2) remove all containers
func runDaemon() (*exec.Cmd, error) {
	os.Remove("/var/run/docker.pid")
	exec.Command("rm", "-rf", "/var/lib/docker/containers").Run()
	cmd := exec.Command(DOCKER_PATH, "-d")
	outPipe, err := cmd.StdoutPipe()
	if err != nil {
@@ -43,43 +38,19 @@ func crashTest() error {
		return err
	}

	var endpoint string
	if ep := os.Getenv("TEST_ENDPOINT"); ep == "" {
		endpoint = "192.168.56.1:7979"
	} else {
		endpoint = ep
	}

	c := make(chan bool)
	var conn io.Writer

	go func() {
		conn, _ = net.Dial("tcp", endpoint)
		c <- false
	}()
	go func() {
		time.Sleep(2 * time.Second)
		c <- true
	}()
	<-c

	restartCount := 0
	totalTestCount := 1
	for {
		daemon, err := runDaemon()
		if err != nil {
			return err
		}
		restartCount++
		// time.Sleep(5000 * time.Millisecond)
		var stop bool
		go func() error {
			stop = false
			for i := 0; i < 100 && !stop; {
			for i := 0; i < 100 && !stop; i++ {
				func() error {
					cmd := exec.Command(DOCKER_PATH, "run", "base", "echo", fmt.Sprintf("%d", totalTestCount))
					i++
					totalTestCount++
					cmd := exec.Command(DOCKER_PATH, "run", "base", "echo", "hello", "world")
					log.Printf("%d", i)
					outPipe, err := cmd.StdoutPipe()
					if err != nil {
						return err
@@ -91,10 +62,9 @@ func crashTest() error {
					if err := cmd.Start(); err != nil {
						return err
					}
					if conn != nil {
						go io.Copy(conn, outPipe)
					}

					go func() {
						io.Copy(os.Stdout, outPipe)
					}()
					// Expecting error, do not check
					inPipe.Write([]byte("hello world!!!!!\n"))
					go inPipe.Write([]byte("hello world!!!!!\n"))
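For orientation, the crash-test loop above amounts to: restart the daemon forever, and hammer each incarnation with a batch of trivial ``docker run base echo`` jobs. A rough Python rendering of that loop is sketched below; it is not part of the patch, and it assumes the docker binary is located via the ``DOCKERPATH`` environment variable exactly as in the new ``crashTest.go``.

.. code-block:: python

    import os
    import subprocess
    import time

    DOCKER_PATH = os.path.join(os.environ.get("DOCKERPATH", ""), "docker")

    def run_daemon():
        # A stale pidfile keeps the daemon from starting; clear it first.
        try:
            os.remove("/var/run/docker.pid")
        except OSError:
            pass
        return subprocess.Popen([DOCKER_PATH, "-d"])

    def crash_test():
        total = 0
        while True:                      # restart the daemon forever
            daemon = run_daemon()
            time.sleep(2)                # give it a moment to come up
            for _ in range(100):         # run a batch of trivial containers
                total += 1
                subprocess.call([DOCKER_PATH, "run", "base", "echo", str(total)])
            daemon.kill()                # simulate a daemon crash, then loop
            daemon.wait()

    if __name__ == "__main__":
        crash_test()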
@@ -49,39 +49,26 @@ def docker(args, stdin=None):
def image_exists(img):
    return docker(["inspect", img]).read().strip() != ""

def image_config(img):
    return json.loads(docker(["inspect", img]).read()).get("config", {})

def run_and_commit(img_in, cmd, stdin=None, author=None, run=None):
def run_and_commit(img_in, cmd, stdin=None):
    run_id = docker(["run"] + (["-i", "-a", "stdin"] if stdin else ["-d"]) + [img_in, "/bin/sh", "-c", cmd], stdin=stdin).read().rstrip()
    print "---> Waiting for " + run_id
    result=int(docker(["wait", run_id]).read().rstrip())
    if result != 0:
        print "!!! '{}' return non-zero exit code '{}'. Aborting.".format(cmd, result)
        sys.exit(1)
    return docker(["commit"] + (["-author", author] if author else []) + (["-run", json.dumps(run)] if run is not None else []) + [run_id]).read().rstrip()
    return docker(["commit", run_id]).read().rstrip()

def insert(base, src, dst, author=None):
def insert(base, src, dst):
    print "COPY {} to {} in {}".format(src, dst, base)
    if dst == "":
        raise Exception("Missing destination path")
    stdin = file(src)
    stdin.seek(0)
    return run_and_commit(base, "cat > {0}; chmod +x {0}".format(dst), stdin=stdin, author=author)

def add(base, src, dst, author=None):
    print "PUSH to {} in {}".format(dst, base)
    if src == ".":
        tar = subprocess.Popen(["tar", "-c", "."], stdout=subprocess.PIPE).stdout
    else:
        tar = subprocess.Popen(["curl", src], stdout=subprocess.PIPE).stdout
    if dst == "":
        raise Exception("Missing argument to push")
    return run_and_commit(base, "mkdir -p '{0}' && tar -C '{0}' -x".format(dst), stdin=tar, author=author)
    return run_and_commit(base, "cat > {0}; chmod +x {0}".format(dst), stdin=stdin)


def main():
    base=""
    maintainer=""
    steps = []
    try:
        for line in sys.stdin.readlines():
@@ -89,48 +76,23 @@ def main():
            # Skip comments and empty lines
            if line == "" or line[0] == "#":
                continue
            op, param = line.split(None, 1)
            print op.upper() + " " + param
            op, param = line.split(" ", 1)
            if op == "from":
                print "FROM " + param
                base = param
                steps.append(base)
            elif op == "maintainer":
                maintainer = param
            elif op == "run":
                result = run_and_commit(base, param, author=maintainer)
                print "RUN " + param
                result = run_and_commit(base, param)
                steps.append(result)
                base = result
                print "===> " + base
            elif op == "copy":
                src, dst = param.split(" ", 1)
                result = insert(base, src, dst, author=maintainer)
                result = insert(base, src, dst)
                steps.append(result)
                base = result
                print "===> " + base
            elif op == "add":
                src, dst = param.split(" ", 1)
                result = add(base, src, dst, author=maintainer)
                steps.append(result)
                base=result
                print "===> " + base
            elif op == "expose":
                config = image_config(base)
                if config.get("PortSpecs") is None:
                    config["PortSpecs"] = []
                portspec = param.strip()
                config["PortSpecs"].append(portspec)
                result = run_and_commit(base, "# (nop) expose port {}".format(portspec), author=maintainer, run=config)
                steps.append(result)
                base=result
                print "===> " + base
            elif op == "cmd":
                config = image_config(base)
                cmd = list(json.loads(param))
                config["Cmd"] = cmd
                result = run_and_commit(base, "# (nop) set default command to '{}'".format(" ".join(cmd)), author=maintainer, run=config)
                steps.append(result)
                base=result
                print "===> " + base
            else:
                print "Skipping uknown op " + op
    except:
@@ -1,13 +1,11 @@
# Start build from a know base image
maintainer Solomon Hykes <solomon@dotcloud.com>
from base:ubuntu-12.10
# Update ubuntu sources
run echo 'deb http://archive.ubuntu.com/ubuntu quantal main universe multiverse' > /etc/apt/sources.list
run apt-get update
# Install system packages
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q git
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q golang
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q golang
# Insert files from the host (./myscript must be present in the current directory)
copy myscript /usr/local/bin/myscript
push /src
copy myscript /usr/local/bin/myscript
@@ -1,3 +0,0 @@
#!/bin/sh

echo hello, world!
@@ -36,9 +36,9 @@ else
fi

echo "Downloading docker binary and uncompressing into /usr/local/bin..."
curl -s http://get.docker.io/builds/$(uname -s)/$(uname -m)/docker-latest.tgz |
curl -s http://get.docker.io/builds/$(uname -s)/$(uname -m)/docker-master.tgz |
tar -C /usr/local/bin --strip-components=1 -zxf- \
    docker-latest/docker
    docker-master/docker

if [ -f /etc/init/dockerd.conf ]
then
@@ -1,61 +0,0 @@
#!/bin/bash
set -e

# these should match the names found at http://www.debian.org/releases/
stableSuite='squeeze'
testingSuite='wheezy'
unstableSuite='sid'

# if suite is equal to this, it gets the "latest" tag
latestSuite="$testingSuite"

variant='minbase'
include='iproute,iputils-ping'

repo="$1"
suite="${2:-$latestSuite}"
mirror="${3:-}" # stick to the default debootstrap mirror if one is not provided

if [ ! "$repo" ]; then
    echo >&2 "usage: $0 repo [suite [mirror]]"
    echo >&2 "   ie: $0 tianon/debian squeeze"
    exit 1
fi

target="/tmp/docker-rootfs-debian-$suite-$$-$RANDOM"

cd "$(dirname "$(readlink -f "$BASH_SOURCE")")"
returnTo="$(pwd -P)"

set -x

# bootstrap
mkdir -p "$target"
sudo debootstrap --verbose --variant="$variant" --include="$include" "$suite" "$target" "$mirror"

cd "$target"

# create the image
img=$(sudo tar -c . | docker import -)

# tag suite
docker tag $img $repo $suite

if [ "$suite" = "$latestSuite" ]; then
    # tag latest
    docker tag $img $repo latest
fi

# test the image
docker run -i -t $repo:$suite echo success

# unstable's version numbers match testing (since it's mostly just a sandbox for testing), so it doesn't get a version number tag
if [ "$suite" != "$unstableSuite" -a "$suite" != 'unstable' ]; then
    # tag the specific version
    ver=$(docker run $repo:$suite cat /etc/debian_version)
    docker tag $img $repo $ver
fi

# cleanup
cd "$returnTo"
sudo rm -rf "$target"
@@ -4,7 +4,9 @@ import (
	"flag"
	"fmt"
	"github.com/dotcloud/docker"
	"github.com/dotcloud/docker/utils"
	"github.com/dotcloud/docker/rcli"
	"github.com/dotcloud/docker/term"
	"io"
	"io/ioutil"
	"log"
	"os"
@@ -18,7 +20,7 @@ var (
)

func main() {
	if utils.SelfPath() == "/sbin/init" {
	if docker.SelfPath() == "/sbin/init" {
		// Running in init mode
		docker.SysInit()
		return
@@ -26,7 +28,6 @@ func main() {
	// FIXME: Switch d and D ? (to be more sshd like)
	flDaemon := flag.Bool("d", false, "Daemon mode")
	flDebug := flag.Bool("D", false, "Debug mode")
	flAutoRestart := flag.Bool("r", false, "Restart previously running containers")
	bridgeName := flag.String("b", "", "Attach containers to a pre-existing network bridge")
	pidfile := flag.String("p", "/var/run/docker.pid", "File containing process PID")
	flag.Parse()
@@ -44,14 +45,12 @@ func main() {
		flag.Usage()
		return
	}
	if err := daemon(*pidfile, *flAutoRestart); err != nil {
	if err := daemon(*pidfile); err != nil {
		log.Fatal(err)
		os.Exit(-1)
	}
	} else {
	if err := docker.ParseCommands(flag.Args()...); err != nil {
	if err := runCommand(flag.Args()); err != nil {
		log.Fatal(err)
		os.Exit(-1)
	}
	}
}
@@ -83,7 +82,7 @@ func removePidFile(pidfile string) {
	}
}

func daemon(pidfile string, autoRestart bool) error {
func daemon(pidfile string) error {
	if err := createPidFile(pidfile); err != nil {
		log.Fatal(err)
	}
@@ -98,10 +97,50 @@ func daemon(pidfile string, autoRestart bool) error {
		os.Exit(0)
	}()

	server, err := docker.NewServer(autoRestart)
	service, err := docker.NewServer()
	if err != nil {
		return err
	}

	return docker.ListenAndServe("0.0.0.0:4243", server, true)
	return rcli.ListenAndServe("tcp", "127.0.0.1:4242", service)
}

func runCommand(args []string) error {
	// FIXME: we want to use unix sockets here, but net.UnixConn doesn't expose
	// CloseWrite(), which we need to cleanly signal that stdin is closed without
	// closing the connection.
	// See http://code.google.com/p/go/issues/detail?id=3345
	if conn, err := rcli.Call("tcp", "127.0.0.1:4242", args...); err == nil {
		options := conn.GetOptions()
		if options.RawTerminal &&
			term.IsTerminal(int(os.Stdin.Fd())) &&
			os.Getenv("NORAW") == "" {
			if oldState, err := rcli.SetRawTerminal(); err != nil {
				return err
			} else {
				defer rcli.RestoreTerminal(oldState)
			}
		}
		receiveStdout := docker.Go(func() error {
			_, err := io.Copy(os.Stdout, conn)
			return err
		})
		sendStdin := docker.Go(func() error {
			_, err := io.Copy(conn, os.Stdin)
			if err := conn.CloseWrite(); err != nil {
				log.Printf("Couldn't send EOF: " + err.Error())
			}
			return err
		})
		if err := <-receiveStdout; err != nil {
			return err
		}
		if !term.IsTerminal(int(os.Stdin.Fd())) {
			if err := <-sendStdin; err != nil {
				return err
			}
		}
	} else {
		return fmt.Errorf("Can't connect to docker daemon. Is 'docker -d' running on this host?")
	}
	return nil
}
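The daemon side of this diff swaps the rcli socket on 127.0.0.1:4242 for an HTTP listener on 0.0.0.0:4243 (``docker.ListenAndServe``). As a quick smoke test of the new listener one could issue a plain HTTP request against it; note that the ``/version`` path below is an assumption drawn from the remote API docs referenced elsewhere in this change, not something this diff itself defines.

.. code-block:: python

    import http.client

    # Minimal sketch: talk to the daemon's new HTTP API on port 4243.
    conn = http.client.HTTPConnection("127.0.0.1", 4243)
    conn.request("GET", "/version")   # hypothetical endpoint, for illustration only
    resp = conn.getresponse()
    print(resp.status, resp.read().decode())
    conn.close()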
@@ -44,28 +44,32 @@ clean:
	-rm -rf $(BUILDDIR)/*

docs:
	#-rm -rf $(BUILDDIR)/*
	-rm -rf $(BUILDDIR)/*
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/html
	cp sources/index.html $(BUILDDIR)/html/
	cp -r sources/gettingstarted $(BUILDDIR)/html/
	cp sources/dotcloud.yml $(BUILDDIR)/html/
	cp sources/CNAME $(BUILDDIR)/html/
	cp sources/.nojekyll $(BUILDDIR)/html/
	cp sources/nginx.conf $(BUILDDIR)/html/
	@echo
	@echo "Build finished. The documentation pages are now in $(BUILDDIR)/html."


site:
	cp -r website $(BUILDDIR)/
	cp -r theme/docker/static/ $(BUILDDIR)/website/
	@echo
	@echo "The Website pages are in $(BUILDDIR)/site."
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

connect:
	@echo connecting dotcloud to www.docker.io website, make sure to use user 1
	@cd _build/website/ ; \
	dotcloud connect dockerwebsite ;
	dotcloud list
	@echo pushing changes to staging site
	@cd _build/html/ ; \
	@dotcloud list ; \
	dotcloud connect dockerwebsite

push:
	@cd _build/website/ ; \
	@cd _build/html/ ; \
	dotcloud push

github-deploy: docs
	rm -fr github-deploy
	git clone ssh://git@github.com/dotcloud/docker github-deploy
	cd github-deploy && git checkout -f gh-pages && git rm -r * && rsync -avH ../_build/html/ ./ && touch .nojekyll && echo "docker.io" > CNAME && git add * && git commit -m "Updating docs"

$(VERSIONS):
	@echo "Hello world"
@@ -15,7 +15,6 @@ Installation

* Work in your own fork of the code, we accept pull requests.
* Install sphinx: ``pip install sphinx``
* Install sphinx httpdomain contrib package ``sphinxcontrib-httpdomain``
* If pip is not available you can probably install it using your favorite package manager as **python-pip**

Usage

@@ -1,2 +0,0 @@
Sphinx==1.1.3
sphinxcontrib-httpdomain==1.1.8
0	docs/sources/.nojekyll	Normal file
1	docs/sources/CNAME	Normal file
@@ -0,0 +1 @@
docker.io
File diff suppressed because it is too large
@@ -1,17 +0,0 @@
:title: API Documentation
:description: docker documentation
:keywords: docker, ipa, documentation

API's
=============

This following :

.. toctree::
  :maxdepth: 3

  registry_api
  index_search_api
  docker_remote_api
@@ -1,43 +0,0 @@
:title: Docker Index documentation
:description: Documentation for docker Index
:keywords: docker, index, api


=======================
Docker Index Search API
=======================

Search
------

.. http:get:: /v1/search

    Search the Index given a search term. It accepts :http:method:`get` only.

    **Example request**:

    .. sourcecode:: http

        GET /v1/search?q=search_term HTTP/1.1
        Host: example.com
        Accept: application/json

    **Example response**:

    .. sourcecode:: http

        HTTP/1.1 200 OK
        Vary: Accept
        Content-Type: application/json

        {"query":"search_term",
         "num_results": 2,
         "results" : [
            {"name": "dotcloud/base", "description": "A base ubuntu64 image..."},
            {"name": "base2", "description": "A base ubuntu64 image..."},
         ]
        }

    :query q: what you want to search for
    :statuscode 200: no error
    :statuscode 500: server error
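The search endpoint above is a single unauthenticated GET. A minimal client sketch, using only the URL, header, and response shape that the spec states (the host is a placeholder):

.. code-block:: python

    import http.client
    import json

    conn = http.client.HTTPConnection("example.com")   # placeholder host
    conn.request("GET", "/v1/search?q=search_term",
                 headers={"Accept": "application/json"})
    reply = json.loads(conn.getresponse().read())
    for hit in reply["results"]:
        print(hit["name"], "-", hit["description"])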
@@ -1,473 +0,0 @@
:title: Registry Documentation
:description: Documentation for docker Registry and Registry API
:keywords: docker, registry, api, index


===================
Docker Registry API
===================

.. contents:: Table of Contents

1. The 3 roles
===============

1.1 Index
---------

The Index is responsible for centralizing information about:
- User accounts
- Checksums of the images
- Public namespaces

The Index has different components:
- Web UI
- Meta-data store (comments, stars, list public repositories)
- Authentication service
- Tokenization

The index is authoritative for those information.

We expect that there will be only one instance of the index, run and managed by dotCloud.

1.2 Registry
------------
- It stores the images and the graph for a set of repositories
- It does not have user accounts data
- It has no notion of user accounts or authorization
- It delegates authentication and authorization to the Index Auth service using tokens
- It supports different storage backends (S3, cloud files, local FS)
- It doesn’t have a local database
- It will be open-sourced at some point

We expect that there will be multiple registries out there. To help to grasp the context, here are some examples of registries:

- **sponsor registry**: such a registry is provided by a third-party hosting infrastructure as a convenience for their customers and the docker community as a whole. Its costs are supported by the third party, but the management and operation of the registry are supported by dotCloud. It features read/write access, and delegates authentication and authorization to the Index.
- **mirror registry**: such a registry is provided by a third-party hosting infrastructure but is targeted at their customers only. Some mechanism (unspecified to date) ensures that public images are pulled from a sponsor registry to the mirror registry, to make sure that the customers of the third-party provider can “docker pull” those images locally.
- **vendor registry**: such a registry is provided by a software vendor, who wants to distribute docker images. It would be operated and managed by the vendor. Only users authorized by the vendor would be able to get write access. Some images would be public (accessible for anyone), others private (accessible only for authorized users). Authentication and authorization would be delegated to the Index. The goal of vendor registries is to let someone do “docker pull basho/riak1.3” and automatically push from the vendor registry (instead of a sponsor registry); i.e. get all the convenience of a sponsor registry, while retaining control on the asset distribution.
- **private registry**: such a registry is located behind a firewall, or protected by an additional security layer (HTTP authorization, SSL client-side certificates, IP address authorization...). The registry is operated by a private entity, outside of dotCloud’s control. It can optionally delegate additional authorization to the Index, but it is not mandatory.

.. note::

    Mirror registries and private registries which do not use the Index don’t even need to run the registry code. They can be implemented by any kind of transport implementing HTTP GET and PUT. Read-only registries can be powered by a simple static HTTP server.

.. note::

    The latter implies that while HTTP is the protocol of choice for a registry, multiple schemes are possible (and in some cases, trivial):
    - HTTP with GET (and PUT for read-write registries);
    - local mount point;
    - remote docker addressed through SSH.

    The latter would only require two new commands in docker, e.g. “registryget” and “registryput”, wrapping access to the local filesystem (and optionally doing consistency checks). Authentication and authorization are then delegated to SSH (e.g. with public keys).

1.3 Docker
----------

On top of being a runtime for LXC, Docker is the Registry client. It supports:
- Push / Pull on the registry
- Client authentication on the Index

2. Workflow
===========

2.1 Pull
--------

.. image:: /static_files/docker_pull_chart.png

1. Contact the Index to know where I should download “samalba/busybox”
2. Index replies:
    a. “samalba/busybox” is on Registry A
    b. here are the checksums for “samalba/busybox” (for all layers)
    c. token
3. Contact Registry A to receive the layers for “samalba/busybox” (all of them to the base image). Registry A is authoritative for “samalba/busybox” but keeps a copy of all inherited layers and serve them all from the same location.
4. registry contacts index to verify if token/user is allowed to download images
5. Index returns true/false lettings registry know if it should proceed or error out
6. Get the payload for all layers

It’s possible to run docker pull \https://<registry>/repositories/samalba/busybox. In this case, docker bypasses the Index. However the security is not guaranteed (in case Registry A is corrupted) because there won’t be any checksum checks.

Currently registry redirects to s3 urls for downloads, going forward all downloads need to be streamed through the registry. The Registry will then abstract the calls to S3 by a top-level class which implements sub-classes for S3 and local storage.

Token is only returned when the 'X-Docker-Token' header is sent with request.

Basic Auth is required to pull private repos. Basic auth isn't required for pulling public repos, but if one is provided, it needs to be valid and for an active account.

API (pulling repository foo/bar):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. (Docker -> Index) GET /v1/repositories/foo/bar/images
    **Headers**:
        Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
        X-Docker-Token: true
    **Action**:
        (looking up the foo/bar in db and gets images and checksums for that repo (all if no tag is specified, if tag, only checksums for those tags) see part 4.4.1)

2. (Index -> Docker) HTTP 200 OK

    **Headers**:
        - Authorization: Token signature=123abc,repository=”foo/bar”,access=write
        - X-Docker-Endpoints: registry.docker.io [, registry2.docker.io]
    **Body**:
        Jsonified checksums (see part 4.4.1)

3. (Docker -> Registry) GET /v1/repositories/foo/bar/tags/latest
    **Headers**:
        Authorization: Token signature=123abc,repository=”foo/bar”,access=write

4. (Registry -> Index) GET /v1/repositories/foo/bar/images

    **Headers**:
        Authorization: Token signature=123abc,repository=”foo/bar”,access=read

    **Body**:
        <ids and checksums in payload>

    **Action**:
        ( Lookup token see if they have access to pull.)

        If good:
            HTTP 200 OK
            Index will invalidate the token
        If bad:
            HTTP 401 Unauthorized

5. (Docker -> Registry) GET /v1/images/928374982374/ancestry
    **Action**:
        (for each image id returned in the registry, fetch /json + /layer)

.. note::

    If someone makes a second request, then we will always give a new token, never reuse tokens.
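Steps 1-3 of this pull sequence can be sketched as follows. The paths, headers, and token relay all come from the steps above; the hosts are placeholders and error handling is omitted.

.. code-block:: python

    import http.client

    # Step 1: ask the Index for the images of foo/bar, requesting a token.
    index = http.client.HTTPConnection("index.example.com")   # placeholder host
    index.request("GET", "/v1/repositories/foo/bar/images", headers={
        "Authorization": "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==",
        "X-Docker-Token": "true",
    })
    resp = index.getresponse()
    # Step 2: token and registry endpoints come back in headers (per 6.1,
    # the token rides in X-Docker-Token), checksums in the body.
    token = resp.getheader("X-Docker-Token") or ""
    endpoints = resp.getheader("X-Docker-Endpoints") or "registry.docker.io"
    registry_host = endpoints.split(",")[0].strip()
    checksums = resp.read()

    # Step 3: resolve the tag on the Registry, relaying the token.
    registry = http.client.HTTPConnection(registry_host)
    registry.request("GET", "/v1/repositories/foo/bar/tags/latest",
                     headers={"Authorization": "Token " + token})
    image_id = registry.getresponse().read().strip(b'"').decode()
    print(image_id)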
2.2 Push
--------

.. image:: /static_files/docker_push_chart.png

1. Contact the index to allocate the repository name “samalba/busybox” (authentication required with user credentials)
2. If authentication works and namespace available, “samalba/busybox” is allocated and a temporary token is returned (namespace is marked as initialized in index)
3. Push the image on the registry (along with the token)
4. Registry A contacts the Index to verify the token (token must corresponds to the repository name)
5. Index validates the token. Registry A starts reading the stream pushed by docker and store the repository (with its images)
6. docker contacts the index to give checksums for upload images

.. note::

    **It’s possible not to use the Index at all!** In this case, a deployed version of the Registry is deployed to store and serve images. Those images are not authentified and the security is not guaranteed.

.. note::

    **Index can be replaced!** For a private Registry deployed, a custom Index can be used to serve and validate token according to different policies.

Docker computes the checksums and submit them to the Index at the end of the push. When a repository name does not have checksums on the Index, it means that the push is in progress (since checksums are submitted at the end).

API (pushing repos foo/bar):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. (Docker -> Index) PUT /v1/repositories/foo/bar/
    **Headers**:
        Authorization: Basic sdkjfskdjfhsdkjfh==
        X-Docker-Token: true

    **Action**::
        - in index, we allocated a new repository, and set to initialized

    **Body**::
        (The body contains the list of images that are going to be pushed, with empty checksums. The checksums will be set at the end of the push)::

        [{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”}]

2. (Index -> Docker) 200 Created
    **Headers**:
        - WWW-Authenticate: Token signature=123abc,repository=”foo/bar”,access=write
        - X-Docker-Endpoints: registry.docker.io [, registry2.docker.io]

3. (Docker -> Registry) PUT /v1/images/98765432_parent/json
    **Headers**:
        Authorization: Token signature=123abc,repository=”foo/bar”,access=write

4. (Registry->Index) GET /v1/repositories/foo/bar/images
    **Headers**:
        Authorization: Token signature=123abc,repository=”foo/bar”,access=write
    **Action**::
        - Index:
            will invalidate the token.
        - Registry:
            grants a session (if token is approved) and fetches the images id

5. (Docker -> Registry) PUT /v1/images/98765432_parent/json
    **Headers**::
        - Authorization: Token signature=123abc,repository=”foo/bar”,access=write
        - Cookie: (Cookie provided by the Registry)

6. (Docker -> Registry) PUT /v1/images/98765432/json
    **Headers**:
        Cookie: (Cookie provided by the Registry)

7. (Docker -> Registry) PUT /v1/images/98765432_parent/layer
    **Headers**:
        Cookie: (Cookie provided by the Registry)

8. (Docker -> Registry) PUT /v1/images/98765432/layer
    **Headers**:
        X-Docker-Checksum: sha256:436745873465fdjkhdfjkgh

9. (Docker -> Registry) PUT /v1/repositories/foo/bar/tags/latest
    **Headers**:
        Cookie: (Cookie provided by the Registry)
    **Body**:
        “98765432”

10. (Docker -> Index) PUT /v1/repositories/foo/bar/images

    **Headers**:
        Authorization: Basic 123oislifjsldfj==
        X-Docker-Endpoints: registry1.docker.io (no validation on this right now)

    **Body**:
        (The image, id’s, tags and checksums)

        [{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”,
        “checksum”: “b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”}]

    **Return** HTTP 204

.. note::

    If push fails and they need to start again, what happens in the index, there will already be a record for the namespace/name, but it will be initialized. Should we allow it, or mark as name already used? One edge case could be if someone pushes the same thing at the same time with two different shells.

    If it's a retry on the Registry, Docker has a cookie (provided by the registry after token validation). So the Index won’t have to provide a new token.
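The first exchange of the push flow (allocate the repository, collect the write token and endpoints) can be sketched like this, reusing the sample image id and headers from the steps above; the credentials and hosts are placeholders.

.. code-block:: python

    import http.client
    import json

    IMAGE_ID = "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"

    index = http.client.HTTPConnection("index.example.com")    # placeholder host
    # Image list with empty checksums; checksums are submitted at the end.
    body = json.dumps([{"id": IMAGE_ID}])
    index.request("PUT", "/v1/repositories/foo/bar/", body=body, headers={
        "Authorization": "Basic c2FtOnRvdG80Mg==",             # placeholder credentials
        "X-Docker-Token": "true",
    })
    resp = index.getresponse()
    token = resp.getheader("WWW-Authenticate") or ""           # Token signature=...,access=write
    endpoints = resp.getheader("X-Docker-Endpoints") or ""     # registry.docker.io[, ...]
    print(resp.status, token, endpoints)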
3. How to use the Registry in standalone mode
=============================================

The Index has two main purposes (along with its fancy social features):

- Resolve short names (to avoid passing absolute URLs all the time)
    - username/projectname -> \https://registry.docker.io/users/<username>/repositories/<projectname>/
    - team/projectname -> \https://registry.docker.io/team/<team>/repositories/<projectname>/
- Authenticate a user as a repos owner (for a central referenced repository)

3.1 Without an Index
--------------------
Using the Registry without the Index can be useful to store the images on a private network without having to rely on an external entity controlled by dotCloud.

In this case, the registry will be launched in a special mode (--standalone? --no-index?). In this mode, the only thing which changes is that Registry will never contact the Index to verify a token. It will be the Registry owner responsibility to authenticate the user who pushes (or even pulls) an image using any mechanism (HTTP auth, IP based, etc...).

In this scenario, the Registry is responsible for the security in case of data corruption since the checksums are not delivered by a trusted entity.

As hinted previously, a standalone registry can also be implemented by any HTTP server handling GET/PUT requests (or even only GET requests if no write access is necessary).

3.2 With an Index
-----------------

The Index data needed by the Registry are simple:
- Serve the checksums
- Provide and authorize a Token

In the scenario of a Registry running on a private network with the need of centralizing and authorizing, it’s easy to use a custom Index.

The only challenge will be to tell Docker to contact (and trust) this custom Index. Docker will be configurable at some point to use a specific Index, it’ll be the private entity responsibility (basically the organization who uses Docker in a private environment) to maintain the Index and the Docker’s configuration among its consumers.

4. The API
==========

The first version of the api is available here: https://github.com/jpetazzo/docker/blob/acd51ecea8f5d3c02b00a08176171c59442df8b3/docs/images-repositories-push-pull.md

4.1 Images
----------

The format returned in the images is not defined here (for layer and json), basically because Registry stores exactly the same kind of information as Docker uses to manage them.

The format of ancestry is a line-separated list of image ids, in age order. I.e. the image’s parent is on the last line, the parent of the parent on the next-to-last line, etc.; if the image has no parent, the file is empty.

GET /v1/images/<image_id>/layer
PUT /v1/images/<image_id>/layer
GET /v1/images/<image_id>/json
PUT /v1/images/<image_id>/json
GET /v1/images/<image_id>/ancestry
PUT /v1/images/<image_id>/ancestry

4.2 Users
---------

4.2.1 Create a user (Index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^

POST /v1/users

**Body**:
    {"email": "sam@dotcloud.com", "password": "toto42", "username": "foobar"'}

**Validation**:
    - **username** : min 4 character, max 30 characters, all lowercase no special characters.
    - **password**: min 5 characters

**Valid**: return HTTP 200

Errors: HTTP 400 (we should create error codes for possible errors)
- invalid json
- missing field
- wrong format (username, password, email, etc)
- forbidden name
- name already exists

.. note::

    A user account will be valid only if the email has been validated (a validation link is sent to the email address).

4.2.2 Update a user (Index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^

PUT /v1/users/<username>

**Body**:
    {"password": "toto"}

.. note::

    We can also update email address, if they do, they will need to reverify their new email address.

4.2.3 Login (Index)
^^^^^^^^^^^^^^^^^^^
Does nothing else but asking for a user authentication. Can be used to validate credentials. HTTP Basic Auth for now, maybe change in future.

GET /v1/users

**Return**:
    - Valid: HTTP 200
    - Invalid login: HTTP 401
    - Account inactive: HTTP 403 Account is not Active

4.3 Tags (Registry)
-------------------

The Registry does not know anything about users. Even though repositories are under usernames, it’s just a namespace for the registry. Allowing us to implement organizations or different namespaces per user later, without modifying the Registry’s API.

4.3.1 Get all tags
^^^^^^^^^^^^^^^^^^

GET /v1/repositories/<namespace>/<repository_name>/tags

**Return**: HTTP 200
    {
    "latest": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
    “0.1.1”: “b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”
    }

4.3.2 Read the content of a tag (resolve the image id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

GET /v1/repositories/<namespace>/<repo_name>/tags/<tag>

**Return**:
    "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"

4.3.3 Delete a tag (registry)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

DELETE /v1/repositories/<namespace>/<repo_name>/tags/<tag>
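A sketch of the two tag reads defined in 4.3.1 and 4.3.2, against a placeholder registry host (per section 6.2 these calls would also carry a token or session cookie; that is omitted here for brevity):

.. code-block:: python

    import http.client
    import json

    registry = http.client.HTTPConnection("registry.example.com")  # placeholder host

    # 4.3.1 -- list every tag of a repository
    registry.request("GET", "/v1/repositories/foo/bar/tags")
    tags = json.loads(registry.getresponse().read())   # {"latest": "9e89cc...", ...}

    # 4.3.2 -- resolve a single tag to an image id
    registry.request("GET", "/v1/repositories/foo/bar/tags/latest")
    image_id = json.loads(registry.getresponse().read())
    print(tags, image_id)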
4.4 Images (Index)
------------------

For the Index to “resolve” the repository name to a Registry location, it uses the X-Docker-Endpoints header. In other terms, this requests always add a “X-Docker-Endpoints” to indicate the location of the registry which hosts this repository.

4.4.1 Get the images
^^^^^^^^^^^^^^^^^^^^^

GET /v1/repositories/<namespace>/<repo_name>/images

**Return**: HTTP 200
    [{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”, “checksum”: “md5:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”}]


4.4.2 Add/update the images
^^^^^^^^^^^^^^^^^^^^^^^^^^^

You always add images, you never remove them.

PUT /v1/repositories/<namespace>/<repo_name>/images

**Body**:
    [ {“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”, “checksum”: “sha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”} ]

**Return** 204

5. Chaining Registries
======================

It’s possible to chain Registries server for several reasons:
- Load balancing
- Delegate the next request to another server

When a Registry is a reference for a repository, it should host the entire images chain in order to avoid breaking the chain during the download.

The Index and Registry use this mechanism to redirect on one or the other.

Example with an image download:
On every request, a special header can be returned:

    X-Docker-Endpoints: server1,server2

On the next request, the client will always pick a server from this list.
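A client honoring this header only needs to remember the last X-Docker-Endpoints value and pick a server from it for the next request. The spec does not say how the server is chosen, so the random pick below is an assumption:

.. code-block:: python

    import random

    def pick_endpoint(headers, default="registry.docker.io"):
        # Fall back to the default host when no endpoints header was returned.
        endpoints = headers.get("X-Docker-Endpoints", default)
        return random.choice([e.strip() for e in endpoints.split(",")])

    print(pick_endpoint({"X-Docker-Endpoints": "server1,server2"}))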
6. Authentication & Authorization
=================================

6.1 On the Index
-----------------

The Index supports both “Basic” and “Token” challenges. Usually when there is a “401 Unauthorized”, the Index replies this::

    401 Unauthorized
    WWW-Authenticate: Basic realm="auth required",Token

You have 3 options:

1. Provide user credentials and ask for a token

    **Header**:
        - Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
        - X-Docker-Token: true

    In this case, along with the 200 response, you’ll get a new token (if user auth is ok):
    If authorization isn't correct you get a 401 response.
    If account isn't active you will get a 403 response.

    **Response**:
        - 200 OK
        - X-Docker-Token: Token signature=123abc,repository=”foo/bar”,access=read

2. Provide user credentials only

    **Header**:
        Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==

3. Provide Token

    **Header**:
        Authorization: Token signature=123abc,repository=”foo/bar”,access=read

6.2 On the Registry
-------------------

The Registry only supports the Token challenge::

    401 Unauthorized
    WWW-Authenticate: Token

The only way is to provide a token on “401 Unauthorized” responses::

    Authorization: Token signature=123abc,repository=”foo/bar”,access=read

Usually, the Registry provides a Cookie when a Token verification succeeded. Every time the Registry passes a Cookie, you have to pass it back the same cookie.::

    200 OK
    Set-Cookie: session="wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4=&timestamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4="; Path=/; HttpOnly

Next request::

    GET /(...)
    Cookie: session="wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4=&timestamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4="
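Putting 6.2 together: answer the Token challenge once, then replay whatever session cookie the Registry set on every later request. A sketch with placeholder host and token, reusing the sample image id from section 4.4:

.. code-block:: python

    import http.client

    IMAGE_ID = "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"
    registry = http.client.HTTPConnection("registry.example.com")  # placeholder host

    # First request answers the Token challenge (token value is a placeholder).
    registry.request("GET", "/v1/images/%s/json" % IMAGE_ID, headers={
        "Authorization": "Token signature=123abc,repository=foo/bar,access=read",
    })
    resp = registry.getresponse()
    cookie = resp.getheader("Set-Cookie") or ""
    resp.read()

    # Every later request passes the same cookie back.
    registry.request("GET", "/v1/images/%s/ancestry" % IMAGE_ID,
                     headers={"Cookie": cookie})
    print(registry.getresponse().status)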
@@ -1,6 +1,6 @@
:title: Basic Commands
:title: Base commands
:description: Common usage and commands
:keywords: Examples, Usage, basic commands, docker, documentation, examples
:keywords: Examples, Usage


The basics
@@ -76,8 +76,8 @@ Expose a service on a TCP port
    echo "Daemon received: $(docker logs $JOB)"


Committing (saving) a container state
-------------------------------------
Committing (saving) an image
-----------------------------

Save your containers state to a container image, so the state can be re-used.
@@ -4,7 +4,7 @@

.. _cli:

Overview
Command Line Interface
======================

Docker Usage
@@ -24,10 +24,9 @@ Available Commands
~~~~~~~~~~~~~~~~~~

.. toctree::
    :maxdepth: 2
    :maxdepth: 1

    command/attach
    command/build
    command/commit
    command/diff
    command/export
@@ -47,7 +46,6 @@ Available Commands
    command/rm
    command/rmi
    command/run
    command/search
    command/start
    command/stop
    command/tag
@@ -1,7 +1,3 @@
:title: Attach Command
:description: Attach to a running container
:keywords: attach, container, docker, documentation

===========================================
``attach`` -- Attach to a running container
===========================================

@@ -1,13 +0,0 @@
:title: Build Command
:description: Build a new image from the Dockerfile passed via stdin
:keywords: build, docker, container, documentation

========================================================
``build`` -- Build a container from Dockerfile via stdin
========================================================

::

    Usage: docker build -
    Example: cat Dockerfile | docker build -
    Build a new image from the Dockerfile passed via stdin
@@ -1,7 +1,3 @@
:title: Commit Command
:description: Create a new image from a container's changes
:keywords: commit, docker, container, documentation

===========================================================
``commit`` -- Create a new image from a container's changes
===========================================================
@@ -13,20 +9,3 @@
    Create a new image from a container's changes

    -m="": Commit message
    -author="": Author (eg. "John Hannibal Smith <hannibal@a-team.com>"
    -run="": Config automatically applied when the image is run. "+`(ex: {"Cmd": ["cat", "/world"], "PortSpecs": ["22"]}')

Full -run example::

    {"Hostname": "",
     "User": "",
     "CpuShares": 0,
     "Memory": 0,
     "MemorySwap": 0,
     "PortSpecs": ["22", "80", "443"],
     "Tty": true,
     "OpenStdin": true,
     "StdinOnce": true,
     "Env": ["FOO=BAR", "FOO2=BAR2"],
     "Cmd": ["cat", "-e", "/etc/resolv.conf"],
     "Dns": ["8.8.8.8", "8.8.4.4"]}

@@ -1,7 +1,3 @@
:title: Diff Command
:description: Inspect changes on a container's filesystem
:keywords: diff, docker, container, documentation

=======================================================
``diff`` -- Inspect changes on a container's filesystem
=======================================================

@@ -1,7 +1,3 @@
:title: Export Command
:description: Export the contents of a filesystem as a tar archive
:keywords: export, docker, container, documentation

=================================================================
``export`` -- Stream the contents of a container as a tar archive
=================================================================

@@ -1,7 +1,3 @@
:title: History Command
:description: Show the history of an image
:keywords: history, docker, container, documentation

===========================================
``history`` -- Show the history of an image
===========================================

@@ -1,7 +1,3 @@
:title: Images Command
:description: List images
:keywords: images, docker, container, documentation

=========================
``images`` -- List images
=========================
@@ -14,13 +10,3 @@

    -a=false: show all images
    -q=false: only show numeric IDs
    -viz=false: output in graphviz format

Displaying images visually
--------------------------

::

    docker images -viz | dot -Tpng -o docker.png

.. image:: images/docker_images.gif

Binary file not shown (image diff; before: 35 KiB).
@@ -1,7 +1,3 @@
:title: Import Command
:description: Create a new filesystem image from the contents of a tarball
:keywords: import, tarball, docker, url, documentation

==========================================================================
``import`` -- Create a new filesystem image from the contents of a tarball
==========================================================================

@@ -1,7 +1,3 @@
:title: Info Command
:description: Display system-wide information.
:keywords: info, docker, information, documentation

===========================================
``info`` -- Display system-wide information
===========================================

@@ -1,7 +1,3 @@
:title: Inspect Command
:description: Return low-level information on a container
:keywords: inspect, container, docker, documentation

==========================================================
``inspect`` -- Return low-level information on a container
==========================================================

@@ -1,7 +1,3 @@
:title: Kill Command
:description: Kill a running container
:keywords: kill, container, docker, documentation

====================================
``kill`` -- Kill a running container
====================================

@@ -1,7 +1,3 @@
:title: Login Command
:description: Register or Login to the docker registry server
:keywords: login, docker, documentation

============================================================
``login`` -- Register or Login to the docker registry server
============================================================

@@ -1,7 +1,3 @@
:title: Logs Command
:description: Fetch the logs of a container
:keywords: logs, container, docker, documentation

=========================================
``logs`` -- Fetch the logs of a container
=========================================

@@ -1,7 +1,3 @@
:title: Port Command
:description: Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
:keywords: port, docker, container, documentation

=========================================================================
``port`` -- Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
=========================================================================

@@ -1,7 +1,3 @@
:title: Ps Command
:description: List containers
:keywords: ps, docker, documentation, container

=========================
``ps`` -- List containers
=========================

@@ -1,7 +1,3 @@
:title: Pull Command
:description: Pull an image or a repository from the registry
:keywords: pull, image, repo, repository, documentation, docker

=========================================================================
``pull`` -- Pull an image or a repository from the docker registry server
=========================================================================

@@ -1,7 +1,3 @@
:title: Push Command
:description: Push an image or a repository to the registry
:keywords: push, docker, image, repository, documentation, repo

=======================================================================
``push`` -- Push an image or a repository to the docker registry server
=======================================================================

@@ -1,7 +1,3 @@
:title: Restart Command
:description: Restart a running container
:keywords: restart, container, docker, documentation

==========================================
``restart`` -- Restart a running container
==========================================

@@ -1,7 +1,3 @@
:title: Rm Command
:description: Remove a container
:keywords: remove, container, docker, documentation, rm

============================
``rm`` -- Remove a container
============================

@@ -1,7 +1,3 @@
:title: Rmi Command
:description: Remove an image
:keywords: rmi, remove, image, docker, documentation

==========================
``rmi`` -- Remove an image
==========================

@@ -1,7 +1,3 @@
:title: Run Command
:description: Run a command in a new container
:keywords: run, container, docker, documentation

===========================================
``run`` -- Run a command in a new container
===========================================
@@ -13,7 +9,6 @@
    Run a command in a new container

    -a=map[]: Attach to stdin, stdout or stderr.
    -c=0: CPU shares (relative weight)
    -d=false: Detached mode: leave the container running in the background
    -e=[]: Set environment variables
    -h="": Container host name
@@ -22,6 +17,3 @@
    -p=[]: Map a network port to the container
    -t=false: Allocate a pseudo-tty
    -u="": Username or UID
    -d=[]: Set custom dns servers for the container
    -v=[]: Creates a new volume and mounts it at the specified path.
    -volumes-from="": Mount all volumes from the given container.

@@ -1,14 +0,0 @@
:title: Search Command
:description: Searches for the TERM parameter on the Docker index and prints out a list of repositories that match.
:keywords: search, docker, image, documentation

===================================================================
``search`` -- Search for an image in the docker index
===================================================================

::

    Usage: docker search TERM

    Searches for the TERM parameter on the Docker index and prints out a list of repositories
    that match.
@@ -1,7 +1,3 @@
:title: Start Command
:description: Start a stopped container
:keywords: start, docker, container, documentation

======================================
``start`` -- Start a stopped container
======================================

@@ -1,7 +1,3 @@
:title: Stop Command
:description: Stop a running container
:keywords: stop, container, docker, documentation

====================================
``stop`` -- Stop a running container
====================================

@@ -1,7 +1,3 @@
:title: Tag Command
:description: Tag an image into a repository
:keywords: tag, docker, image, repository, documentation, repo

=========================================
``tag`` -- Tag an image into a repository
=========================================

@@ -1,7 +1,3 @@
:title: Version Command
:description:
:keywords: version, docker, documentation

==================================================
``version`` -- Show the docker version information
==================================================

@@ -1,7 +1,3 @@
:title: Wait Command
:description: Block until a container stops, then print its exit code.
:keywords: wait, docker, container, documentation

===================================================================
``wait`` -- Block until a container stops, then print its exit code
===================================================================

@@ -1,6 +1,6 @@
:title: Commands
:title: docker documentation
:description: -- todo: change me
:keywords: todo, commands, command line, help, docker, documentation
:keywords: todo: change me


Commands
@@ -9,33 +9,8 @@ Commands
Contents:

.. toctree::
    :maxdepth: 1
    :maxdepth: 2

    cli
    attach <command/attach>
    build <command/build>
    commit <command/commit>
    diff <command/diff>
    export <command/export>
    history <command/history>
    images <command/images>
    import <command/import>
    info <command/info>
    inspect <command/inspect>
    kill <command/kill>
    login <command/login>
    logs <command/logs>
    port <command/port>
    ps <command/ps>
    pull <command/pull>
    push <command/push>
    restart <command/restart>
    rm <command/rm>
    rmi <command/rmi>
    run <command/run>
    search <command/search>
    start <command/start>
    stop <command/stop>
    tag <command/tag>
    version <command/version>
    wait <command/wait>
    basics
    workingwithrepository
    cli
42	docs/sources/commandline/workingwithrepository.rst	Normal file
@@ -0,0 +1,42 @@
.. _working_with_the_repository:

Working with the repository
============================

Connecting to the repository
----------------------------

You create a user on the central docker repository by running

.. code-block:: bash

    docker login


If your username does not exist it will prompt you to also enter a password and your e-mail address. It will then
automatically log you in.


Committing a container to a named image
---------------------------------------

In order to commit to the repository it is required to have committed your container to an image with your namespace.

.. code-block:: bash

    # for example docker commit $CONTAINER_ID dhrp/kickassapp
    docker commit <container_id> <your username>/<some_name>


Pushing a container to the repository
-----------------------------------------

In order to push an image to the repository you need to have committed your container to a named image (see above)

Now you can commit this image to the repository

.. code-block:: bash

    # for example docker push dhrp/kickassapp
    docker push <image-name>
@@ -1,4 +1,4 @@
:title: Building Blocks
:title: Building blocks
:description: An introduction to docker and standard containers?
:keywords: containers, lxc, concepts, explanation

@@ -12,7 +12,7 @@ Images
------
An original container image. These are stored on disk and are comparable with what you normally expect from a stopped virtual machine image. Images are stored (and retrieved from) repository

Images are stored on your local file system under /var/lib/docker/graph
Images are stored on your local file system under /var/lib/docker/images


.. _containers:

@@ -1,8 +1,128 @@
:title: Introduction
:description: An introduction to docker and standard containers?
:keywords: containers, lxc, concepts, explanation, docker, documentation
:keywords: containers, lxc, concepts, explanation

:note: This version of the introduction is temporary, just to make sure we don't break the links from the website when the documentation is updated

This document has been moved to :ref:`introduction`, please update your bookmarks.

Introduction
============

Docker - The Linux container runtime
------------------------------------

Docker complements LXC with a high-level API which operates at the process level. It runs unix processes with strong guarantees of isolation and repeatability across servers.

Docker is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.

- **Heterogeneous payloads** Any combination of binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, you name it. No more juggling between domain-specific tools. Docker can deploy and run them all.
- **Any server** Docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments.
- **Isolation** docker isolates processes from each other and from the underlying host, using lightweight containers.
- **Repeatability** Because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run.


What is a Standard Container?
-----------------------------

Docker defines a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in
a format that is self-describing and portable, so that any compliant runtime can run it without extra dependency, regardless of the underlying machine and the contents of the container.

The spec for Standard Containers is currently work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment.

A great analogy for this is the shipping container. Just like Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.

Standard operations
~~~~~~~~~~~~~~~~~~~

Just like shipping containers, Standard Containers define a set of STANDARD OPERATIONS. Shipping containers can be lifted, stacked, locked, loaded, unloaded and labelled. Similarly, standard containers can be started, stopped, copied, snapshotted, downloaded, uploaded and tagged.


Content-agnostic
~~~~~~~~~~~~~~~~~~~

Just like shipping containers, Standard Containers are CONTENT-AGNOSTIC: all standard operations have the same effect regardless of the contents. A shipping container will be stacked in exactly the same way whether it contains Vietnamese powder coffee or spare Maserati parts. Similarly, Standard Containers are started or uploaded in the same way whether they contain a postgres database, a php application with its dependencies and application server, or Java build artifacts.


Infrastructure-agnostic
~~~~~~~~~~~~~~~~~~~~~~~~~~

Both types of containers are INFRASTRUCTURE-AGNOSTIC: they can be transported to thousands of facilities around the world, and manipulated by a wide variety of equipment. A shipping container can be packed in a factory in Ukraine, transported by truck to the nearest routing center, stacked onto a train, loaded into a German boat by an Australian-built crane, stored in a warehouse at a US facility, etc. Similarly, a standard container can be bundled on my laptop, uploaded to S3, downloaded, run and snapshotted by a build server at Equinix in Virginia, uploaded to 10 staging servers in a home-made Openstack cluster, then sent to 30 production instances across 3 EC2 regions.


Designed for automation
~~~~~~~~~~~~~~~~~~~~~~~~~~

Because they offer the same standard operations regardless of content and infrastructure, Standard Containers, just like their physical counterpart, are extremely well-suited for automation. In fact, you could say automation is their secret weapon.

Many things that once required time-consuming and error-prone human effort can now be programmed. Before shipping containers, a bag of powder coffee was hauled, dragged, dropped, rolled and stacked by 10 different people in 10 different locations by the time it reached its destination. 1 out of 50 disappeared. 1 out of 20 was damaged. The process was slow, inefficient and cost a fortune - and was entirely different depending on the facility and the type of goods.

Similarly, before Standard Containers, by the time a software component ran in production, it had been individually built, configured, bundled, documented, patched, vendored, templated, tweaked and instrumented by 10 different people on 10 different computers. Builds failed, libraries conflicted, mirrors crashed, post-it notes were lost, logs were misplaced, cluster updates were half-broken. The process was slow, inefficient and cost a fortune - and was entirely different depending on the language and infrastructure provider.


Industrial-grade delivery
~~~~~~~~~~~~~~~~~~~~~~~~~~

There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded on the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.

With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality.


Standard Container Specification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

(TODO)

Image format
~~~~~~~~~~~~

Standard operations
~~~~~~~~~~~~~~~~~~~

- Copy
- Run
- Stop
- Wait
- Commit
- Attach standard streams
- List filesystem changes
- ...
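As a rough illustration of how these operations surface in practice, each one has a counterpart in the docker command line documented elsewhere in this manual (a hedged sketch; ``<image>`` and ``<container>`` are placeholders):

.. code-block:: bash

   docker run <image> <command>       # Run
   docker stop <container>            # Stop
   docker wait <container>            # Wait
   docker commit <container> <image>  # Commit
   docker attach <container>          # Attach standard streams
   docker diff <container>            # List filesystem changes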
Execution environment
~~~~~~~~~~~~~~~~~~~~~

Root filesystem
^^^^^^^^^^^^^^^

Environment variables
^^^^^^^^^^^^^^^^^^^^^

Process arguments
^^^^^^^^^^^^^^^^^

Networking
^^^^^^^^^^

Process namespacing
^^^^^^^^^^^^^^^^^^^

Resource limits
^^^^^^^^^^^^^^^

Process monitoring
^^^^^^^^^^^^^^^^^^

Logging
^^^^^^^

Signals
^^^^^^^

Pseudo-terminal allocation
^^^^^^^^^^^^^^^^^^^^^^^^^^

Security
^^^^^^^^
@@ -1,6 +1,6 @@
:title: Concepts
:title: docker documentation
:description: -- todo: change me
:keywords: concepts, documentation, docker, containers
:keywords: todo: change me

@@ -12,6 +12,6 @@ Contents:
.. toctree::
   :maxdepth: 1

   ../index
   introduction
   buildingblocks
@@ -2,6 +2,8 @@
:description: An introduction to docker and standard containers?
:keywords: containers, lxc, concepts, explanation


Introduction
============

@@ -18,7 +20,7 @@ Docker is a great building block for automating distributed systems: large-scale
- **Isolation** docker isolates processes from each other and from the underlying host, using lightweight containers.
- **Repeatability** Because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run.

.. image:: images/lego_docker.jpg
.. image:: http://www.docker.io/_static/lego_docker.jpg


What is a Standard Container?
@@ -25,7 +25,7 @@ import sys, os

# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinxcontrib.httpdomain']
extensions = []

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

@@ -41,7 +41,7 @@ html_add_permalinks = None


# The master toctree document.
master_doc = 'toctree'
master_doc = 'index'

# General information about the project.
project = u'Docker'
@@ -1,7 +1,3 @@
:title: Contribution Guidelines
:description: Contribution guidelines: create issues, conventions, pull requests
:keywords: contributing, docker, documentation, help, guideline

Contributing to Docker
======================
@@ -16,7 +16,7 @@ Instructions that have been verified to work on Ubuntu 12.10,

   mkdir -p $GOPATH/src/github.com/dotcloud
   cd $GOPATH/src/github.com/dotcloud
   git clone git://github.com/dotcloud/docker.git
   git clone git@github.com:dotcloud/docker.git
   cd docker

   go get -v github.com/dotcloud/docker/...
@@ -1,53 +0,0 @@
:title: Sharing data between 2 couchdb databases
:description: Sharing data between 2 couchdb databases
:keywords: docker, example, package installation, networking, couchdb, data volumes

.. _running_couchdb_service:

Create a CouchDB service
========================

.. include:: example_header.inc

Here's an example of using data volumes to share the same data between 2 couchdb containers.
This could be used for hot upgrades, testing different versions of couchdb on the same data, etc.

Create first database
---------------------

Note that we're marking /var/lib/couchdb as a data volume.

.. code-block:: bash

   COUCH1=$(docker run -d -v /var/lib/couchdb shykes/couchdb:2013-05-03)

Add data to the first database
------------------------------

We're assuming your docker host is reachable at `localhost`. If not, replace `localhost` with the public IP of your docker host.

.. code-block:: bash

   HOST=localhost
   URL="http://$HOST:$(docker port $COUCH1 5984)/_utils/"
   echo "Navigate to $URL in your browser, and use the couch interface to add data"

Create second database
----------------------

This time, we're requesting shared access to $COUCH1's volumes.

.. code-block:: bash

   COUCH2=$(docker run -d -volumes-from $COUCH1 shykes/couchdb:2013-05-03)

Browse data on the second database
----------------------------------

.. code-block:: bash

   HOST=localhost
   URL="http://$HOST:$(docker port $COUCH2 5984)/_utils/"
   echo "Navigate to $URL in your browser. You should see the same data as in the first database!"

Congratulations, you are running 2 Couchdb containers, completely isolated from each other *except* for their data.
@@ -18,4 +18,3 @@ Contents:
   python_web_app
   running_redis_service
   running_ssh_service
   couchdb_data_volumes
@@ -40,7 +40,7 @@ We attach to the new container to see what is going on. Ctrl-C to disconnect

.. code-block:: bash

   BUILD_IMG=$(docker commit $BUILD_JOB _/builds/github.com/shykes/helloflask/master)
   BUILD_IMG=$(docker commit $BUILD_JOB _/builds/github.com/hykes/helloflask/master)

Save the changes we just made in the container to a new image called "_/builds/github.com/hykes/helloflask/master" and save the image id in the BUILD_IMG variable.
@@ -1,7 +1,3 @@
:title: FAQ
:description: Most frequently asked questions.
:keywords: faq, questions, documentation, docker

FAQ
===

@@ -19,7 +15,7 @@ Most frequently asked questions.

3. **Does Docker run on Mac OS X or Windows?**

   Not at this time, Docker currently only runs on Linux, but you can use VirtualBox to run Docker in a virtual machine on your box, and get the best of both worlds. Check out the MacOSX_ and Windows_ installation guides.
   Not at this time, Docker currently only runs on Linux, but you can use VirtualBox to run Docker in a virtual machine on your box, and get the best of both worlds. Check out the MacOSX_ and Windows_ intallation guides.

4. **How do containers compare to virtual machines?**

@@ -39,8 +35,8 @@ Most frequently asked questions.
* `Ask questions on Stackoverflow`_
* `Join the conversation on Twitter`_

.. _Windows: ../installation/windows/
.. _MacOSX: ../installation/vagrant/
.. _Windows: ../documentation/installation/windows.html
.. _MacOSX: ../documentation/installation/macos.html
.. _the repo: http://www.github.com/dotcloud/docker
.. _IRC\: docker on freenode: irc://chat.freenode.net#docker
.. _Github: http://www.github.com/dotcloud/docker
@@ -13,15 +13,15 @@
<meta name="viewport" content="width=device-width">

<!-- twitter bootstrap -->
<link rel="stylesheet" href="../static/css/bootstrap.min.css">
<link rel="stylesheet" href="../static/css/bootstrap-responsive.min.css">
<link rel="stylesheet" href="../_static/css/bootstrap.min.css">
<link rel="stylesheet" href="../_static/css/bootstrap-responsive.min.css">

<!-- main style file -->
<link rel="stylesheet" href="../static/css/main.css">
<link rel="stylesheet" href="../_static/css/main.css">

<!-- vendor scripts -->
<script src="../static/js/vendor/jquery-1.9.1.min.js" type="text/javascript" ></script>
<script src="../static/js/vendor/modernizr-2.6.2-respond-1.1.0.min.js" type="text/javascript" ></script>
<script src="../_static/js/vendor/jquery-1.9.1.min.js" type="text/javascript" ></script>
<script src="../_static/js/vendor/modernizr-2.6.2-respond-1.1.0.min.js" type="text/javascript" ></script>

</head>

@@ -35,8 +35,8 @@
<div style="float: right" class="pull-right">
<ul class="nav">
<li><a href="../">Introduction</a></li>
<li class="active"><a href="">Getting started</a></li>
<li class=""><a href="http://docs.docker.io/en/latest/">Documentation</a></li>
<li class="active"><a href="./">Getting started</a></li>
<li class=""><a href="http://docs.docker.io/en/latest/concepts/containers/">Documentation</a></li>
</ul>

<div class="social links" style="float: right; margin-top: 14px; margin-left: 12px">

@@ -46,7 +46,7 @@
</div>

<div style="margin-left: -12px; float: left;">
<a href="../index.html"><img style="margin-top: 12px; height: 38px" src="../static/img/docker-letters-logo.gif"></a>
<a href="../index.html"><img style="margin-top: 12px; height: 38px" src="../_static/img/docker-letters-logo.gif"></a>
</div>
</div>
</div>

@@ -76,7 +76,6 @@
<ul>
<li>Ubuntu 12.04 (LTS) (64-bit)</li>
<li> or Ubuntu 12.10 (quantal) (64-bit)</li>
<li>The 3.8 Linux Kernel</li>
</ul>
<ol>
<li>

@@ -106,8 +105,7 @@
<pre>docker run -i -t ubuntu /bin/bash</pre>
</div>
</li>
Continue with the <a href="http://docs.docker.io/en/latest/examples/hello_world/">Hello world</a> example.<br>
Or check <a href="http://docs.docker.io/en/latest/installation/ubuntulinux/">more detailed installation instructions</a>
Continue with the <a href="http://docs.docker.io/en/latest/examples/hello_world/">Hello world</a> example.
</ol>
</section>

@@ -188,7 +186,7 @@

<!-- bootstrap javascripts -->
<script src="../static/js/vendor/bootstrap.min.js" type="text/javascript"></script>
<script src="../_static/js/vendor/bootstrap.min.js" type="text/javascript"></script>

<!-- Google analytics -->
<script type="text/javascript">
@@ -7,45 +7,21 @@
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<meta name="google-site-verification" content="UxV66EKuPe87dgnH1sbrldrx6VsoWMrx5NjwkgUFxXI" />
<title>Docker - the Linux container engine</title>
<title>Docker - the Linux container runtime</title>

<meta name="description" content="Docker encapsulates heterogeneous payloads in standard containers">
<meta name="viewport" content="width=device-width">

<!-- twitter bootstrap -->
<link rel="stylesheet" href="static/css/bootstrap.min.css">
<link rel="stylesheet" href="static/css/bootstrap-responsive.min.css">
<link rel="stylesheet" href="_static/css/bootstrap.min.css">
<link rel="stylesheet" href="_static/css/bootstrap-responsive.min.css">

<!-- main style file -->
<link rel="stylesheet" href="static/css/main.css">
<link rel="stylesheet" href="_static/css/main.css">

<!-- vendor scripts -->
<script src="static/js/vendor/jquery-1.9.1.min.js" type="text/javascript" ></script>
<script src="static/js/vendor/modernizr-2.6.2-respond-1.1.0.min.js" type="text/javascript" ></script>

<style>
.indexlabel {
float: left;
width: 150px;
display: block;
padding: 10px 20px 10px;
font-size: 20px;
font-weight: 200;
background-color: #a30000;
color: white;
height: 22px;
}
.searchbutton {
font-size: 20px;
height: 40px;
}

.debug {
border: 1px red dotted;
}

</style>
<script src="_static/js/vendor/jquery-1.9.1.min.js" type="text/javascript" ></script>
<script src="_static/js/vendor/modernizr-2.6.2-respond-1.1.0.min.js" type="text/javascript" ></script>

</head>

@@ -58,9 +34,9 @@

<div class="pull-right" >
<ul class="nav">
<li class="active"><a href="/">Introduction</a></li>
<li ><a href="gettingstarted">Getting started</a></li>
<li class=""><a href="http://docs.docker.io/en/latest/">Documentation</a></li>
<li class="active"><a href="./">Introduction</a></li>
<li ><a href="gettingstarted/">Getting started</a></li>
<li class=""><a href="http://docs.docker.io/en/latest/concepts/containers/">Documentation</a></li>
</ul>

<div class="social links" style="float: right; margin-top: 14px; margin-left: 12px">

@@ -73,40 +49,49 @@
</div>


<div class="container" style="margin-top: 30px;">
<div class="container">
<div class="row">
<div class="text-center">
<img src="_static/img/docker-letters-logo.gif">
</div>
</div>
</div>

<div class="container">
<div class="row">

<div class="span12">
<section class="contentblock header">

<div class="span5" style="margin-bottom: 15px;">
<div style="text-align: center;" >
<img src="static/img/docker_letters_500px.png">

<h2>The Linux container engine</h2>
</div>

<div style="display: block; text-align: center; margin-top: 20px;">

<h5>
Docker is an open-source engine which automates the deployment of applications as highly portable, self-sufficient containers which are independent of hardware, language, framework, packaging system and hosting provider.
</h5>

</div>


<div style="display: block; text-align: center; margin-top: 30px;">
<a class="btn btn-custom btn-large" href="gettingstarted/">Let's get started</a>
</div>

</div>

<div class="span6" >
<div class="span6" style="margin:10px 0px 0px 30px; float: right; ">
<div class="js-video" >
<iframe width="600" height="360" src="http://www.youtube.com/embed/wW9CAH9nSLs?feature=player_detailpage&rel=0&modestbranding=1&start=11" frameborder="0" allowfullscreen></iframe>
<iframe width="640" height="360" src="http://www.youtube.com/embed/wW9CAH9nSLs?feature=player_detailpage&rel=0&modestbranding=1&start=11" frameborder="0" allowfullscreen></iframe>
</div>
</div>

<div style="text-align: center; padding: 50px 30px 50px 30px;">
<h1>Docker</h1>
<h2>The Linux container runtime</h2>
</div>

<div style="display: block; text-align: center; padding: 10px 30px 50px 30px;">
<p>
Docker complements LXC with a high-level API which operates at the process level.
It runs unix processes with strong guarantees of isolation and repeatability across servers.
</p>

<p>
Docker is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.
</p>
</div>


<div style="display: block; text-align: center;">
<a class="btn btn-custom btn-large" href="gettingstarted/">Let's get started</a>
</div>


<br style="clear: both"/>
</section>
</div>

@@ -116,72 +101,31 @@
<div class="container">
<div class="row">

<div class="span6">
<div class="span3">
<section class="contentblock">
<h4>Heterogeneous payloads</h4>
<p>Any combination of binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, you name it. No more juggling between domain-specific tools. Docker can deploy and run them all.</p>
<h4>Any server</h4>
<p>Docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments.</p>
<h4>Isolation</h4>
<p>Docker isolates processes from each other and from the underlying host, using lightweight containers.</p>
<h4>Repeatability</h4>
<p>Because each container is isolated in its own filesystem, they behave the same regardless of where, when, and alongside what they run.</p>
</section>
<section class="contentblock">
<div class="container">
<div class="span2" style="margin-left: 0" >
<a href="http://dotcloud.theresumator.com/apply/mWjkD4/Software-Engineer.html" title="Job description"><img src="static/img/hiring_graphic.png" width="140px" style="margin-top: 25px"></a>
</div>
<div class="span4" style="margin-left: 0">
<h4>Do you think it is cool to hack on docker? Join us!</h4>
<ul>
<li>Work on open source</li>
<li>Program in Go</li>
</ul>
<a href="http://dotcloud.theresumator.com/apply/mWjkD4/Software-Engineer.html" title="Job description">read more</a>
</div>
</div>

</section>
</div>
<div class="span6">
<div class="span3">
<section class="contentblock">
<h1>New! Docker Index</h1>
On the Docker Index you can find and explore pre-made container images. It allows you to share your images and download them.

<br><br>
<a href="https://index.docker.io" target="_blank">
<div class="indexlabel">
DOCKER index
</div>
</a>

<input type="button" class="searchbutton" type="submit" value="Search images"
onClick="window.open('https://index.docker.io')" />

<h4>Any server</h4>
<p>Docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments.</p>
</section>
</div>
<div class="span3">
<section class="contentblock">
<div id="wufoo-z7x3p3">
Fill out my <a href="http://dotclouddocker.wufoo.com/forms/z7x3p3">online form</a>.
</div>
<script type="text/javascript">var z7x3p3;(function(d, t) {
var s = d.createElement(t), options = {
'userName':'dotclouddocker',
'formHash':'z7x3p3',
'autoResize':true,
'height':'577',
'async':true,
'header':'show'};
s.src = ('https:' == d.location.protocol ? 'https://' : 'http://') + 'wufoo.com/scripts/embed/form.js';
s.onload = s.onreadystatechange = function() {
var rs = this.readyState; if (rs) if (rs != 'complete') if (rs != 'loaded') return;
try { z7x3p3 = new WufooForm();z7x3p3.initialize(options);z7x3p3.display(); } catch (e) {}};
var scr = d.getElementsByTagName(t)[0], par = scr.parentNode; par.insertBefore(s, scr);
})(document, 'script');</script>
<h4>Isolation</h4>
<p>docker isolates processes from each other and from the underlying host, using lightweight containers.</p>
</section>
</div>
<div class="span3">
<section class="contentblock">
<h4>Repeatability</h4>
<p>Because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run.</p>
</section>
</div>
</div>

</div>

<style>

@@ -198,35 +142,6 @@


<div class="container">

<div class="row">
<div class="span6">
<section class="contentblock twitterblock">
<img src="https://si0.twimg.com/profile_images/2707460527/252a64411a339184ff375a96fb68dcb0_bigger.png">
<em>Mitchell Hashimoto@mitchellh:</em> Docker launched today. It is incredible. They’re also working RIGHT NOW on a Vagrant provider. LXC is COMING!!
</section>
</div>
<div class="span6">
<section class="contentblock twitterblock">
<img src="https://si0.twimg.com/profile_images/1108290260/Adam_Jacob-114x150_original_bigger.jpg">
<em>Adam Jacob@adamhjk:</em> Docker is clearly the right idea. @solomonstre absolutely killed it. Containerized app deployment is the future, I think.
</section>
</div>
</div>
<div class="row">
<div class="span6">
<section class="contentblock twitterblock">
<img src="https://si0.twimg.com/profile_images/14872832/twitter_pic_bigger.jpg">
<em>Matt Townsend@mtownsend:</em> I have a serious code crush on docker.io - it's Lego for PaaS. Motherfucking awesome Lego.
</section>
</div>
<div class="span6">
<section class="contentblock twitterblock">
<img src="https://si0.twimg.com/profile_images/1312352395/rupert-259x300_bigger.jpg">
<em>Rob Harrop@robertharrop:</em> Impressed by @getdocker - it's all kinds of magic. Serious rethink of AWS architecture happening @skillsmatter.
</section>
</div>
</div>
<div class="row">
<div class="span6">
<section class="contentblock twitterblock">

@@ -258,12 +173,18 @@
</div>
</div>

<!-- <p>Docker encapsulates heterogeneous payloads in <a href="#container">Standard Containers</a>, and runs them on any server with strong guarantees of isolation and repeatability.</p>

<p>It is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.</p>

-->
<div class="container">
<div class="row">
<div class="span6">

<section class="contentblock">

<!-- <img src="_static/lego_docker.jpg" width="600px" style="float:right; margin-left: 10px"> -->
<h2>Notable features</h2>

<ul>

@@ -287,26 +208,35 @@
<li><a href="http://lxc.sourceforge.net/">lxc</a>, a set of convenience scripts to simplify the creation of linux containers.</li>
</ul>

<h2>Who started it</h2>
<p>
Docker is an open-source implementation of the deployment engine which powers <a href="http://dotcloud.com">dotCloud</a>, a popular Platform-as-a-Service.</p>

<p>It benefits directly from the experience accumulated over several years of large-scale operation and support of hundreds of thousands
of applications and databases.
</p>

</section>
</div>

<div class="span6">

<section class="contentblock">
<div id="wufoo-z7x3p3">
Fill out my <a href="http://dotclouddocker.wufoo.com/forms/z7x3p3">online form</a>.
</div>
<script type="text/javascript">var z7x3p3;(function(d, t) {
var s = d.createElement(t), options = {
'userName':'dotclouddocker',
'formHash':'z7x3p3',
'autoResize':true,
'height':'577',
'async':true,
'header':'show'};
s.src = ('https:' == d.location.protocol ? 'https://' : 'http://') + 'wufoo.com/scripts/embed/form.js';
s.onload = s.onreadystatechange = function() {
var rs = this.readyState; if (rs) if (rs != 'complete') if (rs != 'loaded') return;
try { z7x3p3 = new WufooForm();z7x3p3.initialize(options);z7x3p3.display(); } catch (e) {}};
var scr = d.getElementsByTagName(t)[0], par = scr.parentNode; par.insertBefore(s, scr);
})(document, 'script');</script>
</section>

<section class="contentblock">
<h3 id="twitter">Twitter</h3>
<a class="twitter-timeline" href="https://twitter.com/getdocker" data-widget-id="312730839718957056">Tweets by @getdocker</a>
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>
</section>

</div>
</div>

@@ -335,7 +265,7 @@

<!-- bootstrap javascripts -->
<script src="static/js/vendor/bootstrap.min.js" type="text/javascript"></script>
<script src="_static/js/vendor/bootstrap.min.js" type="text/javascript"></script>

<!-- Google analytics -->
<script type="text/javascript">
@@ -1,127 +1,21 @@
:title: Introduction
:description: An introduction to docker and standard containers?
:keywords: containers, lxc, concepts, explanation
:title: docker documentation
:description: docker documentation
:keywords:

.. _introduction:
Documentation
=============

Introduction
============
This documentation has the following resources:

Docker - The Linux container runtime
------------------------------------
.. toctree::
   :maxdepth: 1

Docker complements LXC with a high-level API which operates at the process level. It runs unix processes with strong guarantees of isolation and repeatability across servers.

Docker is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.
   concepts/index
   installation/index
   examples/index
   contributing/index
   commandline/index
   faq


- **Heterogeneous payloads** Any combination of binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, you name it. No more juggling between domain-specific tools. Docker can deploy and run them all.
- **Any server** Docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments.
- **Isolation** docker isolates processes from each other and from the underlying host, using lightweight containers.
- **Repeatability** Because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run.

.. image:: concepts/images/lego_docker.jpg


What is a Standard Container?
-----------------------------

Docker defines a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in
a format that is self-describing and portable, so that any compliant runtime can run it without extra dependency, regardless of the underlying machine and the contents of the container.

The spec for Standard Containers is currently work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment.

A great analogy for this is the shipping container. Just like Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.

Standard operations
~~~~~~~~~~~~~~~~~~~

Just like shipping containers, Standard Containers define a set of STANDARD OPERATIONS. Shipping containers can be lifted, stacked, locked, loaded, unloaded and labelled. Similarly, standard containers can be started, stopped, copied, snapshotted, downloaded, uploaded and tagged.


Content-agnostic
~~~~~~~~~~~~~~~~~~~

Just like shipping containers, Standard Containers are CONTENT-AGNOSTIC: all standard operations have the same effect regardless of the contents. A shipping container will be stacked in exactly the same way whether it contains Vietnamese powder coffee or spare Maserati parts. Similarly, Standard Containers are started or uploaded in the same way whether they contain a postgres database, a php application with its dependencies and application server, or Java build artifacts.


Infrastructure-agnostic
~~~~~~~~~~~~~~~~~~~~~~~~~~

Both types of containers are INFRASTRUCTURE-AGNOSTIC: they can be transported to thousands of facilities around the world, and manipulated by a wide variety of equipment. A shipping container can be packed in a factory in Ukraine, transported by truck to the nearest routing center, stacked onto a train, loaded into a German boat by an Australian-built crane, stored in a warehouse at a US facility, etc. Similarly, a standard container can be bundled on my laptop, uploaded to S3, downloaded, run and snapshotted by a build server at Equinix in Virginia, uploaded to 10 staging servers in a home-made Openstack cluster, then sent to 30 production instances across 3 EC2 regions.


Designed for automation
~~~~~~~~~~~~~~~~~~~~~~~~~~

Because they offer the same standard operations regardless of content and infrastructure, Standard Containers, just like their physical counterpart, are extremely well-suited for automation. In fact, you could say automation is their secret weapon.

Many things that once required time-consuming and error-prone human effort can now be programmed. Before shipping containers, a bag of powder coffee was hauled, dragged, dropped, rolled and stacked by 10 different people in 10 different locations by the time it reached its destination. 1 out of 50 disappeared. 1 out of 20 was damaged. The process was slow, inefficient and cost a fortune - and was entirely different depending on the facility and the type of goods.

Similarly, before Standard Containers, by the time a software component ran in production, it had been individually built, configured, bundled, documented, patched, vendored, templated, tweaked and instrumented by 10 different people on 10 different computers. Builds failed, libraries conflicted, mirrors crashed, post-it notes were lost, logs were misplaced, cluster updates were half-broken. The process was slow, inefficient and cost a fortune - and was entirely different depending on the language and infrastructure provider.


Industrial-grade delivery
~~~~~~~~~~~~~~~~~~~~~~~~~~

There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded on the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.

With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality.


Standard Container Specification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

(TODO)

Image format
~~~~~~~~~~~~

Standard operations
~~~~~~~~~~~~~~~~~~~

- Copy
- Run
- Stop
- Wait
- Commit
- Attach standard streams
- List filesystem changes
- ...

Execution environment
~~~~~~~~~~~~~~~~~~~~~

Root filesystem
^^^^^^^^^^^^^^^

Environment variables
^^^^^^^^^^^^^^^^^^^^^

Process arguments
^^^^^^^^^^^^^^^^^

Networking
^^^^^^^^^^

Process namespacing
^^^^^^^^^^^^^^^^^^^

Resource limits
^^^^^^^^^^^^^^^

Process monitoring
^^^^^^^^^^^^^^^^^^

Logging
^^^^^^^

Signals
^^^^^^^

Pseudo-terminal allocation
^^^^^^^^^^^^^^^^^^^^^^^^^^

Security
^^^^^^^^

.. image:: http://www.docker.io/_static/lego_docker.jpg
@@ -1,27 +0,0 @@
:title: Index Environment Variable
:description: Setting this environment variable on the docker server will change the URL of the docker index.
:keywords: docker, index environment variable, documentation

=================================
Docker Index Environment Variable
=================================

Variable
--------

.. code-block:: sh

   DOCKER_INDEX_URL

Setting this environment variable on the docker server will change the URL of the docker index.
This address is used in commands such as ``docker login``, ``docker push`` and ``docker pull``.
The docker daemon doesn't need to be restarted for this parameter to take effect.

Example
-------

.. code-block:: sh

   docker -d &
   export DOCKER_INDEX_URL="https://index.docker.io"
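For instance, to point a daemon at a hypothetical private index (the address below is made up; this sketch exports the variable in the environment the daemon is started from):

.. code-block:: sh

   export DOCKER_INDEX_URL="https://index.internal.example.com"
   docker -d &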
@@ -1,7 +1,3 @@
:title: Installation on Amazon EC2
:description: Docker installation on Amazon EC2 with a single vagrant command. Vagrant 1.1 or higher is required.
:keywords: amazon ec2, virtualization, cloud, docker, documentation, installation

Amazon EC2
==========

@@ -72,7 +68,7 @@ Docker can now be installed on Amazon EC2 with a single vagrant command. Vagrant
If it stalls indefinitely on ``[default] Waiting for SSH to become available...``, double check your default security
zone on AWS includes rights to SSH (port 22) to your container.

If you have an advanced AWS setup, you might want to have a look at https://github.com/mitchellh/vagrant-aws
If you have an advanced AWS setup, you might want to have a look at the https://github.com/mitchellh/vagrant-aws

7. Connect to your machine
@@ -1,7 +1,3 @@
:title: Installation on Arch Linux
:description: Docker installation on Arch Linux.
:keywords: arch linux, virtualization, docker, documentation, installation

.. _arch_linux:

Arch Linux
@@ -1,7 +1,3 @@
:title: Installation from Binaries
:description: This instruction set is meant for hackers who want to try out Docker on a variety of environments.
:keywords: binaries, installation, docker, documentation, linux

.. _binaries:

Binaries

@@ -9,59 +5,49 @@ Binaries

**Please note this project is currently under heavy development. It should not be used in production.**

**This instruction set is meant for hackers who want to try out Docker on a variety of environments.**

Right now, the officially supported distributions are:

- :ref:`ubuntu_precise`
- :ref:`ubuntu_raring`
- Ubuntu 12.04 (precise LTS) (64-bit)
- Ubuntu 12.10 (quantal) (64-bit)

But we know people have had success running it under
Install dependencies:
---------------------

- Debian
- Suse
- :ref:`arch_linux`
::

   sudo apt-get install lxc bsdtar
   sudo apt-get install linux-image-extra-`uname -r`

Dependencies:
-------------
The linux-image-extra package is needed on standard Ubuntu EC2 AMIs in order to install the aufs kernel module.

* 3.8 Kernel
* AUFS filesystem support
* lxc
* bsdtar
Install the docker binary:

::

Get the docker binary:
----------------------
   wget http://get.docker.io/builds/Linux/x86_64/docker-master.tgz
   tar -xf docker-master.tgz
   sudo cp ./docker-master /usr/local/bin

.. code-block:: bash

   wget http://get.docker.io/builds/Linux/x86_64/docker-latest.tgz
   tar -xf docker-latest.tgz
Note: docker currently only supports 64-bit Linux hosts.


Run the docker daemon
---------------------

.. code-block:: bash
::

   # start the docker in daemon mode from the directory you unpacked
   sudo ./docker -d &
   sudo docker -d &


Run your first container!
-------------------------

.. code-block:: bash
::

   # check your docker version
   ./docker version

   # run a container and open an interactive shell in the container
   ./docker run -i -t ubuntu /bin/bash
   docker run -i -t ubuntu /bin/bash


Continue with the :ref:`hello_world` example.
Continue with the :ref:`hello_world` example.
@@ -1,6 +1,6 @@
:title: Documentation
:title: docker documentation
:description: -- todo: change me
:keywords: todo, docker, documentation, installation, OS support
:keywords: todo: change me

@@ -14,10 +14,8 @@ Contents:

   ubuntulinux
   binaries
   archlinux
   vagrant
   windows
   amazon
   rackspace
   archlinux
   upgrading
   kernel
@@ -1,115 +0,0 @@
:title: Kernel Requirements
:description: Kernel support
:keywords: kernel requirements, kernel support, docker, installation, cgroups, namespaces

.. _kernel:

Kernel Requirements
===================

In short, Docker has the following kernel requirements:

- Linux version 3.8 or above.

- `AUFS support <http://aufs.sourceforge.net/>`_.

- Cgroups and namespaces must be enabled.

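A quick, non-authoritative way to check these requirements from a shell (paths assume a typical modern distribution):

.. code-block:: bash

   uname -r                      # kernel version, should be 3.8 or above
   grep aufs /proc/filesystems   # prints a line if AUFS support is available
   ls /proc/self/ns              # lists the namespaces enabled for this process
   grep cgroup /proc/mounts      # shows which cgroup hierarchies are mounted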
The officially supported kernel is the one recommended by the
:ref:`ubuntu_linux` installation path. It is the one that most developers
will use, and the one that receives the most attention from the core
contributors. If you decide to go with a different kernel and hit a bug,
please try to reproduce it with the official kernels first.

If you cannot or do not want to use the "official" kernels,
here is some technical background about the features (both optional and
mandatory) that docker needs to run successfully.

Linux version 3.8 or above
--------------------------

Kernel versions 3.2 to 3.5 are not stable when used with docker.
In some circumstances, you will experience kernel "oopses", or even crashes.
The symptoms include:

- a container being killed in the middle of an operation (e.g. an ``apt-get``
  command doesn't complete);
- kernel messages mentioning calls to ``mntput`` or
  ``d_hash_and_lookup``;
- kernel crash causing the machine to freeze for a few minutes, or even
  completely.

While it is still possible to use older kernels for development, it is
really not advised to do so.

Docker checks the kernel version when it starts, and emits a warning if it
detects something older than 3.8.

See issue `#407 <https://github.com/dotcloud/docker/issues/407>`_ for details.


AUFS support
------------

Docker currently relies on AUFS, a unioning filesystem.
While AUFS is included in the kernels built by the Debian and Ubuntu
distributions, it is not part of the standard kernel. This means that if
you decide to roll your own kernel, you will have to patch your
kernel tree to add AUFS. The process is documented on the
`AUFS webpage <http://aufs.sourceforge.net/>`_.


Cgroups and namespaces
----------------------

You need to enable namespaces and cgroups, to the extent of what is needed
to run LXC containers. Technically, while namespaces have been introduced
in the early 2.6 kernels, we do not advise trying any kernel before 2.6.32
to run LXC containers. Note that 2.6.32 has some documented issues regarding
network namespace setup and teardown; those issues are not a risk if you
run containers in a private environment, but can lead to denial-of-service
attacks if you want to run untrusted code in your containers. For more details,
see `LP#720095 <https://bugs.launchpad.net/ubuntu/+source/linux/+bug/720095>`_.

Kernels 2.6.38, and every version since 3.2, have been deployed successfully
to run containerized production workloads. Feature-wise, there is no huge
improvement between 2.6.38 and up to 3.6 (as far as docker is concerned!).


Extra Cgroup Controllers
------------------------

Most control groups can be enabled or disabled individually. For instance,
you can decide that you do not want to compile support for the CPU or memory
controller. In some cases, the feature can be enabled or disabled at boot
time. It is worth mentioning that some distributions (like Debian) disable
"expensive" features, like the memory controller, because they can have
a significant performance impact.

In the specific case of the memory cgroup, docker will detect if the cgroup
is available or not. If it's not, it will print a warning, and it won't
use the feature. If you want to enable that feature -- read on!


Memory and Swap Accounting on Debian/Ubuntu
-------------------------------------------

If you use Debian or Ubuntu kernels, and want to enable memory and swap
accounting, you must add the following command-line parameters to your kernel::

   cgroup_enable=memory swapaccount

On Debian or Ubuntu systems, if you use the default GRUB bootloader, you can
add those parameters by editing ``/etc/default/grub`` and extending
``GRUB_CMDLINE_LINUX``. Look for the following line::

   GRUB_CMDLINE_LINUX=""

And replace it by the following one::

   GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount"

Then run ``update-grub``, and reboot.
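The edit can also be scripted. This is a minimal sketch that assumes the stock ``GRUB_CMDLINE_LINUX=""`` line shown above; review it before running:

.. code-block:: bash

   # extend the kernel command line, regenerate the grub config, then reboot
   sudo sed -i 's/^GRUB_CMDLINE_LINUX=""$/GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount"/' /etc/default/grub
   sudo update-grub
   sudo reboot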
@@ -1,95 +0,0 @@
:title: Rackspace Cloud Installation
:description: Installing Docker on Ubuntu provided by Rackspace
:keywords: Rackspace Cloud, installation, docker, linux, ubuntu

===============
Rackspace Cloud
===============

Please note this is a community contributed installation path. The only 'official' installation is using the
:ref:`ubuntu_linux` installation path. This version may sometimes be out of date.

Installing Docker on Ubuntu provided by Rackspace is pretty straightforward, and you should mostly be able to follow the
:ref:`ubuntu_linux` installation guide.

**However, there is one caveat:**

If you are using any linux not already shipping with the 3.8 kernel you will need to install it. And this is a little
more difficult on Rackspace.

Rackspace boots their servers using grub's menu.lst and does not like non 'virtual' packages (e.g. xen compatible)
kernels there, although they do work. This makes ``update-grub`` not have the expected result, so you need to
set the kernel manually.

**Do not attempt this on a production machine!**

.. code-block:: bash

   # update apt
   apt-get update

   # install the new kernel
   apt-get install linux-generic-lts-raring

Great, now you have the kernel installed in /boot/, next is to make it boot next time.

.. code-block:: bash

   # find the exact names
   find /boot/ -name '*3.8*'

   # this should return some results

Now you need to manually edit /boot/grub/menu.lst, you will find a section at the bottom with the existing options.
Copy the top one and substitute the new kernel into that. Make sure the new kernel is on top, and double check that the kernel
and initrd lines point to the right files.

Take special care to double check the kernel and initrd entries.

.. code-block:: bash

   # now edit /boot/grub/menu.lst
   vi /boot/grub/menu.lst

It will probably look something like this:

::

   ## ## End Default Options ##

   title              Ubuntu 12.04.2 LTS, kernel 3.8.x generic
   root               (hd0)
   kernel             /boot/vmlinuz-3.8.0-19-generic root=/dev/xvda1 ro quiet splash console=hvc0
   initrd             /boot/initrd.img-3.8.0-19-generic

   title              Ubuntu 12.04.2 LTS, kernel 3.2.0-38-virtual
   root               (hd0)
   kernel             /boot/vmlinuz-3.2.0-38-virtual root=/dev/xvda1 ro quiet splash console=hvc0
   initrd             /boot/initrd.img-3.2.0-38-virtual

   title              Ubuntu 12.04.2 LTS, kernel 3.2.0-38-virtual (recovery mode)
   root               (hd0)
   kernel             /boot/vmlinuz-3.2.0-38-virtual root=/dev/xvda1 ro quiet splash single
   initrd             /boot/initrd.img-3.2.0-38-virtual

Reboot server (either via command line or console)

.. code-block:: bash

   # reboot

Verify the kernel was updated

.. code-block:: bash

   uname -a
   # Linux docker-12-04 3.8.0-19-generic #30~precise1-Ubuntu SMP Wed May 1 22:26:36 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

   # nice! 3.8.

Now you can finish with the :ref:`ubuntu_linux` instructions.
@@ -1,7 +1,3 @@
|
||||
:title: Requirements and Installation on Ubuntu Linux
|
||||
:description: Please note this project is currently under heavy development. It should not be used in production.
|
||||
:keywords: Docker, Docker documentation, requirements, virtualbox, vagrant, git, ssh, putty, cygwin, linux
|
||||
|
||||
.. _ubuntu_linux:
|
||||
|
||||
Ubuntu Linux
|
||||
@@ -9,89 +5,22 @@ Ubuntu Linux
|
||||
|
||||
**Please note this project is currently under heavy development. It should not be used in production.**
|
||||
|
||||
Right now, the officially supported distribution are:
|
||||
|
||||
- :ref:`ubuntu_precise`
|
||||
- :ref:`ubuntu_raring`
|
||||
|
||||
Docker has the following dependencies
|
||||
|
||||
* Linux kernel 3.8
|
||||
* AUFS file system support (we are working on BTRFS support as an alternative)
|
||||
|
||||
.. _ubuntu_precise:
|
||||
|
||||
Ubuntu Precise 12.04 (LTS) (64-bit)
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
This installation path should work at all times.
|
||||
Right now, the officially supported distributions are:
|
||||
|
||||
- Ubuntu 12.04 (precise LTS) (64-bit)
|
||||
- Ubuntu 12.10 (quantal) (64-bit)
|
||||
|
||||
Dependencies
|
||||
------------
|
||||
|
||||
**Linux kernel 3.8**
|
||||
|
||||
Due to a bug in LXC docker works best on the 3.8 kernel. Precise comes with a 3.2 kernel, so we need to upgrade it. The kernel we install comes with AUFS built in.
|
||||
|
||||
The linux-image-extra package is only needed on standard Ubuntu EC2 AMIs in order to install the aufs kernel module.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# install the backported kernel
|
||||
sudo apt-get update && sudo apt-get install linux-image-3.8.0-19-generic
|
||||
|
||||
# reboot
|
||||
sudo reboot
|
||||
|
||||
|
||||
Installation
|
||||
------------
|
||||
|
||||
Docker is available as a Ubuntu PPA (Personal Package Archive),
|
||||
`hosted on launchpad <https://launchpad.net/~dotcloud/+archive/lxc-docker>`_
|
||||
which makes installing Docker on Ubuntu very easy.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# Add the PPA sources to your apt sources list.
|
||||
sudo sh -c "echo 'deb http://ppa.launchpad.net/dotcloud/lxc-docker/ubuntu precise main' > /etc/apt/sources.list.d/lxc-docker.list"
|
||||
|
||||
# Update your sources, you will see a warning.
|
||||
sudo apt-get update
|
||||
|
||||
# Install, you will see another warning that the package cannot be authenticated. Confirm install.
|
||||
sudo apt-get install lxc-docker
|
||||
|
||||
Verify it worked
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# download the base 'ubuntu' container and run bash inside it while setting up an interactive shell
|
||||
docker run -i -t ubuntu /bin/bash
|
||||
|
||||
# type 'exit' to exit
|
||||
|
||||
|
||||
**Done!**, now continue with the :ref:`hello_world` example.
|
||||
|
||||
.. _ubuntu_raring:
|
||||
|
||||
Ubuntu Raring 13.04 (64 bit)
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Dependencies
|
||||
------------
|
||||
|
||||
**AUFS filesystem support**
|
||||
|
||||
Ubuntu Raring already comes with the 3.8 kernel, so we don't need to install it. However, not all systems
|
||||
have AUFS filesystem support enabled, so we need to install it.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
sudo apt-get update
|
||||
sudo apt-get install linux-image-extra-`uname -r`
|
||||
|
||||
|
||||
Installation
|
||||
------------
|
||||
|
||||
Docker is available as a Ubuntu PPA (Personal Package Archive),
which makes installing Docker on Ubuntu very easy.

Add the custom package sources to your apt sources list. Copy and paste the following lines at once.

.. code-block:: bash

    # add the sources to your apt
    sudo add-apt-repository ppa:dotcloud/lxc-docker
    sudo sh -c "echo 'deb http://ppa.launchpad.net/dotcloud/lxc-docker/ubuntu precise main' >> /etc/apt/sources.list"


Update your sources. You will see a warning that GPG signatures cannot be verified.

.. code-block:: bash

    # update
    sudo apt-get update


Now install it; you will see another warning that the package cannot be authenticated. Confirm the install.

.. code-block:: bash

    sudo apt-get install lxc-docker

Verify it worked:

.. code-block:: bash

    # download the base 'ubuntu' container and run bash inside it while setting up an interactive shell
    docker run -i -t ubuntu /bin/bash

    # type 'exit' to exit


**Done!** Now continue with the :ref:`hello_world` example.

:title: Upgrading
:description: These instructions are for upgrading Docker
:keywords: Docker, Docker documentation, upgrading docker, upgrade

.. _upgrading:

Upgrading
=========

These instructions are for upgrading your Docker binary when you have a custom (non package manager) installation.
If you installed Docker using apt-get, use that to upgrade.

After normal installation
-------------------------

If you installed Docker using apt-get or Vagrant, use apt-get to upgrade.

.. code-block:: bash

    # update your sources list
    sudo apt-get update

    # install the latest
    sudo apt-get install lxc-docker

After manual installation
-------------------------

If you installed the Docker binary manually, first stop the running daemon. How you stop your daemon depends on how you started it:

- If you started the daemon manually (``sudo docker -d``), you can just kill the process: ``killall docker``
- If the process was started using upstart (the Ubuntu startup daemon), you may need to use upstart to stop it; see the sketch below.

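A minimal sketch of the upstart path. The job name ``dockerd`` is an assumption; check ``/etc/init/`` for the name actually registered on your system.

.. code-block:: bash

    # find the upstart job (the name below is an assumption; verify it first)
    ls /etc/init/ | grep -i docker

    # stop the daemon via upstart (job name 'dockerd' is an assumption)
    sudo stop dockerd
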
Get the latest binary:

.. code-block:: bash

    # get the latest binary
    wget http://get.docker.io/builds/Linux/x86_64/docker-latest.tgz


Unpack it to your current directory:

.. code-block:: bash

    # unpack it to your current dir
    tar -xf docker-latest.tgz


Start docker in daemon mode (``-d``) and disconnect (``&``). Starting it as ``./docker`` runs the version in your current directory rather than a version which might reside in your path.

.. code-block:: bash

    # start the new version
    sudo ./docker -d &


Alternatively, you can replace the docker binary in ``/usr/local/bin``; a sketch of that follows.

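A minimal sketch of that replacement, assuming the new binary was unpacked into the current directory and the old one lives at ``/usr/local/bin/docker``; both paths are assumptions, so adjust them to your layout:

.. code-block:: bash

    # keep a backup of the old binary, then swap in the new one
    # (both paths below are assumptions about your installation)
    sudo cp /usr/local/bin/docker /usr/local/bin/docker.bak
    sudo cp ./docker /usr/local/bin/docker
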
:title: Using Vagrant (Mac, Linux)
:description: This guide will setup a new virtualbox virtual machine with docker installed on your computer.
:keywords: Docker, Docker documentation, virtualbox, vagrant, git, ssh, putty, cygwin

.. _install_using_vagrant:

Using Vagrant
=============

Please note this is a community contributed installation path. The only 'official' installation is using the
:ref:`ubuntu_linux` installation path. This version may sometimes be out of date.

**Requirements:**

This guide will set up a new virtual machine with Docker installed on your computer. This works on most operating
systems, including Mac OS X, Windows, Linux, FreeBSD and others. If you can install these requirements and have at
least 400 MB of RAM to spare, you should be good; a sketch of the basic flow follows.

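Under those requirements, a minimal sketch of the usual flow; the repository URL and the presence of a Vagrantfile in the source tree are assumptions based on the project's home at the time:

.. code-block:: bash

    # fetch the docker source tree, which is assumed to ship a Vagrantfile
    git clone https://github.com/dotcloud/docker.git
    cd docker

    # build and boot the virtual machine, then open a shell inside it
    vagrant up
    vagrant ssh
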
:keywords: Docker, Docker documentation, Windows, requirements, virtualbox, vagrant, git, ssh, putty, cygwin


Windows (with Vagrant)
======================

Please note this is a community contributed installation path. The only 'official' installation is using the :ref:`ubuntu_linux` installation path. This version
may be out of date because it depends on some binaries being updated and published.