Compare commits


19 Commits

Author SHA1 Message Date
Guillaume J. Charmes
c97a1aada6 Better variable names 2013-05-01 13:45:50 -07:00
Guillaume J. Charmes
803a8d86e5 Change dockerbuilder format: no more tabs, and COPY becomes INSERT to avoid conflict with the contrib script 2013-05-01 13:45:35 -07:00
Guillaume J. Charmes
5fd1ff014a Add doc for the builder 2013-05-01 13:37:32 -07:00
Guillaume J. Charmes
3c9ed5cdd6 Remove the open from CmdBuild 2013-04-30 18:03:15 -07:00
Guillaume J. Charmes
59b6a93504 Fix image pipe with Builder COPY 2013-04-27 21:45:05 -07:00
Guillaume J. Charmes
72d7c3847a Add builder_test.go 2013-04-25 11:20:56 -07:00
Guillaume J. Charmes
6e7b8efa92 Make Builder.Build return the built image 2013-04-25 11:20:45 -07:00
Guillaume J. Charmes
fa401da0ff Merge pull request #475 from justone/builder
use new image as base of next command
2013-04-25 08:56:46 -07:00
Nate Jones
26ec7b2e77 use new image as base of next command 2013-04-25 08:08:05 -07:00
Guillaume J. Charmes
03a9e41245 Update the unit tests to reflect the new API 2013-04-24 15:35:28 -07:00
Guillaume J. Charmes
55869531f5 Move runtime.Commit to builder.Commit 2013-04-24 15:24:14 -07:00
Guillaume J. Charmes
9193585d66 Move runtime.Create to builder.Create 2013-04-24 15:14:10 -07:00
Guillaume J. Charmes
38b8373434 Implement the COPY operator within the builder 2013-04-24 14:28:51 -07:00
Guillaume J. Charmes
03b5f8a585 Make sure the destination directory exists when using docker insert 2013-04-24 13:51:28 -07:00
Guillaume J. Charmes
bc260f0225 Add insert command in order to insert external files within an image 2013-04-24 13:37:00 -07:00
Guillaume J. Charmes
45dcd1125b Add a Builder.Commit method 2013-04-24 13:35:57 -07:00
Guillaume J. Charmes
d2e063d9e1 Make builder.Run public; it now runs only the given arguments, without sh -c 2013-04-24 12:31:20 -07:00
Guillaume J. Charmes
567a484b66 Clear the containers/images upon failure 2013-04-24 12:02:00 -07:00
Guillaume J. Charmes
5d4b886ad6 Add build command 2013-04-24 11:03:01 -07:00
606 changed files with 13678 additions and 112187 deletions
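As context for the format change above (803a8d86e5: whitespace-separated instructions instead of tabs, COPY renamed to INSERT) and the new build command (5d4b886ad6), here is a minimal sketch of what a build file in the changed format might look like. The base image, the URL, and the stdin-based invocation are illustrative assumptions, not content taken from this diff.

```bash
# Hypothetical build file in the reworked format: instruction and arguments
# separated by whitespace, with INSERT (formerly COPY) fetching a remote
# file into the image.
cat > buildfile <<'EOF'
from ubuntu:12.04
run apt-get update
insert https://raw.github.com/dotcloud/docker/master/CHANGELOG.md /tmp/CHANGELOG.md
EOF

# Assumed invocation: "Remove the open from CmdBuild" (3c9ed5cdd6) suggests
# the build command reads the build file from stdin at this point.
docker build < buildfile
```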

.gitignore (vendored): 9 changed lines

@@ -5,16 +5,13 @@ docker/docker
a.out
*.orig
build_src
command-line-arguments.test
.flymake*
docker.test
auth/auth.test
.idea
.DS_Store
docs/_build
docs/_static
docs/_templates
.gopath/
.dotcloud
*.test
bundles/
.hg/
.git/
vendor/pkg/


@@ -1,8 +1,8 @@
# Generate AUTHORS: git log --format='%aN <%aE>' | sort -uf | grep -v vagrant-ubuntu-12
# Generate AUTHORS: git log --all --format='%aN <%aE>' | sort -uf | grep -v vagrant-ubuntu-12
<charles.hooper@dotcloud.com> <chooper@plumata.com>
<daniel.mizyrycki@dotcloud.com> <daniel@dotcloud.com>
<daniel.mizyrycki@dotcloud.com> <mzdaniel@glidelink.net>
Guillaume J. Charmes <guillaume.charmes@dotcloud.com> <charmes.guillaume@gmail.com>
Guillaume J. Charmes <guillaume.charmes@dotcloud.com> creack <charmes.guillaume@gmail.com>
<guillaume.charmes@dotcloud.com> <guillaume@dotcloud.com>
<kencochrane@gmail.com> <KenCochrane@gmail.com>
<sridharr@activestate.com> <github@srid.name>
@@ -16,25 +16,4 @@ Tim Terhorst <mynamewastaken+git@gmail.com>
Andy Smith <github@anarkystic.com>
<kalessin@kalessin.fr> <louis@dotcloud.com>
<victor.vieux@dotcloud.com> <victor@dotcloud.com>
<victor.vieux@dotcloud.com> <dev@vvieux.com>
<dominik@honnef.co> <dominikh@fork-bomb.org>
Thatcher Peskens <thatcher@dotcloud.com>
<ehanchrow@ine.com> <eric.hanchrow@gmail.com>
Walter Stanish <walter@pratyeka.org>
<daniel@gasienica.ch> <dgasienica@zynga.com>
Roberto Hashioka <roberto_hashioka@hotmail.com>
Konstantin Pelykh <kpelykh@zettaset.com>
David Sissitka <me@dsissitka.com>
Nolan Darilek <nolan@thewordnerd.info>
<mastahyeti@gmail.com> <mastahyeti@users.noreply.github.com>
Benoit Chesneau <bchesneau@gmail.com>
Jordan Arentsen <blissdev@gmail.com>
Daniel Garcia <daniel@danielgarcia.info>
Miguel Angel Fernández <elmendalerenda@gmail.com>
Bhiraj Butala <abhiraj.butala@gmail.com>
Faiz Khan <faizkhan00@gmail.com>
Victor Lyuboslavsky <victor@victoreda.com>
Jean-Baptiste Barth <jeanbaptiste.barth@gmail.com>
Matthew Mueller <mattmuelle@gmail.com>
<mosoni@ebay.com> <mohitsoni1989@gmail.com>
Shih-Yuan Lee <fourdollars@gmail.com>

AUTHORS: 147 changed lines

@@ -1,190 +1,45 @@
# This file lists all individuals having contributed content to the repository.
# If you're submitting a patch, please add your name here in alphabetical order as part of the patch.
#
# For a list of active project maintainers, see the MAINTAINERS file.
#
Al Tobey <al@ooyala.com>
Alex Gaynor <alex.gaynor@gmail.com>
Alexander Larsson <alexl@redhat.com>
Alexey Shamrin <shamrin@gmail.com>
Andrea Luzzardi <aluzzardi@gmail.com>
Andreas Savvides <andreas@editd.com>
Andreas Tiefenthaler <at@an-ti.eu>
Andrew Macgregor <andrew.macgregor@agworld.com.au>
Andrew Munsell <andrew@wizardapps.net>
Andrews Medina <andrewsmedina@gmail.com>
Andy Rothfusz <github@metaliveblog.com>
Andy Smith <github@anarkystic.com>
Anthony Bishopric <git@anthonybishopric.com>
Antony Messerli <amesserl@rackspace.com>
Asbjørn Enge <asbjorn@hanafjedle.net>
Barry Allard <barry.allard@gmail.com>
Ben Toews <mastahyeti@gmail.com>
Benoit Chesneau <bchesneau@gmail.com>
Bhiraj Butala <abhiraj.butala@gmail.com>
Bouke Haarsma <bouke@webatoom.nl>
Brandon Liu <bdon@bdon.org>
Brandon Philips <brandon@ifup.co>
Brian McCallister <brianm@skife.org>
Brian Olsen <brian@maven-group.org>
Brian Shumate <brian@couchbase.com>
Briehan Lombaard <briehan.lombaard@gmail.com>
Bruno Bigras <bigras.bruno@gmail.com>
Caleb Spare <cespare@gmail.com>
Calen Pennington <cale@edx.org>
Charles Hooper <charles.hooper@dotcloud.com>
Christopher Currie <codemonkey+github@gmail.com>
Colin Dunklau <colin.dunklau@gmail.com>
Colin Rice <colin@daedrum.net>
Dan Buch <d.buch@modcloth.com>
Daniel Garcia <daniel@danielgarcia.info>
Daniel Gasienica <daniel@gasienica.ch>
Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com>
Daniel Nordberg <dnordberg@gmail.com>
Daniel Robinson <gottagetmac@gmail.com>
Daniel Von Fange <daniel@leancoder.com>
Daniel YC Lin <dlin.tw@gmail.com>
David Calavera <david.calavera@gmail.com>
David Sissitka <me@dsissitka.com>
Deni Bertovic <deni@kset.org>
Dominik Honnef <dominik@honnef.co>
Don Spaulding <donspauldingii@gmail.com>
Dr Nic Williams <drnicwilliams@gmail.com>
Dražen Lučanin <kermit666@gmail.com>
Elias Probst <mail@eliasprobst.eu>
Emily Rose <emily@contactvibe.com>
Eric Hanchrow <ehanchrow@ine.com>
Eric Myhre <hash@exultant.us>
Erno Hopearuoho <erno.hopearuoho@gmail.com>
Evan Phoenix <evan@fallingsnow.net>
Evan Wies <evan@neomantra.net>
ezbercih <cem.ezberci@gmail.com>
Fabrizio Regini <freegenie@gmail.com>
Faiz Khan <faizkhan00@gmail.com>
Fareed Dudhia <fareeddudhia@googlemail.com>
Flavio Castelli <fcastelli@suse.com>
Francisco Souza <f@souza.cc>
Frederick F. Kautz IV <fkautz@alumni.cmu.edu>
Gabriel Monroy <gabriel@opdemand.com>
Gareth Rushgrove <gareth@morethanseven.net>
Greg Thornton <xdissent@me.com>
Guillaume J. Charmes <guillaume.charmes@dotcloud.com>
Guruprasad <lgp171188@gmail.com>
Harley Laue <losinggeneration@gmail.com>
Hector Castro <hectcastro@gmail.com>
Hunter Blanks <hunter@twilio.com>
Isao Jonas <isao.jonas@gmail.com>
James Carr <james.r.carr@gmail.com>
Jason McVetta <jason.mcvetta@gmail.com>
Jean-Baptiste Barth <jeanbaptiste.barth@gmail.com>
Jeff Lindsay <progrium@gmail.com>
Jeremy Grosser <jeremy@synack.me>
Jim Alateras <jima@comware.com.au>
Jimmy Cuadra <jimmy@jimmycuadra.com>
Joe Van Dyk <joe@tanga.com>
Joffrey F <joffrey@dotcloud.com>
Johan Euphrosine <proppy@google.com>
John Costa <john.costa@gmail.com>
Jon Wedaman <jweede@gmail.com>
Jonas Pfenniger <jonas@pfenniger.name>
Jonathan Mueller <j.mueller@apoveda.ch>
Jonathan Rudenberg <jonathan@titanous.com>
Joost Cassee <joost@cassee.net>
Jordan Arentsen <blissdev@gmail.com>
Joseph Anthony Pasquale Holsten <joseph@josephholsten.com>
Julien Barbier <write0@gmail.com>
Jérôme Petazzoni <jerome.petazzoni@dotcloud.com>
Karan Lyons <karan@karanlyons.com>
Karl Grzeszczak <karl@karlgrz.com>
Kawsar Saiyeed <kawsar.saiyeed@projiris.com>
Keli Hu <dev@keli.hu>
Ken Cochrane <kencochrane@gmail.com>
Kevin Clark <kevin.clark@gmail.com>
Kevin J. Lynagh <kevin@keminglabs.com>
kim0 <email.ahmedkamal@googlemail.com>
Kimbro Staken <kstaken@kstaken.com>
Kiran Gangadharan <kiran.daredevil@gmail.com>
Konstantin Pelykh <kpelykh@zettaset.com>
Kyle Conroy <kyle.j.conroy@gmail.com>
Laurie Voss <github@seldo.com>
Louis Opter <kalessin@kalessin.fr>
Manuel Meurer <manuel@krautcomputing.com>
Marco Hennings <marco.hennings@freiheit.com>
Marcus Farkas <toothlessgear@finitebox.com>
Marcus Ramberg <marcus@nordaaker.com>
Mark McGranaghan <mmcgrana@gmail.com>
Marko Mikulicic <mmikulicic@gmail.com>
Markus Fix <lispmeister@gmail.com>
Martin Redmond <martin@tinychat.com>
Matt Apperson <me@mattapperson.com>
Matt Bachmann <bachmann.matt@gmail.com>
Matthew Mueller <mattmuelle@gmail.com>
Maxim Treskin <zerthurd@gmail.com>
meejah <meejah@meejah.ca>
Michael Crosby <crosby.michael@gmail.com>
Michael Gorsuch <gorsuch@github.com>
Miguel Angel Fernández <elmendalerenda@gmail.com>
Mike Gaffney <mike@uberu.com>
Mikhail Sobolev <mss@mawhrin.net>
Mohit Soni <mosoni@ebay.com>
Morten Siebuhr <sbhr@sbhr.dk>
Nan Monnand Deng <monnand@gmail.com>
Nate Jones <nate@endot.org>
Nelson Chen <crazysim@gmail.com>
Niall O'Higgins <niallo@unworkable.org>
Nick Payne <nick@kurai.co.uk>
Nick Stenning <nick.stenning@digital.cabinet-office.gov.uk>
Nick Stinemates <nick@stinemates.org>
Nolan Darilek <nolan@thewordnerd.info>
odk- <github@odkurzacz.org>
Pascal Borreli <pascal@borreli.com>
Paul Bowsher <pbowsher@globalpersonals.co.uk>
Paul Hammond <paul@paulhammond.org>
Phil Spitler <pspitler@gmail.com>
Piotr Bogdan <ppbogdan@gmail.com>
pysqz <randomq@126.com>
Ramon van Alteren <ramon@vanalteren.nl>
Renato Riccieri Santos Zannon <renato.riccieri@gmail.com>
Rhys Hiltner <rhys@twitch.tv>
Robert Obryk <robryk@gmail.com>
Roberto Hashioka <roberto_hashioka@hotmail.com>
Ryan Fowler <rwfowler@gmail.com>
Sam Alba <sam.alba@gmail.com>
Sam J Sharpe <sam.sharpe@digital.cabinet-office.gov.uk>
Scott Bessler <scottbessler@gmail.com>
Sean P. Kane <skane@newrelic.com>
Shawn Siefkas <shawn.siefkas@meredith.com>
Shih-Yuan Lee <fourdollars@gmail.com>
Silas Sewell <silas@sewell.org>
Solomon Hykes <solomon@dotcloud.com>
Song Gao <song@gao.io>
Sridatta Thatipamala <sthatipamala@gmail.com>
Sridhar Ratnakumar <sridharr@activestate.com>
Steeve Morin <steeve.morin@gmail.com>
Stefan Praszalowicz <stefan@greplin.com>
Thatcher Peskens <thatcher@dotcloud.com>
Thermionix <bond711@gmail.com>
Thijs Terlouw <thijsterlouw@gmail.com>
Thomas Bikeev <thomas.bikeev@mac.com>
Thomas Frössman <thomasf@jossystem.se>
Thomas Hansen <thomas.hansen@gmail.com>
Tianon Gravi <admwiggin@gmail.com>
Tim Terhorst <mynamewastaken+git@gmail.com>
Tobias Bieniek <Tobias.Bieniek@gmx.de>
Tobias Schmidt <ts@soundcloud.com>
Tobias Schwab <tobias.schwab@dynport.de>
Tom Hulihan <hulihan.tom159@gmail.com>
Tommaso Visconti <tommaso.visconti@gmail.com>
Tyler Brock <tyler.brock@gmail.com>
Troy Howard <thoward37@gmail.com>
unclejack <unclejacksons@gmail.com>
Victor Coisne <victor.coisne@dotcloud.com>
Victor Lyuboslavsky <victor@victoreda.com>
Victor Vieux <victor.vieux@dotcloud.com>
Vincent Bernat <bernat@luffy.cx>
Vivek Agarwal <me@vivek.im>
Vladimir Kirillov <proger@wilab.org.ua>
Walter Stanish <walter@pratyeka.org>
Wes Morgan <cap10morgan@gmail.com>
Will Dietz <w@wdtz.org>
Yang Bai <hamo.by@gmail.com>
Zaiste! <oh@zaiste.net>


@@ -1,435 +1,6 @@
# Changelog
## 0.6.5 (2013-10-29)
+ Runtime: Containers can now be named
+ Runtime: Containers can now be linked together for service discovery
+ Runtime: 'run -a', 'start -a' and 'attach' can forward signals to the container for better integration with process supervisors
+ Runtime: Automatically start crashed containers after a reboot
+ Runtime: Expose IP, port, and proto as separate environment vars for container links
* Runtime: Allow ports to be published to specific ips
* Runtime: Prohibit inter-container communication by default
* Documentation: Fix the flags for nc in example
- Client: Only pass stdin to hijack when needed to avoid closed pipe errors
* Testing: Remove warnings and prevent mount issues
* Client: Use less reflection in command-line method invocation
- Runtime: Ignore ErrClosedPipe for stdin in Container.Attach
- Testing: Change logic for tty resize to avoid warning in tests
- Client: Monitor the tty size after starting the container, not prior
- Hack: Update install.sh with $sh_c to get sudo/su for modprobe
- Client: Remove useless os.Exit() calls after log.Fatal
* Hack: Update all the mkimage scripts to use --numeric-owner as a tar argument
* Hack: Update hack/release.sh process to automatically invoke hack/make.sh and bail on build and test issues
+ Hack: Add initial init scripts library and a safer Ubuntu packaging script that works for Debian
- Runtime: Fix untag during removal of images
- Runtime: Remove unused field kernelVersion
* Hack: Add -p option to invoke debootstrap with http_proxy
- Builder: Fix race condition in docker build with verbose output
- Registry: Fix content-type for PushImageJSONIndex method
* Runtime: Fix issue when mounting subdirectories of /mnt in container
* Runtime: Check return value of syscall.Chdir when changing working directory inside dockerinit
* Contrib: Improve helper tools to generate debian and Arch linux server images
## 0.6.4 (2013-10-16)
- Runtime: Add cleanup of container when Start() fails
- Testing: Catch errClosing error when TCP and UDP proxies are terminated
- Testing: Add aggregated docker-ci email report
- Testing: Remove a few errors in tests
* Contrib: Reorganize contributed completion scripts to add zsh completion
* Contrib: Add vim syntax highlighting for Dockerfiles from @honza
* Runtime: Add better comments to utils/stdcopy.go
- Testing: add cleanup to remove leftover containers
* Documentation: Document how to edit and release docs
* Documentation: Add initial draft of the Docker infrastructure doc
* Contrib: Add mkimage-arch.sh
- Builder: Abort build if mergeConfig returns an error and fix duplicate error message
- Runtime: Remove error messages which are not actually errors
* Testing: Only run certain tests with TESTFLAGS='-run TestName' make.sh
* Testing: Prevent docker-ci from testing closing PRs
- Documentation: Minor updates to postgresql_service.rst
* Testing: Add nightly release to docker-ci
* Hack: Improve network performance for VirtualBox
* Hack: Add vagrant user to the docker group
* Runtime: Add utils.Errorf for error logging
- Packaging: Remove deprecated packaging directory
* Hack: Revamp install.sh to be usable by more people, and to use official install methods whenever possible (apt repo, portage tree, etc.)
- Hack: Fix contrib/mkimage-debian.sh apt caching prevention
* Documentation: Clarify LGTM process to contributors
- Documentation: Small fixes to parameter names in docs for ADD command
* Runtime: Record termination time in state.
- Registry: Use correct auth config when logging in.
- Documentation: Corrected error in the package name
* Documentation: Document what `vagrant up` is actually doing
- Runtime: Fix `docker rm` with volumes
- Runtime: Use empty string so TempDir uses the OS's temp dir automatically
- Runtime: Make sure to close the network allocators
* Testing: Replace panic by log.Fatal in tests
+ Documentation: improve doc search results
- Runtime: Fix some error cases where a HTTP body might not be closed
* Hack: Add proper bash completion for "docker push"
* Documentation: Add devenvironment link to CONTRIBUTING.md
* Documentation: Cleanup whitespace in API 1.5 docs
* Documentation: use angle brackets in MAINTAINER example email
- Testing: Increase TestRunDetach timeout
* Documentation: Fix help text for -v option
+ Hack: Added Dockerfile.tmLanguage to contrib
+ Runtime: Autorestart containers by default
* Testing: Adding more tests around auth.ResolveAuthConfig
* Hack: Configured FPM to make /etc/init/docker.conf a config file
* Hack: Add xz utils as a runtime dep
* Documentation: Add `apt-get install curl` to Ubuntu docs
* Documentation: Remove Gentoo install notes about #1422 workaround
* Documentation: Fix Ping endpoint documentation
* Runtime: Bump vendor kr/pty to commit 3b1f6487b (syscall.O_NOCTTY)
* Runtime: lxc: Allow set_file_cap capability in container
* Documentation: Update archlinux.rst
- Documentation: Fix ironic typo in changelog
* Documentation: Add explanation for export restrictions
* Hack: Add cleanup/refactor portion of #2010 for hack and Dockerfile updates
+ Documentation: Changes to a new style for the docs. Includes version switcher.
* Documentation: Formatting, add information about multiline json
+ Hack: Add contrib/mkimage-centos.sh back (from #1621), and associated documentation link
- Runtime: Fix panic with wrong dockercfg file
- Runtime: Fix the attach behavior with -i
* Documentation: Add .dockercfg doc
- Runtime: Move run -rm to the cli only
* Hack: Enable SSH Agent forwarding in Vagrant VM
+ Runtime: Add -rm to docker run for removing a container on exit
* Documentation: Improve registry and index REST API documentation
* Runtime: Split stdout stderr
- Documentation: Replace deprecated upgrading reference to docker-latest.tgz, which hasn't been updated since 0.5.3
* Documentation: Update Gentoo installation documentation now that we're in the portage tree proper
- Registry: Fix the error message so it is the same as the regex
* Runtime: Always create a new session for the container
* Hack: Add several of the small make.sh fixes from #1920, and make the output more consistent and contributor-friendly
* Documentation: Various command fixes in postgres example
* Documentation: Cleanup and reorganize docs and tooling for contributors and maintainers
- Documentation: Minor spelling correction of protocoll -> protocol
* Hack: Several small tweaks/fixes for contrib/mkimage-debian.sh
+ Hack: Add @tianon to hack/MAINTAINERS
## 0.6.3 (2013-09-23)
* Packaging: Update tar vendor dependency
- Client: Fix detach issue
- Runtime: Only copy and change permissions on non-bindmount volumes
- Registry: Update regular expression to match index
* Runtime: Allow multiple volumes-from
* Packaging: Download apt key over HTTPS
* Documentation: Update section on extracting the docker binary after build
* Documentation: Update development environment docs for new build process
* Documentation: Remove 'base' image from documentation
* Packaging: Add 'docker' group on install for ubuntu package
- Runtime: Fix HTTP imports from STDIN
## 0.6.2 (2013-09-17)
+ Hack: Vendor all dependencies
+ Builder: Add -rm option in order to remove intermediate containers
+ Runtime: Add domainname support
+ Runtime: Implement image filtering with path.Match
* Builder: Allow multiline for the RUN instruction
* Runtime: Remove unnecessary warnings
* Runtime: Only mount the hostname file when the config exists
* Runtime: Handle signals within the `docker login` command
* Runtime: Remove os/user dependency
* Registry: Implement login with private registry
* Remote API: Bump to v1.5
* Packaging: Break down hack/make.sh into small scripts, one per 'bundle': test, binary, ubuntu etc.
* Documentation: General improvements
- Runtime: UID and GID are now also applied to volumes
- Runtime: `docker start` set error code upon error
- Runtime: `docker run` set the same error code as the process started
- Registry: Fix push issues
## 0.6.1 (2013-08-23)
* Registry: Pass "meta" headers in API calls to the registry
- Packaging: Use correct upstart script with new build tool
- Packaging: Use libffi-dev, don't build it from sources
- Packaging: Removed duplicate mercurial install command
## 0.6.0 (2013-08-22)
- Runtime: Load authConfig only when needed and fix useless WARNING
+ Runtime: Add lxc-conf flag to allow custom lxc options
- Runtime: Fix race conditions in parallel pull
- Runtime: Improve CMD, ENTRYPOINT, and attach docs.
* Documentation: Small fix to docs regarding adding docker groups
* Documentation: Add MongoDB image example
+ Builder: Add USER instruction to Dockerfile
* Documentation: updated default -H docs
* Remote API: Sort Images by most recent creation date.
+ Builder: Add workdir support for the Buildfile
+ Runtime: Add an option to set the working directory
- Runtime: Show tag used when image is missing
* Documentation: Update readme with dependencies for building
* Documentation: Add instructions for creating and using the docker group
* Remote API: Reworking opaque requests in registry module
- Runtime: Fix Graph ByParent() to generate list of child images per parent image.
* Runtime: Add Image name to LogEvent tests
* Documentation: Add sudo to examples and installation to documentation
+ Hack: Bash Completion: Limit commands to containers of a relevant state
* Remote API: Add image name in /events
* Runtime: Apply volumes-from before creating volumes
- Runtime: Make docker run handle SIGINT/SIGTERM
- Runtime: Prevent crash when .dockercfg not readable
* Hack: Add docker dependencies coverage testing into docker-ci
+ Runtime: Add -privileged flag and relevant tests, docs, and examples
+ Packaging: Docker-brew 0.5.2 support and memory footprint reduction
- Runtime: Install script should be fetched over https, not http.
* Packaging: Add new docker dependencies into docker-ci
* Runtime: Use Go 1.1.2 for dockerbuilder
* Registry: Improve auth push
* Runtime: API, issue 1471: Use groups for socket permissions
* Documentation: PostgreSQL service example in documentation
* Contrib: bash completion script
* Tests: Improve TestKillDifferentUser to prevent timeout on buildbot
* Documentation: Fix typo in docs for docker run -dns
* Documentation: Adding a reference to ps -a
- Runtime: Correctly detect IPv4 forwarding
- Packaging: Revert "docker.upstart: avoid spawning a `sh` process"
* Runtime: Use ranged for loop on channels
- Runtime: Fix typo: fmt.Sprint -> fmt.Sprintf
- Tests: Fix typo in TestBindMounts (runContainer called without image)
* Runtime: add websocket support to /container/<name>/attach/ws
* Runtime: Mount /dev/shm as a tmpfs
- Builder: Only count known instructions as build steps
- Builder: Fix docker build and docker events output
- Runtime: switch from http to https for get.docker.io
* Tests: Improve TestGetContainersTop so it does not rely on sleep
+ Packaging: Docker-brew and Docker standard library
* Testing: Add some tests in server and utils
+ Packaging: Release docker with docker
- Builder: Make sure ENV instruction within build perform a commit each time
* Packaging: Fix the upstart script generated by get.docker.io
- Runtime: fix small \n error in docker build
* Runtime: Let userland proxy handle container-bound traffic
* Runtime: Updated the Docker CLI to specify a value for the "Host" header.
* Runtime: Add warning when net.ipv4.ip_forwarding = 0
* Registry: Registry unit tests + mock registry
* Runtime: fixed #910. print user name to docker info output
- Builder: Forbid certain paths within docker build ADD
- Runtime: change network range to avoid conflict with EC2 DNS
* Tests: Relax the lo interface test to allow iface index != 1
* Documentation: Suggest installing linux-headers by default.
* Documentation: Change the twitter handle
* Client: Add docker cp command and copy api endpoint to copy container files/folders to the host
* Remote API: Use mime pkg to parse Content-Type
- Runtime: Reduce connect and read timeout when pinging the registry
* Documentation: Update amazon.rst to explain that Vagrant is not necessary for running Docker on ec2
* Packaging: Enabled the docs to generate manpages.
* Runtime: Parallel pull
- Runtime: Handle ip route showing mask-less IP addresses
* Documentation: Clarify Amazon EC2 installation
* Documentation: 'Base' image is deprecated and should no longer be referenced in the docs.
* Runtime: Fix to "Inject dockerinit at /.dockerinit"
* Runtime: Allow ENTRYPOINT without CMD
- Runtime: Always consider localhost as a domain name when parsing the FQN repos name
* Remote API: 650 http utils and user agent field
* Documentation: fix a typo in the ubuntu installation guide
- Builder: Repository name (and optionally a tag) in build usage
* Documentation: Move note about officially supported kernel
* Packaging: Revert "Bind daemon to 0.0.0.0 in Vagrant."
* Builder: Add no cache for docker build
* Runtime: Add hostname to environment
* Runtime: Add last stable version in `docker version`
- Builder: Make sure ADD will create everything in 0755
* Documentation: Add ufw doc
* Tests: Add registry functional test to docker-ci
- Documentation: Solved the logo being squished in Safari
- Runtime: Use utils.ParseRepositoryTag instead of strings.Split(name, ":") in server.ImageDelete
* Runtime: Refactor checksum
- Runtime: Improve connect message with socket error
* Documentation: Added information about Docker's high level tools over LXC.
* Don't read from stdout when only attached to stdin
## 0.5.3 (2013-08-13)
* Runtime: Use docker group for socket permissions
- Runtime: Spawn shell within upstart script
- Builder: Make sure ENV instruction within build perform a commit each time
- Runtime: Handle ip route showing mask-less IP addresses
- Runtime: Add hostname to environment
## 0.5.2 (2013-08-08)
* Builder: Forbid certain paths within docker build ADD
- Runtime: Change network range to avoid conflict with EC2 DNS
* API: Change daemon to listen on unix socket by default
## 0.5.1 (2013-07-30)
+ API: Docker client now sets useragent (RFC 2616)
+ Runtime: Add `ps` args to `docker top`
+ Runtime: Add support for container ID files (pidfile like)
+ Runtime: Add container=lxc in default env
+ Runtime: Support networkless containers with `docker run -n` and `docker -d -b=none`
+ API: Add /events endpoint
+ Builder: ADD command now understands URLs
+ Builder: CmdAdd and CmdEnv now respect Dockerfile-set ENV variables
* Hack: Simplify unit tests with helpers
* Hack: Improve docker.upstart event
* Hack: Add coverage testing into docker-ci
* Runtime: Stdout/stderr logs are now stored in the same file as JSON
* Runtime: Allocate a /16 IP range by default, with fallback to /24. Try 12 ranges instead of 3.
* Runtime: Change .dockercfg format to json and support multiple auth remote
- Runtime: Do not override volumes from config
- Runtime: Fix issue with EXPOSE override
- Builder: Create directories with 755 instead of 700 within ADD instruction
## 0.5.0 (2013-07-17)
+ Runtime: List all processes running inside a container with 'docker top'
+ Runtime: Host directories can be mounted as volumes with 'docker run -v'
+ Runtime: Containers can expose public UDP ports (eg, '-p 123/udp')
+ Runtime: Optionally specify an exact public port (eg. '-p 80:4500')
+ Registry: New image naming scheme inspired by Go packaging convention allows arbitrary combinations of registries
+ Builder: ENTRYPOINT instruction sets a default binary entry point to a container
+ Builder: VOLUME instruction marks a part of the container as persistent data
* Builder: 'docker build' displays the full output of a build by default
* Runtime: 'docker login' supports additional options
- Runtime: Don't save a container's hostname when committing an image.
- Registry: Fix issues when uploading images to a private registry
## 0.4.8 (2013-07-01)
+ Builder: New build operation ENTRYPOINT adds an executable entry point to the container.
- Runtime: Fix a bug which caused 'docker run -d' to no longer print the container ID.
- Tests: Fix issues in the test suite
## 0.4.7 (2013-06-28)
* Registry: easier push/pull to a custom registry
* Remote API: the progress bar updates faster when downloading and uploading large files
- Remote API: fix a bug in the optional unix socket transport
* Runtime: improve detection of kernel version
+ Runtime: host directories can be mounted as volumes with 'docker run -b'
- Runtime: fix an issue when only attaching to stdin
* Runtime: use 'tar --numeric-owner' to avoid uid mismatch across multiple hosts
* Hack: improve test suite and dev environment
* Hack: remove dependency on unit tests on 'os/user'
+ Documentation: add terminology section
## 0.4.6 (2013-06-22)
- Runtime: fix a bug which caused creation of empty images (and volumes) to crash.
## 0.4.5 (2013-06-21)
+ Builder: 'docker build git://URL' fetches and builds a remote git repository
* Runtime: 'docker ps -s' optionally prints container size
* Tests: Improved and simplified
- Runtime: fix a regression introduced in 0.4.3 which caused the logs command to fail.
- Builder: fix a regression when using ADD with a single regular file.
## 0.4.4 (2013-06-19)
- Builder: fix a regression introduced in 0.4.3 which caused builds to fail on new clients.
## 0.4.3 (2013-06-19)
+ Builder: ADD of a local file will detect tar archives and unpack them
* Runtime: Remove bsdtar dependency
* Runtime: Add unix socket and multiple -H support
* Runtime: Prevent rm of running containers
* Runtime: Use go1.1 cookiejar
* Builder: ADD improvements: use tar for copy + automatically unpack local archives
* Builder: ADD uses tar/untar for copies instead of calling 'cp -ar'
* Builder: nicer output for 'docker build'
* Builder: fixed the behavior of ADD to be (mostly) reverse-compatible, predictable and well-documented.
* Client: HumanReadable ProgressBar sizes in pull
* Client: Fix docker version's git commit output
* API: Send all tags on History API call
* API: Add tag lookup to history command. Fixes #882
- Runtime: Fix issue detaching from running TTY container
- Runtime: Forbid parallel push/pull for a single image/repo. Fixes #311
- Runtime: Fix race condition within Run command when attaching.
- Builder: fix a bug which caused builds to fail if ADD was the first command
- Documentation: fix missing command in irc bouncer example
## 0.4.2 (2013-06-17)
- Packaging: Bumped version to work around an Ubuntu bug
## 0.4.1 (2013-06-17)
+ Remote Api: Add flag to enable cross domain requests
+ Remote Api/Client: Add images and containers sizes in docker ps and docker images
+ Runtime: Configure dns configuration host-wide with 'docker -d -dns'
+ Runtime: Detect faulty DNS configuration and replace it with a public default
+ Runtime: allow docker run <name>:<id>
+ Runtime: you can now specify public port (ex: -p 80:4500)
* Client: allow multiple params in inspect
* Client: Print the container id before the hijack in `docker run`
* Registry: add regexp check on repo's name
* Registry: Move auth to the client
* Runtime: improved image removal to garbage-collect unreferenced parents
* Vagrantfile: Add the rest api port to vagrantfile's port_forward
* Upgrade to Go 1.1
- Builder: don't ignore last line in Dockerfile when it doesn't end with \n
- Registry: Remove login check on pull
## 0.4.0 (2013-06-03)
+ Introducing Builder: 'docker build' builds a container, layer by layer, from a source repository containing a Dockerfile
+ Introducing Remote API: control Docker programmatically using a simple HTTP/json API
* Runtime: various reliability and usability improvements
## 0.3.4 (2013-05-30)
+ Builder: 'docker build' builds a container, layer by layer, from a source repository containing a Dockerfile
+ Builder: 'docker build -t FOO' applies the tag FOO to the newly built container.
+ Runtime: interactive TTYs correctly handle window resize
* Runtime: fix how configuration is merged between layers
+ Remote API: split stdout and stderr on 'docker run'
+ Remote API: optionally listen on a different IP and port (use at your own risk)
* Documentation: improved install instructions.
## 0.3.3 (2013-05-23)
- Registry: Fix push regression
- Various bugfixes
## 0.3.2 (2013-05-09)
* Runtime: Store the actual archive on commit
* Registry: Improve the checksum process
* Registry: Use the size to have a good progress bar while pushing
* Registry: Use the actual archive if it exists in order to speed up the push
- Registry: Fix error 400 on push
## 0.3.1 (2013-05-08)
+ Builder: Implement the autorun capability within docker builder
+ Builder: Add caching to docker builder
+ Builder: Add support for docker builder with native API as top level command
+ Runtime: Add go version to debug infos
+ Builder: Implement ENV within docker builder
+ Registry: Add docker search top level command in order to search a repository
+ Images: output graph of images to dot (graphviz)
+ Documentation: new introduction and high-level overview
+ Documentation: Add the documentation for docker builder
+ Website: new high-level overview
- Makefile: Swap "go get" for "go get -d", especially to compile on go1.1rc
- Images: fix ByParent function
- Builder: Check that the command exists prior to create, and add unit tests for the case
- Registry: Fix pull for official images with specific tag
- Registry: Fix issue when login in with a different user and trying to push
- Documentation: CSS fix for docker documentation to make REST API docs look better.
- Documentation: Fixed CouchDB example page header mistake
- Documentation: fixed README formatting
* Registry: Improve checksum - async calculation
* Runtime: kernel version - don't show the dash if flavor is empty
* Documentation: updated www.docker.io website.
* Builder: use any whitespace instead of tabs
* Packaging: packaging ubuntu; issue #510: Use golang-stable PPA package to build docker
## 0.3.0 (2013-05-06)
+ Registry: Implement the new registry
+ Documentation: new example: sharing data between 2 couchdb databases
- Runtime: Fix the command existence check
- Runtime: strings.Split may return an empty string on no match
- Runtime: Fix an index out of range crash if cgroup memory is not
* Documentation: Various improvements
* Vagrant: Use only one deb line in /etc/apt
## 0.2.2 (2013-05-03)
+ Support for data volumes ('docker run -v=PATH')
+ Share data volumes between containers ('docker run -volumes-from')
+ Improved documentation
* Upgrade to Go 1.0.3
* Various upgrades to the dev environment for contributors
## 0.2.1 (2013-05-01)
+ 'docker commit -run' bundles a layer with default runtime options: command, ports etc.
* Improve install process on Vagrant
+ New Dockerfile operation: "maintainer"
+ New Dockerfile operation: "expose"
+ New Dockerfile operation: "cmd"
+ Contrib script to build a Debian base layer
+ 'docker -d -r': restart crashed containers at daemon startup
* Runtime: improve test coverage
## 0.2.0 (2013-04-23)
## 0.2.0 (2012-04-23)
- Runtime: ghost containers can be killed and waited for
* Documentation: update install instructions
- Packaging: fix Vagrantfile
@@ -437,12 +8,13 @@
+ Add a changelog
- Various bugfixes
## 0.1.8 (2013-04-22)
- Dynamically detect cgroup capabilities
- Issue stability warning on kernels <3.8
- 'docker push' buffers on disk instead of memory
- Fix 'docker diff' for removed files
- Fix 'docker stop' for ghost containers
- Fix handling of pidfile
- Various bugfixes and stability improvements
@@ -463,7 +35,7 @@
- Improve diagnosis of missing system capabilities
- Allow disabling memory limits at compile time
- Add debian packaging
- Documentation: installing on Arch Linux
- Documentation: running Redis on docker
- Fixed lxc 0.9 compatibility
- Automatically load aufs module


@@ -1,12 +1,11 @@
# Contributing to Docker
Want to hack on Docker? Awesome! Here are instructions to get you started. They are probably not perfect, please let us know if anything feels
Want to hack on Docker? Awesome! There are instructions to get you
started on the website: http://docker.io/gettingstarted.html
They are probably not perfect, please let us know if anything feels
wrong or incomplete.
## Build Environment
For instructions on setting up your development environment, please see our dedicated [dev environment setup docs](http://docs.docker.io/en/latest/contributing/devenvironment/).
## Contribution guidelines
### Pull requests are always welcome
@@ -27,7 +26,7 @@ that feature *on top of* docker.
### Discuss your design on the mailing list
We recommend discussing your plans [on the mailing
list](https://groups.google.com/forum/?fromgroups#!forum/docker-dev)
list](https://groups.google.com/forum/?fromgroups#!forum/docker-club)
before starting to code - especially for more ambitious contributions.
This gives other contributors a chance to point you in the right
direction, give feedback on your design, and maybe point out if someone
@@ -59,10 +58,8 @@ Submit unit tests for your changes. Go has a great test framework built in; use
it! Take a look at existing tests for inspiration. Run the full test suite on
your branch before submitting a pull request.
Update the documentation when creating or modifying features. Test
your documentation changes for clarity, concision, and correctness, as
well as a clean document build. See ``docs/README.md`` for more
information on building the docs and how docs get released.
Make sure you include relevant updates or additions to documentation when
creating or modifying features.
Write clean code. Universally formatted code promotes ease of writing, reading,
and maintenance. Always run `go fmt` before committing your changes. Most
@@ -94,25 +91,3 @@ Add your name to the AUTHORS file, but make sure the list is sorted and your
name and email address match your git configuration. The AUTHORS file is
regenerated occasionally from the git commit history, so a mismatch may result
in your changes being overwritten.
### Approval
Docker maintainers use LGTM (looks good to me) in comments on the code review
to indicate acceptance.
A change requires LGTMs from an absolute majority of the maintainers of each
component affected. For example, if a change affects docs/ and registry/, it
needs an absolute majority from the maintainers of docs/ AND, separately, an
absolute majority of the maintainers of registry/.
For more details see [MAINTAINERS.md](hack/MAINTAINERS.md)
### How can I become a maintainer?
* Step 1: learn the component inside out
* Step 2: make yourself useful by contributing code, bugfixes, support etc.
* Step 3: volunteer on the irc channel (#docker@freenode)
Don't forget: being a maintainer is a time investment. Make sure you will have time to make yourself available.
You don't have to be a maintainer to make a difference on the project!


@@ -1,67 +0,0 @@
# This file describes the standard way to build Docker, using docker
#
# Usage:
#
# # Assemble the full dev environment. This is slow the first time.
# docker build -t docker .
# # Apparmor messes with privileged mode: disable it
# /etc/init.d/apparmor stop ; /etc/init.d/apparmor teardown
#
# # Mount your source in an interactive container for quick testing:
# docker run -v `pwd`:/go/src/github.com/dotcloud/docker -privileged -lxc-conf=lxc.aa_profile=unconfined -i -t docker bash
#
#
# # Run the test suite:
# docker run -privileged -lxc-conf=lxc.aa_profile=unconfined docker hack/make.sh test
#
# # Publish a release:
# docker run -privileged -lxc-conf=lxc.aa_profile=unconfined \
# -e AWS_S3_BUCKET=baz \
# -e AWS_ACCESS_KEY=foo \
# -e AWS_SECRET_KEY=bar \
# -e GPG_PASSPHRASE=gloubiboulga \
# docker hack/release.sh
#
docker-version 0.6.1
from ubuntu:12.04
maintainer Solomon Hykes <solomon@dotcloud.com>
# Build dependencies
run echo 'deb http://archive.ubuntu.com/ubuntu precise main universe' > /etc/apt/sources.list
run apt-get update
run apt-get install -y -q curl
run apt-get install -y -q git
run apt-get install -y -q mercurial
run apt-get install -y -q build-essential libsqlite3-dev
# Install Go
run curl -s https://go.googlecode.com/files/go1.2rc2.src.tar.gz | tar -v -C /usr/local -xz
env PATH /usr/local/go/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin
env GOPATH /go:/go/src/github.com/dotcloud/docker/vendor
run cd /usr/local/go/src && ./make.bash && go install -ldflags '-w -linkmode external -extldflags "-static -Wl,--unresolved-symbols=ignore-in-shared-libs"' -tags netgo -a std
# Ubuntu stuff
run apt-get install -y -q ruby1.9.3 rubygems libffi-dev
run gem install --no-rdoc --no-ri fpm
run apt-get install -y -q reprepro dpkg-sig
# Install s3cmd 1.0.1 (earlier versions don't support env variables in the config)
run apt-get install -y -q python-pip
run pip install s3cmd
run pip install python-magic
run /bin/echo -e '[default]\naccess_key=$AWS_ACCESS_KEY\nsecret_key=$AWS_SECRET_KEY\n' > /.s3cfg
# Runtime dependencies
run apt-get install -y -q iptables
run apt-get install -y -q lxc
run apt-get install -y -q aufs-tools
volume /var/lib/docker
workdir /go/src/github.com/dotcloud/docker
# Wrap all commands in the "docker-in-docker" script to allow nested containers
entrypoint ["hack/dind"]
# Upload docker source
add . /go/src/github.com/dotcloud/docker

FIXME: 30 changed lines

@@ -1,30 +0,0 @@
## FIXME
This file is a loose collection of things to improve in the codebase, for the internal
use of the maintainers.
They are not big enough to be in the roadmap, not user-facing enough to be github issues,
and not important enough to be discussed in the mailing list.
They are just like FIXME comments in the source code, except we're not sure where in the source
to put them - so we put them here :)
* Merge Runtime, Server and Builder into Runtime
* Run linter on codebase
* Unify build commands and regular commands
* Move source code into src/ subdir for clarity
* docker build: on non-existent local path for ADD, don't show full absolute path on the host
* docker tag foo REPO:TAG
* use size header for progress bar in pull
* Clean up context upload in build!!!
* Parallel pull
* Always generate a resolv.conf per container, to avoid changing resolv.conf under the container's feet
* Save metadata with import/export (#1974)
* Upgrade dockerd without stopping containers
* Simple command to remove all untagged images (`docker rmi $(docker images | awk '/^<none>/ { print $3 }')`)
* Simple command to clean up containers for disk space
* Caching after an ADD (#880)
* Clean up the ProgressReader api, it's a PITA to use
* Use netlink instead of iproute2/iptables (#925)

View File

@@ -1,6 +0,0 @@
Solomon Hykes <solomon@dotcloud.com> (@shykes)
Guillaume Charmes <guillaume@dotcloud.com> (@creack)
Victor Vieux <victor@dotcloud.com> (@vieux)
Michael Crosby <michael@crosbymichael.com> (@crosbymichael)
api.go: Victor Vieux <victor@dotcloud.com> (@vieux)
Vagrantfile: Daniel Mizyrycki <daniel@dotcloud.com> (@mzdaniel)

Makefile (new file): 78 changed lines

@@ -0,0 +1,78 @@
DOCKER_PACKAGE := github.com/dotcloud/docker
RELEASE_VERSION := $(shell git tag | grep -E "v[0-9\.]+$$" | sort -nr | head -n 1)
SRCRELEASE := docker-$(RELEASE_VERSION)
BINRELEASE := docker-$(RELEASE_VERSION).tgz
GIT_ROOT := $(shell git rev-parse --show-toplevel)
BUILD_DIR := $(CURDIR)/.gopath
GOPATH ?= $(BUILD_DIR)
export GOPATH
GO_OPTIONS ?=
ifeq ($(VERBOSE), 1)
GO_OPTIONS += -v
endif
GIT_COMMIT = $(shell git rev-parse --short HEAD)
GIT_STATUS = $(shell test -n "`git status --porcelain`" && echo "+CHANGES")
BUILD_OPTIONS = -ldflags "-X main.GIT_COMMIT $(GIT_COMMIT)$(GIT_STATUS)"
SRC_DIR := $(GOPATH)/src
DOCKER_DIR := $(SRC_DIR)/$(DOCKER_PACKAGE)
DOCKER_MAIN := $(DOCKER_DIR)/docker
DOCKER_BIN_RELATIVE := bin/docker
DOCKER_BIN := $(CURDIR)/$(DOCKER_BIN_RELATIVE)
.PHONY: all clean test hack release srcrelease $(BINRELEASE) $(SRCRELEASE) $(DOCKER_BIN) $(DOCKER_DIR)
all: $(DOCKER_BIN)
$(DOCKER_BIN): $(DOCKER_DIR)
@mkdir -p $(dir $@)
@(cd $(DOCKER_MAIN); go build $(GO_OPTIONS) $(BUILD_OPTIONS) -o $@)
@echo $(DOCKER_BIN_RELATIVE) is created.
$(DOCKER_DIR):
@mkdir -p $(dir $@)
@rm -f $@
@ln -sf $(CURDIR)/ $@
@(cd $(DOCKER_MAIN); go get $(GO_OPTIONS))
whichrelease:
echo $(RELEASE_VERSION)
release: $(BINRELEASE)
srcrelease: $(SRCRELEASE)
deps: $(DOCKER_DIR)
# A clean checkout of $RELEASE_VERSION, with vendored dependencies
$(SRCRELEASE):
rm -fr $(SRCRELEASE)
git clone $(GIT_ROOT) $(SRCRELEASE)
cd $(SRCRELEASE); git checkout -q $(RELEASE_VERSION)
# A binary release ready to be uploaded to a mirror
$(BINRELEASE): $(SRCRELEASE)
rm -f $(BINRELEASE)
cd $(SRCRELEASE); make; cp -R bin docker-$(RELEASE_VERSION); tar -f ../$(BINRELEASE) -zv -c docker-$(RELEASE_VERSION)
clean:
@rm -rf $(dir $(DOCKER_BIN))
ifeq ($(GOPATH), $(BUILD_DIR))
@rm -rf $(BUILD_DIR)
else ifneq ($(DOCKER_DIR), $(realpath $(DOCKER_DIR)))
@rm -f $(DOCKER_DIR)
endif
test: all
@(cd $(DOCKER_DIR); sudo -E go test $(GO_OPTIONS))
fmt:
@gofmt -s -l -w .
hack:
cd $(CURDIR)/buildbot && vagrant up
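
For orientation, a usage sketch of the targets defined above, inferred from the Makefile itself rather than taken from this page:

```bash
# Build bin/docker; VERBOSE=1 adds -v to go build via GO_OPTIONS
make VERBOSE=1

# Run the test suite (the test target wraps sudo -E go test)
make test

# Show the tag that would be released, then cut source and binary releases
make whichrelease
make srcrelease
make release
```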

NOTICE: 42 changed lines

@@ -1,42 +1,6 @@
Docker
Copyright 2012-2013 Docker, Inc.
Copyright 2012-2013 dotCloud, inc.
This product includes software developed at Docker, Inc. (http://www.docker.com).
This product includes software developed at dotCloud, inc. (http://www.dotcloud.com).
This product contains software (https://github.com/kr/pty) developed
by Keith Rarick, licensed under the MIT License.
The following is courtesy of our legal counsel:
Transfers of Docker shall be in accordance with applicable export
controls of any country and all other applicable legal requirements.
Docker shall not be distributed or downloaded to or in Cuba, Iran,
North Korea, Sudan or Syria and shall not be distributed or downloaded
to any person on the Denied Persons List administered by the U.S.
Department of Commerce.
What does that mean?
Here is a further explanation from our legal counsel:
Like all software products that utilize cryptography, the export and
use of Docker is subject to the U.S. Commerce Department's Export
Administration Regulations (EAR) because it uses or contains
cryptography (see
http://www.bis.doc.gov/index.php/policy-guidance/encryption). Certain
free and open source software projects have a lightweight set of
requirements, which can generally be met by providing email notice to
the appropriate U.S. government agencies that their source code is
available on a publicly available repository and making the
appropriate statements in the README.
The restrictions of the EAR apply to certain denied locations
(currently Iran, Sudan, Syria, North Korea, or Cuba) and those
individuals on the Denied Persons List, which is available here:
http://www.bis.doc.gov/index.php/policy-guidance/lists-of-parties-of-concern/denied-persons-list.
If you are incorporating Docker into a new open source project, the
EAR restrictions apply to your incorporation of Docker into your
project in the same manner as other cryptography-enabled projects,
such as OpenSSL, almost all Linux distributions, etc.
For more information, see http://www.apache.org/dev/crypto.html and/or
seek legal counsel.
This product contains software (https://github.com/kr/pty) developed by Keith Rarick, licensed under the MIT License.

README.md: 415 changed lines

@@ -1,202 +1,317 @@
Docker: the Linux container engine
==================================
Docker: the Linux container runtime
===================================
Docker is an open source project to pack, ship and run any application
as a lightweight container
Docker complements LXC with a high-level API which operates at the process level. It runs unix processes with strong guarantees of isolation and repeatability across servers.
Docker containers are both *hardware-agnostic* and
*platform-agnostic*. This means that they can run anywhere, from your
laptop to the largest EC2 compute instance and everything in between -
and they don't require that you use a particular language, framework
or packaging system. That makes them great building blocks for
deploying and scaling web apps, databases and backend services without
depending on a particular stack or provider.
Docker is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.
Docker is an open-source implementation of the deployment engine which
powers [dotCloud](http://dotcloud.com), a popular
Platform-as-a-Service. It benefits directly from the experience
accumulated over several years of large-scale operation and support of
hundreds of thousands of applications and databases.
![Docker L](docs/sources/static_files/lego_docker.jpg "Docker")
![Docker L](docs/theme/docker/static/img/dockerlogo-h.png "Docker")
* *Heterogeneous payloads*: any combination of binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, you name it. No more juggling between domain-specific tools. Docker can deploy and run them all.
## Better than VMs
* *Any server*: docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments.
A common method for distributing applications and sandboxing their
execution is to use virtual machines, or VMs. Typical VM formats are
VMWare's vmdk, Oracle Virtualbox's vdi, and Amazon EC2's ami. In
theory these formats should allow every developer to automatically
package their application into a "machine" for easy distribution and
deployment. In practice, that almost never happens, for a few reasons:
* *Isolation*: docker isolates processes from each other and from the underlying host, using lightweight containers.
* *Size*: VMs are very large which makes them impractical to store
and transfer.
* *Performance*: running VMs consumes significant CPU and memory,
which makes them impractical in many scenarios, for example local
development of multi-tier applications, and large-scale deployment
of cpu and memory-intensive applications on large numbers of
machines.
* *Portability*: competing VM environments don't play well with each
other. Although conversion tools do exist, they are limited and
add even more overhead.
* *Hardware-centric*: VMs were designed with machine operators in
mind, not software developers. As a result, they offer very
limited tooling for what developers need most: building, testing
and running their software. For example, VMs offer no facilities
for application versioning, monitoring, configuration, logging or
service discovery.
By contrast, Docker relies on a different sandboxing method known as
*containerization*. Unlike traditional virtualization,
containerization takes place at the kernel level. Most modern
operating system kernels now support the primitives necessary for
containerization, including Linux with [openvz](http://openvz.org),
[vserver](http://linux-vserver.org) and more recently
[lxc](http://lxc.sourceforge.net), Solaris with
[zones](http://docs.oracle.com/cd/E26502_01/html/E29024/preface-1.html#scrolltoc)
and FreeBSD with
[Jails](http://www.freebsd.org/doc/handbook/jails.html).
Docker builds on top of these low-level primitives to offer developers
a portable format and runtime environment that solves all 4
problems. Docker containers are small (and their transfer can be
optimized with layers), they have basically zero memory and cpu
overhead, they are completely portable and are designed from the
ground up with an application-centric design.
The best part: because ``docker`` operates at the OS level, it can
still be run inside a VM!
## Plays well with others
Docker does not require that you buy into a particular programming
language, framework, packaging system or configuration language.
Is your application a Unix process? Does it use files, tcp
connections, environment variables, standard Unix streams and
command-line arguments as inputs and outputs? Then ``docker`` can run
it.
Can your application's build be expressed as a sequence of such
commands? Then ``docker`` can build it.
* *Repeatability*: because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run.
## Escape dependency hell
Notable features
-----------------
A common problem for developers is the difficulty of managing all
their application's dependencies in a simple and automated way.
* Filesystem isolation: each process container runs in a completely separate root filesystem.
This is usually difficult for several reasons:
* Resource isolation: system resources like cpu and memory can be allocated differently to each process container, using cgroups.
* *Cross-platform dependencies*. Modern applications often depend on
a combination of system libraries and binaries, language-specific
packages, framework-specific modules, internal components
developed for another project, etc. These dependencies live in
different "worlds" and require different tools - these tools
typically don't work well with each other, requiring awkward
custom integrations.
* Network isolation: each process container runs in its own network namespace, with a virtual interface and IP address of its own.
* Conflicting dependencies. Different applications may depend on
different versions of the same dependency. Packaging tools handle
these situations with various degrees of ease - but they all
handle them in different and incompatible ways, which again forces
the developer to do extra work.
* Custom dependencies. A developer may need to prepare a custom
version of their application's dependency. Some packaging systems
can handle custom versions of a dependency, others can't - and all
of them handle it differently.
* Copy-on-write: root filesystems are created using copy-on-write, which makes deployment extremely fast, memory-cheap and disk-cheap.
* Logging: the standard streams (stdout/stderr/stdin) of each process container are collected and logged for real-time or batch retrieval.
Docker solves dependency hell by giving the developer a simple way to
express *all* their application's dependencies in one place, and
streamline the process of assembling them. If this makes you think of
[XKCD 927](http://xkcd.com/927/), don't worry. Docker doesn't
*replace* your favorite packaging systems. It simply orchestrates
their use in a simple and repeatable way. How does it do that? With
layers.
* Change management: changes to a container's filesystem can be committed into a new image and re-used to create more containers. No templating or manual configuration required.
Docker defines a build as running a sequence of Unix commands, one
after the other, in the same container. Build commands modify the
contents of the container (usually by installing new files on the
filesystem), the next command modifies it some more, etc. Since each
build command inherits the result of the previous commands, the
*order* in which the commands are executed expresses *dependencies*.
* Interactive shell: docker can allocate a pseudo-tty and attach to the standard input of any container, for example to run a throwaway interactive shell.
Here's a typical Docker build process:

```bash
from ubuntu:12.10
run apt-get update
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y python
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y python-pip
run pip install django
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y curl
run curl -L https://github.com/shykes/helloflask/archive/master.tar.gz | tar -xzv
run cd helloflask-master && pip install -r requirements.txt
```

Install instructions
==================
Quick install on Ubuntu 12.04 and 12.10
---------------------------------------

```bash
curl get.docker.io | sh -x
```
Note that Docker doesn't care *how* dependencies are built - as long
as they can be built by running a Unix command in a container.
Binary installs
----------------
Docker supports the following binary installation methods.
Note that some methods are community contributions and not yet officially supported.
Getting started
===============
* [Ubuntu 12.04 and 12.10 (officially supported)](http://docs.docker.io/en/latest/installation/ubuntulinux/)
* [Arch Linux](http://docs.docker.io/en/latest/installation/archlinux/)
* [MacOS X (with Vagrant)](http://docs.docker.io/en/latest/installation/macos/)
* [Windows (with Vagrant)](http://docs.docker.io/en/latest/installation/windows/)
* [Amazon EC2 (with Vagrant)](http://docs.docker.io/en/latest/installation/amazon/)
Docker can be installed on your local machine as well as servers - both bare metal and virtualized.
It is available as a binary on most modern Linux systems, or as a VM on Windows, Mac and other systems.
Installing from source
----------------------
We also offer an interactive tutorial for quickly learning the basics of using Docker.
1. Make sure you have a [Go language](http://golang.org/doc/install) compiler and [git](http://git-scm.com) installed.
2. Checkout the source code
For up-to-date install instructions and online tutorials, see the [Getting Started page](http://www.docker.io/gettingstarted/).
```bash
git clone http://github.com/dotcloud/docker
```
3. Build the docker binary
```bash
cd docker
make VERBOSE=1
sudo cp ./bin/docker /usr/local/bin/docker
```
Usage examples
==============

Docker can be used to run short-lived commands, long-running daemons (app servers, databases, etc.),
interactive shell sessions, and more.

You can find a [list of real-world examples](http://docs.docker.io/en/latest/examples/) in the documentation.

First run the docker daemon
---------------------------
All the examples assume your machine is running the docker daemon. To run the docker daemon in the background, simply type:
```bash
# On a production system you want this running in an init script
sudo docker -d &
```
Now you can run docker in client mode: all commands will be forwarded to the docker daemon, so the client can run from any account.
```bash
# Now you can run docker commands from any account.
docker help
```
Throwaway shell in a base ubuntu image
--------------------------------------
```bash
docker pull ubuntu:12.10
# Run an interactive shell, allocate a tty, attach stdin and stdout
# To detach the tty without exiting the shell, use the escape sequence Ctrl-p + Ctrl-q
docker run -i -t ubuntu:12.10 /bin/bash
```
Starting a long-running worker process
--------------------------------------
```bash
# Start a very useful long-running process
JOB=$(docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done")
# Collect the output of the job so far
docker logs $JOB
# Kill the job
docker kill $JOB
```
Running an irc bouncer
----------------------
```bash
BOUNCER_ID=$(docker run -d -p 6667 -u irc shykes/znc $USER $PASSWORD)
echo "Configure your irc client to connect to port $(docker port $BOUNCER_ID 6667) of this machine"
```
Running Redis
-------------
```bash
REDIS_ID=$(docker run -d -p 6379 shykes/redis redis-server)
echo "Configure your redis client to connect to port $(docker port $REDIS_ID 6379) of this machine"
```
Share your own image!
---------------------
```bash
CONTAINER=$(docker run -d ubuntu:12.10 apt-get install -y curl)
docker commit -m "Installed curl" $CONTAINER $USER/betterbase
docker push $USER/betterbase
```
A list of publicly available images is [available here](https://github.com/dotcloud/docker/wiki/Public-docker-images).
Expose a service on a TCP port
------------------------------
```bash
# Expose port 4444 of this container, and tell netcat to listen on it
JOB=$(docker run -d -p 4444 base /bin/nc -l -p 4444)
# Which public port is NATed to my container?
PORT=$(docker port $JOB 4444)
# Connect to the public port via the host's public address
# Please note that because of how routing works, connecting to localhost or 127.0.0.1 on $PORT will not work.
IP=$(ifconfig eth0 | perl -n -e 'if (m/inet addr:([\d\.]+)/g) { print $1 }')
echo hello world | nc $IP $PORT
# Verify that the network connection worked
echo "Daemon received: $(docker logs $JOB)"
```
Under the hood
--------------
Under the hood, Docker is built on the following components:
* The [cgroup](http://blog.dotcloud.com/kernel-secrets-from-the-paas-garage-part-24-c) and [namespacing](http://blog.dotcloud.com/under-the-hood-linux-kernels-on-dotcloud-part) capabilities of the Linux kernel;
* [AUFS](http://aufs.sourceforge.net/aufs.html), a powerful union filesystem with copy-on-write capabilities;
* The [Go](http://golang.org) programming language;
* [lxc](http://lxc.sourceforge.net/), a set of convenience scripts to simplify the creation of Linux containers.
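To see what the union filesystem buys you, here is a minimal sketch of an AUFS union mount - a writable branch stacked over a read-only one - which is roughly how Docker gives each container a cheap copy-on-write view of its image. The paths are illustrative, and an aufs-enabled kernel is assumed:

```bash
# one read-only "image" layer, one writable "container" layer
mkdir -p /tmp/image /tmp/rw /tmp/union
echo "from the image" > /tmp/image/hello.txt

# stack them: writes land in /tmp/rw, reads fall through to /tmp/image
sudo mount -t aufs -o br=/tmp/rw=rw:/tmp/image=ro none /tmp/union

echo "changed in the container" > /tmp/union/hello.txt
cat /tmp/image/hello.txt   # still says "from the image"
```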
Contributing to Docker
======================

Want to hack on Docker? Awesome! There are instructions to get you
started [here](CONTRIBUTING.md) and on the website:
http://docs.docker.io/en/latest/contributing/contributing/

They are probably not perfect, please let us know if anything feels
wrong or incomplete.
### Legal

*Brought to you courtesy of our legal counsel. For more context,
please see the Notice document.*

Transfers of Docker shall be in accordance with applicable export controls
of any country and all other applicable legal requirements. Without limiting the
foregoing, Docker shall not be distributed or downloaded to any individual or
location if such distribution or download would violate the applicable US
government export regulations.

For more information, please see http://www.bis.doc.gov

Note
----

We also keep the documentation in this repository. The website documentation is generated with Sphinx from these sources.
Please find them under docs/sources/ and read more about them at https://github.com/dotcloud/docker/blob/master/docs/README.md
Please feel free to fix / update the documentation and send us pull requests. More tutorials are also welcome.
Setting up a dev environment
----------------------------

The following instructions have been verified to work on Ubuntu 12.10:
```bash
sudo apt-get -y install lxc wget bsdtar curl golang git
export GOPATH=~/go/
export PATH=$GOPATH/bin:$PATH
mkdir -p $GOPATH/src/github.com/dotcloud
cd $GOPATH/src/github.com/dotcloud
git clone git@github.com:dotcloud/docker.git
cd docker
go get -v github.com/dotcloud/docker/...
go install -v github.com/dotcloud/docker/...
```
Then run the docker daemon:
```bash
sudo $GOPATH/bin/docker -d
```
Run the `go install` command (above) to recompile docker.
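As a quick smoke test of the freshly built binary, you can pull a base image and run a trivial command in it. This assumes the daemon from the previous step is still running; the dev environment above already puts `$GOPATH/bin` on your PATH:

```bash
# pull a base image, then run a command in a fresh container
docker pull ubuntu
docker run ubuntu echo hello world
```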
What is a Standard Container?
=============================
Docker defines a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in
a format that is self-describing and portable, so that any compliant runtime can run it without extra dependencies, regardless of the underlying machine and the contents of the container.
The spec for Standard Containers is currently a work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment.
A great analogy for this is the shipping container. Just like Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.
### 1. STANDARD OPERATIONS
Just like shipping containers, Standard Containers define a set of STANDARD OPERATIONS. Shipping containers can be lifted, stacked, locked, loaded, unloaded and labelled. Similarly, standard containers can be started, stopped, copied, snapshotted, downloaded, uploaded and tagged.
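As a rough illustration - not part of the spec itself - here is how those operations surface in the docker command line, using placeholder container and image names:

```bash
docker stop $CONTAINER_ID                 # stop a running container
docker start $CONTAINER_ID                # start it again
docker commit $CONTAINER_ID me/snapshot   # snapshot (and tag) its filesystem
docker push me/snapshot                   # upload the image
docker pull me/snapshot                   # download it elsewhere
```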
### 2. CONTENT-AGNOSTIC
Just like shipping containers, Standard Containers are CONTENT-AGNOSTIC: all standard operations have the same effect regardless of the contents. A shipping container will be stacked in exactly the same way whether it contains Vietnamese powder coffee or spare Maserati parts. Similarly, Standard Containers are started or uploaded in the same way whether they contain a postgres database, a php application with its dependencies and application server, or Java build artifacts.
### 3. INFRASTRUCTURE-AGNOSTIC
Both types of containers are INFRASTRUCTURE-AGNOSTIC: they can be transported to thousands of facilities around the world, and manipulated by a wide variety of equipment. A shipping container can be packed in a factory in Ukraine, transported by truck to the nearest routing center, stacked onto a train, loaded into a German boat by an Australian-built crane, stored in a warehouse at a US facility, etc. Similarly, a standard container can be bundled on my laptop, uploaded to S3, downloaded, run and snapshotted by a build server at Equinix in Virginia, uploaded to 10 staging servers in a home-made Openstack cluster, then sent to 30 production instances across 3 EC2 regions.
### 4. DESIGNED FOR AUTOMATION
Because they offer the same standard operations regardless of content and infrastructure, Standard Containers, just like their physical counterpart, are extremely well-suited for automation. In fact, you could say automation is their secret weapon.
Many things that once required time-consuming and error-prone human effort can now be programmed. Before shipping containers, a bag of powder coffee was hauled, dragged, dropped, rolled and stacked by 10 different people in 10 different locations by the time it reached its destination. 1 out of 50 disappeared. 1 out of 20 was damaged. The process was slow, inefficient and cost a fortune - and was entirely different depending on the facility and the type of goods.
Similarly, before Standard Containers, by the time a software component ran in production, it had been individually built, configured, bundled, documented, patched, vendored, templated, tweaked and instrumented by 10 different people on 10 different computers. Builds failed, libraries conflicted, mirrors crashed, post-it notes were lost, logs were misplaced, cluster updates were half-broken. The process was slow, inefficient and cost a fortune - and was entirely different depending on the language and infrastructure provider.
### 5. INDUSTRIAL-GRADE DELIVERY
There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded on the same boats, by the same cranes, in the same facilities, and sent anywhere in the world with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel halfway across the world in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.
With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality.
Standard Container Specification
--------------------------------
(TODO)
### Image format
### Standard operations
* Copy
* Run
* Stop
* Wait
* Commit
* Attach standard streams
* List filesystem changes
* ...
### Execution environment
#### Root filesystem
#### Environment variables
#### Process arguments
#### Networking
#### Process namespacing
#### Resource limits
#### Process monitoring
#### Logging
#### Signals
#### Pseudo-terminal allocation
#### Security

71
SPECS/data-volumes.md Normal file
View File

@@ -0,0 +1,71 @@
## Spec for data volumes
Spec owner: Solomon Hykes <solomon@dotcloud.com>
Data volumes (issue #111) are a much-requested feature which has triggered much discussion and debate. Below is the current authoritative spec for implementing data volumes.
This spec will be deprecated once the feature is fully implemented.
Discussion, requests, trolls, demands, offerings, threats and other forms of supplications concerning this spec should be addressed to Solomon here: https://github.com/dotcloud/docker/issues/111
### 1. Creating data volumes
At container creation, parts of a container's filesystem can be mounted as separate data volumes. Volumes are defined with the -v flag.
For example:
```bash
$ docker run -v /var/lib/postgres -v /var/log postgres /usr/bin/postgres
```
In this example, a new container is created from the 'postgres' image. At the same time, docker creates 2 new data volumes: one will be mapped to the container at /var/lib/postgres, the other at /var/log.
2 important notes:
1) Volumes don't have top-level names. At no point does the user provide a name, nor is one given to them. Volumes are identified by the path at which they are mounted inside their container.
2) The user doesn't choose the source of the volume. Docker only mounts volumes it created itself, in the same way that it only runs containers that it created itself. That is by design.
### 2. Sharing data volumes
Instead of creating its own volumes, a container can share another container's volumes. For example:
```bash
$ docker run --volumes-from $OTHER_CONTAINER_ID postgres /usr/local/bin/postgres-backup
```
In this example, a new container is created from the 'postgres' image. At the same time, docker will *re-use* the 2 data volumes created in the previous example. One volume will be mounted on the /var/lib/postgres of *both* containers, and the other will be mounted on the /var/log of both containers.
### 3. Under the hood
Docker stores volumes in /var/lib/docker/volumes. Each volume receives a globally unique ID at creation, and is stored at /var/lib/docker/volumes/ID.
At creation, volumes are attached to a single container - the source of truth for this mapping will be the container's configuration.
Mounting a volume consists of calling "mount --bind" from the volume's directory to the appropriate sub-directory of the container mountpoint. This may be done by Docker itself, or farmed out to lxc (which supports mount-binding) if possible.
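In shell terms, the mechanism amounts to something like the following sketch; the volume ID and the container rootfs path are illustrative, not Docker's exact invocation:

```bash
# docker generates a unique ID for the volume at container creation
VOLUME_ID=4e9f2b7c61a1
mkdir -p /var/lib/docker/volumes/$VOLUME_ID

# bind-mount the volume directory into the container's root filesystem
mount --bind /var/lib/docker/volumes/$VOLUME_ID \
      /var/lib/docker/containers/$CONTAINER_ID/rootfs/var/lib/postgres
```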
### 4. Backups, transfers and other volume operations
Volumes sometimes need to be backed up, transferred between hosts, synchronized, etc. These operations are typically application-specific or site-specific, e.g. rsync vs. S3 upload vs. replication vs...
Rather than attempting to implement all these scenarios directly, Docker will allow for custom implementations using an extension mechanism.
### 5. Custom volume handlers
Docker allows for arbitrary code to be executed against a container's volumes, to implement any custom action: backup, transfer, synchronization across hosts, etc.
Here's an example:
```bash
$ DB=$(docker run -d -v /var/lib/postgres -v /var/log postgres /usr/bin/postgres)
$ BACKUP_JOB=$(docker run -d --volumes-from $DB shykes/backuper /usr/local/bin/backup-postgres --s3creds=$S3CREDS)
$ docker wait $BACKUP_JOB
```
Congratulations, you just implemented a custom volume handler, using Docker's built-in ability to 1) execute arbitrary code and 2) share volumes between containers.

View File

@@ -1 +0,0 @@
0.6.5

126
Vagrantfile vendored
View File

@@ -1,67 +1,40 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
BOX_NAME = ENV['BOX_NAME'] || "ubuntu"
BOX_URI = ENV['BOX_URI'] || "http://files.vagrantup.com/precise64.box"
VF_BOX_URI = ENV['BOX_URI'] || "http://files.vagrantup.com/precise64_vmware_fusion.box"
AWS_REGION = ENV['AWS_REGION'] || "us-east-1"
AWS_AMI = ENV['AWS_AMI'] || "ami-d0f89fb9"
FORWARD_DOCKER_PORTS = ENV['FORWARD_DOCKER_PORTS']
def v10(config)
config.vm.box = 'precise64'
config.vm.box_url = 'http://files.vagrantup.com/precise64.box'
Vagrant::Config.run do |config|
# Setup virtual machine box. This VM configuration code is always executed.
config.vm.box = BOX_NAME
config.vm.box_url = BOX_URI
config.ssh.forward_agent = true
# Provision docker and new kernel if deployment was not done.
# It is assumed Vagrant can successfully launch the provider instance.
if Dir.glob("#{File.dirname(__FILE__)}/.vagrant/machines/default/*/id").empty?
# Add lxc-docker package
pkg_cmd = "wget -q -O - https://get.docker.io/gpg | apt-key add -;" \
"echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list;" \
"apt-get update -qq; apt-get install -q -y --force-yes lxc-docker; "
# Add Ubuntu raring backported kernel
pkg_cmd << "apt-get update -qq; apt-get install -q -y linux-image-generic-lts-raring; "
# Add guest additions if local vbox VM. As virtualbox is the default provider,
# it is assumed it won't be explicitly stated.
if ENV["VAGRANT_DEFAULT_PROVIDER"].nil? && ARGV.none? { |arg| arg.downcase.start_with?("--provider") }
pkg_cmd << "apt-get install -q -y linux-headers-generic-lts-raring dkms; " \
"echo 'Downloading VBox Guest Additions...'; " \
"wget -q http://dlc.sun.com.edgesuite.net/virtualbox/4.2.12/VBoxGuestAdditions_4.2.12.iso; "
# Prepare the VM to add guest additions after reboot
pkg_cmd << "echo -e 'mount -o loop,ro /home/vagrant/VBoxGuestAdditions_4.2.12.iso /mnt\n" \
"echo yes | /mnt/VBoxLinuxAdditions.run\numount /mnt\n" \
"rm /root/guest_additions.sh; ' > /root/guest_additions.sh; " \
"chmod 700 /root/guest_additions.sh; " \
"sed -i -E 's#^exit 0#[ -x /root/guest_additions.sh ] \\&\\& /root/guest_additions.sh#' /etc/rc.local; " \
"echo 'Installation of VBox Guest Additions is proceeding in the background.'; " \
"echo '\"vagrant reload\" can be used in about 2 minutes to activate the new guest additions.'; "
end
# Add vagrant user to the docker group
pkg_cmd << "usermod -a -G docker vagrant; "
# Activate new kernel
pkg_cmd << "shutdown -r +1; "
config.vm.provision :shell, :inline => pkg_cmd
end
# Install ubuntu packaging dependencies and create ubuntu packages
config.vm.provision :shell, :inline => "echo 'deb http://ppa.launchpad.net/dotcloud/lxc-docker/ubuntu precise main' >>/etc/apt/sources.list"
config.vm.provision :shell, :inline => 'export DEBIAN_FRONTEND=noninteractive; apt-get -qq update; apt-get install -qq -y --force-yes lxc-docker'
end
Vagrant::VERSION < "1.1.0" and Vagrant::Config.run do |config|
v10(config)
end
Vagrant::VERSION >= "1.1.0" and Vagrant.configure("1") do |config|
v10(config)
end
# Providers were added on Vagrant >= 1.1.0
Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
config.vm.provider :aws do |aws, override|
config.vm.provider :aws do |aws|
config.vm.box = "dummy"
config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
aws.access_key_id = ENV["AWS_ACCESS_KEY_ID"]
aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
aws.keypair_name = ENV["AWS_KEYPAIR_NAME"]
override.ssh.private_key_path = ENV["AWS_SSH_PRIVKEY"]
override.ssh.username = "ubuntu"
aws.region = AWS_REGION
aws.ami = AWS_AMI
aws.ssh_private_key_path = ENV["AWS_SSH_PRIVKEY"]
aws.region = "us-east-1"
aws.ami = "ami-d0f89fb9"
aws.ssh_username = "ubuntu"
aws.instance_type = "t1.micro"
end
config.vm.provider :rackspace do |rs|
config.vm.box = "dummy"
config.vm.box_url = "https://github.com/mitchellh/vagrant-rackspace/raw/master/dummy.box"
config.ssh.private_key_path = ENV["RS_PRIVATE_KEY"]
rs.username = ENV["RS_USERNAME"]
rs.api_key = ENV["RS_API_KEY"]
@@ -70,31 +43,40 @@ Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
rs.image = /Ubuntu/
end
config.vm.provider :vmware_fusion do |f, override|
override.vm.box = BOX_NAME
override.vm.box_url = VF_BOX_URI
override.vm.synced_folder ".", "/vagrant", disabled: true
f.vmx["displayName"] = "docker"
config.vm.provider :virtualbox do |vb|
config.vm.box = 'precise64'
config.vm.box_url = 'http://files.vagrantup.com/precise64.box'
end
end
Vagrant::VERSION >= "1.2.0" and Vagrant.configure("2") do |config|
config.vm.provider :aws do |aws, override|
config.vm.box = "dummy"
config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
aws.access_key_id = ENV["AWS_ACCESS_KEY_ID"]
aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
aws.keypair_name = ENV["AWS_KEYPAIR_NAME"]
override.ssh.private_key_path = ENV["AWS_SSH_PRIVKEY"]
override.ssh.username = "ubuntu"
aws.region = "us-east-1"
aws.ami = "ami-d0f89fb9"
aws.instance_type = "t1.micro"
end
config.vm.provider :rackspace do |rs|
config.vm.box = "dummy"
config.vm.box_url = "https://github.com/mitchellh/vagrant-rackspace/raw/master/dummy.box"
config.ssh.private_key_path = ENV["RS_PRIVATE_KEY"]
rs.username = ENV["RS_USERNAME"]
rs.api_key = ENV["RS_API_KEY"]
rs.public_key_path = ENV["RS_PUBLIC_KEY"]
rs.flavor = /512MB/
rs.image = /Ubuntu/
end
config.vm.provider :virtualbox do |vb|
config.vm.box = BOX_NAME
config.vm.box_url = BOX_URI
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
end
end
if !FORWARD_DOCKER_PORTS.nil?
Vagrant::VERSION < "1.1.0" and Vagrant::Config.run do |config|
(49000..49900).each do |port|
config.vm.forward_port port, port
end
config.vm.box = 'precise64'
config.vm.box_url = 'http://files.vagrantup.com/precise64.box'
end
Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
(49000..49900).each do |port|
config.vm.network :forwarded_port, :host => port, :guest => port
end
end
end

1135
api.go

File diff suppressed because it is too large Load Diff

View File

@@ -1,123 +0,0 @@
package docker
type APIHistory struct {
ID string `json:"Id"`
Tags []string `json:",omitempty"`
Created int64
CreatedBy string `json:",omitempty"`
}
type APIImages struct {
Repository string `json:",omitempty"`
Tag string `json:",omitempty"`
ID string `json:"Id"`
Created int64
Size int64
VirtualSize int64
}
type APIInfo struct {
Debug bool
Containers int
Images int
NFd int `json:",omitempty"`
NGoroutines int `json:",omitempty"`
MemoryLimit bool `json:",omitempty"`
SwapLimit bool `json:",omitempty"`
IPv4Forwarding bool `json:",omitempty"`
LXCVersion string `json:",omitempty"`
NEventsListener int `json:",omitempty"`
KernelVersion string `json:",omitempty"`
IndexServerAddress string `json:",omitempty"`
}
type APITop struct {
Titles []string
Processes [][]string
}
type APIRmi struct {
Deleted string `json:",omitempty"`
Untagged string `json:",omitempty"`
}
type APIContainers struct {
ID string `json:"Id"`
Image string
Command string
Created int64
Status string
Ports []APIPort
SizeRw int64
SizeRootFs int64
Names []string
}
func (self *APIContainers) ToLegacy() APIContainersOld {
return APIContainersOld{
ID: self.ID,
Image: self.Image,
Command: self.Command,
Created: self.Created,
Status: self.Status,
Ports: displayablePorts(self.Ports),
SizeRw: self.SizeRw,
SizeRootFs: self.SizeRootFs,
}
}
type APIContainersOld struct {
ID string `json:"Id"`
Image string
Command string
Created int64
Status string
Ports string
SizeRw int64
SizeRootFs int64
}
type APISearch struct {
Name string
Description string
}
type APIID struct {
ID string `json:"Id"`
}
type APIRun struct {
ID string `json:"Id"`
Warnings []string `json:",omitempty"`
}
type APIPort struct {
PrivatePort int64
PublicPort int64
Type string
IP string
}
type APIVersion struct {
Version string
GitCommit string `json:",omitempty"`
GoVersion string `json:",omitempty"`
}
type APIWait struct {
StatusCode int
}
type APIAuth struct {
Status string
}
type APIImageConfig struct {
ID string `json:"Id"`
*Config
}
type APICopy struct {
Resource string
HostPath string
}

File diff suppressed because it is too large Load Diff

View File

@@ -1,16 +1,11 @@
package docker
import (
"archive/tar"
"bytes"
"fmt"
"github.com/dotcloud/docker/utils"
"errors"
"io"
"io/ioutil"
"os"
"os/exec"
"path"
"path/filepath"
)
type Archive io.Reader
@@ -24,33 +19,6 @@ const (
Xz
)
func DetectCompression(source []byte) Compression {
sourceLen := len(source)
for compression, m := range map[Compression][]byte{
Bzip2: {0x42, 0x5A, 0x68},
Gzip: {0x1F, 0x8B, 0x08},
Xz: {0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00},
} {
fail := false
if len(m) > sourceLen {
utils.Debugf("Len too short")
continue
}
i := 0
for _, b := range m {
if b != source[i] {
fail = true
break
}
i++
}
if !fail {
return compression
}
}
return Uncompressed
}
func (compression *Compression) Flag() string {
switch *compression {
case Bzip2:
@@ -63,167 +31,21 @@ func (compression *Compression) Flag() string {
return ""
}
func (compression *Compression) Extension() string {
switch *compression {
case Uncompressed:
return "tar"
case Bzip2:
return "tar.bz2"
case Gzip:
return "tar.gz"
case Xz:
return "tar.xz"
}
return ""
}
// Tar creates an archive from the directory at `path`, and returns it as a
// stream of bytes.
func Tar(path string, compression Compression) (io.Reader, error) {
return TarFilter(path, compression, nil)
cmd := exec.Command("bsdtar", "-f", "-", "-C", path, "-c"+compression.Flag(), ".")
return CmdStream(cmd)
}
// Tar creates an archive from the directory at `path`, only including files whose relative
// paths are included in `filter`. If `filter` is nil, then all files are included.
func TarFilter(path string, compression Compression, filter []string) (io.Reader, error) {
args := []string{"tar", "--numeric-owner", "-f", "-", "-C", path}
if filter == nil {
filter = []string{"."}
}
for _, f := range filter {
args = append(args, "-c"+compression.Flag(), f)
}
return CmdStream(exec.Command(args[0], args[1:]...))
}
// Untar reads a stream of bytes from `archive`, parses it as a tar archive,
// and unpacks it into the directory at `path`.
// The archive may be compressed with one of the following algorithms:
// identity (uncompressed), gzip, bzip2, xz.
// FIXME: specify behavior when target path exists vs. doesn't exist.
func Untar(archive io.Reader, path string) error {
if archive == nil {
return fmt.Errorf("Empty archive")
}
buf := make([]byte, 10)
totalN := 0
for totalN < 10 {
if n, err := archive.Read(buf[totalN:]); err != nil {
if err == io.EOF {
return fmt.Errorf("Tarball too short")
}
return err
} else {
totalN += n
utils.Debugf("[tar autodetect] n: %d", n)
}
}
compression := DetectCompression(buf)
utils.Debugf("Archive compression detected: %s", compression.Extension())
cmd := exec.Command("tar", "--numeric-owner", "-f", "-", "-C", path, "-x"+compression.Flag())
cmd.Stdin = io.MultiReader(bytes.NewReader(buf), archive)
// Hardcode locale environment for predictable outcome regardless of host configuration.
// (see https://github.com/dotcloud/docker/issues/355)
cmd.Env = []string{"LANG=en_US.utf-8", "LC_ALL=en_US.utf-8"}
cmd := exec.Command("bsdtar", "-f", "-", "-C", path, "-x")
cmd.Stdin = archive
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("%s: %s", err, output)
return errors.New(err.Error() + ": " + string(output))
}
return nil
}
// TarUntar is a convenience function which calls Tar and Untar, with
// the output of one piped into the other. If either Tar or Untar fails,
// TarUntar aborts and returns the error.
func TarUntar(src string, filter []string, dst string) error {
utils.Debugf("TarUntar(%s %s %s)", src, filter, dst)
archive, err := TarFilter(src, Uncompressed, filter)
if err != nil {
return err
}
return Untar(archive, dst)
}
// UntarPath is a convenience function which looks for an archive
// at filesystem path `src`, and unpacks it at `dst`.
func UntarPath(src, dst string) error {
if archive, err := os.Open(src); err != nil {
return err
} else if err := Untar(archive, dst); err != nil {
return err
}
return nil
}
// CopyWithTar creates a tar archive of filesystem path `src`, and
// unpacks it at filesystem path `dst`.
// The archive is streamed directly with fixed buffering and no
// intermediary disk IO.
//
func CopyWithTar(src, dst string) error {
srcSt, err := os.Stat(src)
if err != nil {
return err
}
if !srcSt.IsDir() {
return CopyFileWithTar(src, dst)
}
// Create dst, copy src's content into it
utils.Debugf("Creating dest directory: %s", dst)
if err := os.MkdirAll(dst, 0755); err != nil && !os.IsExist(err) {
return err
}
utils.Debugf("Calling TarUntar(%s, %s)", src, dst)
return TarUntar(src, nil, dst)
}
// CopyFileWithTar emulates the behavior of the 'cp' command-line
// for a single file. It copies a regular file from path `src` to
// path `dst`, and preserves all its metadata.
//
// If `dst` ends with a trailing slash '/', the final destination path
// will be `dst/base(src)`.
func CopyFileWithTar(src, dst string) error {
utils.Debugf("CopyFileWithTar(%s, %s)", src, dst)
srcSt, err := os.Stat(src)
if err != nil {
return err
}
if srcSt.IsDir() {
return fmt.Errorf("Can't copy a directory")
}
// Clean up the trailing /
if dst[len(dst)-1] == '/' {
dst = path.Join(dst, filepath.Base(src))
}
// Create the holding directory if necessary
if err := os.MkdirAll(filepath.Dir(dst), 0700); err != nil && !os.IsExist(err) {
return err
}
buf := new(bytes.Buffer)
tw := tar.NewWriter(buf)
hdr, err := tar.FileInfoHeader(srcSt, "")
if err != nil {
return err
}
hdr.Name = filepath.Base(dst)
if err := tw.WriteHeader(hdr); err != nil {
return err
}
srcF, err := os.Open(src)
if err != nil {
return err
}
if _, err := io.Copy(tw, srcF); err != nil {
return err
}
tw.Close()
return Untar(buf, filepath.Dir(dst))
}
// CmdStream executes a command, and returns its stdout as a stream.
// If the command fails to run or doesn't complete successfully, an error
// will be returned, including anything written on stderr.
@@ -254,7 +76,7 @@ func CmdStream(cmd *exec.Cmd) (io.Reader, error) {
}
errText := <-errChan
if err := cmd.Wait(); err != nil {
pipeW.CloseWithError(fmt.Errorf("%s: %s", err, errText))
pipeW.CloseWithError(errors.New(err.Error() + ": " + string(errText)))
} else {
pipeW.Close()
}

View File

@@ -1,13 +1,10 @@
package docker
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"path"
"testing"
"time"
)
@@ -16,7 +13,7 @@ func TestCmdStreamLargeStderr(t *testing.T) {
cmd := exec.Command("/bin/sh", "-c", "dd if=/dev/zero bs=1k count=1000 of=/dev/stderr; echo hello")
out, err := CmdStream(cmd)
if err != nil {
t.Fatalf("Failed to start command: %s", err)
t.Fatalf("Failed to start command: " + err.Error())
}
errCh := make(chan error)
go func() {
@@ -26,7 +23,7 @@ func TestCmdStreamLargeStderr(t *testing.T) {
select {
case err := <-errCh:
if err != nil {
t.Fatalf("Command should not have failed (err=%.100s...)", err)
t.Fatalf("Command should not have failed (err=%s...)", err.Error()[:100])
}
case <-time.After(5 * time.Second):
t.Fatalf("Command did not complete in 5 seconds; probable deadlock")
@@ -37,12 +34,12 @@ func TestCmdStreamBad(t *testing.T) {
badCmd := exec.Command("/bin/sh", "-c", "echo hello; echo >&2 error couldn\\'t reverse the phase pulser; exit 1")
out, err := CmdStream(badCmd)
if err != nil {
t.Fatalf("Failed to start command: %s", err)
t.Fatalf("Failed to start command: " + err.Error())
}
if output, err := ioutil.ReadAll(out); err == nil {
t.Fatalf("Command should have failed")
} else if err.Error() != "exit status 1: error couldn't reverse the phase pulser\n" {
t.Fatalf("Wrong error value (%s)", err)
t.Fatalf("Wrong error value (%s)", err.Error())
} else if s := string(output); s != "hello\n" {
t.Fatalf("Command output should be '%s', not '%s'", "hello\\n", output)
}
@@ -61,58 +58,20 @@ func TestCmdStreamGood(t *testing.T) {
}
}
func tarUntar(t *testing.T, origin string, compression Compression) error {
archive, err := Tar(origin, compression)
func TestTarUntar(t *testing.T) {
archive, err := Tar(".", Uncompressed)
if err != nil {
t.Fatal(err)
}
buf := make([]byte, 10)
if _, err := archive.Read(buf); err != nil {
return err
}
archive = io.MultiReader(bytes.NewReader(buf), archive)
detectedCompression := DetectCompression(buf)
if detectedCompression.Extension() != compression.Extension() {
return fmt.Errorf("Wrong compression detected. Actual compression: %s, found %s", compression.Extension(), detectedCompression.Extension())
}
tmp, err := ioutil.TempDir("", "docker-test-untar")
if err != nil {
return err
t.Fatal(err)
}
defer os.RemoveAll(tmp)
if err := Untar(archive, tmp); err != nil {
return err
t.Fatal(err)
}
if _, err := os.Stat(tmp); err != nil {
return err
}
return nil
}
func TestTarUntar(t *testing.T) {
origin, err := ioutil.TempDir("", "docker-test-untar-origin")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(origin)
if err := ioutil.WriteFile(path.Join(origin, "1"), []byte("hello world"), 0700); err != nil {
t.Fatal(err)
}
if err := ioutil.WriteFile(path.Join(origin, "2"), []byte("welcome!"), 0700); err != nil {
t.Fatal(err)
}
for _, c := range []Compression{
Uncompressed,
Gzip,
Bzip2,
Xz,
} {
if err := tarUntar(t, origin, c); err != nil {
t.Fatalf("Error tar/untar for compression %s: %s", c.Extension(), err)
}
t.Fatalf("Error stating %s: %s", tmp, err.Error())
}
}

View File

@@ -1,3 +0,0 @@
Sam Alba <sam@dotcloud.com> (@samalba)
Joffrey Fuhrer <joffrey@dotcloud.com> (@shin-)
Ken Cochrane <ken@dotcloud.com> (@kencochrane)

View File

@@ -5,7 +5,6 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/dotcloud/docker/utils"
"io/ioutil"
"net/http"
"os"
@@ -16,34 +15,27 @@ import (
// Where we store the config file
const CONFIGFILE = ".dockercfg"
// Only used for user auth + account creation
const INDEXSERVER = "https://index.docker.io/v1/"
//const INDEXSERVER = "https://indexstaging-docker.dotcloud.com/v1/"
var (
ErrConfigFileMissing = errors.New("The Auth config file is missing")
)
// the registry server we want to login against
const REGISTRY_SERVER = "https://registry.docker.io"
type AuthConfig struct {
Username string `json:"username,omitempty"`
Password string `json:"password,omitempty"`
Auth string `json:"auth"`
Email string `json:"email"`
ServerAddress string `json:"serveraddress,omitempty"`
Username string `json:"username"`
Password string `json:"password"`
Email string `json:"email"`
rootPath string `json:-`
}
type ConfigFile struct {
Configs map[string]AuthConfig `json:"configs,omitempty"`
rootPath string
}
func IndexServerAddress() string {
return INDEXSERVER
func NewAuthConfig(username, password, email, rootPath string) *AuthConfig {
return &AuthConfig{
Username: username,
Password: password,
Email: email,
rootPath: rootPath,
}
}
// create a base64 encoded auth string to store in config
func encodeAuth(authConfig *AuthConfig) string {
func EncodeAuth(authConfig *AuthConfig) string {
authStr := authConfig.Username + ":" + authConfig.Password
msg := []byte(authStr)
encoded := make([]byte, base64.StdEncoding.EncodedLen(len(msg)))
@@ -52,97 +44,54 @@ func encodeAuth(authConfig *AuthConfig) string {
}
// decode the auth string
func decodeAuth(authStr string) (string, string, error) {
func DecodeAuth(authStr string) (*AuthConfig, error) {
decLen := base64.StdEncoding.DecodedLen(len(authStr))
decoded := make([]byte, decLen)
authByte := []byte(authStr)
n, err := base64.StdEncoding.Decode(decoded, authByte)
if err != nil {
return "", "", err
return nil, err
}
if n > decLen {
return "", "", fmt.Errorf("Something went wrong decoding auth config")
return nil, fmt.Errorf("Something went wrong decoding auth config")
}
arr := strings.Split(string(decoded), ":")
if len(arr) != 2 {
return "", "", fmt.Errorf("Invalid auth configuration file")
return nil, fmt.Errorf("Invalid auth configuration file")
}
password := strings.Trim(arr[1], "\x00")
return arr[0], password, nil
return &AuthConfig{Username: arr[0], Password: password}, nil
}
// load up the auth config information and return values
// FIXME: use the internal golang config parser
func LoadConfig(rootPath string) (*ConfigFile, error) {
configFile := ConfigFile{Configs: make(map[string]AuthConfig), rootPath: rootPath}
func LoadConfig(rootPath string) (*AuthConfig, error) {
confFile := path.Join(rootPath, CONFIGFILE)
if _, err := os.Stat(confFile); err != nil {
return &configFile, nil //missing file is not an error
return &AuthConfig{}, fmt.Errorf("The Auth config file is missing")
}
b, err := ioutil.ReadFile(confFile)
if err != nil {
return &configFile, err
return nil, err
}
if err := json.Unmarshal(b, &configFile.Configs); err != nil {
arr := strings.Split(string(b), "\n")
if len(arr) < 2 {
return &configFile, fmt.Errorf("The Auth config file is empty")
}
authConfig := AuthConfig{}
origAuth := strings.Split(arr[0], " = ")
if len(origAuth) != 2 {
return &configFile, fmt.Errorf("Invalid Auth config file")
}
authConfig.Username, authConfig.Password, err = decodeAuth(origAuth[1])
if err != nil {
return &configFile, err
}
origEmail := strings.Split(arr[1], " = ")
if len(origEmail) != 2 {
return &configFile, fmt.Errorf("Invalid Auth config file")
}
authConfig.Email = origEmail[1]
authConfig.ServerAddress = IndexServerAddress()
configFile.Configs[IndexServerAddress()] = authConfig
} else {
for k, authConfig := range configFile.Configs {
authConfig.Username, authConfig.Password, err = decodeAuth(authConfig.Auth)
if err != nil {
return &configFile, err
}
authConfig.Auth = ""
configFile.Configs[k] = authConfig
authConfig.ServerAddress = k
}
arr := strings.Split(string(b), "\n")
origAuth := strings.Split(arr[0], " = ")
origEmail := strings.Split(arr[1], " = ")
authConfig, err := DecodeAuth(origAuth[1])
if err != nil {
return nil, err
}
return &configFile, nil
authConfig.Email = origEmail[1]
authConfig.rootPath = rootPath
return authConfig, nil
}
// save the auth config
func SaveConfig(configFile *ConfigFile) error {
confFile := path.Join(configFile.rootPath, CONFIGFILE)
if len(configFile.Configs) == 0 {
os.Remove(confFile)
return nil
}
configs := make(map[string]AuthConfig, len(configFile.Configs))
for k, authConfig := range configFile.Configs {
authCopy := authConfig
authCopy.Auth = encodeAuth(&authCopy)
authCopy.Username = ""
authCopy.Password = ""
authCopy.ServerAddress = ""
configs[k] = authCopy
}
b, err := json.Marshal(configs)
if err != nil {
return err
}
err = ioutil.WriteFile(confFile, b, 0600)
func saveConfig(rootPath, authStr string, email string) error {
lines := "auth = " + authStr + "\n" + "email = " + email + "\n"
b := []byte(lines)
err := ioutil.WriteFile(path.Join(rootPath, CONFIGFILE), b, 0600)
if err != nil {
return err
}
@@ -150,59 +99,42 @@ func SaveConfig(configFile *ConfigFile) error {
}
// try to register/login to the registry server
func Login(authConfig *AuthConfig, factory *utils.HTTPRequestFactory) (string, error) {
client := &http.Client{}
func Login(authConfig *AuthConfig) (string, error) {
storeConfig := false
reqStatusCode := 0
var status string
var errMsg string
var reqBody []byte
serverAddress := authConfig.ServerAddress
if serverAddress == "" {
serverAddress = IndexServerAddress()
}
loginAgainstOfficialIndex := serverAddress == IndexServerAddress()
// to avoid sending the server address to the server it should be removed before marshalled
authCopy := *authConfig
authCopy.ServerAddress = ""
jsonBody, err := json.Marshal(authCopy)
jsonBody, err := json.Marshal(authConfig)
if err != nil {
return "", fmt.Errorf("Config Error: %s", err)
errMsg = fmt.Sprintf("Config Error: %s", err)
return "", errors.New(errMsg)
}
// using `bytes.NewReader(jsonBody)` here causes the server to respond with a 411 status.
b := strings.NewReader(string(jsonBody))
req1, err := http.Post(serverAddress+"users/", "application/json; charset=utf-8", b)
req1, err := http.Post(REGISTRY_SERVER+"/v1/users", "application/json; charset=utf-8", b)
if err != nil {
return "", fmt.Errorf("Server Error: %s", err)
errMsg = fmt.Sprintf("Server Error: %s", err)
return "", errors.New(errMsg)
}
reqStatusCode = req1.StatusCode
defer req1.Body.Close()
reqBody, err = ioutil.ReadAll(req1.Body)
if err != nil {
return "", fmt.Errorf("Server Error: [%#v] %s", reqStatusCode, err)
errMsg = fmt.Sprintf("Server Error: [%#v] %s", reqStatusCode, err)
return "", errors.New(errMsg)
}
if reqStatusCode == 201 {
if loginAgainstOfficialIndex {
status = "Account created. Please use the confirmation link we sent" +
" to your e-mail to activate it."
} else {
status = "Account created. Please see the documentation of the registry " + serverAddress + " for instructions how to activate it."
}
} else if reqStatusCode == 403 {
if loginAgainstOfficialIndex {
return "", fmt.Errorf("Login: Your account hasn't been activated. " +
"Please check your e-mail for a confirmation link.")
} else {
return "", fmt.Errorf("Login: Your account hasn't been activated. " +
"Please see the documentation of the registry " + serverAddress + " for instructions how to activate it.")
}
status = "Account Created\n"
storeConfig = true
} else if reqStatusCode == 400 {
if string(reqBody) == "\"Username or email already exists\"" {
req, err := factory.NewRequest("GET", serverAddress+"users/", nil)
// FIXME: This should be 'exists', not 'exist'. Need to change on the server first.
if string(reqBody) == "Username or email already exist" {
client := &http.Client{}
req, err := http.NewRequest("GET", REGISTRY_SERVER+"/v1/users", nil)
req.SetBasicAuth(authConfig.Username, authConfig.Password)
resp, err := client.Do(req)
if err != nil {
@@ -214,67 +146,23 @@ func Login(authConfig *AuthConfig, factory *utils.HTTPRequestFactory) (string, e
return "", err
}
if resp.StatusCode == 200 {
status = "Login Succeeded"
} else if resp.StatusCode == 401 {
return "", fmt.Errorf("Wrong login/password, please try again")
status = "Login Succeeded\n"
storeConfig = true
} else {
return "", fmt.Errorf("Login: %s (Code: %d; Headers: %s)", body,
resp.StatusCode, resp.Header)
status = fmt.Sprintf("Login: %s", body)
return "", errors.New(status)
}
} else {
return "", fmt.Errorf("Registration: %s", reqBody)
status = fmt.Sprintf("Registration: %s", reqBody)
return "", errors.New(status)
}
} else {
return "", fmt.Errorf("Unexpected status code [%d] : %s", reqStatusCode, reqBody)
status = fmt.Sprintf("[%s] : %s", reqStatusCode, reqBody)
return "", errors.New(status)
}
if storeConfig {
authStr := EncodeAuth(authConfig)
saveConfig(authConfig.rootPath, authStr, authConfig.Email)
}
return status, nil
}
// this method matches a auth configuration to a server address or a url
func (config *ConfigFile) ResolveAuthConfig(registry string) AuthConfig {
if registry == IndexServerAddress() || len(registry) == 0 {
// default to the index server
return config.Configs[IndexServerAddress()]
}
// if its not the index server there are three cases:
//
// 1. this is a full config url -> it should be used as is
// 2. it could be a full url, but with the wrong protocol
// 3. it can be the hostname optionally with a port
//
// as there is only one auth entry which is fully qualified we need to start
// parsing and matching
swapProtocol := func(url string) string {
if strings.HasPrefix(url, "http:") {
return strings.Replace(url, "http:", "https:", 1)
}
if strings.HasPrefix(url, "https:") {
return strings.Replace(url, "https:", "http:", 1)
}
return url
}
resolveIgnoringProtocol := func(url string) AuthConfig {
if c, found := config.Configs[url]; found {
return c
}
registrySwappedProtocol := swapProtocol(url)
// now try to match with the different protocol
if c, found := config.Configs[registrySwappedProtocol]; found {
return c
}
return AuthConfig{}
}
// match both protocols as it could also be a server name like httpfoo
if strings.HasPrefix(registry, "http:") || strings.HasPrefix(registry, "https:") {
return resolveIgnoringProtocol(registry)
}
url := "https://" + registry
if !strings.Contains(registry, "/") {
url = url + "/v1/"
}
return resolveIgnoringProtocol(url)
}

View File

@@ -1,20 +1,13 @@
package auth
import (
"crypto/rand"
"encoding/hex"
"io/ioutil"
"os"
"strings"
"testing"
)
func TestEncodeAuth(t *testing.T) {
newAuthConfig := &AuthConfig{Username: "ken", Password: "test", Email: "test@example.com"}
authStr := encodeAuth(newAuthConfig)
decAuthConfig := &AuthConfig{}
var err error
decAuthConfig.Username, decAuthConfig.Password, err = decodeAuth(authStr)
authStr := EncodeAuth(newAuthConfig)
decAuthConfig, err := DecodeAuth(authStr)
if err != nil {
t.Fatal(err)
}
@@ -28,164 +21,3 @@ func TestEncodeAuth(t *testing.T) {
t.Fatal("AuthString encoding isn't correct.")
}
}
func TestLogin(t *testing.T) {
os.Setenv("DOCKER_INDEX_URL", "https://indexstaging-docker.dotcloud.com")
defer os.Setenv("DOCKER_INDEX_URL", "")
authConfig := &AuthConfig{Username: "unittester", Password: "surlautrerivejetattendrai", Email: "noise+unittester@dotcloud.com"}
status, err := Login(authConfig, nil)
if err != nil {
t.Fatal(err)
}
if status != "Login Succeeded" {
t.Fatalf("Expected status \"Login Succeeded\", found \"%s\" instead", status)
}
}
func TestCreateAccount(t *testing.T) {
os.Setenv("DOCKER_INDEX_URL", "https://indexstaging-docker.dotcloud.com")
defer os.Setenv("DOCKER_INDEX_URL", "")
tokenBuffer := make([]byte, 16)
_, err := rand.Read(tokenBuffer)
if err != nil {
t.Fatal(err)
}
token := hex.EncodeToString(tokenBuffer)[:12]
username := "ut" + token
authConfig := &AuthConfig{Username: username, Password: "test42", Email: "docker-ut+" + token + "@example.com"}
status, err := Login(authConfig, nil)
if err != nil {
t.Fatal(err)
}
expectedStatus := "Account created. Please use the confirmation link we sent" +
" to your e-mail to activate it."
if status != expectedStatus {
t.Fatalf("Expected status: \"%s\", found \"%s\" instead.", expectedStatus, status)
}
status, err = Login(authConfig, nil)
if err == nil {
t.Fatalf("Expected error but found nil instead")
}
expectedError := "Login: Account is not Active"
if !strings.Contains(err.Error(), expectedError) {
t.Fatalf("Expected message \"%s\" but found \"%s\" instead", expectedError, err)
}
}
func setupTempConfigFile() (*ConfigFile, error) {
root, err := ioutil.TempDir("", "docker-test-auth")
if err != nil {
return nil, err
}
configFile := &ConfigFile{
rootPath: root,
Configs: make(map[string]AuthConfig),
}
for _, registry := range []string{"testIndex", IndexServerAddress()} {
configFile.Configs[registry] = AuthConfig{
Username: "docker-user",
Password: "docker-pass",
Email: "docker@docker.io",
}
}
return configFile, nil
}
func TestSameAuthDataPostSave(t *testing.T) {
configFile, err := setupTempConfigFile()
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(configFile.rootPath)
err = SaveConfig(configFile)
if err != nil {
t.Fatal(err)
}
authConfig := configFile.Configs["testIndex"]
if authConfig.Username != "docker-user" {
t.Fail()
}
if authConfig.Password != "docker-pass" {
t.Fail()
}
if authConfig.Email != "docker@docker.io" {
t.Fail()
}
if authConfig.Auth != "" {
t.Fail()
}
}
func TestResolveAuthConfigIndexServer(t *testing.T) {
configFile, err := setupTempConfigFile()
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(configFile.rootPath)
for _, registry := range []string{"", IndexServerAddress()} {
resolved := configFile.ResolveAuthConfig(registry)
if resolved != configFile.Configs[IndexServerAddress()] {
t.Fail()
}
}
}
func TestResolveAuthConfigFullURL(t *testing.T) {
configFile, err := setupTempConfigFile()
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(configFile.rootPath)
registryAuth := AuthConfig{
Username: "foo-user",
Password: "foo-pass",
Email: "foo@example.com",
}
localAuth := AuthConfig{
Username: "bar-user",
Password: "bar-pass",
Email: "bar@example.com",
}
configFile.Configs["https://registry.example.com/v1/"] = registryAuth
configFile.Configs["http://localhost:8000/v1/"] = localAuth
validRegistries := map[string][]string{
"https://registry.example.com/v1/": {
"https://registry.example.com/v1/",
"http://registry.example.com/v1/",
"registry.example.com",
"registry.example.com/v1/",
},
"http://localhost:8000/v1/": {
"https://localhost:8000/v1/",
"http://localhost:8000/v1/",
"localhost:8000",
"localhost:8000/v1/",
},
}
for configKey, registries := range validRegistries {
for _, registry := range registries {
var (
configured AuthConfig
ok bool
)
resolved := configFile.ResolveAuthConfig(registry)
if configured, ok = configFile.Configs[configKey]; !ok {
t.Fail()
}
if resolved.Email != configured.Email {
t.Errorf("%s -> %q != %q\n", registry, resolved.Email, configured.Email)
}
}
}
}

20
buildbot/README.rst Normal file
View File

@@ -0,0 +1,20 @@
Buildbot
========
Buildbot is a continuous integration system designed to automate the
build/test cycle. By automatically rebuilding and testing the tree each time
something has changed, build problems are pinpointed quickly, before other
developers are inconvenienced by the failure.
Running 'make hack' at the docker root directory spawns a virtual
machine in the background running a buildbot instance and adds a git
post-commit hook that automatically runs the docker tests for you.

You can check your buildbot instance at http://192.168.33.21:8010/waterfall
Buildbot dependencies
---------------------
The vagrant and virtualbox packages, and the python package requests.
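A typical session might look like this (assuming the dependencies above are installed):

```bash
# from the root of the docker source tree
make hack

# commit as usual; the post-commit hook triggers a test build
git commit -m "my change"
# then watch the results at http://192.168.33.21:8010/waterfall
```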

28
buildbot/Vagrantfile vendored Normal file
View File

@@ -0,0 +1,28 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
$BUILDBOT_IP = '192.168.33.21'
def v10(config)
config.vm.box = "quantal64_3.5.0-25"
config.vm.box_url = "http://get.docker.io/vbox/ubuntu/12.10/quantal64_3.5.0-25.box"
config.vm.share_folder 'v-data', '/data/docker', File.dirname(__FILE__) + '/..'
config.vm.network :hostonly, $BUILDBOT_IP
# Ensure puppet is installed on the instance
config.vm.provision :shell, :inline => 'apt-get -qq update; apt-get install -y puppet'
config.vm.provision :puppet do |puppet|
puppet.manifests_path = '.'
puppet.manifest_file = 'buildbot.pp'
puppet.options = ['--templatedir','.']
end
end
Vagrant::VERSION < '1.1.0' and Vagrant::Config.run do |config|
v10(config)
end
Vagrant::VERSION >= '1.1.0' and Vagrant.configure('1') do |config|
v10(config)
end

View File

@@ -0,0 +1,43 @@
#!/bin/bash
# Auto setup of buildbot configuration. Package installation is being done
# on buildbot.pp
# Dependencies: buildbot, buildbot-slave, supervisor
SLAVE_NAME='buildworker'
SLAVE_SOCKET='localhost:9989'
BUILDBOT_PWD='pass-docker'
USER='vagrant'
ROOT_PATH='/data/buildbot'
DOCKER_PATH='/data/docker'
BUILDBOT_CFG="$DOCKER_PATH/buildbot/buildbot-cfg"
IP=$(grep BUILDBOT_IP /data/docker/buildbot/Vagrantfile | awk -F "'" '{ print $2; }')
function run { su $USER -c "$1"; }
export PATH=/bin:sbin:/usr/bin:/usr/sbin:/usr/local/bin
# Exit if buildbot has already been installed
[ -d "$ROOT_PATH" ] && exit 0
# Setup buildbot
run "mkdir -p ${ROOT_PATH}"
cd ${ROOT_PATH}
run "buildbot create-master master"
run "cp $BUILDBOT_CFG/master.cfg master"
run "sed -i 's/localhost/$IP/' master/master.cfg"
run "buildslave create-slave slave $SLAVE_SOCKET $SLAVE_NAME $BUILDBOT_PWD"
# Allow buildbot subprocesses (docker tests) to properly run in containers,
# in particular with docker -u
run "sed -i 's/^umask = None/umask = 000/' ${ROOT_PATH}/slave/buildbot.tac"
# Setup supervisor
cp $BUILDBOT_CFG/buildbot.conf /etc/supervisor/conf.d/buildbot.conf
sed -i "s/^chmod=0700.*0700./chmod=0770\nchown=root:$USER/" /etc/supervisor/supervisord.conf
kill -HUP `pgrep -f "/usr/bin/python /usr/bin/supervisord"`
# Add git hook
cp $BUILDBOT_CFG/post-commit $DOCKER_PATH/.git/hooks
sed -i "s/localhost/$IP/" $DOCKER_PATH/.git/hooks/post-commit

View File

@@ -1,14 +1,14 @@
[program:buildmaster]
command=twistd --nodaemon --no_save -y buildbot.tac
directory=/data/buildbot/master
command=su vagrant -c "buildbot start master"
directory=/data/buildbot
chown= root:root
redirect_stderr=true
stdout_logfile=/var/log/supervisor/buildbot-master.log
stderr_logfile=/var/log/supervisor/buildbot-master.log
[program:buildworker]
command=twistd --nodaemon --no_save -y buildbot.tac
directory=/data/buildbot/slave
command=buildslave start slave
directory=/data/buildbot
chown= root:root
redirect_stderr=true
stdout_logfile=/var/log/supervisor/buildbot-slave.log

View File

@@ -0,0 +1,46 @@
import os
from buildbot.buildslave import BuildSlave
from buildbot.schedulers.forcesched import ForceScheduler
from buildbot.config import BuilderConfig
from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand
from buildbot.status import html
from buildbot.status.web import authz, auth
PORT_WEB = 8010 # Buildbot webserver port
PORT_MASTER = 9989 # Port where buildbot master listen buildworkers
TEST_USER = 'buildbot' # Credential to authenticate build triggers
TEST_PWD = 'docker' # Credential to authenticate build triggers
BUILDER_NAME = 'docker'
BUILDPASSWORD = 'pass-docker' # Credential to authenticate buildworkers
DOCKER_PATH = '/data/docker'
c = BuildmasterConfig = {}
c['title'] = "Docker"
c['titleURL'] = "waterfall"
c['buildbotURL'] = "http://localhost:{0}/".format(PORT_WEB)
c['db'] = {'db_url':"sqlite:///state.sqlite"}
c['slaves'] = [BuildSlave('buildworker', BUILDPASSWORD)]
c['slavePortnum'] = PORT_MASTER
c['schedulers'] = [ForceScheduler(name='trigger',builderNames=[BUILDER_NAME])]
# Docker test command
test_cmd = """(
cd {0}/..; rm -rf docker-tmp; git clone docker docker-tmp;
cd docker-tmp; make test; exit_status=$?;
cd ..; rm -rf docker-tmp; exit $exit_status)""".format(DOCKER_PATH)
# Builder
factory = BuildFactory()
factory.addStep(ShellCommand(description='Docker',logEnviron=False,
usePTY=True,command=test_cmd))
c['builders'] = [BuilderConfig(name=BUILDER_NAME,slavenames=['buildworker'],
factory=factory)]
# Status
authz_cfg=authz.Authz(auth=auth.BasicAuth([(TEST_USER,TEST_PWD)]),
forceBuild='auth')
c['status'] = [html.WebStatus(http_port=PORT_WEB, authz=authz_cfg)]

View File

@@ -0,0 +1,21 @@
#!/usr/bin/env python
'''Trigger buildbot docker test build
post-commit git hook designed to automatically trigger buildbot on
the provided vagrant docker VM.'''
import requests
USERNAME = 'buildbot'
PASSWORD = 'docker'
BASE_URL = 'http://localhost:8010'
path = lambda s: BASE_URL + '/' + s
try:
session = requests.session()
session.post(path('login'),data={'username':USERNAME,'passwd':PASSWORD})
session.post(path('builders/docker/force'),
data={'forcescheduler':'trigger','reason':'Test commit'})
except:
pass

32
buildbot/buildbot.pp Normal file
View File

@@ -0,0 +1,32 @@
node default {
$USER = 'vagrant'
$ROOT_PATH = '/data/buildbot'
$DOCKER_PATH = '/data/docker'
exec {'apt_update': command => '/usr/bin/apt-get update' }
Package { require => Exec['apt_update'] }
group {'puppet': ensure => 'present'}
# Install dependencies
Package { ensure => 'installed' }
package { ['python-dev','python-pip','supervisor','lxc','bsdtar','git','golang']: }
file{[ '/data' ]:
owner => $USER, group => $USER, ensure => 'directory' }
file {'/var/tmp/requirements.txt':
content => template('requirements.txt') }
exec {'requirements':
require => [ Package['python-dev'], Package['python-pip'],
File['/var/tmp/requirements.txt'] ],
cwd => '/var/tmp',
command => "/bin/sh -c '(/usr/bin/pip install -r requirements.txt;
rm /var/tmp/requirements.txt)'" }
exec {'buildbot-cfg-sh':
require => [ Package['supervisor'], Exec['requirements']],
path => '/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin',
cwd => '/data',
command => "$DOCKER_PATH/buildbot/buildbot-cfg/buildbot-cfg.sh" }
}

View File

@@ -4,6 +4,3 @@ buildbot==0.8.7p1
buildbot_slave==0.8.7p1
nose==1.2.1
requests==1.1.0
flask==0.10.1
simplejson==2.3.2
selenium==2.35.0

263
builder.go Normal file
View File

@@ -0,0 +1,263 @@
package docker
import (
"bufio"
"fmt"
"io"
"os"
"path"
"strings"
"time"
)
type Builder struct {
runtime *Runtime
repositories *TagStore
graph *Graph
}
func NewBuilder(runtime *Runtime) *Builder {
return &Builder{
runtime: runtime,
graph: runtime.graph,
repositories: runtime.repositories,
}
}
func (builder *Builder) Create(config *Config) (*Container, error) {
// Lookup image
img, err := builder.repositories.LookupImage(config.Image)
if err != nil {
return nil, err
}
// Generate id
id := GenerateId()
// Generate default hostname
// FIXME: the lxc template no longer needs to set a default hostname
if config.Hostname == "" {
config.Hostname = id[:12]
}
container := &Container{
// FIXME: we should generate the ID here instead of receiving it as an argument
Id: id,
Created: time.Now(),
Path: config.Cmd[0],
Args: config.Cmd[1:], //FIXME: de-duplicate from config
Config: config,
Image: img.Id, // Always use the resolved image id
NetworkSettings: &NetworkSettings{},
// FIXME: do we need to store this in the container?
SysInitPath: sysInitPath,
}
container.root = builder.runtime.containerRoot(container.Id)
// Step 1: create the container directory.
// This doubles as a barrier to avoid race conditions.
if err := os.Mkdir(container.root, 0700); err != nil {
return nil, err
}
// If custom dns exists, then create a resolv.conf for the container
if len(config.Dns) > 0 {
container.ResolvConfPath = path.Join(container.root, "resolv.conf")
f, err := os.Create(container.ResolvConfPath)
if err != nil {
return nil, err
}
defer f.Close()
for _, dns := range config.Dns {
if _, err := f.Write([]byte("nameserver " + dns + "\n")); err != nil {
return nil, err
}
}
} else {
container.ResolvConfPath = "/etc/resolv.conf"
}
// Step 2: save the container json
if err := container.ToDisk(); err != nil {
return nil, err
}
// Step 3: register the container
if err := builder.runtime.Register(container); err != nil {
return nil, err
}
return container, nil
}
// Commit creates a new filesystem image from the current state of a container.
// The image can optionally be tagged into a repository
func (builder *Builder) Commit(container *Container, repository, tag, comment, author string) (*Image, error) {
// FIXME: freeze the container before copying it to avoid data corruption?
// FIXME: this shouldn't be in commands.
rwTar, err := container.ExportRw()
if err != nil {
return nil, err
}
// Create a new image from the container's base layers + a new layer from container changes
img, err := builder.graph.Create(rwTar, container, comment, author)
if err != nil {
return nil, err
}
// Register the image if needed
if repository != "" {
if err := builder.repositories.Set(repository, tag, img.Id, true); err != nil {
return img, err
}
}
return img, nil
}
func (builder *Builder) clearTmp(containers, images map[string]struct{}) {
for c := range containers {
tmp := builder.runtime.Get(c)
builder.runtime.Destroy(tmp)
Debugf("Removing container %s", c)
}
for i := range images {
builder.runtime.graph.Delete(i)
Debugf("Removing image %s", i)
}
}
func (builder *Builder) Build(dockerfile io.Reader, stdout io.Writer) (*Image, error) {
var (
image, base *Image
tmpContainers map[string]struct{} = make(map[string]struct{})
tmpImages map[string]struct{} = make(map[string]struct{})
)
defer builder.clearTmp(tmpContainers, tmpImages)
file := bufio.NewReader(dockerfile)
for {
line, err := file.ReadString('\n')
if err != nil {
if err == io.EOF {
break
}
return nil, err
}
line = strings.TrimSpace(line)
// Skip comments and empty lines
if len(line) == 0 || line[0] == '#' {
continue
}
tmp := strings.SplitN(line, " ", 2)
if len(tmp) != 2 {
return nil, fmt.Errorf("Invalid Dockerfile format")
}
instruction := tmp[0]
arguments := tmp[1]
switch strings.ToLower(instruction) {
case "from":
fmt.Fprintf(stdout, "FROM %s\n", arguments)
image, err = builder.runtime.repositories.LookupImage(arguments)
if err != nil {
return nil, err
}
break
case "run":
fmt.Fprintf(stdout, "RUN %s\n", arguments)
if image == nil {
return nil, fmt.Errorf("Please provide a source image with `from` prior to run")
}
config, err := ParseRun([]string{image.Id, "/bin/sh", "-c", arguments}, nil, builder.runtime.capabilities)
if err != nil {
return nil, err
}
// Create the container and start it
c, err := builder.Create(config)
if err != nil {
return nil, err
}
if err := c.Start(); err != nil {
return nil, err
}
tmpContainers[c.Id] = struct{}{}
// Wait for it to finish
if result := c.Wait(); result != 0 {
return nil, fmt.Errorf("!!! '%s' return non-zero exit code '%d'. Aborting.", arguments, result)
}
// Commit the container
base, err = builder.Commit(c, "", "", "", "")
if err != nil {
return nil, err
}
tmpImages[base.Id] = struct{}{}
fmt.Fprintf(stdout, "===> %s\n", base.ShortId())
// use the base as the new image
image = base
break
case "insert":
if image == nil {
return nil, fmt.Errorf("Please provide a source image with `from` prior to copy")
}
tmp = strings.SplitN(arguments, " ", 2)
if len(tmp) != 2 {
return nil, fmt.Errorf("Invalid INSERT format")
}
sourceUrl := tmp[0]
destPath := tmp[1]
fmt.Fprintf(stdout, "COPY %s to %s in %s\n", sourceUrl, destPath, base.ShortId())
file, err := Download(sourceUrl, stdout)
if err != nil {
return nil, err
}
defer file.Body.Close()
config, err := ParseRun([]string{image.Id, "echo", "insert", sourceUrl, destPath}, nil, builder.runtime.capabilities)
if err != nil {
return nil, err
}
c, err := builder.Create(config)
if err != nil {
return nil, err
}
if err := c.Start(); err != nil {
return nil, err
}
// Wait for echo to finish
if result := c.Wait(); result != 0 {
return nil, fmt.Errorf("!!! '%s' return non-zero exit code '%d'. Aborting.", arguments, result)
}
if err := c.Inject(file.Body, destPath); err != nil {
return nil, err
}
base, err = builder.Commit(c, "", "", "", "")
if err != nil {
return nil, err
}
fmt.Fprintf(stdout, "===> %s\n", base.ShortId())
image = base
break
default:
fmt.Fprintf(stdout, "Skipping unknown instruction %s\n", instruction)
}
}
if base != nil {
// The build is successful, keep the temporary containers and images
for i := range tmpImages {
delete(tmpImages, i)
}
for i := range tmpContainers {
delete(tmpContainers, i)
}
fmt.Fprintf(stdout, "Build finished. image id: %s\n", base.ShortId())
} else {
fmt.Fprintf(stdout, "An error occured during the build\n")
}
return base, nil
}
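
A quick orientation, not part of the diff: a minimal sketch of how the new Builder API is meant to be driven, assuming an initialized *Runtime such as the one the unit tests create. buildExample is a hypothetical helper; the Dockerfile content and the docker-ut image name are placeholders borrowed from builder_test.go.

package docker

import (
	"fmt"
	"os"
	"strings"
)

// buildExample drives the new Builder API end to end (hypothetical helper).
func buildExample(runtime *Runtime) error {
	dockerfile := "from docker-ut\nrun touch /tmp/hello\n"
	builder := NewBuilder(runtime)
	// Build streams progress to stdout and returns the resulting image.
	img, err := builder.Build(strings.NewReader(dockerfile), os.Stdout)
	if err != nil {
		return err
	}
	fmt.Printf("built %s\n", img.ShortId())
	return nil
}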

builder_test.go (new file, 88 lines)

@@ -0,0 +1,88 @@
package docker
import (
"strings"
"testing"
)
const Dockerfile = `
# VERSION 0.1
# DOCKER-VERSION 0.1.6
from docker-ut
run sh -c 'echo root:testpass > /tmp/passwd'
run mkdir -p /var/run/sshd
insert https://raw.github.com/dotcloud/docker/master/CHANGELOG.md /tmp/CHANGELOG.md
`
func TestBuild(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
builder := NewBuilder(runtime)
img, err := builder.Build(strings.NewReader(Dockerfile), &nopWriter{})
if err != nil {
t.Fatal(err)
}
container, err := builder.Create(
&Config{
Image: img.Id,
Cmd: []string{"cat", "/tmp/passwd"},
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container)
output, err := container.Output()
if err != nil {
t.Fatal(err)
}
if string(output) != "root:testpass\n" {
t.Fatalf("Unexpected output. Read '%s', expected '%s'", output, "root:testpass\n")
}
container2, err := builder.Create(
&Config{
Image: img.Id,
Cmd: []string{"ls", "-d", "/var/run/sshd"},
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container2)
output, err = container2.Output()
if err != nil {
t.Fatal(err)
}
if string(output) != "/var/run/sshd\n" {
t.Fatal("/var/run/sshd has not been created")
}
container3, err := builder.Create(
&Config{
Image: img.Id,
Cmd: []string{"cat", "/tmp/CHANGELOG.md"},
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container3)
output, err = container3.Output()
if err != nil {
t.Fatal(err)
}
if len(output) == 0 {
t.Fatal("/tmp/CHANGELOG.md has not been copied")
}
}
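
For reference, the format the builder parses is deliberately minimal: one instruction per line, a single space between instruction and arguments, with comments and blank lines skipped. A sketch of a conforming Dockerfile as a Go constant; the image name and URL below are placeholders, not real artifacts.

const exampleDockerfile = `
# comments and blank lines are skipped
from base-image
run echo hello > /tmp/greeting
insert http://example.com/file.txt /tmp/file.txt
`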


@@ -1,543 +0,0 @@
package docker
import (
"encoding/json"
"fmt"
"github.com/dotcloud/docker/utils"
"io"
"io/ioutil"
"net/url"
"os"
"path"
"reflect"
"regexp"
"strings"
)
type BuildFile interface {
Build(io.Reader) (string, error)
CmdFrom(string) error
CmdRun(string) error
}
type buildFile struct {
runtime *Runtime
srv *Server
image string
maintainer string
config *Config
context string
verbose bool
utilizeCache bool
rm bool
tmpContainers map[string]struct{}
tmpImages map[string]struct{}
out io.Writer
}
func (b *buildFile) clearTmp(containers map[string]struct{}) {
for c := range containers {
tmp := b.runtime.Get(c)
b.runtime.Destroy(tmp)
fmt.Fprintf(b.out, "Removing intermediate container %s\n", utils.TruncateID(c))
}
}
func (b *buildFile) CmdFrom(name string) error {
image, err := b.runtime.repositories.LookupImage(name)
if err != nil {
if b.runtime.graph.IsNotExist(err) {
remote, tag := utils.ParseRepositoryTag(name)
if err := b.srv.ImagePull(remote, tag, b.out, utils.NewStreamFormatter(false), nil, nil, true); err != nil {
return err
}
image, err = b.runtime.repositories.LookupImage(name)
if err != nil {
return err
}
} else {
return err
}
}
b.image = image.ID
b.config = &Config{}
if b.config.Env == nil || len(b.config.Env) == 0 {
b.config.Env = append(b.config.Env, "HOME=/", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin")
}
return nil
}
func (b *buildFile) CmdMaintainer(name string) error {
b.maintainer = name
return b.commit("", b.config.Cmd, fmt.Sprintf("MAINTAINER %s", name))
}
func (b *buildFile) CmdRun(args string) error {
if b.image == "" {
return fmt.Errorf("Please provide a source image with `from` prior to run")
}
config, _, _, err := ParseRun([]string{b.image, "/bin/sh", "-c", args}, nil)
if err != nil {
return err
}
cmd := b.config.Cmd
b.config.Cmd = nil
MergeConfig(b.config, config)
defer func(cmd []string) { b.config.Cmd = cmd }(cmd)
utils.Debugf("Command to be executed: %v", b.config.Cmd)
if b.utilizeCache {
if cache, err := b.srv.ImageGetCached(b.image, b.config); err != nil {
return err
} else if cache != nil {
fmt.Fprintf(b.out, " ---> Using cache\n")
utils.Debugf("[BUILDER] Use cached version")
b.image = cache.ID
return nil
} else {
utils.Debugf("[BUILDER] Cache miss")
}
}
cid, err := b.run()
if err != nil {
return err
}
if err := b.commit(cid, cmd, "run"); err != nil {
return err
}
return nil
}
func (b *buildFile) FindEnvKey(key string) int {
for k, envVar := range b.config.Env {
envParts := strings.SplitN(envVar, "=", 2)
if key == envParts[0] {
return k
}
}
return -1
}
func (b *buildFile) ReplaceEnvMatches(value string) (string, error) {
exp, err := regexp.Compile("(\\\\\\\\+|[^\\\\]|\\b|\\A)\\$({?)([[:alnum:]_]+)(}?)")
if err != nil {
return value, err
}
matches := exp.FindAllString(value, -1)
for _, match := range matches {
match = match[strings.Index(match, "$"):]
matchKey := strings.Trim(match, "${}")
for _, envVar := range b.config.Env {
envParts := strings.SplitN(envVar, "=", 2)
envKey := envParts[0]
envValue := envParts[1]
if envKey == matchKey {
value = strings.Replace(value, match, envValue, -1)
break
}
}
}
return value, nil
}
func (b *buildFile) CmdEnv(args string) error {
tmp := strings.SplitN(args, " ", 2)
if len(tmp) != 2 {
return fmt.Errorf("Invalid ENV format")
}
key := strings.Trim(tmp[0], " \t")
value := strings.Trim(tmp[1], " \t")
envKey := b.FindEnvKey(key)
replacedValue, err := b.ReplaceEnvMatches(value)
if err != nil {
return err
}
replacedVar := fmt.Sprintf("%s=%s", key, replacedValue)
if envKey >= 0 {
b.config.Env[envKey] = replacedVar
} else {
b.config.Env = append(b.config.Env, replacedVar)
}
return b.commit("", b.config.Cmd, fmt.Sprintf("ENV %s", replacedVar))
}
func (b *buildFile) CmdCmd(args string) error {
var cmd []string
if err := json.Unmarshal([]byte(args), &cmd); err != nil {
utils.Debugf("Error unmarshalling: %s, setting cmd to /bin/sh -c", err)
cmd = []string{"/bin/sh", "-c", args}
}
if err := b.commit("", cmd, fmt.Sprintf("CMD %v", cmd)); err != nil {
return err
}
b.config.Cmd = cmd
return nil
}
func (b *buildFile) CmdExpose(args string) error {
ports := strings.Split(args, " ")
b.config.PortSpecs = append(ports, b.config.PortSpecs...)
return b.commit("", b.config.Cmd, fmt.Sprintf("EXPOSE %v", ports))
}
func (b *buildFile) CmdUser(args string) error {
b.config.User = args
return b.commit("", b.config.Cmd, fmt.Sprintf("USER %v", args))
}
func (b *buildFile) CmdInsert(args string) error {
return fmt.Errorf("INSERT has been deprecated. Please use ADD instead")
}
func (b *buildFile) CmdCopy(args string) error {
return fmt.Errorf("COPY has been deprecated. Please use ADD instead")
}
func (b *buildFile) CmdEntrypoint(args string) error {
if args == "" {
return fmt.Errorf("Entrypoint cannot be empty")
}
var entrypoint []string
if err := json.Unmarshal([]byte(args), &entrypoint); err != nil {
b.config.Entrypoint = []string{"/bin/sh", "-c", args}
} else {
b.config.Entrypoint = entrypoint
}
if err := b.commit("", b.config.Cmd, fmt.Sprintf("ENTRYPOINT %s", args)); err != nil {
return err
}
return nil
}
func (b *buildFile) CmdWorkdir(workdir string) error {
b.config.WorkingDir = workdir
return b.commit("", b.config.Cmd, fmt.Sprintf("WORKDIR %v", workdir))
}
func (b *buildFile) CmdVolume(args string) error {
if args == "" {
return fmt.Errorf("Volume cannot be empty")
}
var volume []string
if err := json.Unmarshal([]byte(args), &volume); err != nil {
volume = []string{args}
}
if b.config.Volumes == nil {
b.config.Volumes = NewPathOpts()
}
for _, v := range volume {
b.config.Volumes[v] = struct{}{}
}
if err := b.commit("", b.config.Cmd, fmt.Sprintf("VOLUME %s", args)); err != nil {
return err
}
return nil
}
func (b *buildFile) addRemote(container *Container, orig, dest string) error {
file, err := utils.Download(orig, ioutil.Discard)
if err != nil {
return err
}
defer file.Body.Close()
// If the destination is a directory, figure out the filename.
if strings.HasSuffix(dest, "/") {
u, err := url.Parse(orig)
if err != nil {
return err
}
path := u.Path
if strings.HasSuffix(path, "/") {
path = path[:len(path)-1]
}
parts := strings.Split(path, "/")
filename := parts[len(parts)-1]
if filename == "" {
return fmt.Errorf("cannot determine filename from url: %s", u)
}
dest = dest + filename
}
return container.Inject(file.Body, dest)
}
func (b *buildFile) addContext(container *Container, orig, dest string) error {
origPath := path.Join(b.context, orig)
destPath := path.Join(container.RootfsPath(), dest)
// Preserve the trailing '/'
if strings.HasSuffix(dest, "/") {
destPath = destPath + "/"
}
if !strings.HasPrefix(origPath, b.context) {
return fmt.Errorf("Forbidden path: %s", origPath)
}
fi, err := os.Stat(origPath)
if err != nil {
return fmt.Errorf("%s: no such file or directory", orig)
}
if fi.IsDir() {
if err := CopyWithTar(origPath, destPath); err != nil {
return err
}
// First try to unpack the source as an archive
} else if err := UntarPath(origPath, destPath); err != nil {
utils.Debugf("Couldn't untar %s to %s: %s", origPath, destPath, err)
// If that fails, just copy it as a regular file
if err := os.MkdirAll(path.Dir(destPath), 0755); err != nil {
return err
}
if err := CopyWithTar(origPath, destPath); err != nil {
return err
}
}
return nil
}
func (b *buildFile) CmdAdd(args string) error {
if b.context == "" {
return fmt.Errorf("No context given. Impossible to use ADD")
}
tmp := strings.SplitN(args, " ", 2)
if len(tmp) != 2 {
return fmt.Errorf("Invalid ADD format")
}
orig, err := b.ReplaceEnvMatches(strings.Trim(tmp[0], " \t"))
if err != nil {
return err
}
dest, err := b.ReplaceEnvMatches(strings.Trim(tmp[1], " \t"))
if err != nil {
return err
}
cmd := b.config.Cmd
b.config.Cmd = []string{"/bin/sh", "-c", fmt.Sprintf("#(nop) ADD %s in %s", orig, dest)}
b.config.Image = b.image
// Create the container and start it
container, _, err := b.runtime.Create(b.config, "")
if err != nil {
return err
}
b.tmpContainers[container.ID] = struct{}{}
if err := container.EnsureMounted(); err != nil {
return err
}
defer container.Unmount()
if utils.IsURL(orig) {
if err := b.addRemote(container, orig, dest); err != nil {
return err
}
} else {
if err := b.addContext(container, orig, dest); err != nil {
return err
}
}
if err := b.commit(container.ID, cmd, fmt.Sprintf("ADD %s in %s", orig, dest)); err != nil {
return err
}
b.config.Cmd = cmd
return nil
}
func (b *buildFile) run() (string, error) {
if b.image == "" {
return "", fmt.Errorf("Please provide a source image with `from` prior to run")
}
b.config.Image = b.image
// Create the container and start it
c, _, err := b.runtime.Create(b.config, "")
if err != nil {
return "", err
}
b.tmpContainers[c.ID] = struct{}{}
fmt.Fprintf(b.out, " ---> Running in %s\n", utils.TruncateID(c.ID))
// override the entry point that may have been picked up from the base image
c.Path = b.config.Cmd[0]
c.Args = b.config.Cmd[1:]
var errCh chan error
if b.verbose {
errCh = utils.Go(func() error {
return <-c.Attach(nil, nil, b.out, b.out)
})
}
//start the container
hostConfig := &HostConfig{}
if err := c.Start(hostConfig); err != nil {
return "", err
}
if errCh != nil {
if err := <-errCh; err != nil {
return "", err
}
}
// Wait for it to finish
if ret := c.Wait(); ret != 0 {
return "", fmt.Errorf("The command %v returned a non-zero code: %d", b.config.Cmd, ret)
}
return c.ID, nil
}
// Commit the container <id> with the autorun command <autoCmd>
func (b *buildFile) commit(id string, autoCmd []string, comment string) error {
if b.image == "" {
return fmt.Errorf("Please provide a source image with `from` prior to commit")
}
b.config.Image = b.image
if id == "" {
cmd := b.config.Cmd
b.config.Cmd = []string{"/bin/sh", "-c", "#(nop) " + comment}
defer func(cmd []string) { b.config.Cmd = cmd }(cmd)
if b.utilizeCache {
if cache, err := b.srv.ImageGetCached(b.image, b.config); err != nil {
return err
} else if cache != nil {
fmt.Fprintf(b.out, " ---> Using cache\n")
utils.Debugf("[BUILDER] Use cached version")
b.image = cache.ID
return nil
} else {
utils.Debugf("[BUILDER] Cache miss")
}
}
container, warnings, err := b.runtime.Create(b.config, "")
if err != nil {
return err
}
for _, warning := range warnings {
fmt.Fprintf(b.out, " ---> [Warning] %s\n", warning)
}
b.tmpContainers[container.ID] = struct{}{}
fmt.Fprintf(b.out, " ---> Running in %s\n", utils.TruncateID(container.ID))
id = container.ID
if err := container.EnsureMounted(); err != nil {
return err
}
defer container.Unmount()
}
container := b.runtime.Get(id)
if container == nil {
return fmt.Errorf("An error occured while creating the container")
}
// Note: Actually copy the struct
autoConfig := *b.config
autoConfig.Cmd = autoCmd
// Commit the container
image, err := b.runtime.Commit(container, "", "", "", b.maintainer, &autoConfig)
if err != nil {
return err
}
b.tmpImages[image.ID] = struct{}{}
b.image = image.ID
return nil
}
// Long lines can be split with a backslash
var lineContinuation = regexp.MustCompile(`\s*\\\s*\n`)
func (b *buildFile) Build(context io.Reader) (string, error) {
// FIXME: @creack "name" is a terrible variable name
name, err := ioutil.TempDir("", "docker-build")
if err != nil {
return "", err
}
if err := Untar(context, name); err != nil {
return "", err
}
defer os.RemoveAll(name)
b.context = name
filename := path.Join(name, "Dockerfile")
if _, err := os.Stat(filename); os.IsNotExist(err) {
return "", fmt.Errorf("Can't build a directory with no Dockerfile")
}
fileBytes, err := ioutil.ReadFile(filename)
if err != nil {
return "", err
}
dockerfile := string(fileBytes)
dockerfile = lineContinuation.ReplaceAllString(dockerfile, "")
stepN := 0
for _, line := range strings.Split(dockerfile, "\n") {
line = strings.Trim(strings.Replace(line, "\t", " ", -1), " \t\r\n")
// Skip comments and empty lines
if len(line) == 0 || line[0] == '#' {
continue
}
tmp := strings.SplitN(line, " ", 2)
if len(tmp) != 2 {
return "", fmt.Errorf("Invalid Dockerfile format")
}
instruction := strings.ToLower(strings.Trim(tmp[0], " "))
arguments := strings.Trim(tmp[1], " ")
method, exists := reflect.TypeOf(b).MethodByName("Cmd" + strings.ToUpper(instruction[:1]) + strings.ToLower(instruction[1:]))
if !exists {
fmt.Fprintf(b.out, "# Skipping unknown instruction %s\n", strings.ToUpper(instruction))
continue
}
stepN += 1
fmt.Fprintf(b.out, "Step %d : %s %s\n", stepN, strings.ToUpper(instruction), arguments)
ret := method.Func.Call([]reflect.Value{reflect.ValueOf(b), reflect.ValueOf(arguments)})[0].Interface()
if ret != nil {
return "", ret.(error)
}
fmt.Fprintf(b.out, " ---> %v\n", utils.TruncateID(b.image))
}
if b.image != "" {
fmt.Fprintf(b.out, "Successfully built %s\n", utils.TruncateID(b.image))
if b.rm {
b.clearTmp(b.tmpContainers)
}
return b.image, nil
}
return "", fmt.Errorf("An error occurred during the build\n")
}
func NewBuildFile(srv *Server, out io.Writer, verbose, utilizeCache, rm bool) BuildFile {
return &buildFile{
runtime: srv.runtime,
srv: srv,
config: &Config{},
out: out,
tmpContainers: make(map[string]struct{}),
tmpImages: make(map[string]struct{}),
verbose: verbose,
utilizeCache: utilizeCache,
rm: rm,
}
}
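
The ReplaceEnvMatches expression above is dense, so here is a standalone sketch (standard library only, nothing assumed beyond the regexp itself) of what it does and does not match:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	exp := regexp.MustCompile(`(\\\\+|[^\\]|\b|\A)\$({?)([[:alnum:]_]+)(}?)`)
	fmt.Println(exp.FindAllString(`a$X b${Y} \$Z`, -1))
	// Prints [a$X b${Y}]: plain $X and ${Y} match while the escaped \$Z
	// does not, and each match drags in the preceding character, which is
	// why ReplaceEnvMatches re-slices every match from its '$'.
}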


@@ -1,543 +0,0 @@
package docker
import (
"fmt"
"io/ioutil"
"net"
"net/http"
"net/http/httptest"
"strings"
"testing"
)
// mkTestContext generates a build context from the contents of the provided dockerfile.
// This context is suitable for use as an argument to BuildFile.Build()
func mkTestContext(dockerfile string, files [][2]string, t *testing.T) Archive {
context, err := mkBuildContext(dockerfile, files)
if err != nil {
t.Fatal(err)
}
return context
}
// A testContextTemplate describes a build context and how to test it
type testContextTemplate struct {
// Contents of the Dockerfile
dockerfile string
// Additional files in the context, eg [][2]string{"./passwd", "gordon"}
files [][2]string
// Additional remote files to host on a local HTTP server.
remoteFiles [][2]string
}
// A table of all the contexts to build and test.
// A new docker runtime will be created and torn down for each context.
var testContexts = []testContextTemplate{
{
`
from {IMAGE}
run sh -c 'echo root:testpass > /tmp/passwd'
run mkdir -p /var/run/sshd
run [ "$(cat /tmp/passwd)" = "root:testpass" ]
run [ "$(ls -d /var/run/sshd)" = "/var/run/sshd" ]
`,
nil,
nil,
},
// Exactly the same as above, except uses a line split with a \ to test
// multiline support.
{
`
from {IMAGE}
run sh -c 'echo root:testpass \
> /tmp/passwd'
run mkdir -p /var/run/sshd
run [ "$(cat /tmp/passwd)" = "root:testpass" ]
run [ "$(ls -d /var/run/sshd)" = "/var/run/sshd" ]
`,
nil,
nil,
},
// Line containing literal "\n"
{
`
from {IMAGE}
run sh -c 'echo root:testpass > /tmp/passwd'
run echo "foo \n bar"; echo "baz"
run mkdir -p /var/run/sshd
run [ "$(cat /tmp/passwd)" = "root:testpass" ]
run [ "$(ls -d /var/run/sshd)" = "/var/run/sshd" ]
`,
nil,
nil,
},
{
`
from {IMAGE}
add foo /usr/lib/bla/bar
run [ "$(cat /usr/lib/bla/bar)" = 'hello' ]
add http://{SERVERADDR}/baz /usr/lib/baz/quux
run [ "$(cat /usr/lib/baz/quux)" = 'world!' ]
`,
[][2]string{{"foo", "hello"}},
[][2]string{{"/baz", "world!"}},
},
{
`
from {IMAGE}
add f /
run [ "$(cat /f)" = "hello" ]
add f /abc
run [ "$(cat /abc)" = "hello" ]
add f /x/y/z
run [ "$(cat /x/y/z)" = "hello" ]
add f /x/y/d/
run [ "$(cat /x/y/d/f)" = "hello" ]
add d /
run [ "$(cat /ga)" = "bu" ]
add d /somewhere
run [ "$(cat /somewhere/ga)" = "bu" ]
add d /anotherplace/
run [ "$(cat /anotherplace/ga)" = "bu" ]
add d /somewheeeere/over/the/rainbooow
run [ "$(cat /somewheeeere/over/the/rainbooow/ga)" = "bu" ]
`,
[][2]string{
{"f", "hello"},
{"d/ga", "bu"},
},
nil,
},
{
`
from {IMAGE}
add http://{SERVERADDR}/x /a/b/c
run [ "$(cat /a/b/c)" = "hello" ]
add http://{SERVERADDR}/x?foo=bar /
run [ "$(cat /x)" = "hello" ]
add http://{SERVERADDR}/x /d/
run [ "$(cat /d/x)" = "hello" ]
add http://{SERVERADDR} /e
run [ "$(cat /e)" = "blah" ]
`,
nil,
[][2]string{{"/x", "hello"}, {"/", "blah"}},
},
{
`
from {IMAGE}
env FOO BAR
run [ "$FOO" = "BAR" ]
`,
nil,
nil,
},
{
`
from {IMAGE}
ENTRYPOINT /bin/echo
CMD Hello world
`,
nil,
nil,
},
{
`
from {IMAGE}
VOLUME /test
CMD Hello world
`,
nil,
nil,
},
{
`
from {IMAGE}
env FOO /foo/baz
env BAR /bar
env BAZ $BAR
env FOOPATH $PATH:$FOO
run [ "$BAR" = "$BAZ" ]
run [ "$FOOPATH" = "$PATH:/foo/baz" ]
`,
nil,
nil,
},
{
`
from {IMAGE}
env FOO /bar
env TEST testdir
env BAZ /foobar
add testfile $BAZ/
add $TEST $FOO
run [ "$(cat /foobar/testfile)" = "test1" ]
run [ "$(cat /bar/withfile)" = "test2" ]
`,
[][2]string{
{"testfile", "test1"},
{"testdir/withfile", "test2"},
},
nil,
},
}
// FIXME: test building with 2 successive overlapping ADD commands
func constructDockerfile(template string, ip net.IP, port string) string {
serverAddr := fmt.Sprintf("%s:%s", ip, port)
replacer := strings.NewReplacer("{IMAGE}", unitTestImageID, "{SERVERADDR}", serverAddr)
return replacer.Replace(template)
}
func mkTestingFileServer(files [][2]string) (*httptest.Server, error) {
mux := http.NewServeMux()
for _, file := range files {
name, contents := file[0], file[1]
mux.HandleFunc(name, func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte(contents))
})
}
// This is how httptest.NewServer sets up a net.Listener, except that our listener must accept remote
// connections (from the container).
listener, err := net.Listen("tcp", ":0")
if err != nil {
return nil, err
}
s := httptest.NewUnstartedServer(mux)
s.Listener = listener
s.Start()
return s, nil
}
func TestBuild(t *testing.T) {
for _, ctx := range testContexts {
buildImage(ctx, t, nil, true)
}
}
func buildImage(context testContextTemplate, t *testing.T, srv *Server, useCache bool) *Image {
if srv == nil {
runtime := mkRuntime(t)
defer nuke(runtime)
srv = &Server{
runtime: runtime,
pullingPool: make(map[string]struct{}),
pushingPool: make(map[string]struct{}),
}
}
httpServer, err := mkTestingFileServer(context.remoteFiles)
if err != nil {
t.Fatal(err)
}
defer httpServer.Close()
idx := strings.LastIndex(httpServer.URL, ":")
if idx < 0 {
t.Fatalf("could not get port from test http server address %s", httpServer.URL)
}
port := httpServer.URL[idx+1:]
ip := srv.runtime.networkManager.bridgeNetwork.IP
dockerfile := constructDockerfile(context.dockerfile, ip, port)
buildfile := NewBuildFile(srv, ioutil.Discard, false, useCache, false)
id, err := buildfile.Build(mkTestContext(dockerfile, context.files, t))
if err != nil {
t.Fatal(err)
}
img, err := srv.ImageInspect(id)
if err != nil {
t.Fatal(err)
}
return img
}
func TestVolume(t *testing.T) {
img := buildImage(testContextTemplate{`
from {IMAGE}
volume /test
cmd Hello world
`, nil, nil}, t, nil, true)
if len(img.Config.Volumes) == 0 {
t.Fail()
}
for key := range img.Config.Volumes {
if key != "/test" {
t.Fail()
}
}
}
func TestBuildMaintainer(t *testing.T) {
img := buildImage(testContextTemplate{`
from {IMAGE}
maintainer dockerio
`, nil, nil}, t, nil, true)
if img.Author != "dockerio" {
t.Fail()
}
}
func TestBuildUser(t *testing.T) {
img := buildImage(testContextTemplate{`
from {IMAGE}
user dockerio
`, nil, nil}, t, nil, true)
if img.Config.User != "dockerio" {
t.Fail()
}
}
func TestBuildEnv(t *testing.T) {
img := buildImage(testContextTemplate{`
from {IMAGE}
env port 4243
`,
nil, nil}, t, nil, true)
hasEnv := false
for _, envVar := range img.Config.Env {
if envVar == "port=4243" {
hasEnv = true
break
}
}
if !hasEnv {
t.Fail()
}
}
func TestBuildCmd(t *testing.T) {
img := buildImage(testContextTemplate{`
from {IMAGE}
cmd ["/bin/echo", "Hello World"]
`,
nil, nil}, t, nil, true)
if img.Config.Cmd[0] != "/bin/echo" {
t.Log(img.Config.Cmd[0])
t.Fail()
}
if img.Config.Cmd[1] != "Hello World" {
t.Log(img.Config.Cmd[1])
t.Fail()
}
}
func TestBuildExpose(t *testing.T) {
img := buildImage(testContextTemplate{`
from {IMAGE}
expose 4243
`,
nil, nil}, t, nil, true)
if img.Config.PortSpecs[0] != "4243" {
t.Fail()
}
}
func TestBuildEntrypoint(t *testing.T) {
img := buildImage(testContextTemplate{`
from {IMAGE}
entrypoint ["/bin/echo"]
`,
nil, nil}, t, nil, true)
if img.Config.Entrypoint[0] != "/bin/echo" {
t.Fail()
}
}
// testing #1405 - config.Cmd does not get cleaned up if
// utilizing cache
func TestBuildEntrypointRunCleanup(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
srv := &Server{
runtime: runtime,
pullingPool: make(map[string]struct{}),
pushingPool: make(map[string]struct{}),
}
img := buildImage(testContextTemplate{`
from {IMAGE}
run echo "hello"
`,
nil, nil}, t, srv, true)
img = buildImage(testContextTemplate{`
from {IMAGE}
run echo "hello"
add foo /foo
entrypoint ["/bin/echo"]
`,
[][2]string{{"foo", "HEYO"}}, nil}, t, srv, true)
if len(img.Config.Cmd) != 0 {
t.Fail()
}
}
func TestBuildImageWithCache(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
srv := &Server{
runtime: runtime,
pullingPool: make(map[string]struct{}),
pushingPool: make(map[string]struct{}),
}
template := testContextTemplate{`
from {IMAGE}
maintainer dockerio
`,
nil, nil}
img := buildImage(template, t, srv, true)
imageId := img.ID
img = nil
img = buildImage(template, t, srv, true)
if imageId != img.ID {
t.Logf("Image ids should match: %s != %s", imageId, img.ID)
t.Fail()
}
}
func TestBuildImageWithoutCache(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
srv := &Server{
runtime: runtime,
pullingPool: make(map[string]struct{}),
pushingPool: make(map[string]struct{}),
}
template := testContextTemplate{`
from {IMAGE}
maintainer dockerio
`,
nil, nil}
img := buildImage(template, t, srv, true)
imageId := img.ID
img = nil
img = buildImage(template, t, srv, false)
if imageId == img.ID {
t.Logf("Image ids should not match: %s == %s", imageId, img.ID)
t.Fail()
}
}
func TestForbiddenContextPath(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
srv := &Server{
runtime: runtime,
pullingPool: make(map[string]struct{}),
pushingPool: make(map[string]struct{}),
}
context := testContextTemplate{`
from {IMAGE}
maintainer dockerio
add ../../ test/
`,
[][2]string{{"test.txt", "test1"}, {"other.txt", "other"}}, nil}
httpServer, err := mkTestingFileServer(context.remoteFiles)
if err != nil {
t.Fatal(err)
}
defer httpServer.Close()
idx := strings.LastIndex(httpServer.URL, ":")
if idx < 0 {
t.Fatalf("could not get port from test http server address %s", httpServer.URL)
}
port := httpServer.URL[idx+1:]
ip := srv.runtime.networkManager.bridgeNetwork.IP
dockerfile := constructDockerfile(context.dockerfile, ip, port)
buildfile := NewBuildFile(srv, ioutil.Discard, false, true, false)
_, err = buildfile.Build(mkTestContext(dockerfile, context.files, t))
if err == nil {
t.Log("Error should not be nil")
t.Fail()
}
if err.Error() != "Forbidden path: /" {
t.Logf("Error message is not expected: %s", err.Error())
t.Fail()
}
}
func TestBuildADDFileNotFound(t *testing.T) {
runtime := mkRuntime(t)
defer nuke(runtime)
srv := &Server{
runtime: runtime,
pullingPool: make(map[string]struct{}),
pushingPool: make(map[string]struct{}),
}
context := testContextTemplate{`
from {IMAGE}
add foo /usr/local/bar
`,
nil, nil}
httpServer, err := mkTestingFileServer(context.remoteFiles)
if err != nil {
t.Fatal(err)
}
defer httpServer.Close()
idx := strings.LastIndex(httpServer.URL, ":")
if idx < 0 {
t.Fatalf("could not get port from test http server address %s", httpServer.URL)
}
port := httpServer.URL[idx+1:]
ip := srv.runtime.networkManager.bridgeNetwork.IP
dockerfile := constructDockerfile(context.dockerfile, ip, port)
buildfile := NewBuildFile(srv, ioutil.Discard, false, true, false)
_, err = buildfile.Build(mkTestContext(dockerfile, context.files, t))
if err == nil {
t.Log("Error should not be nil")
t.Fail()
}
if err.Error() != "foo: no such file or directory" {
t.Logf("Error message is not expected: %s", err.Error())
t.Fail()
}
}


@@ -65,7 +65,7 @@ func Changes(layers []string, rw string) ([]Change, error) {
file := filepath.Base(path)
// If there is a whiteout, then the file was removed
if strings.HasPrefix(file, ".wh.") {
originalFile := file[len(".wh."):]
originalFile := strings.TrimLeft(file, ".wh.")
change.Path = filepath.Join(filepath.Dir(path), originalFile)
change.Kind = ChangeDelete
} else {
@@ -99,7 +99,7 @@ func Changes(layers []string, rw string) ([]Change, error) {
changes = append(changes, change)
return nil
})
if err != nil && !os.IsNotExist(err) {
if err != nil {
return nil, err
}
return changes, nil
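
One detail worth flagging in the whiteout hunk above: strings.TrimLeft takes a cutset, not a prefix, so the two variants are not equivalent. A standalone sketch of the difference:

package main

import (
	"fmt"
	"strings"
)

func main() {
	file := ".wh.whiteout"
	fmt.Println(file[len(".wh."):])             // "whiteout"
	fmt.Println(strings.TrimLeft(file, ".wh.")) // "iteout": 'w' and 'h' are in the cutset
}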

File diff suppressed because it is too large.


@@ -3,7 +3,7 @@ package docker
import (
"bufio"
"fmt"
"github.com/dotcloud/docker/utils"
"github.com/dotcloud/docker/rcli"
"io"
"io/ioutil"
"strings"
@@ -38,7 +38,7 @@ func setTimeout(t *testing.T, msg string, d time.Duration, f func()) {
f()
c <- false
}()
if <-c && msg != "" {
if <-c {
t.Fatal(msg)
}
}
@@ -59,151 +59,69 @@ func assertPipe(input, output string, r io.Reader, w io.Writer, count int) error
return nil
}
func cmdWait(srv *Server, container *Container) error {
stdout, stdoutPipe := io.Pipe()
go func() {
srv.CmdWait(nil, stdoutPipe, container.Id)
}()
if _, err := bufio.NewReader(stdout).ReadString('\n'); err != nil {
return err
}
// Cleanup pipes
return closeWrap(stdout, stdoutPipe)
}
// TestRunHostname checks that 'docker run -h' correctly sets a custom hostname
func TestRunHostname(t *testing.T) {
stdout, stdoutPipe := io.Pipe()
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
cli := NewDockerCli(nil, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
srv := &Server{runtime: runtime}
stdin, _ := io.Pipe()
stdout, stdoutPipe := io.Pipe()
c := make(chan struct{})
go func() {
defer close(c)
if err := cli.CmdRun("-h", "foobar", unitTestImageID, "hostname"); err != nil {
if err := srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-h", "foobar", GetTestImage(runtime).Id, "hostname"); err != nil {
t.Fatal(err)
}
close(c)
}()
setTimeout(t, "Reading command output time out", 2*time.Second, func() {
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != "foobar\n" {
t.Fatalf("'hostname' should display '%s', not '%s'", "foobar\n", cmdOutput)
}
})
container := globalRuntime.List()[0]
setTimeout(t, "CmdRun timed out", 10*time.Second, func() {
<-c
go func() {
cli.CmdWait(container.ID)
}()
if _, err := bufio.NewReader(stdout).ReadString('\n'); err != nil {
t.Fatal(err)
}
})
// Cleanup pipes
if err := closeWrap(stdout, stdoutPipe); err != nil {
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
}
// TestRunWorkdir checks that 'docker run -w' correctly sets a custom working directory
func TestRunWorkdir(t *testing.T) {
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(nil, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
c := make(chan struct{})
go func() {
defer close(c)
if err := cli.CmdRun("-w", "/foo/bar", unitTestImageID, "pwd"); err != nil {
t.Fatal(err)
}
}()
setTimeout(t, "Reading command output time out", 2*time.Second, func() {
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != "/foo/bar\n" {
t.Fatalf("'pwd' should display '%s', not '%s'", "/foo/bar\n", cmdOutput)
}
})
container := globalRuntime.List()[0]
setTimeout(t, "CmdRun timed out", 10*time.Second, func() {
<-c
go func() {
cli.CmdWait(container.ID)
}()
if _, err := bufio.NewReader(stdout).ReadString('\n'); err != nil {
t.Fatal(err)
}
})
// Cleanup pipes
if err := closeWrap(stdout, stdoutPipe); err != nil {
t.Fatal(err)
if cmdOutput != "foobar\n" {
t.Fatalf("'hostname' should display '%s', not '%s'", "foobar\n", cmdOutput)
}
}
// TestRunWorkdirExists checks that 'docker run -w' correctly sets a custom working directory, even if it exists
func TestRunWorkdirExists(t *testing.T) {
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(nil, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
c := make(chan struct{})
go func() {
defer close(c)
if err := cli.CmdRun("-w", "/proc", unitTestImageID, "pwd"); err != nil {
t.Fatal(err)
}
}()
setTimeout(t, "Reading command output time out", 2*time.Second, func() {
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != "/proc\n" {
t.Fatalf("'pwd' should display '%s', not '%s'", "/proc\n", cmdOutput)
}
})
container := globalRuntime.List()[0]
setTimeout(t, "CmdRun timed out", 5*time.Second, func() {
setTimeout(t, "CmdRun timed out", 2*time.Second, func() {
<-c
go func() {
cli.CmdWait(container.ID)
}()
if _, err := bufio.NewReader(stdout).ReadString('\n'); err != nil {
t.Fatal(err)
}
cmdWait(srv, srv.runtime.List()[0])
})
// Cleanup pipes
if err := closeWrap(stdout, stdoutPipe); err != nil {
t.Fatal(err)
}
}
func TestRunExit(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
c1 := make(chan struct{})
go func() {
cli.CmdRun("-i", unitTestImageID, "/bin/cat")
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", GetTestImage(runtime).Id, "/bin/cat")
close(c1)
}()
@@ -213,24 +131,21 @@ func TestRunExit(t *testing.T) {
}
})
container := globalRuntime.List()[0]
container := runtime.List()[0]
// Closing /bin/cat stdin, expect it to exit
if err := stdin.Close(); err != nil {
p, err := container.StdinPipe()
if err != nil {
t.Fatal(err)
}
if err := p.Close(); err != nil {
t.Fatal(err)
}
// as the process exited, CmdRun must finish and unblock. Wait for it
setTimeout(t, "Waiting for CmdRun timed out", 10*time.Second, func() {
setTimeout(t, "Waiting for CmdRun timed out", 2*time.Second, func() {
<-c1
go func() {
cli.CmdWait(container.ID)
}()
if _, err := bufio.NewReader(stdout).ReadString('\n'); err != nil {
t.Fatal(err)
}
cmdWait(srv, container)
})
// Make sure that the client has been disconnected
@@ -247,18 +162,21 @@ func TestRunExit(t *testing.T) {
// Expected behaviour: the process dies when the client disconnects
func TestRunDisconnect(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
c1 := make(chan struct{})
go func() {
// We're simulating a disconnect so the return value doesn't matter. What matters is the
// fact that CmdRun returns.
cli.CmdRun("-i", unitTestImageID, "/bin/cat")
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", GetTestImage(runtime).Id, "/bin/cat")
close(c1)
}()
@@ -282,7 +200,7 @@ func TestRunDisconnect(t *testing.T) {
// Client disconnect after run -i should cause stdin to be closed, which should
// cause /bin/cat to exit.
setTimeout(t, "Waiting for /bin/cat to exit timed out", 2*time.Second, func() {
container := globalRuntime.List()[0]
container := runtime.List()[0]
container.Wait()
if container.State.Running {
t.Fatalf("/bin/cat is still running after closing stdin")
@@ -292,39 +210,40 @@ func TestRunDisconnect(t *testing.T) {
// Expected behaviour: the process dies when the client disconnects
func TestRunDisconnectTty(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
c1 := make(chan struct{})
go func() {
// We're simulating a disconnect so the return value doesn't matter. What matters is the
// fact that CmdRun returns.
if err := cli.CmdRun("-i", "-t", unitTestImageID, "/bin/cat"); err != nil {
utils.Debugf("Error CmdRun: %s", err)
}
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", "-t", GetTestImage(runtime).Id, "/bin/cat")
close(c1)
}()
setTimeout(t, "Waiting for the container to be started timed out", 10*time.Second, func() {
setTimeout(t, "Waiting for the container to be started timed out", 2*time.Second, func() {
for {
// Client disconnect after run -i should keep stdin out in TTY mode
l := globalRuntime.List()
l := runtime.List()
if len(l) == 1 && l[0].State.Running {
break
}
time.Sleep(10 * time.Millisecond)
}
})
// Client disconnect after run -i should keep stdin out in TTY mode
container := globalRuntime.List()[0]
container := runtime.List()[0]
setTimeout(t, "Read/Write assertion timed out", 2000*time.Second, func() {
setTimeout(t, "Read/Write assertion timed out", 2*time.Second, func() {
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
t.Fatal(err)
}
@@ -349,21 +268,24 @@ func TestRunDisconnectTty(t *testing.T) {
// 'docker run -i -a stdin' should send the client's stdin to the command,
// then detach from it and print the container id.
func TestRunAttachStdin(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
ch := make(chan struct{})
go func() {
defer close(ch)
cli.CmdRun("-i", "-a", "stdin", unitTestImageID, "sh", "-c", "echo hello && cat && sleep 5")
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", "-a", "stdin", GetTestImage(runtime).Id, "sh", "-c", "echo hello; cat")
close(ch)
}()
// Send input to the command, close stdin
setTimeout(t, "Write timed out", 10*time.Second, func() {
setTimeout(t, "Write timed out", 2*time.Second, func() {
if _, err := stdinPipe.Write([]byte("hi there\n")); err != nil {
t.Fatal(err)
}
@@ -372,211 +294,78 @@ func TestRunAttachStdin(t *testing.T) {
}
})
container := globalRuntime.List()[0]
container := runtime.List()[0]
// Check output
setTimeout(t, "Reading command output time out", 10*time.Second, func() {
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != container.ShortID()+"\n" {
t.Fatalf("Wrong output: should be '%s', not '%s'\n", container.ShortID()+"\n", cmdOutput)
}
})
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != container.ShortId()+"\n" {
t.Fatalf("Wrong output: should be '%s', not '%s'\n", container.ShortId()+"\n", cmdOutput)
}
// wait for CmdRun to return
setTimeout(t, "Waiting for CmdRun timed out", 5*time.Second, func() {
setTimeout(t, "Waiting for CmdRun timed out", 2*time.Second, func() {
<-ch
})
setTimeout(t, "Waiting for command to exit timed out", 10*time.Second, func() {
setTimeout(t, "Waiting for command to exit timed out", 2*time.Second, func() {
container.Wait()
})
// Check logs
if cmdLogs, err := container.ReadLog("json"); err != nil {
if cmdLogs, err := container.ReadLog("stdout"); err != nil {
t.Fatal(err)
} else {
if output, err := ioutil.ReadAll(cmdLogs); err != nil {
t.Fatal(err)
} else {
expectedLogs := []string{"{\"log\":\"hello\\n\",\"stream\":\"stdout\"", "{\"log\":\"hi there\\n\",\"stream\":\"stdout\""}
for _, expectedLog := range expectedLogs {
if !strings.Contains(string(output), expectedLog) {
t.Fatalf("Unexpected logs: should contains '%s', it is not '%s'\n", expectedLog, output)
}
expectedLog := "hello\nhi there\n"
if string(output) != expectedLog {
t.Fatalf("Unexpected logs: should be '%s', not '%s'\n", expectedLog, output)
}
}
}
}
// TestRunDetach checks attaching and detaching with the escape sequence.
func TestRunDetach(t *testing.T) {
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
ch := make(chan struct{})
go func() {
defer close(ch)
cli.CmdRun("-i", "-t", unitTestImageID, "cat")
}()
setTimeout(t, "First read/write assertion timed out", 2*time.Second, func() {
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
t.Fatal(err)
}
})
container := globalRuntime.List()[0]
setTimeout(t, "Escape sequence timeout", 5*time.Second, func() {
stdinPipe.Write([]byte{16, 17})
if err := stdinPipe.Close(); err != nil {
t.Fatal(err)
}
})
closeWrap(stdin, stdinPipe, stdout, stdoutPipe)
// wait for CmdRun to return
setTimeout(t, "Waiting for CmdRun timed out", 15*time.Second, func() {
<-ch
})
time.Sleep(500 * time.Millisecond)
if !container.State.Running {
t.Fatal("The detached container should be still running")
}
setTimeout(t, "Waiting for container to die timed out", 20*time.Second, func() {
container.Kill()
})
}
// TestAttachDetach checks that attach in tty mode can be detached
func TestAttachDetach(t *testing.T) {
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
ch := make(chan struct{})
go func() {
defer close(ch)
if err := cli.CmdRun("-i", "-t", "-d", unitTestImageID, "cat"); err != nil {
t.Fatal(err)
}
}()
var container *Container
setTimeout(t, "Reading container's id timed out", 10*time.Second, func() {
buf := make([]byte, 1024)
n, err := stdout.Read(buf)
if err != nil {
t.Fatal(err)
}
container = globalRuntime.List()[0]
if strings.Trim(string(buf[:n]), " \r\n") != container.ShortID() {
t.Fatalf("Wrong ID received. Expect %s, received %s", container.ShortID(), buf[:n])
}
})
setTimeout(t, "Starting container timed out", 10*time.Second, func() {
<-ch
})
stdin, stdinPipe = io.Pipe()
stdout, stdoutPipe = io.Pipe()
cli = NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
ch = make(chan struct{})
go func() {
defer close(ch)
if err := cli.CmdAttach(container.ShortID()); err != nil {
if err != io.ErrClosedPipe {
t.Fatal(err)
}
}
}()
setTimeout(t, "First read/write assertion timed out", 2*time.Second, func() {
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
if err != io.ErrClosedPipe {
t.Fatal(err)
}
}
})
setTimeout(t, "Escape sequence timeout", 5*time.Second, func() {
stdinPipe.Write([]byte{16, 17})
if err := stdinPipe.Close(); err != nil {
t.Fatal(err)
}
})
closeWrap(stdin, stdinPipe, stdout, stdoutPipe)
// wait for CmdRun to return
setTimeout(t, "Waiting for CmdAttach timed out", 15*time.Second, func() {
<-ch
})
time.Sleep(500 * time.Millisecond)
if !container.State.Running {
t.Fatal("The detached container should be still running")
}
setTimeout(t, "Waiting for container to die timedout", 5*time.Second, func() {
container.Kill()
})
}
// Expected behaviour, the process stays alive when the client disconnects
func TestAttachDisconnect(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
container, err := NewBuilder(runtime).Create(
&Config{
Image: GetTestImage(runtime).Id,
Memory: 33554432,
Cmd: []string{"/bin/cat"},
OpenStdin: true,
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container)
// Start the process
if err := container.Start(); err != nil {
t.Fatal(err)
}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
go func() {
// Start a process in daemon mode
if err := cli.CmdRun("-d", "-i", unitTestImageID, "/bin/cat"); err != nil {
utils.Debugf("Error CmdRun: %s", err)
}
}()
setTimeout(t, "Waiting for CmdRun timed out", 10*time.Second, func() {
if _, err := bufio.NewReader(stdout).ReadString('\n'); err != nil {
t.Fatal(err)
}
})
setTimeout(t, "Waiting for the container to be started timed out", 10*time.Second, func() {
for {
l := globalRuntime.List()
if len(l) == 1 && l[0].State.Running {
break
}
time.Sleep(10 * time.Millisecond)
}
})
container := globalRuntime.List()[0]
// Attach to it
c1 := make(chan struct{})
go func() {
// We're simulating a disconnect so the return value doesn't matter. What matters is the
// fact that CmdAttach returns.
cli.CmdAttach(container.ID)
srv.CmdAttach(stdin, rcli.NewDockerLocalConn(stdoutPipe), container.Id)
close(c1)
}()
@@ -597,51 +386,12 @@ func TestAttachDisconnect(t *testing.T) {
// We closed stdin, expect /bin/cat to still be running
// Wait a little bit to make sure container.monitor() did its thing
err := container.WaitTimeout(500 * time.Millisecond)
err = container.WaitTimeout(500 * time.Millisecond)
if err == nil || !container.State.Running {
t.Fatalf("/bin/cat is not running after closing stdin")
}
// Try to avoid the timeout in destroy. Best effort, don't check error
// Try to avoid the timeoout in destroy. Best effort, don't check error
cStdin, _ := container.StdinPipe()
cStdin.Close()
container.Wait()
}
// Expected behaviour: container gets deleted automatically after exit
func TestRunAutoRemove(t *testing.T) {
t.Skip("Fixme. Skipping test for now, race condition")
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(nil, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
c := make(chan struct{})
go func() {
defer close(c)
if err := cli.CmdRun("-rm", unitTestImageID, "hostname"); err != nil {
t.Fatal(err)
}
}()
var temporaryContainerID string
setTimeout(t, "Reading command output time out", 2*time.Second, func() {
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
temporaryContainerID = cmdOutput
if err := closeWrap(stdout, stdoutPipe); err != nil {
t.Fatal(err)
}
})
setTimeout(t, "CmdRun timed out", 10*time.Second, func() {
<-c
})
time.Sleep(500 * time.Millisecond)
if len(globalRuntime.List()) > 0 {
t.Fatalf("failed to remove container automatically: container %s still exists", temporaryContainerID)
}
}


@@ -1,18 +0,0 @@
package docker
import (
"net"
)
type DaemonConfig struct {
Pidfile string
GraphPath string
ProtoAddresses []string
AutoRestart bool
EnableCors bool
Dns []string
EnableIptables bool
BridgeIface string
DefaultIp net.IP
InterContainerCommunication bool
}

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -1,23 +0,0 @@
# [PackageDev] target_format: plist, ext: tmLanguage
---
name: Dockerfile
scopeName: source.dockerfile
uuid: a39d8795-59d2-49af-aa00-fe74ee29576e
patterns:
# Keywords
- name: keyword.control.dockerfile
match: ^\s*(FROM|MAINTAINER|RUN|CMD|EXPOSE|ENV|ADD)\s
- name: keyword.operator.dockerfile
match: ^\s*(ENTRYPOINT|VOLUME|USER|WORKDIR)\s
# String
- name: string.quoted.double.dockerfile
begin: "\""
end: "\""
patterns:
- name: constant.character.escaped.dockerfile
match: \\.
# Comment
- name: comment.block.dockerfile
match: ^\s*#.*$
...


@@ -1,50 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>name</key>
<string>Dockerfile</string>
<key>patterns</key>
<array>
<dict>
<key>match</key>
<string>^\s*(FROM|MAINTAINER|RUN|CMD|EXPOSE|ENV|ADD)\s</string>
<key>name</key>
<string>keyword.control.dockerfile</string>
</dict>
<dict>
<key>match</key>
<string>^\s*(ENTRYPOINT|VOLUME|USER|WORKDIR)\s</string>
<key>name</key>
<string>keyword.operator.dockerfile</string>
</dict>
<dict>
<key>begin</key>
<string>"</string>
<key>end</key>
<string>"</string>
<key>name</key>
<string>string.quoted.double.dockerfile</string>
<key>patterns</key>
<array>
<dict>
<key>match</key>
<string>\\.</string>
<key>name</key>
<string>constant.character.escaped.dockerfile</string>
</dict>
</array>
</dict>
<dict>
<key>match</key>
<string>^\s*#.*$</string>
<key>name</key>
<string>comment.block.dockerfile</string>
</dict>
</array>
<key>scopeName</key>
<string>source.dockerfile</string>
<key>uuid</key>
<string>a39d8795-59d2-49af-aa00-fe74ee29576e</string>
</dict>
</plist>


@@ -1 +0,0 @@
Asbjorn Enge <asbjorn@hanafjedle.net> (@asbjornenge)


@@ -1,9 +0,0 @@
# Dockerfile.tmLanguage
Pretty basic Dockerfile.tmLanguage for Sublime Text syntax highlighting.
PRs with syntax updates, suggestions etc. are all very much appreciated!
I'll get to making this installable via Package Control soon!
enjoy.


@@ -1 +0,0 @@
Tianon Gravi <admwiggin@gmail.com> (@tianon)


@@ -1,543 +0,0 @@
#!bash
#
# bash completion file for core docker commands
#
# This script supports completion of:
# - commands and their options
# - container ids
# - image repos and tags
# - filepaths
#
# To enable the completions either:
# - place this file in /etc/bash_completion.d
# or
# - copy this file and add the line below to your .bashrc after
# bash completion features are loaded
# . docker.bash
#
# Note:
# Currently, the completions will not work if the docker daemon is not
# bound to the default communication port/socket
# If the docker daemon is using a unix socket for communication, your user
# must have access to the socket for the completions to function correctly
__docker_containers_all()
{
local containers
containers="$( docker ps -a -q )"
COMPREPLY=( $( compgen -W "$containers" -- "$cur" ) )
}
__docker_containers_running()
{
local containers
containers="$( docker ps -q )"
COMPREPLY=( $( compgen -W "$containers" -- "$cur" ) )
}
__docker_containers_stopped()
{
local containers
containers="$( comm -13 <(docker ps -q | sort -u) <(docker ps -a -q | sort -u) )"
COMPREPLY=( $( compgen -W "$containers" -- "$cur" ) )
}
__docker_image_repos()
{
local repos
repos="$( docker images | awk 'NR>1{print $1}' )"
COMPREPLY=( $( compgen -W "$repos" -- "$cur" ) )
}
__docker_images()
{
local images
images="$( docker images | awk 'NR>1{print $1":"$2}' )"
COMPREPLY=( $( compgen -W "$images" -- "$cur" ) )
__ltrim_colon_completions "$cur"
}
__docker_image_repos_and_tags()
{
local repos images
repos="$( docker images | awk 'NR>1{print $1}' )"
images="$( docker images | awk 'NR>1{print $1":"$2}' )"
COMPREPLY=( $( compgen -W "$repos $images" -- "$cur" ) )
__ltrim_colon_completions "$cur"
}
__docker_containers_and_images()
{
local containers images
containers="$( docker ps -a -q )"
images="$( docker images | awk 'NR>1{print $1":"$2}' )"
COMPREPLY=( $( compgen -W "$images $containers" -- "$cur" ) )
__ltrim_colon_completions "$cur"
}
_docker_docker()
{
case "$prev" in
-H)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-H" -- "$cur" ) )
;;
*)
COMPREPLY=( $( compgen -W "$commands help" -- "$cur" ) )
;;
esac
}
_docker_attach()
{
if [ $cpos -eq $cword ]; then
__docker_containers_running
fi
}
_docker_build()
{
case "$prev" in
-t)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-no-cache -t -q -rm" -- "$cur" ) )
;;
*)
_filedir
;;
esac
}
_docker_commit()
{
case "$prev" in
-author|-m|-run)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-author -m -run" -- "$cur" ) )
;;
*)
local counter=$cpos
while [ $counter -le $cword ]; do
case "${words[$counter]}" in
-author|-m|-run)
(( counter++ ))
;;
-*)
;;
*)
break
;;
esac
(( counter++ ))
done
if [ $counter -eq $cword ]; then
__docker_containers_all
fi
;;
esac
}
_docker_cp()
{
if [ $cpos -eq $cword ]; then
__docker_containers_all
else
_filedir
fi
}
_docker_diff()
{
if [ $cpos -eq $cword ]; then
__docker_containers_all
fi
}
_docker_events()
{
case "$prev" in
-since)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-since" -- "$cur" ) )
;;
*)
;;
esac
}
_docker_export()
{
if [ $cpos -eq $cword ]; then
__docker_containers_all
fi
}
_docker_help()
{
if [ $cpos -eq $cword ]; then
COMPREPLY=( $( compgen -W "$commands" -- "$cur" ) )
fi
}
_docker_history()
{
if [ $cpos -eq $cword ]; then
__docker_image_repos_and_tags
fi
}
_docker_images()
{
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-a -notrunc -q -viz" -- "$cur" ) )
;;
*)
local counter=$cpos
while [ $counter -le $cword ]; do
case "${words[$counter]}" in
-*)
;;
*)
break
;;
esac
(( counter++ ))
done
if [ $counter -eq $cword ]; then
__docker_image_repos
fi
;;
esac
}
_docker_import()
{
return
}
_docker_info()
{
return
}
_docker_insert()
{
if [ $cpos -eq $cword ]; then
__docker_image_repos_and_tags
fi
}
_docker_inspect()
{
__docker_containers_and_images
}
_docker_kill()
{
__docker_containers_running
}
_docker_login()
{
case "$prev" in
-e|-p|-u)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-e -p -u" -- "$cur" ) )
;;
*)
;;
esac
}
_docker_logs()
{
if [ $cpos -eq $cword ]; then
__docker_containers_all
fi
}
_docker_port()
{
if [ $cpos -eq $cword ]; then
__docker_containers_all
fi
}
_docker_ps()
{
case "$prev" in
-beforeId|-n|-sinceId)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-a -beforeId -l -n -notrunc -q -s -sinceId" -- "$cur" ) )
;;
*)
;;
esac
}
_docker_pull()
{
case "$prev" in
-t)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-t" -- "$cur" ) )
;;
*)
;;
esac
}
_docker_push()
{
__docker_image_repos
}
_docker_restart()
{
case "$prev" in
-t)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-t" -- "$cur" ) )
;;
*)
__docker_containers_all
;;
esac
}
_docker_rm()
{
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-v" -- "$cur" ) )
;;
*)
__docker_containers_stopped
;;
esac
}
_docker_rmi()
{
__docker_image_repos_and_tags
}
_docker_run()
{
case "$prev" in
-cidfile)
_filedir
;;
-volumes-from)
__docker_containers_all
;;
-a|-c|-dns|-e|-entrypoint|-h|-lxc-conf|-m|-p|-u|-v|-w)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-a -c -cidfile -d -dns -e -entrypoint -h -i -lxc-conf -m -n -p -privileged -t -u -v -volumes-from -w" -- "$cur" ) )
;;
*)
local counter=$cpos
while [ $counter -le $cword ]; do
case "${words[$counter]}" in
-a|-c|-cidfile|-dns|-e|-entrypoint|-h|-lxc-conf|-m|-p|-u|-v|-volumes-from|-w)
(( counter++ ))
;;
-*)
;;
*)
break
;;
esac
(( counter++ ))
done
if [ $counter -eq $cword ]; then
__docker_image_repos_and_tags
fi
;;
esac
}
_docker_search()
{
COMPREPLY=( $( compgen -W "-notrunc" -- "$cur" ) )
}
_docker_start()
{
__docker_containers_stopped
}
_docker_stop()
{
case "$prev" in
-t)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-t" -- "$cur" ) )
;;
*)
__docker_containers_running
;;
esac
}
_docker_tag()
{
COMPREPLY=( $( compgen -W "-f" -- "$cur" ) )
}
_docker_top()
{
if [ $cpos -eq $cword ]; then
__docker_containers_running
fi
}
_docker_version()
{
return
}
_docker_wait()
{
__docker_containers_all
}
_docker()
{
local cur prev words cword command="docker" counter=1 word cpos
local commands="
attach
build
commit
cp
diff
events
export
history
images
import
info
insert
inspect
kill
login
logs
port
ps
pull
push
restart
rm
rmi
run
search
start
stop
tag
top
version
wait
"
COMPREPLY=()
_get_comp_words_by_ref -n : cur prev words cword
while [ $counter -lt $cword ]; do
word="${words[$counter]}"
case "$word" in
-H)
(( counter++ ))
;;
-*)
;;
*)
command="$word"
cpos=$counter
(( cpos++ ))
break
;;
esac
(( counter++ ))
done
local completions_func=_docker_${command}
declare -F $completions_func >/dev/null && $completions_func
return 0
}
complete -F _docker docker


@@ -1,242 +0,0 @@
#compdef docker
#
# zsh completion for docker (http://docker.io)
#
# version: 0.2.2
# author: Felix Riedel
# license: BSD License
# github: https://github.com/felixr/docker-zsh-completion
#
__parse_docker_list() {
sed -e '/^ID/d' -e 's/[ ]\{2,\}/|/g' -e 's/ \([hdwm]\)\(inutes\|ays\|ours\|eeks\)/\1/' | awk ' BEGIN {FS="|"} { printf("%s:%7s, %s\n", $1, $4, $2)}'
}
__docker_stoppedcontainers() {
local expl
declare -a stoppedcontainers
stoppedcontainers=(${(f)"$(docker ps -a | grep --color=never 'Exit' | __parse_docker_list )"})
_describe -t containers-stopped "Stopped Containers" stoppedcontainers
}
__docker_runningcontainers() {
local expl
declare -a containers
containers=(${(f)"$(docker ps | __parse_docker_list)"})
_describe -t containers-active "Running Containers" containers
}
__docker_containers () {
__docker_stoppedcontainers
__docker_runningcontainers
}
__docker_images () {
local expl
declare -a images
images=(${(f)"$(docker images | awk '(NR > 1){printf("%s\\:%s\n", $1,$2)}')"})
images=($images ${(f)"$(docker images | awk '(NR > 1){printf("%s:%-15s in %s\n", $3,$2,$1)}')"})
_describe -t docker-images "Images" images
}
__docker_tags() {
local expl
declare -a tags
tags=(${(f)"$(docker images | awk '(NR>1){print $2}'| sort | uniq)"})
_describe -t docker-tags "tags" tags
}
__docker_search() {
# declare -a dockersearch
local cache_policy
zstyle -s ":completion:${curcontext}:" cache-policy cache_policy
if [[ -z "$cache_policy" ]]; then
zstyle ":completion:${curcontext}:" cache-policy __docker_caching_policy
fi
local searchterm cachename
searchterm="${words[$CURRENT]%/}"
cachename=_docker-search-$searchterm
local expl
local -a result
if ( [[ ${(P)+cachename} -eq 0 ]] || _cache_invalid ${cachename#_} ) \
&& ! _retrieve_cache ${cachename#_}; then
_message "Searching for ${searchterm}..."
result=(${(f)"$(docker search ${searchterm} | awk '(NR>2){print $1}')"})
_store_cache ${cachename#_} result
fi
_wanted dockersearch expl 'Available images' compadd -a result
}
__docker_caching_policy()
{
# oldp=( "$1"(Nmh+24) ) # 24 hour
oldp=( "$1"(Nmh+1) ) # 1 hour
(( $#oldp ))
}
__docker_repositories () {
local expl
declare -a repos
repos=(${(f)"$(docker images | sed -e '1d' -e 's/[ ].*//' | sort | uniq)"})
_describe -t docker-repos "Repositories" repos
}
__docker_commands () {
# local -a _docker_subcommands
local cache_policy
zstyle -s ":completion:${curcontext}:" cache-policy cache_policy
if [[ -z "$cache_policy" ]]; then
zstyle ":completion:${curcontext}:" cache-policy __docker_caching_policy
fi
if ( [[ ${+_docker_subcommands} -eq 0 ]] || _cache_invalid docker_subcommands) \
&& ! _retrieve_cache docker_subcommands;
then
_docker_subcommands=(${${(f)"$(_call_program commands
docker 2>&1 | sed -e '1,6d' -e '/^[ ]*$/d' -e 's/[ ]*\([^ ]\+\)\s*\([^ ].*\)/\1:\2/' )"}})
_docker_subcommands=($_docker_subcommands 'help:Show help for a command')
_store_cache docker_subcommands _docker_subcommands
fi
_describe -t docker-commands "docker command" _docker_subcommands
}
__docker_subcommand () {
local -a _command_args
case "$words[1]" in
(attach|wait)
_arguments ':containers:__docker_runningcontainers'
;;
(build)
_arguments \
'-t=-:repository:__docker_repositories' \
':path or URL:_directories'
;;
(commit)
_arguments \
':container:__docker_containers' \
':repository:__docker_repositories' \
':tag: '
;;
(diff|export|logs)
_arguments '*:containers:__docker_containers'
;;
(history)
_arguments '*:images:__docker_images'
;;
(images)
_arguments \
'-a[Show all images]' \
':repository:__docker_repositories'
;;
(inspect)
_arguments '*:containers:__docker_containers'
;;
(insert)
_arguments '1:containers:__docker_containers' \
'2:URL:(http:// file://)' \
'3:file:_files'
;;
(kill)
_arguments '*:containers:__docker_runningcontainers'
;;
(port)
_arguments '1:containers:__docker_runningcontainers'
;;
(start)
_arguments '*:containers:__docker_stoppedcontainers'
;;
(rm)
_arguments '-v[Remove the volumes associated with the container]' \
'*:containers:__docker_stoppedcontainers'
;;
(rmi)
_arguments '-v[Remove the volumes associated with the container]' \
'*:images:__docker_images'
;;
(top)
_arguments '1:containers:__docker_runningcontainers'
;;
(restart|stop)
_arguments '-t=-[Number of seconds to wait before killing the container]:seconds before killing:(1 5 10 30 60)' \
'*:containers:__docker_runningcontainers'
;;
(ps)
_arguments '-a[Show all containers. Only running containers are shown by default]' \
'-h[Show help]' \
'-beforeId=-[Show only containers created before Id, including non-running ones]:containers:__docker_containers' \
'-n=-[Show n last created containers, including non-running ones]:n:(1 5 10 25 50)'
;;
(tag)
_arguments \
'-f[force]'\
':image:__docker_images'\
':repository:__docker_repositories' \
':tag:__docker_tags'
;;
(run)
_arguments \
'-a=-[Attach to stdin, stdout or stderr]:toggle:(true false)' \
'-c=-[CPU shares (relative weight)]:CPU shares: ' \
'-d[Detached mode: leave the container running in the background]' \
'*-dns=[Set custom dns servers]:dns server: ' \
'*-e=[Set environment variables]:environment variable: ' \
'-entrypoint=-[Overwrite the default entrypoint of the image]:entry point: ' \
'-h=-[Container host name]:hostname:_hosts' \
'-i[Keep stdin open even if not attached]' \
'-m=-[Memory limit (in bytes)]:limit: ' \
'*-p=-[Expose a container''s port to the host]:port:_ports' \
'-t=-[Allocate a pseudo-tty]:toggle:(true false)' \
'-u=-[Username or UID]:user:_users' \
'*-v=-[Bind mount a volume (e.g. from the host: -v /host:/container, from docker: -v /container)]:volume: '\
'-volumes-from=-[Mount volumes from the specified container]:volume: ' \
'(-):images:__docker_images' \
'(-):command: _command_names -e' \
'*::arguments: _normal'
;;
(pull|search)
_arguments ':name:__docker_search'
;;
(help)
_arguments ':subcommand:__docker_commands'
;;
(*)
_message 'Unknown subcommand'
esac
}
_docker () {
local curcontext="$curcontext" state line
typeset -A opt_args
_arguments -C \
'-H=-[tcp://host:port to bind/connect to]:socket: ' \
'(-): :->command' \
'(-)*:: :->option-or-argument'
case $state in
(command)
__docker_commands
;;
(option-or-argument)
curcontext=${curcontext%:*:*}:docker-$words[1]:
__docker_subcommand
;;
esac
}
_docker "$@"


@@ -1,23 +1,18 @@
package main
import (
"fmt"
"io"
"log"
"net"
"os"
"os/exec"
"path"
"time"
)
var DOCKERPATH = path.Join(os.Getenv("DOCKERPATH"), "docker")
const DOCKER_PATH = "/home/creack/dotcloud/docker/docker/docker"
// WARNING: this crashTest will 1) crash your host, 2) remove all containers
func runDaemon() (*exec.Cmd, error) {
os.Remove("/var/run/docker.pid")
exec.Command("rm", "-rf", "/var/lib/docker/containers").Run()
cmd := exec.Command(DOCKERPATH, "-d")
cmd := exec.Command(DOCKER_PATH, "-d")
outPipe, err := cmd.StdoutPipe()
if err != nil {
return nil, err
@@ -43,43 +38,19 @@ func crashTest() error {
return err
}
var endpoint string
if ep := os.Getenv("TEST_ENDPOINT"); ep == "" {
endpoint = "192.168.56.1:7979"
} else {
endpoint = ep
}
c := make(chan bool)
var conn io.Writer
go func() {
conn, _ = net.Dial("tcp", endpoint)
c <- false
}()
go func() {
time.Sleep(2 * time.Second)
c <- true
}()
<-c
restartCount := 0
totalTestCount := 1
for {
daemon, err := runDaemon()
if err != nil {
return err
}
restartCount++
// time.Sleep(5000 * time.Millisecond)
var stop bool
go func() error {
stop = false
for i := 0; i < 100 && !stop; {
for i := 0; i < 100 && !stop; i++ {
func() error {
cmd := exec.Command(DOCKERPATH, "run", "ubuntu", "echo", fmt.Sprintf("%d", totalTestCount))
i++
totalTestCount++
cmd := exec.Command(DOCKER_PATH, "run", "base", "echo", "hello", "world")
log.Printf("%d", i)
outPipe, err := cmd.StdoutPipe()
if err != nil {
return err
@@ -91,10 +62,9 @@ func crashTest() error {
if err := cmd.Start(); err != nil {
return err
}
if conn != nil {
go io.Copy(conn, outPipe)
}
go func() {
io.Copy(os.Stdout, outPipe)
}()
// Expecting error, do not check
inPipe.Write([]byte("hello world!!!!!\n"))
go inPipe.Write([]byte("hello world!!!!!\n"))
@@ -116,6 +86,7 @@ func crashTest() error {
return err
}
}
return nil
}
func main() {


@@ -0,0 +1,68 @@
# docker-build: build your software with docker
## Description
docker-build is a script to build docker images from source. It will be deprecated once the 'build' feature is incorporated into docker itself (See https://github.com/dotcloud/docker/issues/278)
Author: Solomon Hykes <solomon@dotcloud.com>
## Install
docker-build requires:
1) A reasonably recent Python setup (tested on 2.7.2).
2) A running docker daemon at version 0.1.4 or more recent (http://www.docker.io/gettingstarted)
## Usage
First create a valid Changefile, which defines a sequence of changes to apply to a base image.
$ cat Changefile
# Start build from a known base image
from base:ubuntu-12.10
# Update ubuntu sources
run echo 'deb http://archive.ubuntu.com/ubuntu quantal main universe multiverse' > /etc/apt/sources.list
run apt-get update
# Install system packages
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q git
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q golang
# Insert files from the host (./myscript must be present in the current directory)
copy myscript /usr/local/bin/myscript
Run docker-build, and pass the contents of your Changefile as standard input.
$ IMG=$(./docker-build < Changefile)
This will take a while: for each line of the changefile, docker-build will:
1. Create a new container to execute the given command or insert the given file
2. Wait for the container to complete execution
3. Commit the resulting changes as a new image
4. Use the resulting image as the input of the next step
If all the steps succeed, the result will be an image containing the combined results of each build step.
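Each step is roughly equivalent to the following manual sequence of docker commands (an editor's sketch of the mechanism, not part of docker-build itself):

# run one build step in a fresh container, wait for it, then commit the result
id=$(docker run -d base /bin/sh -c 'apt-get update')
docker wait $id
base=$(docker commit $id)  # the committed image becomes the base of the next step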
You can trace back those build steps by inspecting the image's history:
$ docker history $IMG
ID CREATED CREATED BY
1e9e2045de86 A few seconds ago /bin/sh -c cat > /usr/local/bin/myscript; chmod +x /usr/local/bin/myscript
77db140aa62a A few seconds ago /bin/sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -q golang
77db140aa62a A few seconds ago /bin/sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl
77db140aa62a A few seconds ago /bin/sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -q git
83e85d155451 A few seconds ago /bin/sh -c apt-get update
bfd53b36d9d3 A few seconds ago /bin/sh -c echo 'deb http://archive.ubuntu.com/ubuntu quantal main universe multiverse' > /etc/apt/sources.list
base 2 weeks ago /bin/bash
27cf78414709 2 weeks ago
Note that your build started from 'base', as instructed by your Changefile. But that base image itself seems to have been built in 2 steps - hence the extra step in the history.
You can use this build technique to create any image you want: a database, a web application, or anything else that can be built by a sequence of unix commands.

contrib/docker-build/docker-build Executable file

@@ -0,0 +1,104 @@
#!/usr/bin/env python
# docker-build is a script to build docker images from source.
# It will be deprecated once the 'build' feature is incorporated into docker itself.
# (See https://github.com/dotcloud/docker/issues/278)
#
# Author: Solomon Hykes <solomon@dotcloud.com>
# First create a valid Changefile, which defines a sequence of changes to apply to a base image.
#
# $ cat Changefile
# # Start build from a known base image
# from base:ubuntu-12.10
# # Update ubuntu sources
# run echo 'deb http://archive.ubuntu.com/ubuntu quantal main universe multiverse' > /etc/apt/sources.list
# run apt-get update
# # Install system packages
# run DEBIAN_FRONTEND=noninteractive apt-get install -y -q git
# run DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl
# run DEBIAN_FRONTEND=noninteractive apt-get install -y -q golang
# # Insert files from the host (./myscript must be present in the current directory)
# copy myscript /usr/local/bin/myscript
#
#
# Run docker-build, and pass the contents of your Changefile as standard input.
#
# $ IMG=$(./docker-build < Changefile)
#
# This will take a while: for each line of the changefile, docker-build will:
#
# 1. Create a new container to execute the given command or insert the given file
# 2. Wait for the container to complete execution
# 3. Commit the resulting changes as a new image
# 4. Use the resulting image as the input of the next step
import sys
import subprocess
import json
import hashlib
def docker(args, stdin=None):
print "# docker " + " ".join(args)
p = subprocess.Popen(["docker"] + list(args), stdin=stdin, stdout=subprocess.PIPE)
return p.stdout
def image_exists(img):
return docker(["inspect", img]).read().strip() != ""
def run_and_commit(img_in, cmd, stdin=None):
run_id = docker(["run"] + (["-i", "-a", "stdin"] if stdin else ["-d"]) + [img_in, "/bin/sh", "-c", cmd], stdin=stdin).read().rstrip()
print "---> Waiting for " + run_id
result=int(docker(["wait", run_id]).read().rstrip())
if result != 0:
print "!!! '{}' return non-zero exit code '{}'. Aborting.".format(cmd, result)
sys.exit(1)
return docker(["commit", run_id]).read().rstrip()
def insert(base, src, dst):
print "COPY {} to {} in {}".format(src, dst, base)
if dst == "":
raise Exception("Missing destination path")
stdin = file(src)
stdin.seek(0)
return run_and_commit(base, "cat > {0}; chmod +x {0}".format(dst), stdin=stdin)
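# Editor's note: the run_and_commit call above amounts to
#   docker run -i -a stdin <base> /bin/sh -c 'cat > <dst>; chmod +x <dst>' < <src>
# followed by "docker wait" and "docker commit" on the resulting container.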
def main():
base=""
steps = []
try:
for line in sys.stdin.readlines():
line = line.strip()
# Skip comments and empty lines
if line == "" or line[0] == "#":
continue
op, param = line.split(" ", 1)
if op == "from":
print "FROM " + param
base = param
steps.append(base)
elif op == "run":
print "RUN " + param
result = run_and_commit(base, param)
steps.append(result)
base = result
print "===> " + base
elif op == "copy":
src, dst = param.split(" ", 1)
result = insert(base, src, dst)
steps.append(result)
base = result
print "===> " + base
else:
print "Skipping uknown op " + op
except:
docker(["rmi"] + steps[1:])
raise
print base
if __name__ == "__main__":
main()


@@ -0,0 +1,11 @@
# Start build from a known base image
from base:ubuntu-12.10
# Update ubuntu sources
run echo 'deb http://archive.ubuntu.com/ubuntu quantal main universe multiverse' > /etc/apt/sources.list
run apt-get update
# Install system packages
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q git
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl
run DEBIAN_FRONTEND=noninteractive apt-get install -y -q golang
# Insert files from the host (./myscript must be present in the current directory)
copy myscript /usr/local/bin/myscript


@@ -1,27 +0,0 @@
#
# This Dockerfile will create an image that allows you to generate upstart and
# systemd scripts (more to come)
#
# docker-version 0.6.2
#
FROM ubuntu:12.10
MAINTAINER Guillaume J. Charmes <guillaume@dotcloud.com>
RUN apt-get update && apt-get install -y wget git mercurial
# Install Go
RUN wget --no-check-certificate https://go.googlecode.com/files/go1.1.2.linux-amd64.tar.gz -O go-1.1.2.tar.gz
RUN tar -xzvf go-1.1.2.tar.gz && mv /go /goroot
RUN mkdir /go
ENV GOROOT /goroot
ENV GOPATH /go
ENV PATH $GOROOT/bin:$PATH
RUN go get github.com/dotcloud/docker && cd /go/src/github.com/dotcloud/docker && git checkout v0.6.3
ADD manager.go /manager/
RUN cd /manager && go build -o /usr/bin/manager
ENTRYPOINT ["/usr/bin/manager"]


@@ -1,4 +0,0 @@
FROM busybox
MAINTAINER Guillaume J. Charmes <guillaume@dotcloud.com>
ADD manager /usr/bin/
ENTRYPOINT ["/usr/bin/manager"]


@@ -1,130 +0,0 @@
package main
import (
"bytes"
"encoding/json"
"flag"
"fmt"
"github.com/dotcloud/docker"
"os"
"strings"
"text/template"
)
var templates = map[string]string{
"upstart": `description "{{.description}}"
author "{{.author}}"
start on filesystem and started lxc-net and started docker
stop on runlevel [!2345]
respawn
exec /home/vagrant/goroot/bin/docker start -a {{.container_id}}
`,
"systemd": `[Unit]
Description={{.description}}
Author={{.author}}
After=docker.service
[Service]
Restart=always
ExecStart=/usr/bin/docker start -a {{.container_id}}
ExecStop=/usr/bin/docker stop -t 2 {{.container_id}}
[Install]
WantedBy=local.target
`,
}
func main() {
// Parse command line for custom options
kind := flag.String("t", "upstart", "Type of manager requested")
author := flag.String("a", "<none>", "Author of the image")
description := flag.String("d", "<none>", "Description of the image")
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "\nUsage: manager <container id>\n\n")
flag.PrintDefaults()
}
flag.Parse()
// We require at least the container ID
if flag.NArg() != 1 {
println(flag.NArg())
flag.Usage()
return
}
// Check that the requested process manager is supported
if _, exists := templates[*kind]; !exists {
panic("Unkown script template")
}
// Load the requested template
tpl, err := template.New("processManager").Parse(templates[*kind])
if err != nil {
panic(err)
}
// Create stdout/stderr buffers
bufOut := bytes.NewBuffer(nil)
bufErr := bytes.NewBuffer(nil)
// Instantiate the Docker CLI
cli := docker.NewDockerCli(nil, bufOut, bufErr, "unix", "/var/run/docker.sock")
// Retrieve the container info
if err := cli.CmdInspect(flag.Arg(0)); err != nil {
// As of docker v0.6.3, CmdInspect always returns nil
panic(err)
}
// If there is nothing in the error buffer, then the Docker daemon is there and the container has been found
if bufErr.Len() == 0 {
// Unmarshal the resulting container data
c := []*docker.Container{{}}
if err := json.Unmarshal(bufOut.Bytes(), &c); err != nil {
panic(err)
}
// Reset the buffers
bufOut.Reset()
bufErr.Reset()
// Retrieve the info of the linked image
if err := cli.CmdInspect(c[0].Image); err != nil {
panic(err)
}
// If there is nothing in the error buffer, then the image has been found.
if bufErr.Len() == 0 {
// Unmarshal the resulting image data
img := []*docker.Image{{}}
if err := json.Unmarshal(bufOut.Bytes(), &img); err != nil {
panic(err)
}
// If no author has been set, use the one from the image
if *author == "<none>" && img[0].Author != "" {
*author = strings.Replace(img[0].Author, "\"", "", -1)
}
// If no description has been set, use the comment from the image
if *description == "<none>" && img[0].Comment != "" {
*description = strings.Replace(img[0].Comment, "\"", "", -1)
}
}
}
// Old version: write the resulting script to a file
// f, err := os.OpenFile(kind, os.O_CREATE|os.O_WRONLY, 0755)
// if err != nil {
// panic(err)
// }
// defer f.Close()
// Create a map with needed data
data := map[string]string{
"author": *author,
"description": *description,
"container_id": flag.Arg(0),
}
// Process the template and output it on Stdout
if err := tpl.Execute(os.Stdout, data); err != nil {
panic(err)
}
}
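// Usage sketch (editor's note, not part of the original file; paths are
// illustrative). The flags are defined in main above and output goes to stdout:
//	manager -t systemd -d "My app" 4ec9612a37cd > /etc/systemd/system/myapp.service
//	manager -t upstart -a "Jane Doe" 4ec9612a37cd > /etc/init/myapp.conf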


@@ -1,53 +0,0 @@
#!/bin/sh
set -e
usage() {
echo >&2 "usage: $0 [-a author] [-d description] container [manager]"
echo >&2 " ie: $0 -a 'John Smith' 4ec9612a37cd systemd"
echo >&2 " ie: $0 -d 'Super Cool System' 4ec9612a37cd # defaults to upstart"
exit 1
}
auth='<none>'
desc='<none>'
have_auth=
have_desc=
while getopts a:d: opt; do
case "$opt" in
a)
auth="$OPTARG"
have_auth=1
;;
d)
desc="$OPTARG"
have_desc=1
;;
esac
done
shift $(($OPTIND - 1))
[ $# -ge 1 -a $# -le 2 ] || usage
cid="$1"
script="${2:-upstart}"
if [ ! -e "manager/$script" ]; then
echo >&2 "Error: manager type '$script' is unknown (PRs always welcome!)."
echo >&2 'The currently supported types are:'
echo >&2 " $(cd manager && echo *)"
exit 1
fi
# TODO https://github.com/dotcloud/docker/issues/734 (docker inspect formatting)
#if command -v docker > /dev/null 2>&1; then
# image="$(docker inspect -f '{{.Image}}' "$cid")"
# if [ "$image" ]; then
# if [ -z "$have_auth" ]; then
# auth="$(docker inspect -f '{{.Author}}' "$image")"
# fi
# if [ -z "$have_desc" ]; then
# desc="$(docker inspect -f '{{.Comment}}' "$image")"
# fi
# fi
#fi
exec "manager/$script" "$cid" "$auth" "$desc"


@@ -1,20 +0,0 @@
#!/bin/sh
set -e
cid="$1"
auth="$2"
desc="$3"
cat <<-EOF
[Unit]
Description=$desc
Author=$auth
After=docker.service
[Service]
ExecStart=/usr/bin/docker start -a $cid
ExecStop=/usr/bin/docker stop -t 2 $cid
[Install]
WantedBy=local.target
EOF


@@ -1,15 +0,0 @@
#!/bin/sh
set -e
cid="$1"
auth="$2"
desc="$3"
cat <<-EOF
description "$(echo "$desc" | sed 's/"/\\"/g')"
author "$(echo "$auth" | sed 's/"/\\"/g')"
start on filesystem and started lxc-net and started docker
stop on runlevel [!2345]
respawn
exec /usr/bin/docker start -a "$cid"
EOF


@@ -1,13 +0,0 @@
# /etc/conf.d/docker: config file for /etc/init.d/docker
# where the docker daemon output gets piped
#DOCKER_LOGFILE="/var/log/docker.log"
# where docker's pid get stored
#DOCKER_PIDFILE="/run/docker.pid"
# where the docker daemon itself is run from
#DOCKER_BINARY="/usr/bin/docker"
# any other random options you want to pass to docker
DOCKER_OPTS=""


@@ -1,31 +0,0 @@
#!/sbin/runscript
# Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: $
DOCKER_LOGFILE=${DOCKER_LOGFILE:-/var/log/${SVCNAME}.log}
DOCKER_PIDFILE=${DOCKER_PIDFILE:-/run/${SVCNAME}.pid}
DOCKER_BINARY=${DOCKER_BINARY:-/usr/bin/docker}
DOCKER_OPTS=${DOCKER_OPTS:-}
start() {
checkpath -f -m 0644 -o root:docker "$DOCKER_LOGFILE"
ebegin "Starting docker daemon"
start-stop-daemon --start --background \
--exec "$DOCKER_BINARY" \
--pidfile "$DOCKER_PIDFILE" \
--stdout "$DOCKER_LOGFILE" \
--stderr "$DOCKER_LOGFILE" \
-- -d -p "$DOCKER_PIDFILE" \
$DOCKER_OPTS
eend $?
}
stop() {
ebegin "Stopping docker daemon"
start-stop-daemon --stop \
--exec "$DOCKER_BINARY" \
--pidfile "$DOCKER_PIDFILE"
eend $?
}


@@ -1,13 +0,0 @@
[Unit]
Description=Easily create lightweight, portable, self-sufficient containers from any application!
Documentation=http://docs.docker.io
Requires=network.target
After=multi-user.target
[Service]
Type=simple
ExecStartPre=/bin/mount --make-rprivate /
ExecStart=/usr/bin/docker -d
[Install]
WantedBy=multi-user.target


@@ -1,85 +0,0 @@
#!/bin/sh
### BEGIN INIT INFO
# Provides: docker
# Required-Start: $syslog $remote_fs
# Required-Stop: $syslog $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Linux container runtime
# Description: Linux container runtime
### END INIT INFO
DOCKER=/usr/bin/docker
DOCKER_PIDFILE=/var/run/docker.pid
DOCKER_OPTS=
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
# Check lxc-docker is present
[ -x $DOCKER ] || (log_failure_msg "docker not present"; exit 1)
# Get lsb functions
. /lib/lsb/init-functions
if [ -f /etc/default/lxc ]; then
. /etc/default/lxc
fi
if [ "$1" = start ] && which initctl >/dev/null && initctl version | grep -q upstart; then
exit 1
fi
check_root_id ()
{
if [ "$(id -u)" != "0" ]; then
log_failure_msg "Docker must be run as root"; exit 1
fi
}
case "$1" in
start)
check_root_id || exit 1
log_begin_msg "Starting Docker"
mount | grep cgroup >/dev/null || mount -t cgroup none /sys/fs/cgroup 2>/dev/null
start-stop-daemon --start --background $NO_CLOSE \
--exec "$DOCKER" \
--pidfile "$DOCKER_PIDFILE" \
-- -d -p "$DOCKER_PIDFILE" \
$DOCKER_OPTS
log_end_msg $?
;;
stop)
check_root_id || exit 1
log_begin_msg "Stopping Docker"
start-stop-daemon --stop \
--pidfile "$DOCKER_PIDFILE"
log_end_msg $?
;;
restart)
check_root_id || exit 1
docker_pid=`cat "$DOCKER_PIDFILE" 2>/dev/null`
[ -n "$docker_pid" ] \
&& ps -p $docker_pid > /dev/null 2>&1 \
&& $0 stop
$0 start
;;
force-reload)
check_root_id || exit 1
$0 restart
;;
status)
status_of_proc -p "$DOCKER_PIDFILE" "$DOCKER" docker
;;
*)
echo "Usage: $0 {start|stop|restart|status}"
exit 1
;;
esac
exit 0


@@ -1,10 +0,0 @@
description "Docker daemon"
start on filesystem and started lxc-net
stop on runlevel [!2345]
respawn
script
/usr/bin/docker -d
end script

contrib/install.sh Executable file

@@ -0,0 +1,55 @@
#!/bin/sh
# This script is meant for quick & easy install via 'curl URL-OF-SCRIPT | sh'
# Original version by Jeff Lindsay <progrium@gmail.com>
# Revamped by Jerome Petazzoni <jerome@dotcloud.com>
#
# This script's canonical location is http://get.docker.io/; to update it, run:
# s3cmd put -m text/x-shellscript -P install.sh s3://get.docker.io/index
echo "Ensuring basic dependencies are installed..."
apt-get -qq update
apt-get -qq install lxc wget bsdtar
echo "Looking in /proc/filesystems to see if we have AUFS support..."
if grep -q aufs /proc/filesystems
then
echo "Found."
else
echo "Ahem, it looks like the current kernel does not support AUFS."
echo "Let's see if we can load the AUFS module with modprobe..."
if modprobe aufs
then
echo "Module loaded."
else
echo "Ahem, things didn't turn out as expected."
KPKG=linux-image-extra-$(uname -r)
echo "Trying to install $KPKG..."
if apt-get -qq install $KPKG
then
echo "Installed."
else
echo "Oops, we couldn't install the -extra kernel."
echo "Are you sure you are running a supported version of Ubuntu?"
echo "Proceeding anyway, but Docker will probably NOT WORK!"
fi
fi
fi
echo "Downloading docker binary and uncompressing into /usr/local/bin..."
curl -s http://get.docker.io/builds/$(uname -s)/$(uname -m)/docker-master.tgz |
tar -C /usr/local/bin --strip-components=1 -zxf- \
docker-master/docker
if [ -f /etc/init/dockerd.conf ]
then
echo "Upstart script already exists."
else
echo "Creating /etc/init/dockerd.conf..."
echo "exec env LANG=\"en_US.UTF-8\" /usr/local/bin/docker -d" > /etc/init/dockerd.conf
fi
echo "Starting dockerd..."
start dockerd > /dev/null
echo "Done."
echo


@@ -1,67 +0,0 @@
#!/bin/bash
# Generate a minimal filesystem for archlinux and load it into the local
# docker as "archlinux"
# requires root
set -e
PACSTRAP=$(which pacstrap)
[ "$PACSTRAP" ] || {
echo "Could not find pacstrap. Run pacman -S arch-install-scripts"
exit 1
}
EXPECT=$(which expect)
[ "$EXPECT" ] || {
echo "Could not find expect. Run pacman -S expect"
exit 1
}
ROOTFS=~/rootfs-arch-$$-$RANDOM
mkdir $ROOTFS
#packages to ignore for space savings
PKGIGNORE=linux,jfsutils,lvm2,cryptsetup,groff,man-db,man-pages,mdadm,pciutils,pcmciautils,reiserfsprogs,s-nail,xfsprogs
expect <<EOF
set timeout 60
set send_slow {1 1}
spawn pacstrap -c -d -G -i $ROOTFS base haveged --ignore $PKGIGNORE
expect {
"Install anyway?" { send n\r; exp_continue }
"(default=all)" { send \r; exp_continue }
"Proceed with installation?" { send "\r"; exp_continue }
"skip the above package" {send "y\r"; exp_continue }
"checking" { exp_continue }
"loading" { exp_continue }
"installing" { exp_continue }
}
EOF
arch-chroot $ROOTFS /bin/sh -c "haveged -w 1024; pacman-key --init; pkill haveged; pacman -Rs --noconfirm haveged; pacman-key --populate archlinux"
arch-chroot $ROOTFS /bin/sh -c "ln -s /usr/share/zoneinfo/UTC /etc/localtime"
cat > $ROOTFS/etc/locale.gen <<DELIM
en_US.UTF-8 UTF-8
en_US ISO-8859-1
DELIM
arch-chroot $ROOTFS locale-gen
arch-chroot $ROOTFS /bin/sh -c 'echo "Server = http://mirrors.kernel.org/archlinux/\$repo/os/\$arch" > /etc/pacman.d/mirrorlist'
# udev doesn't work in containers, rebuild /dev
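# (Editor's note) the mknod arguments below are: -m mode, path, type
# (c = character device, p = named pipe), major, minor -- e.g. "c 1 3" is the
# kernel's null device and "c 5 1" the console.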
DEV=${ROOTFS}/dev
mv ${DEV} ${DEV}.old
mkdir -p ${DEV}
mknod -m 666 ${DEV}/null c 1 3
mknod -m 666 ${DEV}/zero c 1 5
mknod -m 666 ${DEV}/random c 1 8
mknod -m 666 ${DEV}/urandom c 1 9
mkdir -m 755 ${DEV}/pts
mkdir -m 1777 ${DEV}/shm
mknod -m 666 ${DEV}/tty c 5 0
mknod -m 600 ${DEV}/console c 5 1
mknod -m 666 ${DEV}/tty0 c 4 0
mknod -m 666 ${DEV}/full c 1 7
mknod -m 600 ${DEV}/initctl p
mknod -m 666 ${DEV}/ptmx c 5 2
tar --numeric-owner -C $ROOTFS -c . | docker import - archlinux
docker run -i -t archlinux echo Success.
rm -rf $ROOTFS


@@ -35,5 +35,5 @@ do
cp -a /dev/$X dev
done
tar --numeric-owner -cf- . | docker import - busybox
tar -cf- . | docker import - busybox
docker run -i -u root busybox /bin/echo Success.


@@ -1,15 +0,0 @@
#!/bin/bash
# Create a CentOS base image for Docker
# From unclejack https://github.com/dotcloud/docker/issues/290
set -e
MIRROR_URL="http://centos.netnitco.net/6.4/os/x86_64/"
MIRROR_URL_UPDATES="http://centos.netnitco.net/6.4/updates/x86_64/"
yum install -y febootstrap xz
febootstrap -i bash -i coreutils -i tar -i bzip2 -i gzip -i vim-minimal -i wget -i patch -i diffutils -i iproute -i yum centos centos64 $MIRROR_URL -u $MIRROR_URL_UPDATES
touch centos64/etc/resolv.conf
touch centos64/sbin/init
tar --numeric-owner -Jcpf centos-64.tar.xz -C centos64 .


@@ -1,233 +0,0 @@
#!/bin/bash
set -e
variant='minbase'
include='iproute,iputils-ping'
arch='amd64' # intentionally undocumented for now
skipDetection=
strictDebootstrap=
justTar=
usage() {
echo >&2
echo >&2 "usage: $0 [options] repo suite [mirror]"
echo >&2
echo >&2 'options: (not recommended)'
echo >&2 " -p set an http_proxy for debootstrap"
echo >&2 " -v $variant # change default debootstrap variant"
echo >&2 " -i $include # change default package includes"
echo >&2 " -d # strict debootstrap (do not apply any docker-specific tweaks)"
echo >&2 " -s # skip version detection and tagging (ie, precise also tagged as 12.04)"
echo >&2 " # note that this will also skip adding universe and/or security/updates to sources.list"
echo >&2 " -t # just create a tarball, especially for dockerbrew (uses repo as tarball name)"
echo >&2
echo >&2 " ie: $0 username/debian squeeze"
echo >&2 " $0 username/debian squeeze http://ftp.uk.debian.org/debian/"
echo >&2
echo >&2 " ie: $0 username/ubuntu precise"
echo >&2 " $0 username/ubuntu precise http://mirrors.melbourne.co.uk/ubuntu/"
echo >&2
echo >&2 " ie: $0 -t precise.tar.bz2 precise"
echo >&2 " $0 -t wheezy.tgz wheezy"
echo >&2 " $0 -t wheezy-uk.tar.xz wheezy http://ftp.uk.debian.org/debian/"
echo >&2
}
# these should match the names found at http://www.debian.org/releases/
debianStable=wheezy
debianUnstable=sid
# this should match the name found at http://releases.ubuntu.com/
ubuntuLatestLTS=precise
while getopts v:i:a:p:dst name; do
case "$name" in
p)
http_proxy="$OPTARG"
;;
v)
variant="$OPTARG"
;;
i)
include="$OPTARG"
;;
a)
arch="$OPTARG"
;;
d)
strictDebootstrap=1
;;
s)
skipDetection=1
;;
t)
justTar=1
;;
?)
usage
exit 0
;;
esac
done
shift $(($OPTIND - 1))
repo="$1"
suite="$2"
mirror="${3:-}" # stick to the default debootstrap mirror if one is not provided
if [ ! "$repo" ] || [ ! "$suite" ]; then
usage
exit 1
fi
# some rudimentary detection for whether we need to "sudo" our docker calls
docker=''
if docker version > /dev/null 2>&1; then
docker='docker'
elif sudo docker version > /dev/null 2>&1; then
docker='sudo docker'
elif command -v docker > /dev/null 2>&1; then
docker='docker'
else
echo >&2 "warning: either docker isn't installed, or your current user cannot run it;"
echo >&2 " this script is not likely to work as expected"
sleep 3
docker='docker' # give us a command-not-found later
fi
# make sure we have an absolute path to our final tarball so we can still reference it properly after we change directory
if [ "$justTar" ]; then
if [ ! -d "$(dirname "$repo")" ]; then
echo >&2 "error: $(dirname "$repo") does not exist"
exit 1
fi
repo="$(cd "$(dirname "$repo")" && pwd -P)/$(basename "$repo")"
fi
# will be filled in later, if [ -z "$skipDetection" ]
lsbDist=''
target="/tmp/docker-rootfs-debootstrap-$suite-$$-$RANDOM"
cd "$(dirname "$(readlink -f "$BASH_SOURCE")")"
returnTo="$(pwd -P)"
set -x
# bootstrap
mkdir -p "$target"
sudo http_proxy=$http_proxy debootstrap --verbose --variant="$variant" --include="$include" --arch="$arch" "$suite" "$target" "$mirror"
cd "$target"
if [ -z "$strictDebootstrap" ]; then
# prevent init scripts from running during install/update
# policy-rc.d (for most scripts)
echo $'#!/bin/sh\nexit 101' | sudo tee usr/sbin/policy-rc.d > /dev/null
sudo chmod +x usr/sbin/policy-rc.d
# initctl (for some pesky upstart scripts)
sudo chroot . dpkg-divert --local --rename --add /sbin/initctl
sudo ln -sf /bin/true sbin/initctl
# see https://github.com/dotcloud/docker/issues/446#issuecomment-16953173
# shrink the image, since apt makes us fat (wheezy: ~157.5MB vs ~120MB)
sudo chroot . apt-get clean
# while we're at it, apt is unnecessarily slow inside containers
# this forces dpkg not to call sync() after package extraction and speeds up install
# the benefit is huge on spinning disks, and the penalty is nonexistent on SSD or decent server virtualization
echo 'force-unsafe-io' | sudo tee etc/dpkg/dpkg.cfg.d/02apt-speedup > /dev/null
# we want to effectively run "apt-get clean" after every install to keep images small
echo 'DPkg::Post-Invoke {"/bin/rm -f /var/cache/apt/archives/*.deb || true";};' | sudo tee etc/apt/apt.conf.d/no-cache > /dev/null
# helpful undo lines for each the above tweaks (for lack of a better home to keep track of them):
# rm /usr/sbin/policy-rc.d
# rm /sbin/initctl; dpkg-divert --rename --remove /sbin/initctl
# rm /etc/dpkg/dpkg.cfg.d/02apt-speedup
# rm /etc/apt/apt.conf.d/no-cache
if [ -z "$skipDetection" ]; then
# see also rudimentary platform detection in hack/install.sh
lsbDist=''
if [ -r etc/lsb-release ]; then
lsbDist="$(. etc/lsb-release && echo "$DISTRIB_ID")"
fi
if [ -z "$lsbDist" ] && [ -r etc/debian_version ]; then
lsbDist='Debian'
fi
case "$lsbDist" in
Debian)
# add the updates and security repositories
if [ "$suite" != "$debianUnstable" -a "$suite" != 'unstable' ]; then
# ${suite}-updates only applies to non-unstable
sudo sed -i "p; s/ $suite main$/ ${suite}-updates main/" etc/apt/sources.list
# same for security updates
echo "deb http://security.debian.org/ $suite/updates main" | sudo tee -a etc/apt/sources.list > /dev/null
fi
;;
Ubuntu)
# add the universe, updates, and security repositories
sudo sed -i "
s/ $suite main$/ $suite main universe/; p;
s/ $suite main/ ${suite}-updates main/; p;
s/ $suite-updates main/ ${suite}-security main/
" etc/apt/sources.list
;;
esac
fi
fi
if [ "$justTar" ]; then
# create the tarball file so it has the right permissions (ie, not root)
touch "$repo"
# fill the tarball
sudo tar --numeric-owner -caf "$repo" .
else
# create the image (and tag $repo:$suite)
sudo tar --numeric-owner -c . | $docker import - $repo $suite
# test the image
$docker run -i -t $repo:$suite echo success
if [ -z "$skipDetection" ]; then
case "$lsbDist" in
Debian)
if [ "$suite" = "$debianStable" -o "$suite" = 'stable' ] && [ -r etc/debian_version ]; then
# tag latest
$docker tag $repo:$suite $repo latest
if [ -r etc/debian_version ]; then
# tag the specific debian release version (which is only reasonable to tag on debian stable)
ver=$(cat etc/debian_version)
$docker tag $repo:$suite $repo $ver
fi
fi
;;
Ubuntu)
if [ "$suite" = "$ubuntuLatestLTS" ]; then
# tag latest
$docker tag $repo:$suite $repo latest
fi
if [ -r etc/lsb-release ]; then
lsbRelease="$(. etc/lsb-release && echo "$DISTRIB_RELEASE")"
if [ "$lsbRelease" ]; then
# tag specific Ubuntu version number, if available (12.04, etc.)
$docker tag $repo:$suite $repo $lsbRelease
fi
fi
;;
esac
fi
fi
# cleanup
cd "$returnTo"
sudo rm -rf "$target"


@@ -1,49 +0,0 @@
#!/bin/bash
# Generate a very minimal filesystem based on busybox-static,
# and load it into the local docker under the name "docker-ut".
missing_pkg() {
echo "Sorry, I could not locate $1"
echo "Try 'apt-get install ${2:-$1}'?"
exit 1
}
BUSYBOX=$(which busybox)
[ "$BUSYBOX" ] || missing_pkg busybox busybox-static
SOCAT=$(which socat)
[ "$SOCAT" ] || missing_pkg socat
shopt -s extglob
set -ex
ROOTFS=`mktemp -d /tmp/rootfs-busybox.XXXXXXXXXX`
trap "rm -rf $ROOTFS" INT QUIT TERM
cd $ROOTFS
mkdir bin etc dev dev/pts lib proc sys tmp
touch etc/resolv.conf
cp /etc/nsswitch.conf etc/nsswitch.conf
echo root:x:0:0:root:/:/bin/sh > etc/passwd
echo daemon:x:1:1:daemon:/usr/sbin:/bin/sh >> etc/passwd
echo root:x:0: > etc/group
echo daemon:x:1: >> etc/group
ln -s lib lib64
ln -s bin sbin
cp $BUSYBOX $SOCAT bin
for X in $(busybox --list)
do
ln -s busybox bin/$X
done
rm bin/init
ln bin/busybox bin/init
cp -P /lib/x86_64-linux-gnu/lib{pthread*,c*(-*),dl*(-*),nsl*(-*),nss_*,util*(-*),wrap,z}.so* lib
cp /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 lib
cp -P /usr/lib/x86_64-linux-gnu/lib{crypto,ssl}.so* lib
for X in console null ptmx random stdin stdout stderr tty urandom zero
do
cp -a /dev/$X dev
done
chmod 0755 $ROOTFS # See #486
tar --numeric-owner -cf- . | docker import - docker-ut
docker run -i -u root docker-ut /bin/echo Success.
rm -rf $ROOTFS


@@ -1,22 +0,0 @@
Copyright (c) 2013 Honza Pokorny
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -1,23 +0,0 @@
dockerfile.vim
==============
Syntax highlighting for Dockerfiles
Installation
------------
Via pathogen, the usual way...
Features
--------
The syntax highlighting includes:
* The directives (e.g. `FROM`)
* Strings
* Comments
License
-------
BSD, short and sweet


@@ -1,18 +0,0 @@
*dockerfile.txt* Syntax highlighting for Dockerfiles
Author: Honza Pokorny <http://honza.ca>
License: BSD
INSTALLATION *installation*
Drop it on your Pathogen path and you're all set.
FEATURES *features*
The syntax highlighting includes:
* The directives (e.g. FROM)
* Strings
* Comments
vim:tw=78:et:ft=help:norl:


@@ -1 +0,0 @@
au BufNewFile,BufRead Dockerfile set filetype=dockerfile

View File

@@ -1,24 +0,0 @@
" dockerfile.vim - Syntax highlighting for Dockerfiles
" Maintainer: Honza Pokorny <http://honza.ca>
" Version: 0.5
if exists("b:current_syntax")
finish
endif
let b:current_syntax = "dockerfile"
syntax case ignore
syntax match dockerfileKeyword /\v^\s*(FROM|MAINTAINER|RUN|CMD|EXPOSE|ENV|ADD)\s/
syntax match dockerfileKeyword /\v^\s*(ENTRYPOINT|VOLUME|USER|WORKDIR)\s/
highlight link dockerfileKeyword Keyword
syntax region dockerfileString start=/\v"/ skip=/\v\\./ end=/\v"/
highlight link dockerfileString String
syntax match dockerfileComment "\v^\s*#.*$"
highlight link dockerfileComment Comment
set commentstring=#\ %s


@@ -4,117 +4,57 @@ import (
"flag"
"fmt"
"github.com/dotcloud/docker"
"github.com/dotcloud/docker/sysinit"
"github.com/dotcloud/docker/utils"
"github.com/dotcloud/docker/rcli"
"github.com/dotcloud/docker/term"
"io"
"io/ioutil"
"log"
"net"
"os"
"os/signal"
"strconv"
"strings"
"syscall"
)
var (
GITCOMMIT string
VERSION string
GIT_COMMIT string
)
func main() {
if selfPath := utils.SelfPath(); selfPath == "/sbin/init" || selfPath == "/.dockerinit" {
if docker.SelfPath() == "/sbin/init" {
// Running in init mode
sysinit.SysInit()
docker.SysInit()
return
}
// FIXME: Switch d and D ? (to be more sshd like)
flVersion := flag.Bool("v", false, "Print version information and quit")
flDaemon := flag.Bool("d", false, "Daemon mode")
flDebug := flag.Bool("D", false, "Debug mode")
flAutoRestart := flag.Bool("r", true, "Restart previously running containers")
bridgeName := flag.String("b", "", "Attach containers to a pre-existing network bridge. Use 'none' to disable container networking")
bridgeName := flag.String("b", "", "Attach containers to a pre-existing network bridge")
pidfile := flag.String("p", "/var/run/docker.pid", "File containing process PID")
flGraphPath := flag.String("g", "/var/lib/docker", "Path to graph storage base dir.")
flEnableCors := flag.Bool("api-enable-cors", false, "Enable CORS requests in the remote api.")
flDns := flag.String("dns", "", "Set custom dns servers")
flHosts := utils.ListOpts{fmt.Sprintf("unix://%s", docker.DEFAULTUNIXSOCKET)}
flag.Var(&flHosts, "H", "tcp://host:port to bind/connect to or unix://path/to/socket to use")
flEnableIptables := flag.Bool("iptables", true, "Disable iptables within docker")
flDefaultIp := flag.String("ip", "0.0.0.0", "Default ip address to use when binding a containers ports")
flInterContainerComm := flag.Bool("icc", true, "Enable inter-container communication")
flag.Parse()
if *flVersion {
showVersion()
return
}
if len(flHosts) > 1 {
flHosts = flHosts[1:] //trick to display a nice default value in the usage
}
for i, flHost := range flHosts {
host, err := utils.ParseHost(docker.DEFAULTHTTPHOST, docker.DEFAULTHTTPPORT, flHost)
if err == nil {
flHosts[i] = host
} else {
log.Fatal(err)
}
}
bridge := docker.DefaultNetworkBridge
if *bridgeName != "" {
bridge = *bridgeName
docker.NetworkBridgeIface = *bridgeName
} else {
docker.NetworkBridgeIface = docker.DefaultNetworkBridge
}
if *flDebug {
os.Setenv("DEBUG", "1")
}
docker.GITCOMMIT = GITCOMMIT
docker.VERSION = VERSION
docker.GIT_COMMIT = GIT_COMMIT
if *flDaemon {
if flag.NArg() != 0 {
flag.Usage()
return
}
var dns []string
if *flDns != "" {
dns = []string{*flDns}
}
ip := net.ParseIP(*flDefaultIp)
config := &docker.DaemonConfig{
Pidfile: *pidfile,
GraphPath: *flGraphPath,
AutoRestart: *flAutoRestart,
EnableCors: *flEnableCors,
Dns: dns,
EnableIptables: *flEnableIptables,
BridgeIface: bridge,
ProtoAddresses: flHosts,
DefaultIp: ip,
InterContainerCommunication: *flInterContainerComm,
}
if err := daemon(config); err != nil {
if err := daemon(*pidfile); err != nil {
log.Fatal(err)
}
} else {
if len(flHosts) > 1 {
log.Fatal("Please specify only one -H")
}
protoAddrParts := strings.SplitN(flHosts[0], "://", 2)
if err := docker.ParseCommands(protoAddrParts[0], protoAddrParts[1], flag.Args()...); err != nil {
if sterr, ok := err.(*utils.StatusError); ok {
os.Exit(sterr.Status)
}
if err := runCommand(flag.Args()); err != nil {
log.Fatal(err)
}
}
}
func showVersion() {
fmt.Printf("Docker version %s, build %s\n", VERSION, GITCOMMIT)
}
func createPidFile(pidfile string) error {
if pidString, err := ioutil.ReadFile(pidfile); err == nil {
pid, err := strconv.Atoi(string(pidString))
@@ -142,51 +82,65 @@ func removePidFile(pidfile string) {
}
}
func daemon(config *docker.DaemonConfig) error {
if err := createPidFile(config.Pidfile); err != nil {
func daemon(pidfile string) error {
if err := createPidFile(pidfile); err != nil {
log.Fatal(err)
}
defer removePidFile(config.Pidfile)
server, err := docker.NewServer(config)
if err != nil {
return err
}
defer server.Close()
defer removePidFile(pidfile)
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt, os.Kill, os.Signal(syscall.SIGTERM))
go func() {
sig := <-c
log.Printf("Received signal '%v', exiting\n", sig)
server.Close()
removePidFile(config.Pidfile)
removePidFile(pidfile)
os.Exit(0)
}()
chErrors := make(chan error, len(config.ProtoAddresses))
for _, protoAddr := range config.ProtoAddresses {
protoAddrParts := strings.SplitN(protoAddr, "://", 2)
if protoAddrParts[0] == "unix" {
syscall.Unlink(protoAddrParts[1])
} else if protoAddrParts[0] == "tcp" {
if !strings.HasPrefix(protoAddrParts[1], "127.0.0.1") {
log.Println("/!\\ DON'T BIND ON ANOTHER IP ADDRESS THAN 127.0.0.1 IF YOU DON'T KNOW WHAT YOU'RE DOING /!\\")
}
} else {
server.Close()
removePidFile(config.Pidfile)
log.Fatal("Invalid protocol format.")
}
go func() {
chErrors <- docker.ListenAndServe(protoAddrParts[0], protoAddrParts[1], server, true)
}()
service, err := docker.NewServer()
if err != nil {
return err
}
for i := 0; i < len(config.ProtoAddresses); i += 1 {
err := <-chErrors
if err != nil {
return rcli.ListenAndServe("tcp", "127.0.0.1:4242", service)
}
func runCommand(args []string) error {
// FIXME: we want to use unix sockets here, but net.UnixConn doesn't expose
// CloseWrite(), which we need to cleanly signal that stdin is closed without
// closing the connection.
// See http://code.google.com/p/go/issues/detail?id=3345
if conn, err := rcli.Call("tcp", "127.0.0.1:4242", args...); err == nil {
options := conn.GetOptions()
if options.RawTerminal &&
term.IsTerminal(int(os.Stdin.Fd())) &&
os.Getenv("NORAW") == "" {
if oldState, err := rcli.SetRawTerminal(); err != nil {
return err
} else {
defer rcli.RestoreTerminal(oldState)
}
}
receiveStdout := docker.Go(func() error {
_, err := io.Copy(os.Stdout, conn)
return err
})
sendStdin := docker.Go(func() error {
_, err := io.Copy(conn, os.Stdin)
if err := conn.CloseWrite(); err != nil {
log.Printf("Couldn't send EOF: " + err.Error())
}
return err
})
if err := <-receiveStdout; err != nil {
return err
}
if !term.IsTerminal(int(os.Stdin.Fd())) {
if err := <-sendStdin; err != nil {
return err
}
}
} else {
return fmt.Errorf("Can't connect to docker daemon. Is 'docker -d' running on this host?")
}
return nil
}


@@ -1,16 +0,0 @@
package main
import (
"github.com/dotcloud/docker/sysinit"
)
var (
GITCOMMIT string
VERSION string
)
func main() {
// Running in init mode
sysinit.SysInit()
return
}


@@ -1,20 +0,0 @@
from ubuntu:12.04
maintainer Nick Stinemates
#
# docker build -t docker:docs . && docker run -p 8000:8000 docker:docs
#
run apt-get update
run apt-get install -y python-setuptools make
run easy_install pip
#from docs/requirements.txt, but here to increase cacheability
run pip install Sphinx==1.1.3
run pip install sphinxcontrib-httpdomain==1.1.8
add . /docs
run cd /docs; make docs
expose 8000
workdir /docs/_build/html
entrypoint ["python", "-m", "SimpleHTTPServer"]


@@ -1,2 +0,0 @@
Andy Rothfusz <andy@dotcloud.com> (@metalivedev)
Ken Cochrane <ken@dotcloud.com> (@kencochrane)


@@ -6,7 +6,6 @@ SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
PYTHON = python
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
@@ -39,37 +38,38 @@ help:
# @echo " linkcheck to check all external links for integrity"
# @echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " docs to build the docs and copy the static files to the outputdir"
@echo " server to serve the docs in your browser under \`http://localhost:8000\`"
@echo " publish to publish the app to dotcloud"
clean:
-rm -rf $(BUILDDIR)/*
docs:
-rm -rf $(BUILDDIR)/*
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/html
cp sources/index.html $(BUILDDIR)/html/
cp -r sources/gettingstarted $(BUILDDIR)/html/
cp sources/dotcloud.yml $(BUILDDIR)/html/
cp sources/CNAME $(BUILDDIR)/html/
cp sources/.nojekyll $(BUILDDIR)/html/
cp sources/nginx.conf $(BUILDDIR)/html/
@echo
@echo "Build finished. The documentation pages are now in $(BUILDDIR)/html."
server: docs
@cd $(BUILDDIR)/html; $(PYTHON) -m SimpleHTTPServer 8000
site:
cp -r website $(BUILDDIR)/
cp -r theme/docker/static/ $(BUILDDIR)/website/
@echo
@echo "The Website pages are in $(BUILDDIR)/site."
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
connect:
@echo connecting dotcloud to www.docker.io website, make sure to use user 1
@echo or create your own "dockerwebsite" app
@cd $(BUILDDIR)/website/ ; \
dotcloud connect dockerwebsite ; \
dotcloud list
@echo pushing changes to staging site
@cd _build/html/ ; \
@dotcloud list ; \
dotcloud connect dockerwebsite
push:
@cd $(BUILDDIR)/website/ ; \
@cd _build/html/ ; \
dotcloud push
github-deploy: docs
rm -fr github-deploy
git clone ssh://git@github.com/dotcloud/docker github-deploy
cd github-deploy && git checkout -f gh-pages && git rm -r * && rsync -avH ../_build/html/ ./ && touch .nojekyll && echo "docker.io" > CNAME && git add * && git commit -m "Updating docs"
$(VERSIONS):
@echo "Hello world"


@@ -1,103 +1,50 @@
Docker Documentation
====================
Docker documentation and website
================================
Overview
--------
Documentation
-------------
This is your definitive place to contribute to the docker documentation. The documentation is generated from the
.rst files under sources.
The source for Docker documentation is here under ``sources/`` in the
form of .rst files. These files use
[reStructuredText](http://docutils.sourceforge.net/rst.html)
formatting with [Sphinx](http://sphinx-doc.org/) extensions for
structure, cross-linking and indexing.
The folder also contains the other files to create the http://docker.io website, but you can generally ignore
most of those.
The HTML files are built and hosted on
[readthedocs.org](https://readthedocs.org/projects/docker/), appearing
via proxy on https://docs.docker.io. The HTML files update
automatically after each change to the master or release branch of the
[docker files on GitHub](https://github.com/dotcloud/docker) thanks to
post-commit hooks. The "release" branch maps to the "latest"
documentation and the "master" branch maps to the "master"
documentation.
**Warning**: The "master" documentation may include features not yet
part of any official docker release. "Master" docs should be used only
for understanding bleeding-edge development and "latest" should be
used for the latest official release.
Installation
------------
If you need to manually trigger a build of an existing branch, then
you can do that through the [readthedocs
interface](https://readthedocs.org/builds/docker/). If you would like
to add new build targets, including new branches or tags, then you
must contact one of the existing maintainers and get your
readthedocs.org account added to the maintainers list, or just file an
issue on GitHub describing the branch/tag and why it needs to be added
to the docs, and one of the maintainers will add it for you.
Getting Started
---------------
To edit and test the docs, you'll need to install the Sphinx tool and
its dependencies. There are two main ways to install this tool:
###Native Installation
* Install sphinx: `pip install sphinx`
* Mac OS X: `[sudo] pip-2.7 install sphinx`
* Install sphinx httpdomain contrib package: `pip install sphinxcontrib-httpdomain`
* Mac OS X: `[sudo] pip-2.7 install sphinxcontrib-httpdomain`
* Work in your own fork of the code, we accept pull requests.
* Install sphinx: ``pip install sphinx``
* If pip is not available you can probably install it using your favorite package manager as **python-pip**
###Alternative Installation: Docker Container
If you're running ``docker`` on your development machine then you may
find it easier and cleaner to use the Dockerfile. This installs Sphinx
in a container, adds the local ``docs/`` directory and builds the HTML
docs inside the container, even starting a simple HTTP server on port
8000 so that you can connect and see your changes. Just run ``docker
build .`` and run the resulting image. This is the equivalent to
``make clean server`` since each container starts clean.
In the ``docs/`` directory, run:
```docker build -t docker:docs . && docker run -p 8000:8000 docker:docs```
Usage
-----
* Follow the contribution guidelines (``../CONTRIBUTING.md``)
* Work in your own fork of the code, we accept pull requests.
* Change the ``.rst`` files with your favorite editor -- try to keep the
lines short and respect RST and Sphinx conventions.
* Run ``make clean docs`` to clean up old files and generate new ones,
or just ``make docs`` to update after small changes.
* Your static website can now be found in the ``_build`` directory.
* To preview what you have generated run ``make server`` and open
http://localhost:8000/ in your favorite browser.
* change the .rst files with your favorite editor to your liking
* run *make docs* to clean up old files and generate new ones
* your static website can now be found in the _build dir
* to preview what you have generated, cd into _build/html and then run 'python -m SimpleHTTPServer 8000'
``make clean docs`` must complete without any warnings or errors.
Working using GitHub's file editor
Working using github's file editor
----------------------------------
Alternatively, for small changes and typos you might want to use
GitHub's built in file editor. It allows you to preview your changes
right online (though there can be some differences between GitHub
markdown and Sphinx RST). Just be careful not to create many commits.
Alternatively, for small changes and typos you might want to use github's built-in file editor. It allows
you to preview your changes right online. Just be careful not to create many commits.
Images
------
When you need to add images, try to make them as small as possible (e.g. as gif).
When you need to add images, try to make them as small as possible
(e.g. as gif). Usually images should go in the same directory as the
.rst file which references them, or in a subdirectory if one already
exists.
Notes
-----
* The index.html and gettingstarted.html files are copied from the source dir to the output dir without modification.
So changes to those pages should be made directly in html
* For the template the css is compiled from less. When changes are needed they can be compiled using
lessc ``lessc main.less`` or watched using watch-lessc ``watch-lessc -i main.less -o main.css``
Guides on using sphinx
----------------------
* To make links to certain sections create a link target like so:
* To make links to certain pages create a link target like so:
```
.. _hello_world:
@@ -108,10 +55,7 @@ Guides on using sphinx
This is.. (etc.)
```
The ``_hello_world:`` will make it possible to link to this position
(page and section heading) from all other pages. See the [Sphinx
docs](http://sphinx-doc.org/markup/inline.html#role-ref) for more
information and examples.
The ``_hello_world:`` will make it possible to link to this position (page and marker) from all other pages.
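For example (editor's illustration), writing ``:ref:`hello_world``` in any other .rst file renders as a cross-reference link to that target.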
* Notes, warnings and alarms
@@ -127,17 +71,4 @@ Guides on using sphinx
* Code examples
* Start without $, so it's easy to copy and paste.
* Use "sudo" with docker to ensure that your command is runnable
even if they haven't [used the *docker*
group](http://docs.docker.io/en/latest/use/basics/#why-sudo).
Manpages
--------
* To make the manpages, run ``make man``. Please note there is a bug
in Sphinx 1.1.3 which makes this fail. Upgrade to the latest version
of Sphinx.
* Then preview the manpage by running ``man _build/man/docker.1``,
where ``_build/man/docker.1`` is the path to the generated manfile
Start without $, so it's easy to copy and paste.


@@ -1,2 +0,0 @@
Sphinx==1.1.3
sphinxcontrib-httpdomain==1.1.8

docs/sources/.nojekyll Normal file

docs/sources/CNAME Normal file

@@ -0,0 +1 @@
docker.io


@@ -1 +0,0 @@
Solomon Hykes <solomon@dotcloud.com> (@shykes)


@@ -1,5 +0,0 @@
This directory holds the authoritative specifications of APIs defined and implemented by Docker. Currently this includes:
* The remote API by which a docker node can be queried over HTTP
* The registry API by which a docker node can download and upload container images for storage and sharing
* The index search API by which a docker node can search the public index for images to download


@@ -1,232 +0,0 @@
:title: Remote API
:description: API Documentation for Docker
:keywords: API, Docker, rcli, REST, documentation
.. COMMENT use http://pythonhosted.org/sphinxcontrib-httpdomain/ to
.. document the REST API.
=================
Docker Remote API
=================
1. Brief introduction
=====================
- The Remote API is replacing rcli
- By default the Docker daemon listens on unix:///var/run/docker.sock and the client must have root access to interact with the daemon
- If a group named *docker* exists on your system, docker will apply ownership of the socket to the group
- The API tends to be REST, but for some complex commands, like attach
or pull, the HTTP connection is hijacked to transport stdout, stdin,
and stderr
- Since API version 1.2, the auth configuration is now handled client
side, so the client has to send the authConfig as POST in
/images/(name)/push
2. Versions
===========
The current version of the API is 1.6
Calling /images/<name>/insert is the same as calling
/v1.6/images/<name>/insert
You can still call an old version of the api using
/v1.0/images/<name>/insert
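For instance (editor's sketch, assuming a daemon reachable over tcp on 127.0.0.1:4243):

curl http://127.0.0.1:4243/version         # implicit: current API version
curl http://127.0.0.1:4243/v1.6/version    # explicitly pinned to v1.6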
v1.6
****
Full Documentation
------------------
:doc:`docker_remote_api_v1.6`
What's new
----------
.. http:post:: /containers/(id)/attach
**New!** You can now split stderr from stdout. This is done by prefixing
a header to each transmission. See :http:post:`/containers/(id)/attach`.
The WebSocket attach is unchanged.
Note that attach calls on the previous API version didn't change. Stdout and
stderr are merged.
v1.5
****
Full Documentation
------------------
:doc:`docker_remote_api_v1.5`
What's new
----------
.. http:post:: /images/create
**New!** You can now pass registry credentials (via an AuthConfig object)
through the `X-Registry-Auth` header
.. http:post:: /images/(name)/push
**New!** The AuthConfig object now needs to be passed through
the `X-Registry-Auth` header
.. http:get:: /containers/json
**New!** The format of the `Ports` entry has been changed to a list of
dicts each containing `PublicPort`, `PrivatePort` and `Type` describing a
port mapping.
v1.4
****
Full Documentation
------------------
:doc:`docker_remote_api_v1.4`
What's new
----------
.. http:post:: /images/create
**New!** When pulling a repo, all images are now downloaded in parallel.
.. http:get:: /containers/(id)/top
**New!** You can now use ps args with docker top, like `docker top <container_id> aux`
.. http:get:: /events:
**New!** Image's name added in the events
v1.3
****
docker v0.5.0 51f6c4a_
Full Documentation
------------------
:doc:`docker_remote_api_v1.3`
What's new
----------
.. http:get:: /containers/(id)/top
List the processes running inside a container.
.. http:get:: /events:
**New!** Monitor docker's events via streaming or via polling
Builder (/build):
- Simplify the upload of the build context
- Simply stream a tarball instead of multipart upload with 4
intermediary buffers
- Simpler, less memory usage, less disk usage and faster
.. Warning::
The /build improvements are not reverse-compatible. Pre 1.3 clients
will break on /build.
List containers (/containers/json):
- You can use size=1 to get the size of the containers
Start containers (/containers/<id>/start):
- You can now pass host-specific configuration (e.g. bind mounts) in
the POST body for start calls
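A minimal sketch of the simplified upload: the whole build context is streamed as one tarball in the request body. The daemon address and the tar content type used here are assumptions.
.. code-block:: bash
# Stream the current directory as the build context
tar -czf - . | curl -X POST --data-binary @- \
-H "Content-Type: application/tar" \
http://127.0.0.1:4243/v1.3/build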
v1.2
****
docker v0.4.2 2e7649b_
Full Documentation
------------------
:doc:`docker_remote_api_v1.2`
What's new
----------
The auth configuration is now handled by the client.
The client should send its authConfig as POST on each call of
/images/(name)/push
.. http:get:: /auth
**Deprecated.**
.. http:post:: /auth
Only checks the configuration but doesn't store it on the server
Deleting an image is now improved: it will only untag the image if it
has children, and remove all the untagged parents if it has any.
.. http:post:: /images/<name>/delete
Now returns a JSON structure with the list of images
deleted/untagged.
v1.1
****
docker v0.4.0 a8ae398_
Full Documentation
------------------
:doc:`docker_remote_api_v1.1`
What's new
----------
.. http:post:: /images/create
.. http:post:: /images/(name)/insert
.. http:post:: /images/(name)/push
Uses a JSON stream instead of HTTP hijack; it looks like this:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
{"status":"Pushing..."}
{"status":"Pushing", "progress":"1/? (n/a)"}
{"error":"Invalid..."}
...
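A sketch of watching that stream from the shell (the daemon address is an assumption; ``-N`` disables buffering so the JSON objects appear as they arrive):
.. code-block:: bash
curl -N -X POST "http://127.0.0.1:4243/v1.1/images/create?fromImage=base"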
v1.0
****
docker v0.3.4 8d73740_
Full Documentation
------------------
:doc:`docker_remote_api_v1.0`
What's new
----------
Initial version
.. _a8ae398: https://github.com/dotcloud/docker/commit/a8ae398bf52e97148ee7bd0d5868de2e15bd297f
.. _8d73740: https://github.com/dotcloud/docker/commit/8d73740343778651c09160cde9661f5f387b36f4
.. _2e7649b: https://github.com/dotcloud/docker/commit/2e7649beda7c820793bd46766cbc2cfeace7b168
.. _51f6c4a: https://github.com/dotcloud/docker/commit/51f6c4a7372450d164c61e0054daf0223ddbd909

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,18 +0,0 @@
:title: API Documentation
:description: docker documentation
:keywords: docker, ipa, documentation
APIs
====
Your programs and scripts can access Docker's functionality via these interfaces:
.. toctree::
:maxdepth: 3
registry_index_spec
registry_api
index_api
docker_remote_api
remote_api_client_libraries

View File

@@ -1,556 +0,0 @@
:title: Index API
:description: API Documentation for Docker Index
:keywords: API, Docker, index, REST, documentation
=================
Docker Index API
=================
1. Brief introduction
=====================
- This is the REST API for the Docker index
- Authorization is done with basic auth over SSL
- Not all commands require authentication, only those noted as such.
2. Endpoints
============
2.1 Repository
^^^^^^^^^^^^^^
Repositories
*************
User Repo
~~~~~~~~~
.. http:put:: /v1/repositories/(namespace)/(repo_name)/
Create a user repository with the given ``namespace`` and ``repo_name``.
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foo/bar/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
X-Docker-Token: true
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"}]
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
WWW-Authenticate: Token signature=123abc,repository="foo/bar",access=write
X-Docker-Token: signature=123abc,repository="foo/bar",access=write
X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
""
:statuscode 200: Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
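The same request as a curl sketch (the credentials are placeholders; the image id is the sample value used above):
.. code-block:: bash
curl -X PUT https://index.docker.io/v1/repositories/foo/bar/ \
-u myuser:mypassword \
-H "Content-Type: application/json" \
-H "X-Docker-Token: true" \
-d '[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"}]'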
.. http:delete:: /v1/repositories/(namespace)/(repo_name)/
Delete a user repository with the given ``namespace`` and ``repo_name``.
**Example Request**:
.. sourcecode:: http
DELETE /v1/repositories/foo/bar/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
X-Docker-Token: true
""
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 202
Vary: Accept
Content-Type: application/json
WWW-Authenticate: Token signature=123abc,repository="foo/bar",access=delete
X-Docker-Token: signature=123abc,repository="foo/bar",access=delete
X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
""
:statuscode 200: Deleted
:statuscode 202: Accepted
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
Library Repo
~~~~~~~~~~~~
.. http:put:: /v1/repositories/(repo_name)/
Create a library repository with the given ``repo_name``.
This is a restricted feature only available to docker admins.
When namespace is missing, it is assumed to be ``library``
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foobar/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
X-Docker-Token: true
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"}]
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
WWW-Authenticate: Token signature=123abc,repository="library/foobar",access=write
X-Docker-Token: signature=123abc,repository="foo/bar",access=write
X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
""
:statuscode 200: Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
.. http:delete:: /v1/repositories/(repo_name)/
Delete a library repository with the given ``repo_name``.
This is a restricted feature only available to docker admins.
When namespace is missing, it is assumed to be ``library``
**Example Request**:
.. sourcecode:: http
DELETE /v1/repositories/foobar/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
X-Docker-Token: true
""
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 202
Vary: Accept
Content-Type: application/json
WWW-Authenticate: Token signature=123abc,repository="library/foobar",access=delete
X-Docker-Token: signature=123abc,repository="foo/bar",access=delete
X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
""
:statuscode 200: Deleted
:statuscode 202: Accepted
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
Repository Images
*****************
User Repo Images
~~~~~~~~~~~~~~~~
.. http:put:: /v1/repositories/(namespace)/(repo_name)/images
Update the images for a user repo.
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foo/bar/images HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
"checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}]
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 204
Vary: Accept
Content-Type: application/json
""
:statuscode 204: Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active or permission denied
.. http:get:: /v1/repositories/(namespace)/(repo_name)/images
Get the images for a user repo.
**Example Request**:
.. sourcecode:: http
GET /v1/repositories/foo/bar/images HTTP/1.1
Host: index.docker.io
Accept: application/json
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
"checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"},
{"id": "ertwetewtwe38722009fe6857087b486531f9a779a0c1dfddgfgsdgdsgds",
"checksum": "34t23f23fc17e3ed29dae8f12c4f9e89cc6f0bsdfgfsdgdsgdsgerwgew"}]
:statuscode 200: OK
:statuscode 404: Not found
Library Repo Images
~~~~~~~~~~~~~~~~~~~
.. http:put:: /v1/repositories/(repo_name)/images
Update the images for a library repo.
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foobar/images HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
"checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}]
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 204
Vary: Accept
Content-Type: application/json
""
:statuscode 204: Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active or permission denied
.. http:get:: /v1/repositories/(repo_name)/images
Get the images for a library repo.
**Example Request**:
.. sourcecode:: http
GET /v1/repositories/foobar/images HTTP/1.1
Host: index.docker.io
Accept: application/json
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
"checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"},
{"id": "ertwetewtwe38722009fe6857087b486531f9a779a0c1dfddgfgsdgdsgds",
"checksum": "34t23f23fc17e3ed29dae8f12c4f9e89cc6f0bsdfgfsdgdsgdsgerwgew"}]
:statuscode 200: OK
:statuscode 404: Not found
Repository Authorization
************************
Library Repo
~~~~~~~~~~~~
.. http:put:: /v1/repositories/(repo_name)/auth
Authorize a token for a library repo.
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foobar/auth HTTP/1.1
Host: index.docker.io
Accept: application/json
Authorization: Token signature=123abc,repository="library/foobar",access=write
:parameter repo_name: the library name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
"OK"
:statuscode 200: OK
:statuscode 403: Permission denied
:statuscode 404: Not found
User Repo
~~~~~~~~~
.. http:put:: /v1/repositories/(namespace)/(repo_name)/auth
Authorize a token for a user repo.
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foo/bar/auth HTTP/1.1
Host: index.docker.io
Accept: application/json
Authorization: Token signature=123abc,repository="foo/bar",access=write
:parameter namespace: the namespace for the repo
:parameter repo_name: the name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
"OK"
:statuscode 200: OK
:statuscode 403: Permission denied
:statuscode 404: Not found
2.2 Users
^^^^^^^^^
User Login
**********
.. http:get:: /v1/users
If you want to check your login, you can try this endpoint
**Example Request**:
.. sourcecode:: http
GET /v1/users HTTP/1.1
Host: index.docker.io
Accept: application/json
Authorization: Basic akmklmasadalkm==
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Vary: Accept
Content-Type: application/json
OK
:statuscode 200: no error
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
User Register
*************
.. http:post:: /v1/users
Register a new account.
**Example request**:
.. sourcecode:: http
POST /v1/users HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
{"email": "sam@dotcloud.com",
"password": "toto42",
"username": "foobar"'}
:jsonparameter email: valid email address, that needs to be confirmed
:jsonparameter username: min 4 characters, max 30 characters, must match the regular expression [a-z0-9\_].
:jsonparameter password: min 5 characters
**Example Response**:
.. sourcecode:: http
HTTP/1.1 201 OK
Vary: Accept
Content-Type: application/json
"User Created"
:statuscode 201: User Created
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
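The same request as a curl sketch (the account details are placeholders):
.. code-block:: bash
curl -X POST https://index.docker.io/v1/users \
-H "Content-Type: application/json" \
-d '{"email": "sam@dotcloud.com", "password": "toto42", "username": "foobar"}'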
Update User
***********
.. http:put:: /v1/users/(username)/
Change the password or email address for a given user. If you pass in an
email, it will be added to your account; it will not replace the old one.
Passwords will be updated.
It is up to the client to verify that the password sent is the one the
user intended. A common approach is to have them type it twice.
**Example Request**:
.. sourcecode:: http
PUT /v1/users/fakeuser/ HTTP/1.1
Host: index.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Basic akmklmasadalkm==
{"email": "sam@dotcloud.com",
"password": "toto42"}
:parameter username: username for the person you want to update
**Example Response**:
.. sourcecode:: http
HTTP/1.1 204
Vary: Accept
Content-Type: application/json
""
:statuscode 204: User Updated
:statuscode 400: Errors (invalid json, missing or invalid fields, etc)
:statuscode 401: Unauthorized
:statuscode 403: Account is not Active
:statuscode 404: User not found
2.3 Search
^^^^^^^^^^
If you need to search the index, this is the endpoint you would use.
Search
******
.. http:get:: /v1/search
Search the Index given a search term. It accepts :http:method:`get` only.
**Example request**:
.. sourcecode:: http
GET /v1/search?q=search_term HTTP/1.1
Host: example.com
Accept: application/json
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Vary: Accept
Content-Type: application/json
{"query":"search_term",
"num_results": 3,
"results" : [
{"name": "ubuntu", "description": "An ubuntu image..."},
{"name": "centos", "description": "A centos image..."},
{"name": "fedora", "description": "A fedora image..."}
]
}
:query q: what you want to search for
:statuscode 200: no error
:statuscode 500: server error
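Since search is a plain unauthenticated GET, a one-line curl sketch is enough:
.. code-block:: bash
curl "https://index.docker.io/v1/search?q=ubuntu"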

View File

@@ -1,500 +0,0 @@
:title: Registry API
:description: API Documentation for Docker Registry
:keywords: API, Docker, index, registry, REST, documentation
===================
Docker Registry API
===================
1. Brief introduction
=====================
- This is the REST API for the Docker Registry
- It stores the images and the graph for a set of repositories
- It does not have user accounts data
- It has no notion of user accounts or authorization
- It delegates authentication and authorization to the Index Auth service using tokens
- It supports different storage backends (S3, cloud files, local FS)
- It doesnt have a local database
- It will be open-sourced at some point
We expect that there will be multiple registries out there. To help grasp the context, here are some examples of registries:
- **sponsor registry**: such a registry is provided by a third-party hosting infrastructure as a convenience for their customers and the docker community as a whole. Its costs are supported by the third party, but the management and operation of the registry are supported by dotCloud. It features read/write access, and delegates authentication and authorization to the Index.
- **mirror registry**: such a registry is provided by a third-party hosting infrastructure but is targeted at their customers only. Some mechanism (unspecified to date) ensures that public images are pulled from a sponsor registry to the mirror registry, to make sure that the customers of the third-party provider can "docker pull" those images locally.
- **vendor registry**: such a registry is provided by a software vendor, who wants to distribute docker images. It would be operated and managed by the vendor. Only users authorized by the vendor would be able to get write access. Some images would be public (accessible for anyone), others private (accessible only for authorized users). Authentication and authorization would be delegated to the Index. The goal of vendor registries is to let someone do "docker pull basho/riak1.3" and automatically pull from the vendor registry (instead of a sponsor registry); i.e. get all the convenience of a sponsor registry, while retaining control over the asset distribution.
- **private registry**: such a registry is located behind a firewall, or protected by an additional security layer (HTTP authorization, SSL client-side certificates, IP address authorization...). The registry is operated by a private entity, outside of dotCloud's control. It can optionally delegate additional authorization to the Index, but it is not mandatory.
.. note::
Mirror registries and private registries which do not use the Index don't even need to run the registry code. They can be implemented by any kind of transport implementing HTTP GET and PUT. Read-only registries can be powered by a simple static HTTP server.
.. note::
The latter implies that while HTTP is the protocol of choice for a registry, multiple schemes are possible (and in some cases, trivial):
- HTTP with GET (and PUT for read-write registries);
- local mount point;
- remote docker addressed through SSH.
The latter would only require two new commands in docker, e.g. "registryget" and "registryput", wrapping access to the local filesystem (and optionally doing consistency checks). Authentication and authorization are then delegated to SSH (e.g. with public keys).
2. Endpoints
============
2.1 Images
----------
Layer
*****
.. http:get:: /v1/images/(image_id)/layer
Get the image layer for a given ``image_id``.
**Example Request**:
.. sourcecode:: http
GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Token signature=123abc,repository="foo/bar",access=read
:parameter image_id: the id for the layer you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
X-Docker-Registry-Version: 0.6.0
Cookie: (Cookie provided by the Registry)
{layer binary data stream}
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Image not found
.. http:put:: /v1/images/(image_id)/layer
Put the image layer for a given ``image_id``.
**Example Request**:
.. sourcecode:: http
PUT /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer HTTP/1.1
Host: registry-1.docker.io
Transfer-Encoding: chunked
Authorization: Token signature=123abc,repository="foo/bar",access=write
{layer binary data stream}
:parameter image_id: the id for the layer you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
X-Docker-Registry-Version: 0.6.0
""
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Image not found
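A curl sketch of fetching and re-uploading a layer (the token and image id are the sample values used in this document):
.. code-block:: bash
# Download a layer to a local tarball
curl -o layer.tar \
-H 'Authorization: Token signature=123abc,repository="foo/bar",access=read' \
https://registry-1.docker.io/v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer
# Upload it back with write access
curl -X PUT --data-binary @layer.tar \
-H 'Authorization: Token signature=123abc,repository="foo/bar",access=write' \
https://registry-1.docker.io/v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer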
Image
*****
.. http:put:: /v1/images/(image_id)/json
Put the image JSON for a given ``image_id``.
**Example Request**:
.. sourcecode:: http
PUT /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
{
id: "088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c",
parent: "aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f",
created: "2013-04-30T17:46:10.843673+03:00",
container: "8305672a76cc5e3d168f97221106ced35a76ec7ddbb03209b0f0d96bf74f6ef7",
container_config: {
Hostname: "host-test",
User: "",
Memory: 0,
MemorySwap: 0,
AttachStdin: false,
AttachStdout: false,
AttachStderr: false,
PortSpecs: null,
Tty: false,
OpenStdin: false,
StdinOnce: false,
Env: null,
Cmd: [
"/bin/bash",
"-c",
"apt-get -q -yy -f install libevent-dev"
],
Dns: null,
Image: "imagename/blah",
Volumes: { },
VolumesFrom: ""
},
docker_version: "0.1.7"
}
:parameter image_id: the id for the layer you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
X-Docker-Registry-Version: 0.6.0
""
:statuscode 200: OK
:statuscode 401: Requires authorization
.. http:get:: /v1/images/(image_id)/json
Get the image JSON for a given ``image_id``.
**Example Request**:
.. sourcecode:: http
GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
:parameter image_id: the id for the layer you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
X-Docker-Registry-Version: 0.6.0
X-Docker-Size: 456789
X-Docker-Checksum: b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087
{
id: "088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c",
parent: "aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f",
created: "2013-04-30T17:46:10.843673+03:00",
container: "8305672a76cc5e3d168f97221106ced35a76ec7ddbb03209b0f0d96bf74f6ef7",
container_config: {
Hostname: "host-test",
User: "",
Memory: 0,
MemorySwap: 0,
AttachStdin: false,
AttachStdout: false,
AttachStderr: false,
PortSpecs: null,
Tty: false,
OpenStdin: false,
StdinOnce: false,
Env: null,
Cmd: [
"/bin/bash",
"-c",
"apt-get -q -yy -f install libevent-dev"
],
Dns: null,
Image: "imagename/blah",
Volumes: { },
VolumesFrom: ""
},
docker_version: "0.1.7"
}
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Image not found
Ancestry
********
.. http:get:: /v1/images/(image_id)/ancestry
Get the ancestry for an image, given an ``image_id``.
**Example Request**:
.. sourcecode:: http
GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/ancestry HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
:parameter image_id: the id for the layer you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
X-Docker-Registry-Version: 0.6.0
["088b4502f51920fbd9b7c503e87c7a2c05aa3adc3d35e79c031fa126b403200f",
"aeee63968d87c7da4a5cf5d2be6bee4e21bc226fd62273d180a49c96c62e4543",
"bfa4c5326bc764280b0863b46a4b20d940bc1897ef9c1dfec060604bdc383280",
"6ab5893c6927c15a15665191f2c6cf751f5056d8b95ceee32e43c5e8a3648544"]
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Image not found
2.2 Tags
--------
.. http:get:: /v1/repositories/(namespace)/(repository)/tags
Get all of the tags for the given repo.
**Example Request**:
.. sourcecode:: http
GET /v1/repositories/foo/bar/tags HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
X-Docker-Registry-Version: 0.6.0
Cookie: (Cookie provided by the Registry)
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
X-Docker-Registry-Version: 0.6.0
{
"latest": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
"0.1.1": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"
}
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Repository not found
.. http:get:: /v1/repositories/(namespace)/(repository)/tags/(tag)
Get a tag for the given repo.
**Example Request**:
.. sourcecode:: http
GET /v1/repositories/foo/bar/tags/latest HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
X-Docker-Registry-Version: 0.6.0
Cookie: (Cookie provided by the Registry)
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
:parameter tag: name of tag you want to get
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
X-Docker-Registry-Version: 0.6.0
"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Tag not found
.. http:delete:: /v1/repositories/(namespace)/(repository)/tags/(tag)
Delete the tag for the repo.
**Example Request**:
.. sourcecode:: http
DELETE /v1/repositories/foo/bar/tags/latest HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
:parameter tag: name of tag you want to delete
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
X-Docker-Registry-Version: 0.6.0
""
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Tag not found
.. http:put:: /v1/repositories/(namespace)/(repository)/tags/(tag)
Put a tag for the given repo.
**Example Request**:
.. sourcecode:: http
PUT /v1/repositories/foo/bar/tags/latest HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
:parameter tag: name of tag you want to add
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
X-Docker-Registry-Version: 0.6.0
""
:statuscode 200: OK
:statuscode 400: Invalid data
:statuscode 401: Requires authorization
:statuscode 404: Image not found
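A curl sketch of tagging an image and resolving the tag back (``cookies.txt`` is assumed to hold the session cookie handed out by the Registry; the image id is the sample value used above):
.. code-block:: bash
# Point the "latest" tag at an image id
curl -X PUT -b cookies.txt \
-H "Content-Type: application/json" \
-d '"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"' \
https://registry-1.docker.io/v1/repositories/foo/bar/tags/latest
# Resolve it back to the image id
curl -b cookies.txt https://registry-1.docker.io/v1/repositories/foo/bar/tags/latest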
2.3 Repositories
----------------
.. http:delete:: /v1/repositories/(namespace)/(repository)/
Delete a repository.
**Example Request**:
.. sourcecode:: http
DELETE /v1/repositories/foo/bar/ HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
Cookie: (Cookie provided by the Registry)
""
:parameter namespace: namespace for the repo
:parameter repository: name for the repo
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
X-Docker-Registry-Version: 0.6.0
""
:statuscode 200: OK
:statuscode 401: Requires authorization
:statuscode 404: Repository not found
2.4 Status
----------
.. http:get:: /v1/_ping
Check status of the registry. This endpoint is also used to determine if
the registry supports SSL.
**Example Request**:
.. sourcecode:: http
GET /v1/_ping HTTP/1.1
Host: registry-1.docker.io
Accept: application/json
Content-Type: application/json
""
**Example Response**:
.. sourcecode:: http
HTTP/1.1 200
Vary: Accept
Content-Type: application/json
X-Docker-Registry-Version: 0.6.0
""
:statuscode 200: OK
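From the shell, a one-liner is enough (``-i`` prints the headers so you can read X-Docker-Registry-Version):
.. code-block:: bash
curl -i https://registry-1.docker.io/v1/_ping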
3 Authorization
===============
This is where we describe the authorization process, including the tokens and cookies.
TODO: add more info.

View File

@@ -1,569 +0,0 @@
:title: Registry Documentation
:description: Documentation for docker Registry and Registry API
:keywords: docker, registry, api, index
.. _registryindexspec:
=====================
Registry & Index Spec
=====================
1. The 3 roles
===============
1.1 Index
---------
The Index is responsible for centralizing information about:
- User accounts
- Checksums of the images
- Public namespaces
The Index has different components:
- Web UI
- Meta-data store (comments, stars, list public repositories)
- Authentication service
- Tokenization
The index is authoritative for this information.
We expect that there will be only one instance of the index, run and managed by dotCloud.
1.2 Registry
------------
- It stores the images and the graph for a set of repositories
- It does not have user accounts data
- It has no notion of user accounts or authorization
- It delegates authentication and authorization to the Index Auth service using tokens
- It supports different storage backends (S3, cloud files, local FS)
- It doesnt have a local database
- It will be open-sourced at some point
We expect that there will be multiple registries out there. To help grasp the context, here are some examples of registries:
- **sponsor registry**: such a registry is provided by a third-party hosting infrastructure as a convenience for their customers and the docker community as a whole. Its costs are supported by the third party, but the management and operation of the registry are supported by dotCloud. It features read/write access, and delegates authentication and authorization to the Index.
- **mirror registry**: such a registry is provided by a third-party hosting infrastructure but is targeted at their customers only. Some mechanism (unspecified to date) ensures that public images are pulled from a sponsor registry to the mirror registry, to make sure that the customers of the third-party provider can "docker pull" those images locally.
- **vendor registry**: such a registry is provided by a software vendor, who wants to distribute docker images. It would be operated and managed by the vendor. Only users authorized by the vendor would be able to get write access. Some images would be public (accessible for anyone), others private (accessible only for authorized users). Authentication and authorization would be delegated to the Index. The goal of vendor registries is to let someone do "docker pull basho/riak1.3" and automatically pull from the vendor registry (instead of a sponsor registry); i.e. get all the convenience of a sponsor registry, while retaining control over the asset distribution.
- **private registry**: such a registry is located behind a firewall, or protected by an additional security layer (HTTP authorization, SSL client-side certificates, IP address authorization...). The registry is operated by a private entity, outside of dotCloud's control. It can optionally delegate additional authorization to the Index, but it is not mandatory.
.. note::
Mirror registries and private registries which do not use the Index don't even need to run the registry code. They can be implemented by any kind of transport implementing HTTP GET and PUT. Read-only registries can be powered by a simple static HTTP server.
.. note::
The latter implies that while HTTP is the protocol of choice for a registry, multiple schemes are possible (and in some cases, trivial):
- HTTP with GET (and PUT for read-write registries);
- local mount point;
- remote docker addressed through SSH.
The latter would only require two new commands in docker, e.g. "registryget" and "registryput", wrapping access to the local filesystem (and optionally doing consistency checks). Authentication and authorization are then delegated to SSH (e.g. with public keys).
1.3 Docker
----------
On top of being a runtime for LXC, Docker is the Registry client. It supports:
- Push / Pull on the registry
- Client authentication on the Index
2. Workflow
===========
2.1 Pull
--------
.. image:: /static_files/docker_pull_chart.png
1. Contact the Index to know where I should download "samalba/busybox"
2. Index replies:
a. "samalba/busybox" is on Registry A
b. here are the checksums for "samalba/busybox" (for all layers)
c. token
3. Contact Registry A to receive the layers for "samalba/busybox" (all of them to the base image). Registry A is authoritative for "samalba/busybox" but keeps a copy of all inherited layers and serves them all from the same location.
4. The registry contacts the index to verify if the token/user is allowed to download images
5. Index returns true/false letting the registry know if it should proceed or error out
6. Get the payload for all layers
It's possible to run docker pull \https://<registry>/repositories/samalba/busybox. In this case, docker bypasses the Index. However, the security is not guaranteed (in case Registry A is corrupted) because there won't be any checksum checks.
Currently the registry redirects to S3 URLs for downloads; going forward, all downloads need to be streamed through the registry. The Registry will then abstract the calls to S3 by a top-level class which implements sub-classes for S3 and local storage.
Token is only returned when the 'X-Docker-Token' header is sent with the request.
Basic Auth is required to pull private repos. Basic auth isn't required for pulling public repos, but if credentials are provided, they need to be valid and for an active account.
API (pulling repository foo/bar):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. (Docker -> Index) GET /v1/repositories/foo/bar/images
**Headers**:
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
X-Docker-Token: true
**Action**:
(look up foo/bar in the db and get the images and checksums for that repo: all if no tag is specified; if a tag is given, only the checksums for those tags; see part 4.4.1)
2. (Index -> Docker) HTTP 200 OK
**Headers**:
- Authorization: Token signature=123abc,repository="foo/bar",access=write
- X-Docker-Endpoints: registry.docker.io [, registry2.docker.io]
**Body**:
Jsonified checksums (see part 4.4.1)
3. (Docker -> Registry) GET /v1/repositories/foo/bar/tags/latest
**Headers**:
Authorization: Token signature=123abc,repository="foo/bar",access=write
4. (Registry -> Index) GET /v1/repositories/foo/bar/images
**Headers**:
Authorization: Token signature=123abc,repository="foo/bar",access=read
**Body**:
<ids and checksums in payload>
**Action**:
(Look up the token and see if they have access to pull.)
If good:
HTTP 200 OK
Index will invalidate the token
If bad:
HTTP 401 Unauthorized
5. (Docker -> Registry) GET /v1/images/928374982374/ancestry
**Action**:
(for each image id returned in the registry, fetch /json + /layer)
.. note::
If someone makes a second request, then we will always give a new token, never reuse tokens.
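The handshake above, sketched with curl. The credentials are placeholders, and for brevity the token and endpoint are copied by hand from the response headers instead of being parsed:
.. code-block:: bash
# Steps 1-2: ask the Index for the images and a token (-i shows the response headers)
curl -i -u myuser:mypassword -H "X-Docker-Token: true" \
https://index.docker.io/v1/repositories/foo/bar/images
# Step 3: replay the token against an endpoint from X-Docker-Endpoints
TOKEN='signature=123abc,repository="foo/bar",access=read'
curl -H "Authorization: Token $TOKEN" \
https://registry.docker.io/v1/repositories/foo/bar/tags/latest
# Step 5: walk /ancestry for the returned id, then fetch /json and /layer for each image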
2.2 Push
--------
.. image:: /static_files/docker_push_chart.png
1. Contact the index to allocate the repository name "samalba/busybox" (authentication required with user credentials)
2. If authentication works and the namespace is available, "samalba/busybox" is allocated and a temporary token is returned (namespace is marked as initialized in index)
3. Push the image on the registry (along with the token)
4. Registry A contacts the Index to verify the token (token must correspond to the repository name)
5. Index validates the token. Registry A starts reading the stream pushed by docker and stores the repository (with its images)
6. docker contacts the index to give the checksums for the uploaded images
.. note::
**It's possible not to use the Index at all!** In this case, a standalone Registry is deployed to store and serve images. Those images are not authenticated and the security is not guaranteed.
.. note::
**Index can be replaced!** For a private Registry deployment, a custom Index can be used to serve and validate tokens according to different policies.
Docker computes the checksums and submits them to the Index at the end of the push. When a repository name does not have checksums on the Index, it means that the push is in progress (since checksums are submitted at the end).
API (pushing repos foo/bar):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. (Docker -> Index) PUT /v1/repositories/foo/bar/
**Headers**:
Authorization: Basic sdkjfskdjfhsdkjfh==
X-Docker-Token: true
**Action**::
- in index, we allocate a new repository and set it to initialized
**Body**::
(The body contains the list of images that are going to be pushed, with empty checksums. The checksums will be set at the end of the push)::
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"}]
2. (Index -> Docker) 200 Created
**Headers**:
- WWW-Authenticate: Token signature=123abc,repository="foo/bar",access=write
- X-Docker-Endpoints: registry.docker.io [, registry2.docker.io]
3. (Docker -> Registry) PUT /v1/images/98765432_parent/json
**Headers**:
Authorization: Token signature=123abc,repository="foo/bar",access=write
4. (Registry->Index) GET /v1/repositories/foo/bar/images
**Headers**:
Authorization: Token signature=123abc,repository="foo/bar",access=write
**Action**::
- Index:
will invalidate the token.
- Registry:
grants a session (if token is approved) and fetches the images id
5. (Docker -> Registry) PUT /v1/images/98765432_parent/json
**Headers**::
- Authorization: Token signature=123abc,repository="foo/bar",access=write
- Cookie: (Cookie provided by the Registry)
6. (Docker -> Registry) PUT /v1/images/98765432/json
**Headers**:
Cookie: (Cookie provided by the Registry)
7. (Docker -> Registry) PUT /v1/images/98765432_parent/layer
**Headers**:
Cookie: (Cookie provided by the Registry)
8. (Docker -> Registry) PUT /v1/images/98765432/layer
**Headers**:
X-Docker-Checksum: sha256:436745873465fdjkhdfjkgh
9. (Docker -> Registry) PUT /v1/repositories/foo/bar/tags/latest
**Headers**:
Cookie: (Cookie provided by the Registry)
**Body**:
"98765432"
10. (Docker -> Index) PUT /v1/repositories/foo/bar/images
**Headers**:
Authorization: Basic 123oislifjsldfj==
X-Docker-Endpoints: registry1.docker.io (no validation on this right now)
**Body**:
(The image, ids, tags and checksums)
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
"checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}]
**Return** HTTP 204
.. note::
If the push fails and they need to start again, there will already be a record for the namespace/name in the index, but it will be marked initialized. Should we allow it, or mark the name as already used? One edge case could be if someone pushes the same thing at the same time with two different shells.
If it's a retry on the Registry, Docker has a cookie (provided by the registry after token validation), so the Index won't have to provide a new token.
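The first step of the push, sketched with curl (credentials are placeholders; the image id is the sample value used above):
.. code-block:: bash
# Step 1: allocate the repository on the Index and request a token
curl -i -X PUT -u myuser:mypassword \
-H "Content-Type: application/json" \
-H "X-Docker-Token: true" \
-d '[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"}]' \
https://index.docker.io/v1/repositories/foo/bar/
# Steps 3-9 replay the returned token against one of the X-Docker-Endpoints,
# keeping the session cookie the Registry hands back (curl -c/-b) for the uploads.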
2.3 Delete
----------
If you need to delete something from the index or registry, we need a nice clean way to do that. Here is the workflow.
1. Docker contacts the index to request a delete of a repository "samalba/busybox" (authentication required with user credentials)
2. If authentication works and the repository is valid, "samalba/busybox" is marked as deleted and a temporary token is returned
3. Send a delete request to the registry for the repository (along with the token)
4. Registry A contacts the Index to verify the token (token must correspond to the repository name)
5. Index validates the token. Registry A deletes the repository and everything associated with it.
6. docker contacts the index to let it know it was removed from the registry; the index then removes all records from the database.
.. note::
The Docker client should present an "Are you sure?" prompt to confirm the deletion before starting the process. Once it starts it can't be undone.
API (deleting repository foo/bar):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. (Docker -> Index) DELETE /v1/repositories/foo/bar/
**Headers**:
Authorization: Basic sdkjfskdjfhsdkjfh==
X-Docker-Token: true
**Action**::
- in index, we make sure it is a valid repository and set it to deleted (logically)
**Body**::
Empty
2. (Index -> Docker) 202 Accepted
**Headers**:
- WWW-Authenticate: Token signature=123abc,repository="foo/bar",access=delete
- X-Docker-Endpoints: registry.docker.io [, registry2.docker.io] # list of endpoints where this repo lives.
3. (Docker -> Registry) DELETE /v1/repositories/foo/bar/
**Headers**:
Authorization: Token signature=123abc,repository="foo/bar",access=delete
4. (Registry->Index) PUT /v1/repositories/foo/bar/auth
**Headers**:
Authorization: Token signature=123abc,repository="foo/bar",access=delete
**Action**::
- Index:
will invalidate the token.
- Registry:
deletes the repository (if token is approved)
5. (Registry -> Docker) 200 OK
200 If success
403 if forbidden
400 if bad request
404 if repository isn't found
6. (Docker -> Index) DELETE /v1/repositories/foo/bar/
**Headers**:
Authorization: Basic 123oislifjsldfj==
X-Docker-Endpoints: registry-1.docker.io (no validation on this right now)
**Body**:
Empty
**Return** HTTP 200
3. How to use the Registry in standalone mode
=============================================
The Index has two main purposes (along with its fancy social features):
- Resolve short names (to avoid passing absolute URLs all the time)
- username/projectname -> \https://registry.docker.io/users/<username>/repositories/<projectname>/
- team/projectname -> \https://registry.docker.io/team/<team>/repositories/<projectname>/
- Authenticate a user as a repos owner (for a central referenced repository)
3.1 Without an Index
--------------------
Using the Registry without the Index can be useful to store the images on a private network without having to rely on an external entity controlled by dotCloud.
In this case, the registry will be launched in a special mode (--standalone? --no-index?). In this mode, the only thing which changes is that the Registry will never contact the Index to verify a token. It will be the Registry owner's responsibility to authenticate the user who pushes (or even pulls) an image, using any mechanism (HTTP auth, IP based, etc...).
In this scenario, the Registry is responsible for the security in case of data corruption since the checksums are not delivered by a trusted entity.
As hinted previously, a standalone registry can also be implemented by any HTTP server handling GET/PUT requests (or even only GET requests if no write access is necessary).
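For the read-only case the sketch really is that small; for instance, assuming the registry data lives under /var/lib/docker-registry:
.. code-block:: bash
# Serve a read-only registry as plain static files over HTTP
cd /var/lib/docker-registry && python -m SimpleHTTPServer 5000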
3.2 With an Index
-----------------
The Index data needed by the Registry are simple:
- Serve the checksums
- Provide and authorize a Token
In the scenario of a Registry running on a private network with the need of centralizing and authorizing, it's easy to use a custom Index.
The only challenge will be to tell Docker to contact (and trust) this custom Index. Docker will be configurable at some point to use a specific Index; it'll be the private entity's responsibility (basically the organization using Docker in a private environment) to maintain the Index and the Docker configuration among its consumers.
4. The API
==========
The first version of the API is available here: https://github.com/jpetazzo/docker/blob/acd51ecea8f5d3c02b00a08176171c59442df8b3/docs/images-repositories-push-pull.md
4.1 Images
----------
The format returned in the images is not defined here (for layer and json), basically because Registry stores exactly the same kind of information as Docker uses to manage them.
The format of ancestry is a line-separated list of image ids, in age order, i.e. the image's parent is on the last line, the parent of the parent on the next-to-last line, etc.; if the image has no parent, the file is empty.
GET /v1/images/<image_id>/layer
PUT /v1/images/<image_id>/layer
GET /v1/images/<image_id>/json
PUT /v1/images/<image_id>/json
GET /v1/images/<image_id>/ancestry
PUT /v1/images/<image_id>/ancestry
4.2 Users
---------
4.2.1 Create a user (Index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
POST /v1/users
**Body**:
{"email": "sam@dotcloud.com", "password": "toto42", "username": "foobar"'}
**Validation**:
- **username**: min 4 characters, max 30 characters, must match the regular
expression [a-z0-9\_].
- **password**: min 5 characters
**Valid**: return HTTP 200
Errors: HTTP 400 (we should create error codes for possible errors)
- invalid json
- missing field
- wrong format (username, password, email, etc)
- forbidden name
- name already exists
.. note::
A user account will be valid only if the email has been validated (a validation link is sent to the email address).
4.2.2 Update a user (Index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
PUT /v1/users/<username>
**Body**:
{"password": "toto"}
.. note::
The email address can also be updated; if it is, the user will need to reverify the new email address.
4.2.3 Login (Index)
^^^^^^^^^^^^^^^^^^^
Does nothing but ask for user authentication. Can be used to validate credentials. HTTP Basic Auth for now; this may change in the future.
GET /v1/users
**Return**:
- Valid: HTTP 200
- Invalid login: HTTP 401
- Account inactive: HTTP 403 Account is not Active
4.3 Tags (Registry)
-------------------
The Registry does not know anything about users. Even though repositories are under usernames, it's just a namespace for the registry. This allows us to implement organizations or different namespaces per user later, without modifying the Registry's API.
The following naming restrictions apply:
- Namespaces must match the same regular expression as usernames (See 4.2.1.)
- Repository names must match the regular expression [a-zA-Z0-9-_.]
4.3.1 Get all tags
^^^^^^^^^^^^^^^^^^
GET /v1/repositories/<namespace>/<repository_name>/tags
**Return**: HTTP 200
{
"latest": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
"0.1.1": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"
}
4.3.2 Read the content of a tag (resolve the image id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
GET /v1/repositories/<namespace>/<repo_name>/tags/<tag>
**Return**:
"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"
4.3.3 Delete a tag (registry)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
DELETE /v1/repositories/<namespace>/<repo_name>/tags/<tag>
4.4 Images (Index)
------------------
For the Index to "resolve" the repository name to a Registry location, it uses the X-Docker-Endpoints header. In other words, these requests always add an "X-Docker-Endpoints" header to indicate the location of the registry which hosts this repository.
4.4.1 Get the images
^^^^^^^^^^^^^^^^^^^^^
GET /v1/repositories/<namespace>/<repo_name>/images
**Return**: HTTP 200
[{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f", "checksum": "md5:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}]
4.4.2 Add/update the images
^^^^^^^^^^^^^^^^^^^^^^^^^^^
You always add images; you never remove them.
PUT /v1/repositories/<namespace>/<repo_name>/images
**Body**:
[ {"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f", "checksum": "sha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"} ]
**Return** 204
4.5 Repositories
----------------
4.5.1 Remove a Repository (Registry)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
DELETE /v1/repositories/<namespace>/<repo_name>
Return 200 OK
4.5.2 Remove a Repository (Index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This starts the delete process. See 2.3 for more details.
DELETE /v1/repositories/<namespace>/<repo_name>
Return 202 OK
5. Chaining Registries
======================
Its possible to chain Registries server for several reasons:
- Load balancing
- Delegate the next request to another server
When a Registry is a reference for a repository, it should host the entire image chain in order to avoid breaking the chain during the download.
The Index and Registry use this mechanism to redirect on one or the other.
Example with an image download:
On every request, a special header can be returned:
X-Docker-Endpoints: server1,server2
On the next request, the client will always pick a server from this list.
6. Authentication & Authorization
=================================
6.1 On the Index
-----------------
The Index supports both "Basic" and "Token" challenges. Usually when there is a "401 Unauthorized", the Index replies with this::
401 Unauthorized
WWW-Authenticate: Basic realm="auth required",Token
You have 3 options:
1. Provide user credentials and ask for a token
**Header**:
- Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
- X-Docker-Token: true
In this case, along with the 200 response, youll get a new token (if user auth is ok):
If authorization isn't correct you get a 401 response.
If account isn't active you will get a 403 response.
**Response**:
- 200 OK
- X-Docker-Token: Token signature=123abc,repository="foo/bar",access=read
2. Provide user credentials only
**Header**:
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
3. Provide Token
**Header**:
Authorization: Token signature=123abc,repository="foo/bar",access=read
6.2 On the Registry
-------------------
The Registry only supports the Token challenge::
401 Unauthorized
WWW-Authenticate: Token
The only way is to provide a token on “401 Unauthorized” responses::
Authorization: Token signature=123abc,repository="foo/bar",access=read
Usually, the Registry provides a Cookie when a Token verification succeeds. Every time the Registry passes a Cookie, you have to pass the same cookie back::
200 OK
Set-Cookie: session="wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4=&timestamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4="; Path=/; HttpOnly
Next request::
GET /(...)
Cookie: session="wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4=&timestamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4="
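With curl, this cookie handling is a sketch with two flags (the token is the sample value used in this document):
.. code-block:: bash
IMAGE_ID=088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c
# Store the session cookie handed out after token verification...
curl -c cookies.txt \
-H 'Authorization: Token signature=123abc,repository="foo/bar",access=read' \
https://registry-1.docker.io/v1/images/$IMAGE_ID/json
# ...and replay it on the next request
curl -b cookies.txt https://registry-1.docker.io/v1/images/$IMAGE_ID/ancestry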
7 Document Version
====================
- 1.0 : May 6th 2013 : initial release
- 1.1 : June 1st 2013 : Added Delete Repository and way to handle new source namespace.

View File

@@ -1,37 +0,0 @@
:title: Registry API
:description: Various client libraries available to use with the Docker remote API
:keywords: API, Docker, index, registry, REST, documentation, clients, Python, Ruby, Javascript, Erlang, Go
==================================
Docker Remote API Client Libraries
==================================
These libraries have not been tested by the Docker Maintainers for
compatibility. Please file issues with the library owners. If you
find more library implementations, please list them in Docker doc bugs
and we will add the libraries here.
+----------------------+----------------+--------------------------------------------+
| Language/Framework | Name | Repository |
+======================+================+============================================+
| Python | docker-py | https://github.com/dotcloud/docker-py |
+----------------------+----------------+--------------------------------------------+
| Ruby | docker-client | https://github.com/geku/docker-client |
+----------------------+----------------+--------------------------------------------+
| Ruby | docker-api | https://github.com/swipely/docker-api |
+----------------------+----------------+--------------------------------------------+
| Javascript (NodeJS) | docker.io | https://github.com/appersonlabs/docker.io |
| | | Install via NPM: `npm install docker.io` |
+----------------------+----------------+--------------------------------------------+
| Javascript | docker-js | https://github.com/dgoujard/docker-js |
+----------------------+----------------+--------------------------------------------+
| Javascript (Angular) | dockerui | https://github.com/crosbymichael/dockerui |
| **WebUI** | | |
+----------------------+----------------+--------------------------------------------+
| Java | docker-java | https://github.com/kpelykh/docker-java |
+----------------------+----------------+--------------------------------------------+
| Erlang | erldocker | https://github.com/proger/erldocker |
+----------------------+----------------+--------------------------------------------+
| Go | go-dockerclient| https://github.com/fsouza/go-dockerclient |
+----------------------+----------------+--------------------------------------------+

View File

@@ -0,0 +1,91 @@
==============
Docker Builder
==============
.. contents:: Table of Contents
1. Format
=========
The Docker builder format is quite simple:
``instruction arguments``
The first instruction must be `FROM`
All instructions are to be placed in a file named `Dockerfile`
In order to place comments within a Dockerfile, simply prefix the line with "`#`"
2. Instructions
===============
Docker builder comes with a set of instructions:
1. FROM: Set which image to build from
2. RUN: Execute a command
3. INSERT: Insert a remote file (http) into the image
2.1 FROM
--------
``FROM <image>``
The `FROM` instruction must be the first one in order for the Builder to know which image to run commands from.
`FROM` can also be used to build multiple images within a single Dockerfile
2.2 RUN
-------
``RUN <command>``
The `RUN` instruction is the main one: it allows you to execute any command on the `FROM` image and save the result.
You can use as many `RUN` instructions as you want within a Dockerfile; each command is executed on the result of the previous one.
2.3 INSERT
----------
``INSERT <file url> <path>``
The `INSERT` instruction will download the file at the given URL and place it within the image at the given path.
.. note::
The path must include the file name.
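Once your Dockerfile is ready, feeding it to the builder is a one-liner. This is a sketch, assuming the `docker build` command of this revision reads the Dockerfile on its standard input::
# Build an image from the Dockerfile in the current directory
docker build < Dockerfile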
3. Dockerfile Examples
======================
::
# Nginx
#
# VERSION 0.0.1
# DOCKER-VERSION 0.2
from ubuntu
# make sure the package repository is up to date
run echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
run apt-get update
run apt-get install -y inotify-tools nginx apache2 openssh-server
insert https://raw.github.com/creack/docker-vps/master/nginx-wrapper.sh /usr/sbin/nginx-wrapper
::
# Firefox over VNC
#
# VERSION 0.3
# DOCKER-VERSION 0.2
from ubuntu
# make sure the package repository is up to date
run echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
run apt-get update
# Install vnc, xvfb in order to create a 'fake' display and firefox
run apt-get install -y x11vnc xvfb firefox
run mkdir /.vnc
# Setup a password
run x11vnc -storepasswd 1234 /.vnc/passwd
# Autostart firefox (might not be the best way to do it, but it does the trick)
run bash -c 'echo "firefox" >> /.bashrc'

View File

@@ -0,0 +1,14 @@
:title: docker documentation
:description: Documentation for docker builder
:keywords: docker, builder, dockerfile
Builder
=======
Contents:
.. toctree::
:maxdepth: 2
basics

View File

@@ -0,0 +1,101 @@
:title: Base commands
:description: Common usage and commands
:keywords: Examples, Usage
The basics
=============
Starting Docker
---------------
If you have used one of the quick install paths, Docker may have been installed with upstart, Ubuntu's
system for starting processes at boot time. You should be able to run ``docker help`` and get output.
If you get ``docker: command not found`` or something like ``/var/lib/docker/repositories: permission denied``
you will need to specify the path to it and manually start it.
.. code-block:: bash
# Run docker in daemon mode
sudo <path to>/docker -d &
Running an interactive shell
----------------------------
.. code-block:: bash
# Download a base image
docker pull base
# Run an interactive shell in the base image,
# allocate a tty, attach stdin and stdout
docker run -i -t base /bin/bash
Starting a long-running worker process
--------------------------------------
.. code-block:: bash
# Start a very useful long-running process
JOB=$(docker run -d base /bin/sh -c "while true; do echo Hello world; sleep 1; done")
# Collect the output of the job so far
docker logs $JOB
# Kill the job
docker kill $JOB
Listing all running containers
------------------------------
.. code-block:: bash
docker ps
Expose a service on a TCP port
------------------------------
.. code-block:: bash
# Expose port 4444 of this container, and tell netcat to listen on it
JOB=$(docker run -d -p 4444 base /bin/nc -l -p 4444)
# Which public port is NATed to my container?
PORT=$(docker port $JOB 4444)
# Connect to the public port via the host's public address
# Please note that because of how routing works, connecting to localhost or 127.0.0.1 on $PORT will not work.
IP=$(ifconfig eth0 | perl -n -e 'if (m/inet addr:([\d\.]+)/g) { print $1 }')
echo hello world | nc $IP $PORT
# Verify that the network connection worked
echo "Daemon received: $(docker logs $JOB)"
Committing (saving) an image
-----------------------------
Save your container's state to a container image, so the state can be re-used.
When you commit your container, only the differences between the image the container was created from
and the current state of the container will be stored (as a diff). See which images you already have
using ``docker images``
.. code-block:: bash
# Commit your container to a new named image
docker commit <container_id> <some_name>
# List your containers
docker images
You now have an image state from which you can create new instances.
Read more about :ref:`working_with_the_repository` or continue to the complete :ref:`cli`

Some files were not shown because too many files have changed in this diff