The goal is to make it clearer that this will give you the container ID after the run completes.
Since stdout is now standard on run, "docker run -d" is the best (or only) way to get the container ID back from docker after a plain run, but the description (help) does not hint at any such thing.
SaveConfig sets the Username and Password to an empty string
on save. A copy of the authConfigs needs to be made so that the
in-memory data is not modified.
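A minimal sketch of the idea, assuming a ConfigFile struct holding a map of AuthConfig values (the type and field names here are illustrative, not the actual docker structs): blanking credentials on a copy leaves the in-memory map untouched.

```go
package main

import "fmt"

// Illustrative types only; the real docker structs may differ.
type AuthConfig struct {
	Username string
	Password string
	Email    string
}

type ConfigFile struct {
	Configs map[string]AuthConfig
}

// configsForSave returns a copy of the auth configs with credentials blanked,
// so the caller can serialize the copy while the in-memory data stays intact.
func (c *ConfigFile) configsForSave() map[string]AuthConfig {
	out := make(map[string]AuthConfig, len(c.Configs))
	for registry, auth := range c.Configs {
		auth.Username = "" // auth is a value copy; c.Configs is not modified
		auth.Password = ""
		out[registry] = auth
	}
	return out
}

func main() {
	cf := &ConfigFile{Configs: map[string]AuthConfig{
		"https://example.invalid/v1/": {Username: "sam", Password: "s3cret", Email: "sam@example.com"},
	}}
	saved := cf.configsForSave()
	fmt.Println(saved["https://example.invalid/v1/"].Password)      // "" in what gets written out
	fmt.Println(cf.Configs["https://example.invalid/v1/"].Password) // "s3cret" still usable in memory
}
```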
This change makes docker attempt to create the container ID file and
open it before attempting to create the container. This avoids leaving
a stale container behind if docker has failed to create and open the
container ID file.
The container ID is written to the file after the container is created.
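A hedged sketch of that ordering (the helper names and error messages are illustrative, not the actual docker code):

```go
package main

import (
	"fmt"
	"os"
)

// runWithCIDFile creates and opens the ID file before the container exists, so
// a failure here aborts the run without leaving a stale container behind.
func runWithCIDFile(path string) error {
	// O_EXCL turns an already-existing file into an error up front.
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
	if err != nil {
		return fmt.Errorf("cannot create the container ID file: %v", err)
	}
	defer f.Close()

	id, err := createContainer() // placeholder for the real creation step
	if err != nil {
		return err
	}
	// The ID is written only after the container has been created.
	_, err = f.WriteString(id)
	return err
}

// createContainer stands in for the actual container creation.
func createContainer() (string, error) { return "23b27d50fb49", nil }

func main() {
	if err := runWithCIDFile("/tmp/docker_test.cid"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```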
Currently, if you have the following images:
foo/bar 1 23b27d50fb49
foo/bar 2 f2b86ec3fcc4
And you issue the following command:
docker tag foo/bar:2 foo/bar latest
docker will tag the "wrong" image, because the image id for foo/bar:1 starts
with a "2". That is, you'll end up with the following:
foo/bar 1 23b27d50fb49
foo/bar 2 f2b86ec3fcc4
foo/bar latest 23b27d50fb49
This commit reverses the priority given to tags vs. image ids in the
construction `<user>/<repo>:<tagOrId>`, meaning that if a tag is an exact
match for the specified `tagOrId`, it is used in preference to an image
whose id happens to start with the same character sequence.
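A sketch of the new lookup order (not the actual tag-resolution code; the data shapes are simplified): an exact tag match is tried before any image-id prefix match.

```go
package main

import (
	"fmt"
	"strings"
)

// resolve looks tagOrID up as a tag first and only then as an image-id prefix.
func resolve(tags map[string]string, imageIDs []string, tagOrID string) string {
	if id, ok := tags[tagOrID]; ok {
		return id // exact tag match wins
	}
	for _, id := range imageIDs {
		if strings.HasPrefix(id, tagOrID) {
			return id // fall back to id-prefix matching
		}
	}
	return ""
}

func main() {
	tags := map[string]string{"1": "23b27d50fb49", "2": "f2b86ec3fcc4"}
	ids := []string{"23b27d50fb49", "f2b86ec3fcc4"}
	// Previously "2" matched the id 23b27d50fb49 by prefix; now the tag "2" wins.
	fmt.Println(resolve(tags, ids, "2")) // f2b86ec3fcc4
}
```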
Prior to this commit, 'docker images' and other commands that used
utils.HumanSize() showed unnecessary whitespace.
Formatting of progress has been moved to FormatProgress(), justifying the
string directly in the template.
- For the TCP test, try again if socat wasn't listening yet;
- For the UDP test, raise the timeout to a minute to work around what
seems to be an issue with Linux.
API Changes
-----------
The port notation is extended to support "/udp" or "/tcp" at the *end*
of the specifier string (and defaults to tcp if "/tcp" or "/udp" is
missing).
`docker ps` now shows UDP ports as "frontend->backend/udp". Nothing
changes for TCP ports.
`docker inspect` now displays two sub-dictionaries: "Tcp" and "Udp",
under "PortMapping" in "NetworkSettings".
These changes hold true for the values returned by the HTTP API too.
This changeset will definitely break tools built upon the API (or upon
`docker inspect`). A less intrusive way to add UDP ports in `docker
inspect` would be to simply append "/udp" to UDP ports, but that would
still break existing applications that try to convert the whole field to
an integer. I believe that having two TCP/UDP sub-dictionaries is better
because it makes the whole thing clearer and easier to parse right
away (i.e. you don't have to check the format of the string, split it,
and convert the right part to an integer).
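A minimal sketch of consuming the new shape (the exact JSON layout and the direction of the frontend/backend mapping are assumptions based on the description above, not copied from the code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NetworkSettings mirrors only the part discussed here; the mapping direction
// (frontend -> backend) is assumed for illustration.
type NetworkSettings struct {
	PortMapping map[string]map[string]string `json:"PortMapping"` // "Tcp" and "Udp"
}

func main() {
	raw := []byte(`{"PortMapping": {"Tcp": {"49153": "80"}, "Udp": {"49154": "53"}}}`)
	var ns NetworkSettings
	if err := json.Unmarshal(raw, &ns); err != nil {
		panic(err)
	}
	for proto, ports := range ns.PortMapping {
		for frontend, backend := range ports {
			// No string sniffing or splitting: the protocol is simply the key.
			fmt.Printf("%s->%s/%s\n", frontend, backend, proto)
		}
	}
}
```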
Code Changes
------------
Significant changes in network.go:
- A second PortAllocator is instantiated for the UDP range;
- PortMapper maintains separate mapping for TCP and UDP;
- The extPorts array in NetworkInterface is now an array of Nat objects
(so we can know on which protocol a given port was mapped when
NetworkInterface.Release() is called);
- TCP proxying on localhost has been moved out into network_proxy.go.
Localhost proxy code rewrite in network_proxy.go:
We have to proxy the traffic between localhost:frontend-port and
container:backend-port because Netfilter doesn't work properly on the
loopback interface and DNAT iptable rules aren't applied there.
- Goroutines in the TCP proxying code are now explicitly stopped when
the proxy is stopped;
- UDP connection tracking using a map (more info in [1]);
- Support for IPv6 (to be more accurate, the code is transparent to the
Go net package, so you can use tcp/tcp4/tcp6/udp/udp4/udp6);
- Single Proxy interface for both UDP and TCP proxying;
- Full test suite.
[1] https://github.com/dotcloud/docker/issues/33#issuecomment-20010400
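A sketch of the kind of Proxy abstraction described above (the method names are an assumption for illustration, not necessarily the exact ones in network_proxy.go):

```go
package proxy // illustrative package name

import "net"

// Proxy hides whether traffic is relayed over TCP or UDP, so callers never
// have to branch on the protocol.
type Proxy interface {
	Run()                   // start relaying between the frontend and the backend
	Close()                 // stop the proxy and its goroutines explicitly
	FrontendAddr() net.Addr // the localhost side, e.g. *net.TCPAddr or *net.UDPAddr
	BackendAddr() net.Addr  // the container side
}
```

A constructor can then pick the TCP or UDP implementation from the concrete type of the frontend address, which also keeps the code agnostic to tcp4/tcp6/udp4/udp6.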
writing out streamed status.
This is caused by a "Buffering" message that is not in the correct JSON format:
[...]
{"status"
:"Pushing 6bba11a28f1ca247de9a47071355ce5923a45b8fea3182389f992f4
24b93edae"}Buffering to disk 244/? (n/a)..
{"status":"Pushing",[...]
The "Buffering to disk" message is originated in
srv.runtime.graph.TempLayerArchive
I am now using the StreamFormatter provided by the context from which the
method is called.
For structs protected by a single mutex, embed the mutex for more
concise usage.
Also use a sync.Mutex directly, rather than a pointer, to avoid the
need for initialization (because a Mutex's zero-value is valid and
ready to be used).
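Not docker's actual structs, just the general Go idiom being described:

```go
package main

import (
	"fmt"
	"sync"
)

// Embedding a sync.Mutex value (not a *sync.Mutex) gives the struct Lock/Unlock
// directly, and the zero value is ready to use: no constructor needed.
type counter struct {
	sync.Mutex
	n int
}

func (c *counter) Incr() {
	c.Lock() // concise: no c.mu.Lock(), no pointer to initialize and forget
	c.n++
	c.Unlock()
}

func main() {
	var c counter // the zero value works immediately
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Incr()
		}()
	}
	wg.Wait()
	fmt.Println(c.n) // 10
}
```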
- Fix TestGetImagesJSON when there is more than one image in the test
repository;
- Remove a hardcoded constant used in TestGetImagesByName;
- Wait in a loop in TestKillDifferentUser;
- Use env instead of /usr/bin/env in TestEnv;
- Create a daemon user in contrib/mkimage-unittest.sh.
It's a fork of the mkimage-busybox.sh script that adds socat to the
image (socat is needed for the UDP support; see #33).
This script, like mkimage-busybox.sh, probably only works on
Debian/Ubuntu.
The dockerbuilder Dockerfile was installing one package per apt-get
install operation.
This changes it so that consecutive `run apt-get install` instructions are
batched into a single operation.
As a user who has blown $150 on VMWare Fusion and vagrant-vmware, I
would like to use my new shiny to hack on Docker. Docker already has a
multi-provider Vagrantfile, so adding another one presents little risk.
Known Issues:
- The docker install of a new kernel breaks the Vagrant shared folder.
- This seems to be because the VMWare hgfs module doesn't build
against a 3.8 kernel.
- I don't believe that shared folder support is actually in use.
* replaced previously removed concepts/containers and concepts/introduction by a redirect
* moved orphan index/variable to workingwiththerepository
* added favicon to the layout.html
* added redirect_home, which is an HTTP refresh redirect. It works like a 301 for Google
* fixed an issue in the layout that would make it break on small screens
into the container instead of copying them as a regular file.
* Builder: ADD uses tar/untar for copies instead of calling 'cp -ar'.
This is more consistent, reduces the number of dependencies, and
fixes #896.
Move creating the router and populating the
routes to a separate function outside of
ListenAndServe to allow unit tests to make
assertions on the configured routes and handler
funcs.
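A hedged sketch of the shape of that refactor (createRouter and the route shown are illustrative, not the exact docker functions or routes):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// createRouter builds the router and its routes without binding a socket.
func createRouter() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/version", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, `{"Version":"0.4.0"}`)
	})
	return mux
}

// listenAndServe only wires the router to a listener...
func listenAndServe(addr string) error {
	return http.ListenAndServe(addr, createRouter())
}

// ...so a unit test can assert on the configured routes and handler funcs
// by calling ServeHTTP directly, without opening any port.
func main() {
	rec := httptest.NewRecorder()
	createRouter().ServeHTTP(rec, httptest.NewRequest("GET", "/version", nil))
	fmt.Println(rec.Code, rec.Body.String())
}
```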
Add the Access-Control-Allow-Methods header so that
DELETE operations are allowed.
Also move the write-CORS-headers call to before
docker writes a 404 Not Found, so that the client
receives the correct response instead of a failed
CORS request.
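Illustrative only (not the actual docker handler code): a wrapper that writes the CORS headers, including DELETE in Access-Control-Allow-Methods, before any status such as a 404 goes out.

```go
package main

import "net/http"

func writeCorsHeaders(w http.ResponseWriter) {
	w.Header().Set("Access-Control-Allow-Origin", "*")
	w.Header().Set("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept")
	w.Header().Set("Access-Control-Allow-Methods", "GET, POST, DELETE, PUT, OPTIONS")
}

func withCors(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Headers must be written before the status code is sent, so even a
		// 404 from the mux below still carries valid CORS headers.
		writeCorsHeaders(w)
		if r.Method == http.MethodOptions {
			w.WriteHeader(http.StatusOK) // preflight: the headers are enough
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	// The port is illustrative.
	http.ListenAndServe(":4243", withCors(http.NewServeMux()))
}
```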
Separated the registry and index APIs into their own docs and merged
the index search API into the index API doc. Also renamed the original
registry API doc to registry_index_spec.
This updates the documentation to mention that:
1. a lot of data may get sent to the docker daemon if there is a lot of
data in the directory passed to docker build
2. ADD doesn't work in the absence of the context
3. running without a context doesn't send file data to the docker
daemon
4. explain that the data sent to the docker daemon will be used by ADD
commands
Add the -api-enable-cors flag when running docker
in daemon mode to allow CORS requests to be made to
the Remote API. The default value for this flag is false,
so cross-origin requests are not allowed by default.
Also added a handler for OPTIONS requests, since the standard
for cross-domain requests is to initially make an
OPTIONS request to the API.
Fall back to image-specified hostname if user doesn't
provide one, instead of only using image-specified
hostname if the user *does* try to set one.
(ditto for username)
Closes #694.
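A sketch of the fallback above (the field names are assumed, not the actual Config struct): image-provided values are used only where the user left the field empty.

```go
package main

import "fmt"

type Config struct {
	Hostname string
	User     string
}

// mergeConfig fills in values from the image config only when the user did
// not provide them, instead of the other way around.
func mergeConfig(userConf, imageConf Config) Config {
	if userConf.Hostname == "" {
		userConf.Hostname = imageConf.Hostname
	}
	if userConf.User == "" { // ditto for the username
		userConf.User = imageConf.User
	}
	return userConf
}

func main() {
	image := Config{Hostname: "from-image", User: "postgres"}
	fmt.Println(mergeConfig(Config{}, image))                   // falls back to the image values
	fmt.Println(mergeConfig(Config{Hostname: "myhost"}, image)) // user-provided hostname wins
}
```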
Usage: docker COMMAND [arg...]
A self-sufficient runtime for linux containers.
Commands:
attach Attach to a running container
insert Insert a file in an image
login Register or Login to the docker registry server
export Stream the contents of a container as a tar archive
diff Inspect changes on a container's filesystem
logs Fetch the logs of a container
pull Pull an image or a repository from the docker registry server
restart Restart a running container
build Build a container from Dockerfile or via stdin
history Show the history of an image
kill Kill a running container
rmi Remove an image
start Start a stopped container
tag Tag an image into a repository
commit Create a new image from a container's changes
import Create a new filesystem image from the contents of a tarball
ps List containers
rm Remove a container
run Run a command in a new container
wait Block until a container stops, then print its exit code
images List images
port Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
info Display system-wide information
inspect Return low-level information on a container
push Push an image or a repository to the docker registry server
search Search for an image in the docker index
stop Stop a running container
version Show the docker version information
* Moved the introduction page to the en/latest/index.html home of the documentation and pointed all links there
* Fixed a broken link from+to homepage
* Fixed the javascript of the documentation navigation to allow expanding and collapsing multiple times.
* Fixed the one typo Andy pointed out
+ Documentation: Added install instructions for RackSpace cloud using Ubuntu 12.04, 12.10, 13.04 and fixed a few minor issues with the ubuntu install docs.
This should make previewing documentation easier.
Also updated the Makefile to copy the theme dir into the _build/website/ dir; `make connect` and `make push` now work.
Specifically, Ubuntu Precise's cgroup-lite script uses mount -n
to mount the cgroup filesystems so they don't appear in mtab, so
detection always fails unless the admin updates mtab with /proc/mounts.
/proc/mounts is valid on just about every Linux machine in existence and
as a bonus is much easier to parse.
I also removed the regex in favor of a more accurate parser that should
also support monolithic cgroup mounts (e.g. mount -t cgroup none /cgroup).
On Gentoo, the memory cgroup is mounted at /sys/fs/cgroup/memory, but the mount line looks like the following:
memory on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
(note that the first word on the line is "memory", not "cgroup", but the other essentials are there, namely the type of cgroup and the memory mount option)
This is especially needed to fix the current docker on kernels such as gentoo-sources, where the kernel "flavor" is the string "gentoo", which obviously cannot be converted to an integer.
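A minimal sketch of the /proc/mounts approach (not the actual docker parser):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findCgroupMount locates the mount point of a cgroup subsystem by reading
// /proc/mounts, which is valid even when mount -n kept the entry out of mtab.
func findCgroupMount(subsystem string) (string, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return "", err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		// Format: <device> <mountpoint> <fstype> <options> <dump> <pass>.
		// The device column may be "memory", "none", "cgroup", ... so match on
		// the filesystem type and the mount options instead of the first word.
		fields := strings.Fields(s.Text())
		if len(fields) < 4 || fields[2] != "cgroup" {
			continue
		}
		for _, opt := range strings.Split(fields[3], ",") {
			if opt == subsystem {
				return fields[1], nil // e.g. /sys/fs/cgroup/memory on Gentoo
			}
		}
	}
	if err := s.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("cgroup mount for %q not found", subsystem)
}

func main() {
	mountpoint, err := findCgroupMount("memory")
	fmt.Println(mountpoint, err)
}
```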
* Added quick install on Ubuntu as the first install option
* Grouped other binary installs under "binary installs"
* Removed duplicate binary ubuntu installs (linked to the docs)
* Improved "build from source" instructions
Ensure the docker daemon creates a file containing its PID under
/var/run/docker.pid.
The daemon takes care of removing the pid file when it receives either
SIGTERM, SIGINT or SIGKILL.
The daemon also refuses to start when the pidfile is found. An
explanation message is shown to the user when this happens.
This change is required to make docker easier to manage by tools like
checkproc which rely on this information.
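A hedged sketch of the behaviour described above (error messages and structure are illustrative, not the daemon's actual code):

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"strconv"
	"syscall"
)

const pidfile = "/var/run/docker.pid" // path from the description above

func createPidFile() error {
	if _, err := os.Stat(pidfile); err == nil {
		// Refuse to start and explain why, as described above.
		return fmt.Errorf("pid file found at %s: is the daemon already running?", pidfile)
	}
	return os.WriteFile(pidfile, []byte(strconv.Itoa(os.Getpid())), 0644)
}

func removePidFile() {
	if err := os.Remove(pidfile); err != nil {
		fmt.Fprintf(os.Stderr, "error removing %s: %v\n", pidfile, err)
	}
}

func main() {
	if err := createPidFile(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Clean up on termination signals. (SIGKILL cannot actually be trapped by
	// any process, so only SIGTERM and SIGINT are handled here.)
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	go func() {
		<-sigs
		removePidFile()
		os.Exit(0)
	}()

	select {} // stand-in for the daemon's main loop
}
```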
ip from iproute2 replaces the legacy route tool which is often not
installed by default on recent Linux distributions.
The same patch has been done in network.go and is re-used here.
The raw mode is actually only needed when you attach to a container.
Having it enabled all the time can be a pain, e.g. if docker crashes,
your terminal will end up in a broken state.
Since we are currently missing a real API for the docker daemon to
negotiate this kind of option, this changeset enables the raw
mode only for the login (because it handles a password), run and attach
commands.
This "optional raw mode" is implemented by passing a more complicated
interface than io.Writer as the stdout argument of each command. This
interface (DockerConn) exposes a method which allows the command to set
the terminal in raw mode or not.
Finally, the code added by this changeset will be deprecated by a real
API for the docker daemon.
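A sketch of the shape of that interface only; the real DockerConn in this changeset may expose different method names:

```go
package term // illustrative package name

import "io"

// DockerConn is an io.Writer that can additionally switch the client terminal
// in and out of raw mode, so each command decides whether it needs it.
type DockerConn interface {
	io.Writer
	SetOptionRawTerminal() // called by the login, run and attach commands
	RestoreTerminal()      // put the terminal back when the command is done
}
```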
Want to hack on Docker? Awesome! Here are instructions to get you started. They are probably not perfect; please let us know if anything feels
wrong or incomplete.
## Contribution guidelines
Add your name to the AUTHORS file, but make sure the list is sorted and your
name and email address match your git configuration. The AUTHORS file is
regenerated occasionally from the git commit history, so a mismatch may result
in your changes being overwritten.
## Decision process
### How are decisions made?
Short answer: with pull requests to the docker repository.
Docker is an open-source project with an open design philosophy. This means that the repository is the source of truth for EVERY aspect of the project,
including its philosophy, design, roadmap and APIs. *If it's part of the project, it's in the repo. If it's in the repo, it's part of the project.*
As a result, all decisions can be expressed as changes to the repository. An implementation change is a change to the source code. An API change is a change to
the API specification. A philosophy change is a change to the philosophy manifesto. And so on.
All decisions affecting docker, big and small, follow the same 3 steps:
* Step 1: Open a pull request. Anyone can do this.
* Step 2: Discuss the pull request. Anyone can do this.
* Step 3: Accept or refuse a pull request. The relevant maintainer does this (see below "Who decides what?")
### Who decides what?
So all decisions are pull requests, and the relevant maintainer makes the decision by accepting or refusing the pull request.
But how do we identify the relevant maintainer for a given pull request?
Docker follows the timeless, highly efficient and totally unfair system known as [Benevolent dictator for life](http://en.wikipedia.org/wiki/Benevolent_Dictator_for_Life),
with yours truly, Solomon Hykes, in the role of BDFL.
This means that all decisions are made by default by me. Since making every decision myself would be highly unscalable, in practice decisions are spread across multiple maintainers.
The relevant maintainer for a pull request is assigned in 3 steps:
* Step 1: Determine the subdirectory affected by the pull request. This might be src/registry, docs/source/api, or any other part of the repo.
* Step 2: Find the MAINTAINERS file which affects this directory. If the directory itself does not have a MAINTAINERS file, work your way up the repo hierarchy until you find one.
* Step 3: The first maintainer listed is the primary maintainer. The pull request is assigned to him. He may assign it to other listed maintainers, at his discretion.
### I'm a maintainer, should I make pull requests too?
Primary maintainers are not required to create pull requests when changing their own subdirectory, but secondary maintainers are.
### Who assigns maintainers?
Solomon.
### How can I become a maintainer?
* Step 1: learn the component inside out
* Step 2: make yourself useful by contributing code, bugfixes, support etc.
* Step 3: volunteer on the IRC channel (#docker@freenode)
Don't forget: being a maintainer is a time investment. Make sure you will have time to make yourself available.
You don't have to be a maintainer to make a difference on the project!
### What are a maintainer's responsibilities?
It is every maintainer's responsibility to:
* 1) Expose a clear roadmap for improving their component.
* 2) Deliver prompt feedback and decisions on pull requests.
* 3) Be available to anyone with questions, bug reports, criticism etc. on their component. This includes irc, github requests and the mailing list.
* 4) Make sure their component respects the philosophy, design and roadmap of the project.
### How is this process changed?
Just like everything else: by making a pull request :)
Docker complements LXC with a high-level API which operates at the process level. It runs unix processes with strong guarantees of isolation and repeatability across servers.
Docker is an open-source engine which automates the deployment of
applications as highly portable, self-sufficient containers.
Docker is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.
Docker containers are both *hardware-agnostic* and
*platform-agnostic*. This means that they can run anywhere, from your
laptop to the largest EC2 compute instance and everything in between -
and they don't require that you use a particular language, framework
or packaging system. That makes them great building blocks for
deploying and scaling web apps, databases and backend services without
depending on a particular stack or provider.
Docker is an open-source implementation of the deployment engine which
powers [dotCloud](http://dotcloud.com), a popular
Platform-as-a-Service. It benefits directly from the experience
accumulated over several years of large-scale operation and support of
hundreds of thousands of applications and databases.
* *Heterogeneous payloads*: any combination of binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, you name it. No more juggling between domain-specific tools. Docker can deploy and run them all.
* *Any server*: docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments.
## Better than VMs
* *Isolation*: docker isolates processes from each other and from the underlying host, using lightweight containers.
A common method for distributing applications and sandboxing their
execution is to use virtual machines, or VMs. Typical VM formats are
VMWare's vmdk, Oracle Virtualbox's vdi, and Amazon EC2's ami. In
theory these formats should allow every developer to automatically
package their application into a "machine" for easy distribution and
deployment. In practice, that almost never happens, for a few reasons:
* *Repeatability*: because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run.
* *Size*: VMs are very large which makes them impractical to store
and transfer.
* *Performance*: running VMs consumes significant CPU and memory,
which makes them impractical in many scenarios, for example local
development of multi-tier applications, and large-scale deployment
of cpu and memory-intensive applications on large numbers of
machines.
* *Portability*: competing VM environments don't play well with each
other. Although conversion tools do exist, they are limited and
add even more overhead.
* *Hardware-centric*: VMs were designed with machine operators in
mind, not software developers. As a result, they offer very
limited tooling for what developers need most: building, testing
and running their software. For example, VMs offer no facilities
for application versioning, monitoring, configuration, logging or
service discovery.
By contrast, Docker relies on a different sandboxing method known as
*containerization*. Unlike traditional virtualization,
containerization takes place at the kernel level. Most modern
operating system kernels now support the primitives necessary for
containerization, including Linux with [openvz](http://openvz.org),
[vserver](http://linux-vserver.org) and more recently [lxc](http://lxc.sourceforge.net/).
developed for another project, etc. These dependencies live in
different "worlds" and require different tools - these tools
typically don't work well with each other, requiring awkward
custom integrations.
* Copy-on-write: root filesystems are created using copy-on-write, which makes deployment extremely fast, memory-cheap and disk-cheap.
* Logging: the standard streams (stdout/stderr/stdin) of each process container are collected and logged for real-time or batch retrieval.
* Change management: changes to a container's filesystem can be committed into a new image and re-used to create more containers. No templating or manual configuration required.
* Interactive shell: docker can allocate a pseudo-tty and attach to the standard input of any container, for example to run a throwaway interactive shell.
* Conflicting dependencies. Different applications may depend on
different versions of the same dependency. Packaging tools handle
these situations with various degrees of ease - but they all
handle them in different and incompatible ways, which again forces
the developer to do extra work.
* Custom dependencies. A developer may need to prepare a custom
version of their application's dependency. Some packaging systems
can handle custom versions of a dependency, others can't - and all
of them handle it differently.
Docker solves dependency hell by giving the developer a simple way to
express *all* their application's dependencies in one place, and
streamline the process of assembling them. If this makes you think of
yet another packaging system, don't worry: Docker doesn't
*replace* your favorite packaging systems. It simply orchestrates
their use in a simple and repeatable way. How does it do that? With
layers.
Under the hood
--------------
Docker defines a build as running a sequence of Unix commands, one
after the other, in the same container. Build commands modify the
contents of the container (usually by installing new files on the
filesystem), the next command modifies it some more, etc. Since each
build command inherits the result of the previous commands, the
*order* in which the commands are executed expresses *dependencies*.
Under the hood, Docker is built on the following components:
Here's a typical Docker build process:
```bash
from ubuntu:12.10
run apt-get update
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y python
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y python-pip
run pip install django
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y curl
run curl -L https://github.com/shykes/helloflask/archive/master.tar.gz | tar -xzv
run cd helloflask-master && pip install -r requirements.txt
```
* The [cgroup](http://blog.dotcloud.com/kernel-secrets-from-the-paas-garage-part-24-c) and [namespacing](http://blog.dotcloud.com/under-the-hood-linux-kernels-on-dotcloud-part) capabilities of the Linux kernel;
* [AUFS](http://aufs.sourceforge.net/aufs.html), a powerful union filesystem with copy-on-write capabilities;
* The [Go](http://golang.org) programming language;
* [lxc](http://lxc.sourceforge.net/), a set of convenience scripts to simplify the creation of linux containers.
Note that Docker doesn't care *how* dependencies are built - as long
as they can be built by running a Unix command in a container.
Install instructions
==================
Building from source
--------------------
1. Make sure you have a [Go language](http://golang.org) compiler.
On Debian/wheezy or Ubuntu 12.10, install the package:
```bash
$ sudo apt-get install golang-go
```
2. Execute ``make``
This command will install all necessary dependencies and build the
executable that you can find in ``bin/docker``.
3. Should you like to see what's happening, run ``make`` with the ``VERBOSE=1`` parameter:
```bash
$ make VERBOSE=1
```
Installing on Ubuntu 12.04 and 12.10
------------------------------------
1. Install dependencies:
```bash
sudo apt-get install lxc wget bsdtar curl
sudo apt-get install linux-image-extra-`uname -r`
```
The `linux-image-extra` package is needed on standard Ubuntu EC2 AMIs in order to install the aufs kernel module.
Run the `go install` command (above) to recompile docker.
What is a Standard Container?
=============================
Docker defines a unit of software delivery called a Standard
Container. The goal of a Standard Container is to encapsulate a
software component and all its dependencies in a format that is
self-describing and portable, so that any compliant runtime can run it
without extra dependencies, regardless of the underlying machine and
the contents of the container.
The spec for Standard Containers is currently a work in progress, but
it is very straightforward. It mostly defines 1) an image format, 2) a
set of standard operations, and 3) an execution environment.
A great analogy for this is the shipping container. Just like how
Standard Containers are a fundamental unit of software delivery,
shipping containers are a fundamental unit of physical delivery.
### 1. STANDARD OPERATIONS
Just like shipping containers, Standard Containers define a set of
STANDARD OPERATIONS. Shipping containers can be lifted, stacked,
locked, loaded, unloaded and labelled. Similarly, Standard Containers
can be started, stopped, copied, snapshotted, downloaded, uploaded and
tagged.
### 2. CONTENT-AGNOSTIC
Just like shipping containers, Standard Containers are
CONTENT-AGNOSTIC: all standard operations have the same effect
regardless of the contents. A shipping container will be stacked in
exactly the same way whether it contains Vietnamese powder coffee or
spare Maserati parts. Similarly, Standard Containers are started or
uploaded in the same way whether they contain a postgres database, a
php application with its dependencies and application server, or Java
build artifacts.
### 3. INFRASTRUCTURE-AGNOSTIC
Both types of containers are INFRASTRUCTURE-AGNOSTIC: they can be
transported to thousands of facilities around the world, and
manipulated by a wide variety of equipment. A shipping container can
be packed in a factory in Ukraine, transported by truck to the nearest
routing center, stacked onto a train, loaded into a German boat by an
Australian-built crane, stored in a warehouse at a US facility,
etc. Similarly, a standard container can be bundled on my laptop,
uploaded to S3, downloaded, run and snapshotted by a build server at
Equinix in Virginia, uploaded to 10 staging servers in a home-made
Openstack cluster, then sent to 30 production instances across 3 EC2
regions.
### 4. DESIGNED FOR AUTOMATION
Because they offer the same standard operations regardless of content
and infrastructure, Standard Containers, just like their physical
counterparts, are extremely well-suited for automation. In fact, you
could say automation is their secret weapon.
Many things that once required time-consuming and error-prone human
effort can now be programmed. Before shipping containers, a bag of
powder coffee was hauled, dragged, dropped, rolled and stacked by 10
different people in 10 different locations by the time it reached its
destination. 1 out of 50 disappeared. 1 out of 20 was damaged. The
process was slow, inefficient and cost a fortune - and was entirely
different depending on the facility and the type of goods.
Similarly, before Standard Containers, by the time a software
component ran in production, it had been individually built,
configured, bundled, documented, patched, vendored, templated, tweaked
and instrumented by 10 different people on 10 different computers.
Builds failed, libraries conflicted, mirrors crashed,
post-it notes were lost, logs were misplaced, cluster updates were
half-broken. The process was slow, inefficient and cost a fortune -
and was entirely different depending on the language and
infrastructure provider.
### 5. INDUSTRIAL-GRADE DELIVERY
There are 17 million shipping containers in existence, packed with
every physical good imaginable. Every single one of them can be loaded
onto the same boats, by the same cranes, in the same facilities, and
sent anywhere in the World with incredible efficiency. It is
embarrassing to think that a 30 ton shipment of coffee can safely
travel half-way across the World in *less time* than it takes a
software team to deliver its code from one datacenter to another
sitting 10 miles away.
With Standard Containers we can put an end to that embarrassment, by
making INDUSTRIAL-GRADE DELIVERY of software a reality.
### Legal
Standard Container Specification
--------------------------------
(TODO)
### Image format
### Standard operations
* Copy
* Run
* Stop
* Wait
* Commit
* Attach standard streams
* List filesystem changes
* ...
### Execution environment
#### Root filesystem
#### Environment variables
#### Process arguments
#### Networking
#### Process namespacing
#### Resource limits
#### Process monitoring
#### Logging
#### Signals
#### Pseudo-terminal allocation
#### Security
Transfers of Docker shall be in accordance with applicable export
controls of any country and all other applicable legal requirements.
Docker shall not be distributed or downloaded to or in Cuba, Iran,
North Korea, Sudan or Syria and shall not be distributed or downloaded
to any person on the Denied Persons List administered by the U.S.
Department of Commerce.
:description: Documentation for docker Registry and Registry API
:keywords: docker, registry, api, index
=====================
Registry & index Spec
=====================
.. contents:: Table of Contents
1. The 3 roles
===============
1.1 Index
---------
The Index is responsible for centralizing information about:
- User accounts
- Checksums of the images
- Public namespaces
The Index has different components:
- Web UI
- Meta-data store (comments, stars, list public repositories)
- Authentication service
- Tokenization
The index is authoritative for this information.
We expect that there will be only one instance of the index, run and managed by dotCloud.
1.2 Registry
------------
- It stores the images and the graph for a set of repositories
- It does not have user accounts data
- It has no notion of user accounts or authorization
- It delegates authentication and authorization to the Index Auth service using tokens
- It supports different storage backends (S3, cloud files, local FS)
- It doesn’t have a local database
- It will be open-sourced at some point
We expect that there will be multiple registries out there. To help grasp the context, here are some examples of registries:

- **sponsor registry**: such a registry is provided by a third-party hosting infrastructure as a convenience for their customers and the docker community as a whole. Its costs are supported by the third party, but the management and operation of the registry are supported by dotCloud. It features read/write access, and delegates authentication and authorization to the Index.
- **mirror registry**: such a registry is provided by a third-party hosting infrastructure but is targeted at their customers only. Some mechanism (unspecified to date) ensures that public images are pulled from a sponsor registry to the mirror registry, to make sure that the customers of the third-party provider can “docker pull” those images locally.
- **vendor registry**: such a registry is provided by a software vendor who wants to distribute docker images. It would be operated and managed by the vendor. Only users authorized by the vendor would be able to get write access. Some images would be public (accessible for anyone), others private (accessible only for authorized users). Authentication and authorization would be delegated to the Index. The goal of vendor registries is to let someone do “docker pull basho/riak1.3” and automatically pull from the vendor registry (instead of a sponsor registry); i.e. get all the convenience of a sponsor registry, while retaining control over the asset distribution.
- **private registry**: such a registry is located behind a firewall, or protected by an additional security layer (HTTP authorization, SSL client-side certificates, IP address authorization...). The registry is operated by a private entity, outside of dotCloud’s control. It can optionally delegate additional authorization to the Index, but it is not mandatory.
.. note::
    Mirror registries and private registries which do not use the Index don’t even need to run the registry code. They can be implemented by any kind of transport implementing HTTP GET and PUT. Read-only registries can be powered by a simple static HTTP server.
.. note::
    The latter implies that while HTTP is the protocol of choice for a registry, multiple schemes are possible (and in some cases, trivial):

    - HTTP with GET (and PUT for read-write registries);
    - local mount point;
    - remote docker addressed through SSH.

    The latter would only require two new commands in docker, e.g. “registryget” and “registryput”, wrapping access to the local filesystem (and optionally doing consistency checks). Authentication and authorization are then delegated to SSH (e.g. with public keys).
1.3 Docker
----------
On top of being a runtime for LXC, Docker is the Registry client. It supports:
- Push / Pull on the registry
- Client authentication on the Index
2. Workflow
===========
2.1 Pull
--------
.. image:: /static_files/docker_pull_chart.png
1. Contact the Index to know where I should download “samalba/busybox”
2. Index replies:
a. “samalba/busybox” is on Registry A
b. here are the checksums for “samalba/busybox” (for all layers)
c. token
3. Contact Registry A to receive the layers for “samalba/busybox” (all of them, up to the base image). Registry A is authoritative for “samalba/busybox” but keeps a copy of all inherited layers and serves them all from the same location.
4. The registry contacts the index to verify if the token/user is allowed to download images
5. The Index returns true/false, letting the registry know if it should proceed or error out
6. Get the payload for all layers
It’s possible to run docker pull \https://<registry>/repositories/samalba/busybox. In this case, docker bypasses the Index. However, the security is not guaranteed (in case Registry A is corrupted) because there won’t be any checksum checks.
Currently the registry redirects to S3 URLs for downloads; going forward, all downloads need to be streamed through the registry. The Registry will then abstract the calls to S3 behind a top-level class with sub-classes for S3 and local storage.
The Token is only returned when the 'X-Docker-Token' header is sent with the request.
Basic Auth is required to pull private repos. Basic auth isn't required for pulling public repos, but if credentials are provided, they need to be valid and for an active account.
API (pulling repository foo/bar):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. (Docker -> Index) GET /v1/repositories/foo/bar/images
**Headers**:
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
X-Docker-Token: true
**Action**:
(look up foo/bar in the db and get the images and checksums for that repo (all if no tag is specified; if a tag is given, only the checksums for those tags); see part 4.4.1)
5. (Docker -> Registry) GET /v1/images/928374982374/ancestry
**Action**:
(for each image id returned in the registry, fetch /json + /layer)
.. note::
    If someone makes a second request, then we will always give a new token, never reuse tokens.
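A hedged sketch of the client side of this handshake in Go (the Index URL shown is illustrative, the field handling is simplified, and the Authorization scheme used against the registry afterwards is not specified here; the X-Docker-Token and X-Docker-Endpoints headers are the ones named in this spec):

.. code-block:: go

    package main

    import (
        "fmt"
        "net/http"
    )

    // pullHandshake asks the Index for the images of a repository and returns the
    // token and registry endpoint needed for the subsequent layer downloads.
    func pullHandshake(user, pass string) (token, endpoint string, err error) {
        req, err := http.NewRequest("GET", "https://index.docker.io/v1/repositories/foo/bar/images", nil)
        if err != nil {
            return "", "", err
        }
        req.SetBasicAuth(user, pass)             // Basic auth, as in step 1 above
        req.Header.Set("X-Docker-Token", "true") // ask the Index for a token
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return "", "", err
        }
        defer resp.Body.Close()
        // The Index answers with the token and the registry location.
        return resp.Header.Get("X-Docker-Token"), resp.Header.Get("X-Docker-Endpoints"), nil
    }

    func main() {
        token, endpoint, err := pullHandshake("samalba", "s3cret")
        if err != nil {
            panic(err)
        }
        // Later requests (e.g. GET /v1/images/<id>/ancestry) go to the endpoint,
        // presenting the token.
        fmt.Println(token, endpoint)
    }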
2.2 Push
--------
.. image:: /static_files/docker_push_chart.png
1. Contact the index to allocate the repository name “samalba/busybox” (authentication required with user credentials)
2. If authentication works and namespace available, “samalba/busybox” is allocated and a temporary token is returned (namespace is marked as initialized in index)
3. Push the image on the registry (along with the token)
4. Registry A contacts the Index to verify the token (the token must correspond to the repository name)
5. The Index validates the token. Registry A starts reading the stream pushed by docker and stores the repository (with its images)
6. docker contacts the index to give the checksums for the uploaded images
.. note::
    **It’s possible not to use the Index at all!** In this case, a standalone version of the Registry is deployed to store and serve images. Those images are not authenticated and the security is not guaranteed.
.. note::
    **The Index can be replaced!** For a privately deployed Registry, a custom Index can be used to serve and validate tokens according to different policies.
Docker computes the checksums and submits them to the Index at the end of the push. When a repository name does not have checksums on the Index, it means that the push is in progress (since checksums are submitted at the end).
API (pushing repos foo/bar):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. (Docker -> Index) PUT /v1/repositories/foo/bar/
**Headers**:
Authorization: Basic sdkjfskdjfhsdkjfh==
X-Docker-Token: true
**Action**::
- in the index, we allocate a new repository and set it to initialized
**Body**::
(The body contains the list of images that are going to be pushed, with empty checksums. The checksums will be set at the end of the push)::
If the push fails and they need to start again, what happens in the index? There will already be a record for the namespace/name, but it will be marked initialized. Should we allow it, or mark the name as already used? One edge case could be if someone pushes the same thing at the same time from two different shells.
If it's a retry on the Registry, Docker has a cookie (provided by the registry after token validation). So the Index won’t have to provide a new token.
2.3 Delete
----------
If you need to delete something from the index or registry, we need a nice clean way to do that. Here is the workflow.
1. Docker contacts the index to request a delete of a repository “samalba/busybox” (authentication required with user credentials)
2. If authentication works and repository is valid, “samalba/busybox” is marked as deleted and a temporary token is returned
3. Send a delete request to the registry for the repository (along with the token)
4. Registry A contacts the Index to verify the token (the token must correspond to the repository name)
5. The Index validates the token. Registry A deletes the repository and everything associated with it.
6. docker contacts the index to let it know the repository was removed from the registry; the index then removes all records from the database.
.. note::
    The Docker client should present an "Are you sure?" prompt to confirm the deletion before starting the process. Once it starts it can't be undone.
- Authenticate a user as a repos owner (for a central referenced repository)
3.1 Without an Index
--------------------
Using the Registry without the Index can be useful to store the images on a private network without having to rely on an external entity controlled by dotCloud.
In this case, the registry will be launched in a special mode (--standalone? --no-index?). In this mode, the only thing that changes is that the Registry will never contact the Index to verify a token. It will be the Registry owner's responsibility to authenticate the user who pushes (or even pulls) an image, using any mechanism (HTTP auth, IP-based, etc...).
In this scenario, the Registry is responsible for the security in case of data corruption since the checksums are not delivered by a trusted entity.
As hinted previously, a standalone registry can also be implemented by any HTTP server handling GET/PUT requests (or even only GET requests if no write access is necessary).
3.2 With an Index
-----------------
The Index data needed by the Registry is simple:
- Serve the checksums
- Provide and authorize a Token
In the scenario of a Registry running on a private network with the need to centralize and authorize, it’s easy to use a custom Index.
The only challenge will be to tell Docker to contact (and trust) this custom Index. Docker will be configurable at some point to use a specific Index; it’ll be the private entity's responsibility (basically the organization which uses Docker in a private environment) to maintain the Index and the Docker configuration among its consumers.
4. The API
==========
The first version of the api is available here: https://github.com/jpetazzo/docker/blob/acd51ecea8f5d3c02b00a08176171c59442df8b3/docs/images-repositories-push-pull.md
4.1 Images
----------
The format returned in the images is not defined here (for layer and json), basically because Registry stores exactly the same kind of information as Docker uses to manage them.
The format of ancestry is a line-separated list of image ids, in age order. I.e. the image’s parent is on the last line, the parent of the parent on the next-to-last line, etc.; if the image has no parent, the file is empty.
- **username**: min 4 characters, max 30 characters, must match the regular
expression [a-z0-9\_].
- **password**: min 5 characters
**Valid**: return HTTP 200
Errors: HTTP 400 (we should create error codes for possible errors)
- invalid json
- missing field
- wrong format (username, password, email, etc)
- forbidden name
- name already exists
.. note::
    A user account will be valid only if the email has been validated (a validation link is sent to the email address).
4.2.2 Update a user (Index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
PUT /v1/users/<username>
**Body**:
{"password": "toto"}
.. note::
    The email address can also be updated; if it is, the user will need to re-verify the new email address.
4.2.3 Login (Index)
^^^^^^^^^^^^^^^^^^^
Does nothing but ask for user authentication. Can be used to validate credentials. HTTP Basic Auth for now; this may change in the future.
GET /v1/users
**Return**:
- Valid: HTTP 200
- Invalid login: HTTP 401
- Account inactive: HTTP 403 Account is not Active
4.3 Tags (Registry)
-------------------
The Registry does not know anything about users. Even though repositories are under usernames, that is just a namespace for the registry. This allows us to implement organizations or different namespaces per user later, without modifying the Registry’s API.
The following naming restrictions apply:
- Namespaces must match the same regular expression as usernames (See 4.2.1.)
- Repository names must match the regular expression [a-zA-Z0-9-_.]
4.3.1 Get all tags
^^^^^^^^^^^^^^^^^^
GET /v1/repositories/<namespace>/<repository_name>/tags
For the Index to “resolve” the repository name to a Registry location, it uses the X-Docker-Endpoints header. In other words, these requests always add an “X-Docker-Endpoints” header to indicate the location of the registry which hosts this repository.
4.4.1 Get the images
^^^^^^^^^^^^^^^^^^^^^
GET /v1/repositories/<namespace>/<repo_name>/images
Usually, the Registry provides a Cookie when a Token verification succeeds. Every time the Registry passes a Cookie, you have to pass the same cookie back.::
:description: Build a new image from the Dockerfile passed via stdin
:keywords: build, docker, container, documentation
================================================
``build`` -- Build a container from a Dockerfile
================================================
::
Usage: docker build [OPTIONS] PATH | URL | -
Build a new container image from the source code at PATH
-t="": Tag to be applied to the resulting image in case of success.
-q=false: Suppress verbose build output.
When a single Dockerfile is given as the URL, no context is set. When a git repository is set as the URL, the repository is used as the context.
Examples
--------
.. code-block:: bash

    docker build .
| This will read the Dockerfile from the current directory. It will also send any other files and directories found in the current directory to the docker daemon.
| The contents of this directory would be used by ADD commands found within the Dockerfile.
| This will send a lot of data to the docker daemon if the current directory contains a lot of data.
| If the absolute path is provided instead of '.', only the files and directories required by the ADD commands from the Dockerfile will be added to the context and transferred to the docker daemon.
|
.. code-block:: bash

    docker build - < Dockerfile
| This will read a Dockerfile from Stdin without context. Due to the lack of a context, no contents of any local directory will be sent to the docker daemon.
| ADD doesn't work when running in this mode due to the absence of the context, thus having no source files to copy to the container.
.. code-block:: bash

    docker build github.com/creack/docker-firefox
| This will clone the github repository and use it as context. The Dockerfile at the root of the repository is used as Dockerfile.
| Note that you can specify an arbitrary git repository by using the 'git://' schema.
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
-a=map[]: Attach to stdin, stdout or stderr.
-c=0: CPU shares (relative weight)
-cidfile="": Write the container ID to the file
-d=false: Detached mode: Run container in the background, print new container id
-e=[]: Set environment variables
-h="": Container host name
-i=false: Keep stdin open even if not attached
-m=0: Memory limit (in bytes)
-n=true: Enable networking for this container
-p=[]: Map a network port to the container
-t=false: Allocate a pseudo-tty
-u="": Username or UID
-dns=[]: Set custom dns servers for the container
-v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro]. If "host-dir" is missing, then docker creates a new volume.
-volumes-from="": Mount all volumes from the given container.
-entrypoint="": Overwrite the default entrypoint set by the image.
Examples
--------
.. code-block:: bash

    docker run -cidfile /tmp/docker_test.cid ubuntu echo "test"
| This will create a container and print "test" to the console. The cidfile flag makes docker attempt to create a new file and write the container ID to it. If the file exists already, docker will return an error. Docker will close this file when docker run exits.
:description: An introduction to docker and standard containers?
:keywords: containers, lxc, concepts, explanation
Building blocks
===============
.. _images:
Images
------
An original container image. These are stored on disk and are comparable with what you normally expect from a stopped virtual machine image. Images are stored in (and retrieved from) a repository.
Images are stored on your local file system under /var/lib/docker/images.
.. _containers:
Containers
----------
A container is a local version of an image. It can be running or stopped; the equivalent would be a virtual machine instance.
Containers are stored on your local file system under /var/lib/docker/containers.
:description: An introduction to docker and standard containers?
:keywords: containers, lxc, concepts, explanation
:note: This version of the introduction is temporary, just to make sure we don't break the links from the website when the documentation is updated
Introduction
============
Docker - The Linux container runtime
------------------------------------
Docker complements LXC with a high-level API which operates at the process level. It runs unix processes with strong guarantees of isolation and repeatability across servers.
Docker is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.
- **Heterogeneous payloads** Any combination of binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, you name it. No more juggling between domain-specific tools. Docker can deploy and run them all.
- **Any server** Docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments.
- **Isolation** docker isolates processes from each other and from the underlying host, using lightweight containers.
- **Repeatability** Because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run.
What is a Standard Container?
-----------------------------
Docker defines a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in
a format that is self-describing and portable, so that any compliant runtime can run it without extra dependencies, regardless of the underlying machine and the contents of the container.
The spec for Standard Containers is currently a work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment.
A great analogy for this is the shipping container. Just like Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.
Standard operations
~~~~~~~~~~~~~~~~~~~
Just like shipping containers, Standard Containers define a set of STANDARD OPERATIONS. Shipping containers can be lifted, stacked, locked, loaded, unloaded and labelled. Similarly, standard containers can be started, stopped, copied, snapshotted, downloaded, uploaded and tagged.
Content-agnostic
~~~~~~~~~~~~~~~~~~~
Just like shipping containers, Standard Containers are CONTENT-AGNOSTIC: all standard operations have the same effect regardless of the contents. A shipping container will be stacked in exactly the same way whether it contains Vietnamese powder coffee or spare Maserati parts. Similarly, Standard Containers are started or uploaded in the same way whether they contain a postgres database, a php application with its dependencies and application server, or Java build artifacts.
Infrastructure-agnostic
~~~~~~~~~~~~~~~~~~~~~~~~~~
Both types of containers are INFRASTRUCTURE-AGNOSTIC: they can be transported to thousands of facilities around the world, and manipulated by a wide variety of equipment. A shipping container can be packed in a factory in Ukraine, transported by truck to the nearest routing center, stacked onto a train, loaded into a German boat by an Australian-built crane, stored in a warehouse at a US facility, etc. Similarly, a standard container can be bundled on my laptop, uploaded to S3, downloaded, run and snapshotted by a build server at Equinix in Virginia, uploaded to 10 staging servers in a home-made Openstack cluster, then sent to 30 production instances across 3 EC2 regions.
Designed for automation
~~~~~~~~~~~~~~~~~~~~~~~~~~
Because they offer the same standard operations regardless of content and infrastructure, Standard Containers, just like their physical counterparts, are extremely well-suited for automation. In fact, you could say automation is their secret weapon.
Many things that once required time-consuming and error-prone human effort can now be programmed. Before shipping containers, a bag of powder coffee was hauled, dragged, dropped, rolled and stacked by 10 different people in 10 different locations by the time it reached its destination. 1 out of 50 disappeared. 1 out of 20 was damaged. The process was slow, inefficient and cost a fortune - and was entirely different depending on the facility and the type of goods.
Similarly, before Standard Containers, by the time a software component ran in production, it had been individually built, configured, bundled, documented, patched, vendored, templated, tweaked and instrumented by 10 different people on 10 different computers. Builds failed, libraries conflicted, mirrors crashed, post-it notes were lost, logs were misplaced, cluster updates were half-broken. The process was slow, inefficient and cost a fortune - and was entirely different depending on the language and infrastructure provider.
Industrial-grade delivery
~~~~~~~~~~~~~~~~~~~~~~~~~~
There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded on the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.
With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality.
Standard Container Specification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(TODO)
Image format
~~~~~~~~~~~~
Standard operations
~~~~~~~~~~~~~~~~~~~
- Copy
- Run
- Stop
- Wait
- Commit
- Attach standard streams
- List filesystem changes
- ...
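Although the spec itself is still a TODO, most of the operations listed above already have a counterpart in the docker CLI. The mapping below is only an indicative sketch: the names are placeholders, and "Copy" is omitted because it has no single obvious equivalent::

    docker run base /bin/echo hello   # Run
    docker stop <container-id>        # Stop
    docker wait <container-id>        # Wait (blocks, then prints the exit code)
    docker commit <container-id> user/snapshot   # Commit
    docker attach <container-id>      # Attach standard streams
    docker diff <container-id>        # List filesystem changes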
Execution environment
~~~~~~~~~~~~~~~~~~~~~
Root filesystem
^^^^^^^^^^^^^^^
Environment variables
^^^^^^^^^^^^^^^^^^^^^
Process arguments
^^^^^^^^^^^^^^^^^
Networking
^^^^^^^^^^
Process namespacing
^^^^^^^^^^^^^^^^^^^
Resource limits
^^^^^^^^^^^^^^^
Process monitoring
^^^^^^^^^^^^^^^^^^
Logging
^^^^^^^
Signals
^^^^^^^
Pseudo-terminal allocation
^^^^^^^^^^^^^^^^^^^^^^^^^^
Security
^^^^^^^^