Compare commits


314 Commits

Author SHA1 Message Date
Jessica Frazelle
0baf609845 Bump version to v1.7.0
Signed-off-by: Jessica Frazelle <princess@docker.com>
2015-06-17 22:01:36 -07:00
Mary Anthony
e45e9e8fbc Fixing seds, deleting old stuff
Signed-off-by: Mary Anthony <mary@docker.com>

Updating sed, adding script to avoid redirects, removing mkdocs

Signed-off-by: Mary Anthony <mary@docker.com>

Ignoring graphics with sed

Signed-off-by: Mary Anthony <mary@docker.com>

Fixing kitematic image

Signed-off-by: Mary Anthony <mary@docker.com>

Removing draft

Signed-off-by: Mary Anthony <mary@docker.com>

Fixing link

Signed-off-by: Mary Anthony <mary@docker.com>

removing from the menu

Signed-off-by: Mary Anthony <mary@docker.com>

Updating order of project material

Signed-off-by: Mary Anthony <mary@docker.com>

Removing from Registry v2 content per Olivier

Signed-off-by: Mary Anthony <mary@docker.com>

tweaking the touchup

Signed-off-by: Mary Anthony <mary@docker.com>

Removing include; only used four places; hugo global var replace

Signed-off-by: Mary Anthony <mary@docker.com>

Entering fixes from page-by-page

Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit 328dbd0aa2)
2015-06-17 22:01:34 -07:00
Mary Anthony
62fa0ac765 First pass of updates
Working docs
Update after check
update to centos 7 after second test
Updating with hopefully correct urls
Adding thaJetzah's comments
Updating with the new images
Updating after a visual check

Signed-off-by: Mary Anthony <mary@docker.com>

Updating with comments

Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit f9ab04ad13)
2015-06-17 12:54:03 -07:00
Amy Lindburg
1b6403b9f1 Update plugins.md
Fixed broken link.

Signed-off-by: Amy Lindburg <amy.lindburg@docker.com>

Update plugins.md

Some other broken links!

Signed-off-by: Amy Lindburg <amy.lindburg@docker.com>

Update plugin_api.md

Fixing broken links.

Signed-off-by: Amy Lindburg <amy.lindburg@docker.com>

Update plugins_volume.md

Fixing more links.

Signed-off-by: Amy Lindburg <amy.lindburg@docker.com>
(cherry picked from commit 0a529b6e7a)
2015-06-17 12:54:03 -07:00
Arnaud Porterie
f0c8b90524 Update libnetwork vendoring
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
2015-06-16 11:38:16 -07:00
ChaYoung You
60ed011bed Remove sources/ under docs directory
See #13936.

Signed-off-by: ChaYoung You <yousbe@gmail.com>
(cherry picked from commit 3f4eeca68f)
2015-06-16 10:34:30 -07:00
Jessica Frazelle
f99e0cea8a add gpg fingerprint for experimental
Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit 957622d452)
2015-06-16 09:45:53 -07:00
Samuel Karp
984be835db Adjust disallowed CpuShares in /containers/create
Previous versions of libcontainer allowed CpuShares that were greater
than the maximum or less than the minimum supported by the kernel, and
relied on the kernel to do the right thing. Newer libcontainer fails
after creating the container if the requested CpuShares is different
from what was actually created by the kernel, which breaks compatibility
with earlier Docker Remote API versions. This change explicitly adjusts
the requested CpuShares in API versions < 1.20.

Signed-off-by: Samuel Karp <skarp@amazon.com>
(cherry picked from commit ed39fbeb2a)
2015-06-15 21:37:46 -07:00
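The clamping behavior described in the commit above can be sketched roughly like this in Go; the bounds of 2 and 262144 and the function names are illustrative assumptions, not the daemon's actual code:

```go
package main

import "fmt"

const (
	// Assumed kernel limits for cpu.shares; illustrative constants only.
	minCPUShares = 2
	maxCPUShares = 262144
)

// adjustCPUShares clamps the requested value for API versions < 1.20,
// mirroring the compatibility behavior described above.
func adjustCPUShares(apiVersionLessThan120 bool, requested int64) int64 {
	if !apiVersionLessThan120 || requested == 0 {
		return requested
	}
	if requested < minCPUShares {
		return minCPUShares
	}
	if requested > maxCPUShares {
		return maxCPUShares
	}
	return requested
}

func main() {
	fmt.Println(adjustCPUShares(true, 1))      // 2
	fmt.Println(adjustCPUShares(true, 500000)) // 262144
	fmt.Println(adjustCPUShares(false, 1))     // 1 (newer API: left as requested)
}
```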
Michael Crosby
9fc1c4efb9 Get Mtu from default route
If no Mtu value is provided to the docker daemon, get the mtu from the
default route's interface.  If there is no default route, default to a
mtu of 1500.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit ff4e58ff56)
2015-06-15 17:32:10 -07:00
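A rough Linux-only sketch of the fallback logic described in the commit above, parsing /proc/net/route to find the default route's interface; the real daemon code may query netlink instead, and the function name is illustrative:

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"os"
	"strings"
)

// defaultMTU returns the MTU of the interface carrying the default route,
// falling back to 1500 when no default route can be found.
func defaultMTU() int {
	const fallback = 1500
	f, err := os.Open("/proc/net/route")
	if err != nil {
		return fallback
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	s.Scan() // skip the header line
	for s.Scan() {
		fields := strings.Fields(s.Text())
		// fields[1] is the destination in hex; all zeros means the default route.
		if len(fields) > 1 && fields[1] == "00000000" {
			if iface, err := net.InterfaceByName(fields[0]); err == nil {
				return iface.MTU
			}
		}
	}
	return fallback
}

func main() {
	fmt.Println("mtu:", defaultMTU())
}
```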
Derek McGowan
012bf4129d Store layer digests on pull
Currently digests are not stored on pull, causing a simple re-tag or re-push to send up all layers. Storing the digests on pull will allow subsequent pushes to the same repository to not push up content.
This does not address pushing content to a new repository. When content is pushed to a new repository, the digest will be recalculated. Since only one digest is currently stored, it may cause a new content push to the original repository.

Fixes #13883

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
(cherry picked from commit a98ea87e46)
2015-06-15 15:06:23 -07:00
Mary Anthony
0bd5953a88 retooling for hugo
Tweaking for Hugo
Updating the Dockerfile with new sed; fix broken link on Kitematic
Fixing image pull for Dockerfile
Removing docs targets

Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit f93fee5f48)
2015-06-15 14:43:25 -07:00
David Calavera
197f2e0693 Revert "contrib/init: unshare mount namespace for inits"
This reverts commit b6569b6b82.

Signed-off-by: David Calavera <david.calavera@gmail.com>
(cherry picked from commit d8592eaff8)
2015-06-15 11:39:13 -07:00
Brian Goff
73e41874b6 Fixes content-type/length for stats stream=false
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 855a056af7)
2015-06-12 11:56:31 -07:00
Darren Shepherd
64552d0b49 Set omitempty for IP and PublicPort to conform w/ API 1.18
Signed-off-by: Darren Shepherd <darren@rancher.com>
(cherry picked from commit 09de92b891)
2015-06-12 10:46:07 -07:00
Chun Chen
84b21b55b9 Fix send on closed channel bug
Signed-off-by: Chun Chen <chenchun.feed@gmail.com>
(cherry picked from commit a408790de8)
2015-06-12 10:29:58 -07:00
Madhu Venugopal
7ffbd7eaee Vendoring in libnetwork to fix #13873.
Libnetwork sha# e578e95aa101441481411ff1d620f343895f24fe

Signed-off-by: Madhu Venugopal <madhu@docker.com>
(cherry picked from commit f3d1826350)
2015-06-12 10:29:58 -07:00
Brian (bex) Exelbierd
469534c855 Update man page Dockerfile to use go-md2man v1.0.1 and go-lang 1.4
The main Dockerfile was updated; this update brings the
sub-directory-specific file in line with it.

Fixes #12866

Signed-off-by: Brian Exelbierd <bex@pobox.com>
(cherry picked from commit 5d51118c7c)
2015-06-11 14:12:55 -07:00
Eric-Olivier Lamey
85d6e7c4ec Display empty string instead of <nil> when IP opt is nil.
Fixes #13878.

Signed-off-by: Eric-Olivier Lamey <eo@lamey.me>
(cherry picked from commit 9ad89281ae)
2015-06-11 12:53:36 -07:00
Doug Davis
a1f16a3738 Remove duplicate call to net.ParseIP
and a little cleanup

Signed-off-by: Doug Davis <dug@us.ibm.com>
(cherry picked from commit b180de55ca)
2015-06-11 12:53:36 -07:00
David Calavera
2ef2967570 Cleanup driver and graph db after stopping containers.
Signed-off-by: David Calavera <david.calavera@gmail.com>
(cherry picked from commit 0964a664e8)
2015-06-11 12:49:52 -07:00
Jörg Thalheim
ceda1e75a4 zfs: correctly apply selinux context
fixes #13858

Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
(cherry picked from commit 19c31a703f)
2015-06-11 12:48:45 -07:00
Jana Radhakrishnan
2519689e99 Vendoring libnetwork to fix stale arp cache issue
Vendoring in libnetwork 90638ec9cf7fa7b7f5d0e96b0854f136d66bff92

Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
(cherry picked from commit 386ab25137)
2015-06-11 12:48:45 -07:00
Eric-Olivier Lamey
009ba67dd0 Fix docs URL not using https.
Fixes #13838.

Signed-off-by: Eric-Olivier Lamey <eo@lamey.me>
(cherry picked from commit 212dfb45de)
2015-06-11 12:48:45 -07:00
Jessica Frazelle
4743f3b3aa update gitignore for new manpages
Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit f88e620359)
2015-06-11 12:48:45 -07:00
Mary Anthony
759cdd8ed6 Moving man pages out of docs
Adding in other areas per comments
Updating with comments; equalizing generating man page info
Updating with duglin's comments
Doug is right here again; fixing.

Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit eacae64bd8)
2015-06-10 15:32:19 -07:00
Michael Crosby
ff770d33cd Revert shared container rootfs
This is breaking various setups where the host's rootfs is mount shared
correctly and breaks live migration with bind mounts.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit c9d71317be)
2015-06-10 14:16:11 -07:00
Brian Goff
1d3f7cc012 Default events since to current time
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 74c12aa429)
2015-06-10 10:38:33 -07:00
David Calavera
5291de79ae Allow to downgrade local volumes from > 1.7 to 1.6.
Signed-off-by: David Calavera <david.calavera@gmail.com>
(cherry picked from commit bd9814f0db)
2015-06-10 10:19:12 -07:00
Zefan Li
821f9450fc Cleanup Daemon.verifyVolumesInfo() a bit
vols.VolumesRW has been initialized so it can't be nil. Furthermore
it's ok to read a nil map.

Signed-off-by: Zefan Li <lizefan@huawei.com>
(cherry picked from commit 8b4c0decfc)
2015-06-10 10:19:11 -07:00
John Howard
6a04940f86 Windows: Fix PR13278 compile break
Signed-off-by: John Howard <jhoward@microsoft.com>
(cherry picked from commit 71eadd4176)
2015-06-10 10:19:11 -07:00
Alexander Morozov
cf13497ca8 Update libcontainer to v2.1.1
It includes fix for mounting / as volume on SELinux.
docker/libcontainer#619

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit 38acd31e8a)
2015-06-09 17:47:07 -07:00
Jana Radhakrishnan
2b15a888cd libnetwork: Add garbage collection trigger
When the daemon is going down, trigger immediate
garbage collection of deleted libnetwork resources,
such as namespace paths, since there will be no way to
remove them when the daemon restarts.

Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
(cherry picked from commit c68e7f96f9)
2015-06-09 17:28:08 -07:00
Tibor Vass
455ad3afe2 Do not set auth headers if 302
This patch ensures no auth headers are set for v1 registries if there
was a 302 redirect.

This also ensures v2 does not use authTransport.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 123a0582b2)
2015-06-09 15:37:55 -07:00
Alessandro Boch
b5086a7494 Add integration test for unpublished ports in ps output
- This is a test to assert the fix for #13734

Signed-off-by: Alessandro Boch <aboch@docker.com>
(cherry picked from commit 7b9ae696d8)
2015-06-09 11:47:25 -07:00
Jessica Frazelle
80157b35ac skip test on lxc
Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit 1392f99a97)
2015-06-09 11:47:25 -07:00
Zefan Li
4ebcd0d544 test: Skip TestDevicePermissions on lxc
Closes: #13641

Signed-off-by: Zefan Li <lizefan@huawei.com>
(cherry picked from commit e55649192e)
2015-06-09 11:47:25 -07:00
Flavio Castelli
3e4284bd7c Added openSUSE and SUSE Linux Enterprise support to install.sh
Handle docker installation on openSUSE and SUSE Linux Enterprise
via https://get.docker.io/

Signed-off-by: Flavio Castelli <fcastelli@suse.com>
(cherry picked from commit d0c4c7c83f)
2015-06-09 10:14:15 -07:00
Eric-Olivier Lamey
eb4798a10e Tiny improvements to systemd docs.
Show how to use `systemctl show` and recommend against modifying
system unit files in `/usr` and `/lib`.

Fixes #13796.

Signed-off-by: Eric-Olivier Lamey <eo@lamey.me>
(cherry picked from commit 68bfd9e3ae)
2015-06-09 10:14:15 -07:00
Doug Davis
6c90a239ed Fix COPY/ADD quoted/json form
Minor tweak to the quoted/json form and made the man page look like the Dockerfile
docs. Without the `,`, people may think there should be a space-delimited list.

Signed-off-by: Doug Davis <dug@us.ibm.com>
(cherry picked from commit f4a3e8bef0)
2015-06-09 10:14:15 -07:00
Eric-Olivier Lamey
8e81f3711f Fix a typo and a minor formatting issue in the docs.
Signed-off-by: Eric-Olivier Lamey <eo@lamey.me>
(cherry picked from commit 173d0918a8)
2015-06-09 10:14:15 -07:00
Doug Davis
d10ed90080 Minor doc edit to add clarity around the --volume path format
Also add a comment to the ValidatePath func so devs/reviewers
know exactly what it's looking for.

Signed-off-by: Doug Davis <dug@us.ibm.com>
(cherry picked from commit 3fcf53db92)
2015-06-09 10:14:15 -07:00
Ben Severson
7223584e10 quick doc fix for windows versions
Signed-off-by: Ben Severson <BenSeverson@users.noreply.github.com>
(cherry picked from commit a448b7aff2)
2015-06-09 10:14:15 -07:00
Chris Wahl
c4ba3a8352 Corrected VMWare to VMware
Corrected spelling of VMware.

Signed-off-by: Chris Wahl <github@wahlnetwork.com>
(cherry picked from commit 55cea4952b)
2015-06-09 10:14:15 -07:00
Antonio Murdaca
33588c23c3 Avoid nil pointer dereference while creating a container with an empty Config
Signed-off-by: Antonio Murdaca <runcom@linux.com>
(cherry picked from commit 4ce817796e)
2015-06-08 11:39:17 -07:00
Eric-Olivier Lamey
6a5cbd0dd4 Fix docs URL in systemd service file.
Fixes #13799.

Signed-off-by: Eric-Olivier Lamey <eo@lamey.me>
(cherry picked from commit dbf5e36fd6)
2015-06-08 11:39:17 -07:00
Eric-Olivier Lamey
495640005a Restore --default-gateway{,-v6} daemon options.
This was added before the libnetwork merge, and then lost. Fixes #13755.

Signed-off-by: Eric-Olivier Lamey <eo@lamey.me>
(cherry picked from commit 5fa60149e2)
2015-06-08 11:36:48 -07:00
John Howard
5c408158a6 Windows: factor out bridge server+config
Signed-off-by: John Howard <jhoward@microsoft.com>
(cherry picked from commit ead2f80073)
2015-06-08 11:36:48 -07:00
Arnaud Porterie
ede1c3f0c2 Remove reference to experimental release
Remove reference to experimental releases as it is really a nightly
channel rather than a scheduled release.

Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
(cherry picked from commit d8680f7beb)
2015-06-05 10:50:50 -07:00
Arnaud Porterie
09295a1453 Rename EXPERIMENTAL.md to README.md
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
(cherry picked from commit 8352f2e264)
2015-06-05 10:50:50 -07:00
Mary Anthony
1f6eca7b99 Moving experimental
Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit 95dfc4c4a5)
2015-06-05 10:50:50 -07:00
Jessica Frazelle
8f211fde46 fix lxc build
Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit 0adfb908a6)
2015-06-05 10:29:08 -07:00
Sven Dowideit
5c70b1eadd Bring over DHE docs updates for publishing
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
(cherry picked from commit 9ed2cb300b)
2015-06-05 09:04:00 -07:00
Tianon Gravi
b3adf94d81 Fix release script to release _both_ .deb files
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
(cherry picked from commit 4572329d4b)
2015-06-04 16:39:20 -07:00
David Calavera
8e7ea1a8fd Migrate data from old vfs paths to new local volumes path.
Signed-off-by: David Calavera <david.calavera@gmail.com>
(cherry picked from commit 16a5590c5b)
2015-06-04 12:14:21 -07:00
Alessandro Boch
c11128520c Fix for #13720
Signed-off-by: Alessandro Boch <aboch@docker.com>
(cherry picked from commit ea180a73bc)
2015-06-04 12:12:50 -07:00
Tianon Gravi
25fc200dcf Swap build-* to use UTC instead of local time
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
(cherry picked from commit aa54a93f74)
2015-06-04 09:03:54 -07:00
Tianon Gravi
d17f27a13f Make "DEST" a make.sh construct instead of ad-hoc
Using "DEST" for our build artifacts inside individual bundlescripts was already well-established convention, but this officializes it by having `make.sh` itself set the variable and create the directory, also handling CYGWIN oddities in a single central place (instead of letting them spread outward from `hack/make/binary` like was definitely on their roadmap, whether they knew it or not; sneaky oddities).

Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
(cherry picked from commit ac3388367b)
2015-06-04 09:03:54 -07:00
Madhu Venugopal
f2f2c492e1 Using container NetworkDisabled to fix #13725
container.config.NetworkDisabled is set both for the daemon's
DisableNetwork and for the --networking=false case. Hence this
flag is used instead to fix #13725.

There is an existing integration-test to catch this issue,
but it is working for the wrong reasons.

Signed-off-by: Madhu Venugopal <madhu@docker.com>
(cherry picked from commit 83208a531d)
2015-06-04 09:03:54 -07:00
Jessica Frazelle
18af7fdbba fix version struct on old versions
Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit 229b599259)
2015-06-03 17:33:10 -07:00
Antonio Murdaca
339f0a128a SizeRW & SizeRootFs omitted if empty in /container/json call
Signed-off-by: Antonio Murdaca <runcom@linux.com>
(cherry picked from commit 6945ac2d02)
2015-06-03 17:33:10 -07:00
Jessica Frazelle
5a0fa9545a Update urls from .com to .org.
I added 301 redirects from dockerproject.com to dockerproject.org but may as
well make sure everything is updated anyways.

Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit 7943bce894)
2015-06-03 14:23:45 -07:00
Alexander Morozov
930f691919 Support CloseNotifier for events
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit 9e7fc245a7)
2015-06-03 13:47:20 -07:00
Antonio Murdaca
bf371ab2a5 Do not omit empty json field in /containers/json api response
Signed-off-by: Antonio Murdaca <runcom@linux.com>
(cherry picked from commit 725f34151c)
2015-06-03 12:43:08 -07:00
Michael Crosby
d283999b1c Fix nat integration tests
This removes complexity from the current implementation and makes the test
correct, asserting the right things.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 4f42097883)
2015-06-02 18:54:54 -07:00
Arnaud Porterie
2786c4d0e2 Update vendoring
Update libnetwork to 005bc475ee49a36ef2ad9c112d1b5ccdaba277d4
Synchronize changes to distribution (was missing _test.go file)

Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
(cherry picked from commit 1664d6cd66)
2015-06-02 17:42:41 -07:00
Antonio Murdaca
2939617c8b Expose old config field for api < 1.19
Signed-off-by: Antonio Murdaca <me@runcom.ninja>
(cherry picked from commit 6deaa58ba5)
2015-06-02 16:02:13 -07:00
David Calavera
f5e3c68c93 Update libnetwork to 4ded6fe3641b71863cc5985652930ce40efc3af4
Signed-off-by: David Calavera <david.calavera@gmail.com>
(cherry picked from commit ad244668c3)
2015-06-02 12:06:40 -07:00
David Calavera
c96b03797a Bump libnetwork for 1.7 release.
Signed-off-by: David Calavera <david.calavera@gmail.com>
(cherry picked from commit ad41efd9a2)
2015-06-02 12:06:40 -07:00
Stephen J Day
0f06c54f40 Break down loadManifest function into constituent parts
Signed-off-by: Stephen J Day <stephen.day@docker.com>
(cherry picked from commit 84413be3c9)
2015-06-02 11:56:42 -07:00
Stephen J Day
32fcacdedb Add tests for loadManifest digest verification
Signed-off-by: Stephen J Day <stephen.day@docker.com>
(cherry picked from commit 74528be903)
2015-06-02 11:56:42 -07:00
Stephen J Day
140e36a77e Attempt to retain tagging behavior
Signed-off-by: Stephen J Day <stephen.day@docker.com>
(cherry picked from commit 1e653ab645)
2015-06-02 11:56:42 -07:00
Stephen J Day
4945a51f73 Properly verify manifests and layer digests on pull
To ensure manifest integrity when pulling by digest, this changeset ensures
that not only the remote digest provided by the registry is verified but also
that the digest provided on the command line is checked, as well. If this check
fails, the pull is cancelled with an error. Inspection also showed that
while layers were being verified against their digests, the error was being
treated as a tech preview image signing verification error. This, in fact, is not
a tech preview and opens up the docker daemon to man-in-the-middle attacks that
can be avoided with the v2 registry protocol.

As a matter of cleanliness, the digest package from the distribution project
has been updated to latest version. There were some recent improvements in the
digest package.

Signed-off-by: Stephen J Day <stephen.day@docker.com>
(cherry picked from commit 06612cc0fe)
2015-06-02 11:56:41 -07:00
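A minimal sketch of streaming digest verification along the lines the commit above describes, using crypto/sha256 from the standard library rather than the distribution digest package it mentions:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"strings"
)

// verifyDigest streams body to dst while hashing it, then checks the result
// against the expected "sha256:<hex>" digest (for example, the digest given
// on the command line or returned by the registry).
func verifyDigest(body io.Reader, expected string, dst io.Writer) error {
	h := sha256.New()
	if _, err := io.Copy(dst, io.TeeReader(body, h)); err != nil {
		return err
	}
	got := "sha256:" + hex.EncodeToString(h.Sum(nil))
	if got != expected {
		return fmt.Errorf("digest verification failed: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	payload := "hello layer"
	sum := sha256.Sum256([]byte(payload))
	want := "sha256:" + hex.EncodeToString(sum[:])
	err := verifyDigest(strings.NewReader(payload), want, io.Discard)
	fmt.Println("verified:", err == nil)
}
```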
Jeffrey van Gogh
c881349e5e Upon HTTP 302 redirect do not include "Authorization" header on 'untrusted' registries.
Refactoring in Docker 1.7 changed the behavior to add this header, whereas Docker <= 1.6 wouldn't emit this header on an HTTP 302 redirect.

This closes #13649

Signed-off-by: Jeffrey van Gogh <jvg@google.com>
(cherry picked from commit 65c5105fcc)
2015-06-02 11:56:41 -07:00
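A hedged sketch of the idea behind the two redirect commits above: attach credentials only to requests that actually target the trusted registry host, so a request redirected to another host goes out without the Authorization header. The type and field names are illustrative, not the daemon's actual transport:

```go
package main

import (
	"fmt"
	"net/http"
)

// authTransport adds a token only for requests to the trusted registry host;
// any request that ends up at a different host (for example after an HTTP 302
// redirect to a storage backend) is sent without credentials.
type authTransport struct {
	base  http.RoundTripper
	host  string
	token string
}

func (t *authTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	r := req.Clone(req.Context())
	if r.URL.Host == t.host {
		r.Header.Set("Authorization", "Bearer "+t.token)
	} else {
		r.Header.Del("Authorization") // never leak credentials to other hosts
	}
	return t.base.RoundTrip(r)
}

func main() {
	client := &http.Client{Transport: &authTransport{
		base:  http.DefaultTransport,
		host:  "registry.example.com",
		token: "secret",
	}}
	_ = client // requests redirected to other hosts will not carry the Authorization header
	fmt.Println("client configured")
}
```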
Victor Vieux
ecdf1297a3 do not print empty keys in docker info
Signed-off-by: Victor Vieux <victorvieux@gmail.com>
(cherry picked from commit c790aa36ea)
2015-06-02 11:03:06 -07:00
Antonio Murdaca
db3daa7bdd Fix wrong kill signal parsing
Signed-off-by: Antonio Murdaca <antonio.murdaca@gmail.com>
(cherry picked from commit 39eec4c25b)
2015-06-02 09:43:43 -07:00
Richard
c0fc839e2b If no endpoint could be established with the given mirror configuration,
fall back to pulling from the Hub, as per v1 behavior.

Signed-off-by: Richard Scothern <richard.scothern@gmail.com>
(cherry picked from commit 6e4ff1bb13)
2015-06-01 18:11:10 -07:00
Alexander Morozov
2e29eadb5c Fix race condition in registry/session
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit 9d98c28855)
2015-06-01 14:45:22 -07:00
Tonis Tiigi
ecda5c0a6d Fix breakouts from git root during build
Signed-off-by: Tõnis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 7f7ebeffe8)
2015-06-01 13:06:12 -07:00
Arnaud Porterie
4872f86d00 Add note about overlay not being production ready
Add a paragraph in cli.md mentioning that overlay is not a production
ready graphdriver.

Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
(cherry picked from commit 67cb748e26)
2015-06-01 12:26:57 -07:00
Lei Jitang
6ca78fff63 Add docker stats --no-stream show cpu usage
Signed-off-by: Lei Jitang <leijitang@huawei.com>
(cherry picked from commit 96123a1fd5)

Conflicts:
	daemon/stats.go
2015-06-01 10:09:14 -07:00
David R. Jenni
e9c51c3edc Fix issue #10184.
Merge user specified devices correctly with default devices.
Otherwise the user specified devices end up without permissions.

Signed-off-by: David R. Jenni <david.r.jenni@gmail.com>
(cherry picked from commit c913c9921b)
2015-06-01 09:32:50 -07:00
Jessica Frazelle
b38c720f19 fix experimental version and release script
add api version experimental

Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit b372f9f224)

Conflicts:
	docs/sources/reference/api/docker_remote_api_v1.20.md
2015-06-01 09:31:32 -07:00
Jana Radhakrishnan
3bed793ba1 Do not attempt releasing network when not attached to any network
Sometimes container.cleanup() can be called from multiple paths
for the same container during error conditions, from both the monitor
and the regular startup path. So if the container network has already
been released, do not try to release it again.

Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
(cherry picked from commit 6cdf8623d5)
2015-06-01 08:53:49 -07:00
Doug Davis
ab7e7a7338 Carry #11858
Continues 11858 by:
- Making sure the exit code is always zero when we ask for help
- Making sure the exit code isn't zero when we print help on error cases
- Making sure both short and long usage go to the same stream (stdout vs stderr)
- Making sure all docker commands support --help
- Test that all cmds send --help to stdout, exit code 0, show full usage, no blank lines at end
- Test that all cmds (that support it) show short usage on bad arg to stderr, no blank line at end
- Test that all cmds complain about a bad option, no blank line at end
- Test that docker (w/o subcmd) does the same stuff mentioned above properly

Signed-off-by: Doug Davis <dug@us.ibm.com>
(cherry picked from commit 8324d7918b)
2015-06-01 08:53:49 -07:00
Lei Jitang
4fefcde5a6 Ensure all the running containers are killed on daemon shutdown
Signed-off-by: Lei Jitang <leijitang@huawei.com>
(cherry picked from commit bdb77078b5)
2015-05-29 16:50:23 -07:00
Arnaud Porterie
63e3b7433f Add note about overlay not being production ready
Add a paragraph in cli.md mentioning that overlay is not a production
ready graphdriver.

Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
(cherry picked from commit 67cb748e26)
2015-05-29 16:28:00 -07:00
Jessica Frazelle
736c216d58 fix bug with rmi multiple tag
Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit 185f392691)
2015-05-29 15:12:33 -07:00
Antonio Murdaca
85f0e01833 Fix regression in stats API endpoint where stream query param default is true
Signed-off-by: Antonio Murdaca <me@runcom.ninja>
(cherry picked from commit ec97f41465)
2015-05-29 15:06:01 -07:00
Zhang Wei
eb3ed436a4 bug fix: close http response body no longer in use
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
(cherry picked from commit 6c49576a86)
2015-05-29 14:34:25 -07:00
Burke Libbey
d77d7a0056 Use bufio.Reader instead of bufio.Scanner for logger.Copier
When using a scanner, log lines over 64K will crash the Copier with
bufio.ErrTooLong. Subsequently, the ioutils.bufReader will grow without
bound as the logs are no longer being flushed to disk.

Signed-off-by: Burke Libbey <burke.libbey@shopify.com>
(cherry picked from commit f779cfc5d8)
2015-05-29 13:39:21 -07:00
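A small sketch contrasting the two approaches the commit above describes: bufio.Reader.ReadString handles lines of any length, whereas bufio.Scanner gives up with bufio.ErrTooLong once a line exceeds its 64K default token size. The function name is illustrative:

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// copyLines reads arbitrarily long lines with bufio.Reader; a bufio.Scanner
// would fail with bufio.ErrTooLong on lines over bufio.MaxScanTokenSize.
func copyLines(src io.Reader, dst io.Writer) error {
	reader := bufio.NewReader(src)
	for {
		line, err := reader.ReadString('\n')
		if len(line) > 0 {
			if _, werr := io.WriteString(dst, line); werr != nil {
				return werr
			}
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	longLine := strings.Repeat("x", 128*1024) + "\n" // longer than the 64K scanner limit
	var out strings.Builder
	if err := copyLines(strings.NewReader(longLine), &out); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("copied bytes:", out.Len())
}
```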
Antonio Murdaca
a0aad0d4de Add syslog-address log-opt
Signed-off-by: Antonio Murdaca <me@runcom.ninja>
(cherry picked from commit e8c88d2533)
2015-05-29 10:28:28 -07:00
Mary Anthony
baf9ea5ce4 Fixes to the 1.19 version
updating with changes to this instant
Signed-off-by: Mary Anthony <mary@docker.com>

(cherry picked from commit 30901609a8)
2015-05-29 10:28:28 -07:00
David Calavera
a6fe70c696 Mount bind volumes coming from the old volumes configuration.
Signed-off-by: David Calavera <david.calavera@gmail.com>
(cherry picked from commit 53d9609de4)
2015-05-29 09:52:36 -07:00
Harald Albers
eee959a9b9 Update bash completion for 1.7.0
Signed-off-by: Harald Albers <github@albersweb.de>
(cherry picked from commit b2832dffe5)
2015-05-29 09:52:36 -07:00
Alexander Morozov
d000ba05fd Treat systemd listeners as all other
Fix #13549

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit 6f9fa64645)
2015-05-28 14:16:28 -07:00
Antonio Murdaca
d8eff999e0 Fix race in httpsRequestModifier.ModifyRequest when writing tlsConfig
Signed-off-by: Antonio Murdaca <me@runcom.ninja>
(cherry picked from commit a27395e6df)
2015-05-28 11:33:34 -07:00
Jason Shepherd
fb124bcad0 adding nicer help when missing arguments (#11858)
Signed-off-by: Jason Shepherd <jason@jasonshepherd.net>
(cherry picked from commit 48231d623f)
2015-05-28 11:33:34 -07:00
Lei Jitang
9827107dcf Fix automatically publish ports without --publish-all
Signed-off-by: Lei Jitang <leijitang@huawei.com>
(cherry picked from commit 9a09664b51)
2015-05-28 10:03:29 -07:00
Jessica Frazelle
e57d649057 fix release script
Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit 9f619282db)
2015-05-28 10:03:29 -07:00
Tianon Gravi
d6ff6e2c6f Add fedora:22 to our rpm targets
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
(cherry picked from commit 96903c837f)
2015-05-28 10:03:29 -07:00
Jessica Frazelle
cc2944c7af Merge remote-tracking branch 'origin/master' into bump_v1.7.0
* origin/master: (999 commits)
  Review feedback:     - Match verbiage with other output     - Remove dead code and clearer flow
  Vendoring in libnetwork 2da2dc055de5a474c8540871ad88a48213b0994f
  Restore the stripped registry version number
  Use SELinux labels for volumes
  apply selinux labels volume patch on volumes refactor
  Modify volume mounts SELinux labels on the fly based on :Z or :z
  Remove unused code
  Remove redundant set header
  Return err if we got err on parseForm
  script cleaned up
  Fix unregister stats on when rm running container
  Fix container unmount networkMounts
  Windows: Set default exec driver to windows
  Fixes title, line wrap, and Adds install area Tibor's comment Updating with the new plugins Entering comments from Seb
  Add regression test to make sure we can load old containers with volumes.
  Do not force `syscall.Unmount` on container cleanup.
  Revert "Add docker exec run a command in privileged mode"
  Cleanup container rm funcs
  Allow mirroring only for the official index
  Registry v2 mirror support.
  ...

Conflicts:
	CHANGELOG.md
	VERSION
	api/client/commands.go
	api/client/utils.go
	api/server/server.go
	api/server/server_linux.go
	builder/shell_parser.go
	builder/words
	daemon/config.go
	daemon/container.go
	daemon/daemon.go
	daemon/delete.go
	daemon/execdriver/execdrivers/execdrivers_linux.go
	daemon/execdriver/lxc/driver.go
	daemon/execdriver/native/driver.go
	daemon/graphdriver/aufs/aufs.go
	daemon/graphdriver/driver.go
	daemon/logger/syslog/syslog.go
	daemon/networkdriver/bridge/driver.go
	daemon/networkdriver/portallocator/portallocator.go
	daemon/networkdriver/portmapper/mapper.go
	daemon/networkdriver/portmapper/mapper_test.go
	daemon/volumes.go
	docs/Dockerfile
	docs/man/docker-create.1.md
	docs/man/docker-login.1.md
	docs/man/docker-logout.1.md
	docs/man/docker-run.1.md
	docs/man/docker.1.md
	docs/mkdocs.yml
	docs/s3_website.json
	docs/sources/installation/windows.md
	docs/sources/reference/api/docker_remote_api_v1.18.md
	docs/sources/reference/api/registry_api_client_libraries.md
	docs/sources/reference/builder.md
	docs/sources/reference/run.md
	docs/sources/release-notes.md
	graph/graph.go
	graph/push.go
	hack/install.sh
	hack/vendor.sh
	integration-cli/docker_cli_build_test.go
	integration-cli/docker_cli_pull_test.go
	integration-cli/docker_cli_run_test.go
	pkg/archive/changes.go
	pkg/broadcastwriter/broadcastwriter.go
	pkg/ioutils/readers.go
	pkg/ioutils/readers_test.go
	pkg/progressreader/progressreader.go
	registry/auth.go
	vendor/src/github.com/docker/libcontainer/cgroups/fs/cpu.go
	vendor/src/github.com/docker/libcontainer/cgroups/fs/devices.go
	vendor/src/github.com/docker/libcontainer/cgroups/fs/memory.go
	vendor/src/github.com/docker/libcontainer/cgroups/systemd/apply_systemd.go
	vendor/src/github.com/docker/libcontainer/container_linux.go
	vendor/src/github.com/docker/libcontainer/init_linux.go
	vendor/src/github.com/docker/libcontainer/integration/exec_test.go
	vendor/src/github.com/docker/libcontainer/integration/utils_test.go
	vendor/src/github.com/docker/libcontainer/nsinit/README.md
	vendor/src/github.com/docker/libcontainer/process.go
	vendor/src/github.com/docker/libcontainer/rootfs_linux.go
	vendor/src/github.com/docker/libcontainer/update-vendor.sh
	vendor/src/github.com/docker/libnetwork/portallocator/portallocator_test.go
2015-05-27 19:13:22 -07:00
Michael Crosby
9cee8c4ed0 Merge pull request #13133 from crosbymichael/bump_v1.6.2
Bump to version v1.6.2
2015-05-13 13:46:56 -07:00
Michael Crosby
7c8fca2ddb Bump to version 1.6.2
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-05-11 15:05:16 -07:00
Michael Crosby
376188dcd3 Update libcontainer to 227771c8f611f03639f0ee
This fixes regressions for docker containers mounting into
/sys/fs/cgroup.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-05-11 15:05:07 -07:00
David Calavera
dc610864aa Merge pull request #13072 from jfrazelle/bump_v1.6.1
Bump v1.6.1
2015-05-07 14:59:23 -07:00
Jessica Frazelle
97cd073598 Bump version to 1.6.1
Signed-off-by: Jessica Frazelle <jess@docker.com>
2015-05-07 10:07:01 -07:00
Michael Crosby
d5ebb60bdd Allow libcontainer to eval symlink destination
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>

Add tests for mounting into /proc and /sys

Mounting volumes into these two destinations should be
prohibited.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-04-30 14:08:02 -07:00
Michael Crosby
83c5131acd Update libcontainer to 1b471834b45063b61e0aedefbb1
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-04-30 14:07:52 -07:00
Michael Crosby
b6a9dc399b Mask reads from timer_stats and latency_stats
These files in /proc should not be able to be read as well
as written to.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-04-30 10:25:26 -07:00
Michael Crosby
614a9690e7 Mount RO for timer_stats and latency_stats in proc
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-04-22 11:15:26 -07:00
Michael Crosby
545b440a80 Mount /proc/fs as readonly
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-04-22 11:15:17 -07:00
Michael Crosby
3162024e28 Prevent write access to /proc/asound
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>

Conflicts:
	integration-cli/docker_cli_run_test.go
2015-04-22 11:14:46 -07:00
Jessie Frazelle
769acfec29 Merge pull request #11635 from jfrazelle/bump_v1.6.0
Bump v1.6.0
2015-04-16 12:31:02 -07:00
Jessica Frazelle
47496519da Bump version to v1.6.0
Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-04-16 11:38:05 -07:00
Mary Anthony
fdd21bf032 for 1.6
Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit f44aa3b1fb)
2015-04-16 11:37:59 -07:00
Mary Anthony
d928dad8c8 In with the old menu layout
Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit fe8fb24b53)
2015-04-15 21:34:50 -07:00
Mary Anthony
82366ce059 Adding environment variables for sub projects
Fixes issue #12186
Fixing variables per Jess

Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit ef8b917fac)
2015-04-15 21:34:49 -07:00
Mary Anthony
6410c3c066 Updating with man pages for distribution
Went through the man pages to update for the
v2 instance. Checked against the commands.

Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit b6d55ebcbc)
2015-04-15 21:34:49 -07:00
Michael Crosby
9231dc9cc0 Ensure state is destroyed on daemon restart
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit a5f7c4aa31)
2015-04-15 21:34:49 -07:00
Deng Guangxing
6a3f37386b Inspect show right LogPath in json-file driver
Signed-off-by: Deng Guangxing <dengguangxing@huawei.com>
(cherry picked from commit acf025ad1b)
2015-04-15 17:37:43 -07:00
Darren Shepherd
d9a0c05208 Change syslog format and facility
This patch changes two things

1. Set facility to LOG_DAEMON
2. Remove ": " from tag so that the tag + pid become a single column in
   the log

Signed-off-by: Darren Shepherd <darren@rancher.com>
(cherry picked from commit 05641ccffc)
2015-04-15 14:49:24 -07:00
Jessica Frazelle
24cb9df189 try to modprobe bridge
Signed-off-by: Jessica Frazelle <jess@docker.com>
(cherry picked from commit b3867b8899)
2015-04-15 09:10:56 -07:00
Lewis Marshall
c51cd3298c Prevent Upstart post-start stanza from hanging
Once the job has failed and is respawned, the status becomes `docker
respawn/post-start` after subsequent failures (as opposed to `docker
stop/post-start`), so the post-start script needs to take this into
account.

I could not find specific documentation on the job transitioning to the
`respawn/post-start` state, but this was observed on Ubuntu 14.04.2.

Signed-off-by: Lewis Marshall <lewis@lmars.net>
(cherry picked from commit 302e3834a0)
2015-04-14 16:23:59 -07:00
Alexander Morozov
10affa8018 Get process list after PID 1 dead
Fix #11087

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit ac8bd12b39)
2015-04-13 12:28:22 -07:00
Deng Guangxing
ce27fa2716 move syslog-tag to syslog.New function
Signed-off-by: Deng Guangxing <dengguangxing@huawei.com>
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 4f91a333d5)
2015-04-13 12:17:26 -07:00
Brendan Dixon
8d83409e85 Turned off Ctrl+C processing by Windows shell
Signed-off-by: Brendan Dixon <brendand@microsoft.com>
(cherry picked from commit c337bfd2e0)
2015-04-10 16:41:59 -07:00
Nathan LeClaire
3a73b6a2bf Allow SEO crawling from docs site
Signed-off-by: Nathan LeClaire <nathan.leclaire@gmail.com>

Docker-DCO-1.1-Signed-off-by: Nathan LeClaire <nathan.leclaire@gmail.com> (github: nathanleclaire)

(cherry picked from commit de03f4797b)
2015-04-10 15:59:35 -07:00
Tianon Gravi
f99269882f Commonalize more bits of install.sh (especially standardizing around "cat <<-EOF")
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
(cherry picked from commit 6842bba163)
2015-04-10 11:20:15 -07:00
Eric Windisch
568a9703ac Wrap installer in a function
This will ensure that the install script will not
begin executing until after it has been downloaded, should
it be used in a 'curl | bash' workflow.

Signed-off-by: Eric Windisch <eric@windisch.us>
(cherry picked from commit fa961ce046)
2015-04-10 11:20:15 -07:00
Aaron Welch
faaeb5162d add centos to supported distros
Signed-off-by: Aaron Welch <welch@packet.net>
(cherry picked from commit a6b8f2e3fe)
2015-04-10 11:20:15 -07:00
Brendan Dixon
b5613baac2 Corrected int16 overflow and buffer sizes
Signed-off-by: Brendan Dixon <brendand@microsoft.com>
(cherry picked from commit a264e1e83d)
2015-04-10 11:20:15 -07:00
Alexander Morozov
c956efcd52 Update libcontainer to bd8ec36106086f72b66e1be85a81202b93503e44
Fix #12130

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit 2f853b9493)
2015-04-07 16:22:37 -07:00
Alexander Morozov
5455864187 Test case for network mode chain container -> container -> host
Issue #12130

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit ce69dafe4d)
2015-04-07 16:22:37 -07:00
Ahmet Alp Balkan
ceb72fab34 Swap width/height in GetWinsize and monitorTtySize
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
(cherry picked from commit 6e44246fed)
2015-04-06 16:55:38 -07:00
Brendan Dixon
c6ea062a26 Windows console fixes
Corrected integer size passed to Windows
Corrected DisableEcho / SetRawTerminal to not modify state
Cleaned up and made routines more idiomatic
Corrected raw mode state bits
Removed duplicate IsTerminal
Corrected off-by-one error
Minor idiomatic change

Signed-off-by: Brendan Dixon <brendand@microsoft.com>
(cherry picked from commit 1a36a113d4)
2015-04-06 11:48:33 -07:00
Ahmet Alp Balkan
0e045ab50c docs: Add new windows installation tutorials
Updated Windows installation documentation with newest
screencasts and Chocolatey instructions to install windows
client CLI.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
(cherry picked from commit 2b320a2309)
2015-04-03 13:51:53 -07:00
Michael Crosby
eeb05fc081 Update libcontainaer to d00b8369852285d6a830a8d3b9
Fixes #12015

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit d12fef1515)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-04-02 14:52:27 -07:00
Michael Crosby
e1381ae328 Return closed channel if oom notification fails
When working with Go channels you must not set it to nil or else the
channel will block forever.  It will not panic reading from a nil chan
but it blocks.  The correct way to do this is to create the channel then
close it as the correct results to the caller will be returned.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 7061a993c5)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-04-01 16:14:14 -07:00
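A minimal illustration of the channel pattern the commit above describes; the notifyOnOOM signature here is an assumption for demonstration only:

```go
package main

import "fmt"

// notifyOnOOM returns an already-closed channel when setup fails, so callers
// that receive from it return immediately instead of blocking forever on a
// nil channel.
func notifyOnOOM(setupFails bool) <-chan struct{} {
	c := make(chan struct{})
	if setupFails {
		close(c) // closed channel: receives return immediately
		return c
	}
	// Real code would send on c when an OOM event arrives; omitted here.
	return c
}

func main() {
	ch := notifyOnOOM(true)
	_, ok := <-ch // returns immediately with ok == false
	fmt.Println("received from closed channel, ok =", ok)

	// A nil channel, by contrast, would block forever:
	// var nilCh chan struct{}
	// <-nilCh // deadlock
}
```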
Alexander Morozov
45ad064150 Fix panic in integration tests
Closing activationLock only if it's not closed already. This is needed
only because the integration tests use docker code directly and don't
care about global state.

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit c717475714)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-31 16:43:16 -07:00
Darren Shepherd
72e14a1566 Avoid ServeApi race condition
If job "acceptconnections" is called before "serveapi" the API Accept()
method will hang forever waiting for activation.  This is due to the fact
that when "acceptconnections" ran the activation channel was nil.

Signed-off-by: Darren Shepherd <darren@rancher.com>
(cherry picked from commit 8f6a14452d)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-31 16:43:16 -07:00
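One way to picture the fix for the race above is to create the activation channel eagerly (or behind a sync.Once) so the two jobs can run in either order. The sketch below is illustrative, not the actual server code:

```go
package main

import (
	"fmt"
	"sync"
)

// server demonstrates the ordering problem: if the activation channel is
// only created inside Serve, an earlier AcceptConnections call would see a
// nil channel and wait forever. Initializing it via sync.Once removes the
// ordering dependency.
type server struct {
	once       sync.Once
	activation chan struct{}
}

func (s *server) init() {
	s.once.Do(func() { s.activation = make(chan struct{}) })
}

// AcceptConnections may be called before or after Serve.
func (s *server) AcceptConnections() {
	s.init()
	close(s.activation)
}

// Serve blocks until AcceptConnections has been called.
func (s *server) Serve() {
	s.init()
	<-s.activation
	fmt.Println("serving API connections")
}

func main() {
	s := &server{}
	s.AcceptConnections() // called first, as in the bug report
	s.Serve()             // does not hang
}
```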
Vincent Batts
7d7bec86c9 graphdriver: promote overlay above vfs
It's about time to let folks not hit 'vfs' when 'overlay' is supported
on their kernel, especially now that v3.18.y is a long-term kernel.

Signed-off-by: Vincent Batts <vbatts@redhat.com>
(cherry picked from commit 2c72ff1dbf)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-31 14:46:09 -07:00
Derek McGowan
a39d49d676 Fix progress reader output on close
Currently the progress reader won't close properly by not setting the close size.

fixes #11849

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
(cherry picked from commit aa3083f577)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-31 14:18:42 -07:00
Brian Goff
5bf15a013b Use getResourcePath instead
Also cleans up tests to not shell out for file creation.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 63708dca8a)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-31 14:06:32 -07:00
Lei Jitang
9461967eec Fix create volume in a directory which is a symbolic link
Signed-off-by: Lei Jitang <leijitang@huawei.com>
(cherry picked from commit 7583b49125)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-31 14:06:32 -07:00
Alexander Morozov
3be7d11cee Initialize portMapper in RequestPort too
The API requests a port for the daemon before init_networkdriver is called.
The problem is that initialization of the API now depends on initialization of
the daemon, and their initializations run in parallel. The proper fix would be
to just do it sequentially. For now I don't want to refactor it, because it
can bring additional problems in 1.6.0.

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit 584180fce7)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-31 14:00:02 -07:00
Vivek Goyal
d9910b8fd8 container: Do not remove container if any of the resources failed cleanup
Do not remove a container if any of its resources could not be cleaned up. We
don't want to leak resources.

Two new states have been created: RemovalInProgress and Dead. Once a container
is Dead, it cannot be started/restarted. A Dead container signifies a
container we tried to remove but whose removal failed. The user now needs to
figure out what went wrong, correct the situation and try cleanup again.

RemovalInProgress signifies that the container is already being removed. Only
one removal can be in progress.

Also, do not allow start of a container if it is already dead or removal is
in progress.

Also extend the existing force option (-f) of docker rm to not return an error
and to remove the container from the user's view even if resource cleanup failed.
This will allow a user to get back to the old behavior where resources
might leak but at least the user will be able to make progress.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
(cherry picked from commit 40945fc186)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-31 14:00:02 -07:00
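A toy sketch of the two states and the checks described in the commit above; the field and function names are assumptions for illustration, not the daemon's container state implementation:

```go
package main

import (
	"errors"
	"fmt"
)

type containerState struct {
	Dead              bool
	RemovalInProgress bool
}

var (
	errDead     = errors.New("container is dead and cannot be started")
	errRemoving = errors.New("container removal is already in progress")
)

// checkStart refuses to start a container that is dead or being removed.
func (s *containerState) checkStart() error {
	if s.Dead {
		return errDead
	}
	if s.RemovalInProgress {
		return errRemoving
	}
	return nil
}

// setRemoving allows only one removal at a time.
func (s *containerState) setRemoving() error {
	if s.RemovalInProgress {
		return errRemoving
	}
	s.RemovalInProgress = true
	return nil
}

func main() {
	s := &containerState{}
	fmt.Println(s.setRemoving()) // <nil>
	fmt.Println(s.setRemoving()) // container removal is already in progress
	fmt.Println(s.checkStart())  // container removal is already in progress
}
```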
Michael Crosby
f115c32f6b Ensure that bridge driver does not use global mappers
This has a few hacks in it but it ensures that the bridge driver does
not use global state in the mappers, at least as much as possible at this
point without further refactoring. Some of the exported fields are
hacks to handle the daemon port mapping, but this results in a much
cleaner approach and completely removes the global state from the mapper
and allocator.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit d8c628cf08)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-31 10:53:09 -07:00
Michael Crosby
57939badc3 Refactor port allocator to not have ANY global state
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 43a50b0618)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-31 10:53:09 -07:00
Michael Crosby
51ee02d478 Refactor portmapper to remove ALL global state
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 62522c9853)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-31 10:53:09 -07:00
Paul Bellamy
c92860748c Refactor global portallocator and portmapper state
Continuation of: #11660, working on issue #11626.

Wrapped portmapper global state into a struct. Now portallocator and
portmapper have no global state (except configuration, and a default
instance).

Unfortunately, removing the global default instances will break
```api/server/server.go:1539```, and ```daemon/daemon.go:832```, which
both call the global portallocator directly. Fixing that would be a much
bigger change, so for now, have postponed that.

Signed-off-by: Paul Bellamy <paul.a.bellamy@gmail.com>
(cherry picked from commit 87df5ab41b)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-31 10:53:09 -07:00
Paul Bellamy
f582f9717f Refactor global portallocator state into a global struct
Signed-off-by: Paul Bellamy <paul.a.bellamy@gmail.com>
(cherry picked from commit 1257679876)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-31 10:53:09 -07:00
Ahmet Alp Balkan
ebcb36a8d2 windows: monitorTtySize correctly by polling
This change makes `monitorTtySize` work correctly on windows by polling
into win32 API to get terminal size (because there's no SIGWINCH on
windows) and send it to the engine over the Remote API properly.

The average getttysize syscall takes around 30-40 ms on an average Windows
machine as far as I can tell, so in a `for` loop we check every
250ms whether the size has changed or not.

I'm not sure if there's a better way to do it on Windows; if so,
somebody please send a link because I could not find one.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
(cherry picked from commit ebbceea8a7)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-31 10:39:59 -07:00
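A rough sketch of the 250ms polling loop described in the commit above; getTtySize here is a hypothetical stand-in for the win32 console query, since there is no SIGWINCH to react to on Windows:

```go
package main

import (
	"fmt"
	"time"
)

// winsize and getTtySize are placeholders: the real code queries the console
// via a win32 call to learn the current terminal dimensions.
type winsize struct{ width, height int }

func getTtySize() winsize {
	return winsize{width: 80, height: 24} // hypothetical placeholder value
}

// monitorTtySize polls the terminal size every 250ms and invokes resize
// whenever it changes.
func monitorTtySize(stop <-chan struct{}, resize func(winsize)) {
	prev := getTtySize()
	ticker := time.NewTicker(250 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			if cur := getTtySize(); cur != prev {
				prev = cur
				resize(cur)
			}
		}
	}
}

func main() {
	stop := make(chan struct{})
	go monitorTtySize(stop, func(ws winsize) {
		fmt.Printf("resize to %dx%d\n", ws.width, ws.height)
	})
	time.Sleep(600 * time.Millisecond)
	close(stop)
}
```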
Michael Crosby
e6e8f2d717 Update libcontainer to c8512754166539461fd860451ff
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 17ecbcf8ff)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-30 17:19:49 -07:00
Alexander Morozov
317a510261 Use proper wait function for --pid=host
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit 489ab77f4a)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-30 17:19:48 -07:00
Alexander Morozov
5d3a080178 Get child processes before main process die
Signed-off-by: Alexander Morozov <lk4d4@docker.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-30 17:19:48 -07:00
Alexander Morozov
542c84c2d2 Do not mask *exec.ExitError
Fix #11764

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit f468bbb7e8)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-30 17:19:48 -07:00
Harald Albers
f1df74d09d Add missing filters to bash completion for docker images and docker ps
Signed-off-by: Harald Albers <github@albersweb.de>
(cherry picked from commit cf438a542e)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-30 16:42:53 -07:00
Derek McGowan
4ddbc7a62f Compress layers on push to a v2 registry
When buffering to file, add support for compressing the tar contents. Since the digest should be computed while writing the buffer, include digest creation during buffering.

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
(cherry picked from commit 851c64725d)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-30 15:42:14 -07:00
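The single-pass compress-and-digest idea from the commit above can be sketched with io.MultiWriter, hashing the compressed bytes while they are buffered to a file. This is illustrative only, not the actual push code:

```go
package main

import (
	"compress/gzip"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// bufferCompressed writes a gzip-compressed copy of src to a temp file while
// computing the sha256 digest of the compressed bytes in the same pass, so no
// second read of the buffer is needed.
func bufferCompressed(src io.Reader) (path, digest string, err error) {
	f, err := os.CreateTemp("", "layer-*.tar.gz")
	if err != nil {
		return "", "", err
	}
	defer f.Close()

	h := sha256.New()
	gz := gzip.NewWriter(io.MultiWriter(f, h)) // file and hash see identical bytes
	if _, err := io.Copy(gz, src); err != nil {
		return "", "", err
	}
	if err := gz.Close(); err != nil { // flush the gzip footer before reading the hash
		return "", "", err
	}
	return f.Name(), "sha256:" + hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	path, digest, err := bufferCompressed(strings.NewReader("pretend this is a tar stream"))
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(path, digest)
	os.Remove(path)
}
```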
Harald Albers
f72b2c02b8 Do not complete --cgroup-parent as _filedir
This is a follow-up on PR 11708, as suggested by tianon.

Signed-off-by: Harald Albers <github@albersweb.de>
(cherry picked from commit a09cc935c3)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-30 12:02:48 -07:00
Daniel, Dao Quang Minh
af9dab70f8 aufs: apply dirperm1 by default if supported
Automatically detect support for aufs `dirperm1` option and apply it.
`dirperm1` tells aufs to check the permission bits of the directory on the
topmost branch and ignore the permission bits on all lower branches.
It can be used to fix aufs' permission bug (i.e., upper layer having
broader mask than the lower layer).

More information about the bug can be found at https://github.com/docker/docker/issues/783
`dirperm1` man page is at: http://aufs.sourceforge.net/aufs3/man.html

Signed-off-by: Daniel, Dao Quang Minh <dqminh89@gmail.com>
(cherry picked from commit 281abd2c8a)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-30 12:02:48 -07:00
Daniel, Dao Quang Minh
10425e83f2 document dirperm1 fix for #783 in known issues
Since `dirperm1` requires a more recent aufs patch than many current OS releases,
we can't remove #783 completely. This documents that docker will apply `dirperm1`
automatically for systems that support it.

Signed-off-by: Daniel, Dao Quang Minh <dqminh89@gmail.com>
(cherry picked from commit d7bbe2fcb5)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-30 12:02:48 -07:00
Daniel, Dao Quang Minh
9c528dca85 print dirperm1 supported status in docker info
It's easier for users to check if their systems support dirperm1 just by using
docker info

Signed-off-by: Daniel, Dao Quang Minh <dqminh89@gmail.com>
(cherry picked from commit d68d5f2e4b)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-30 12:02:48 -07:00
Ahmet Alp Balkan
cb2c25ad2d docs: remove unused windows images
These images were just sitting around, referenced from
nowhere, and did not seem useful.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
(cherry picked from commit 986ae5d52a)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-30 10:46:16 -07:00
Ahmet Alp Balkan
962dec81ec Update boot2docker on Windows documentation
Boot2Docker experience is updated now that we have a Docker
client on Windows. Instead of running `boot2docker ssh`, users
can also use boot2docker on Windows Command Prompt (`cmd.exe`)
and PowerShell.

Updated documentation and screenshots, added a few details,
reorganized sections by importance, fixed a few errors.

Remaining: the video link in the Demonstration section needs
to be updated once I shoot a new video.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
(cherry picked from commit de09c55394)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-30 10:46:16 -07:00
unclejack
1eae925a3d pkg/broadcastwriter: avoid alloc w/ WriteString
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
(cherry picked from commit db877d8a42)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-30 10:46:16 -07:00
Lei Jitang
3ce2cc8ee7 Add some run option to bash completion
Signed-off-by: Lei Jitang <leijitang@huawei.com>
(cherry picked from commit 7d70736015)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-27 15:11:59 -07:00
Ahmet Alp Balkan
054acc4bee term/winconsole: Identify tty correctly, fix resize problem
This change fixes a bug where stdout/stderr handles are not identified
correctly.

Previously we used to set the window size to a fixed size to fit the default
tty size on the host (80x24). Now the attach/exec commands can correctly
get the terminal size from windows.

We still do not `monitorTtySize()` correctly on Windows and update the tty
size on the host side; in order to fix that we'll provide a
platform-specific `monitorTtySize` implementation in the future.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
(cherry picked from commit 0532dcf3dc)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-26 16:24:40 -07:00
Don Kjer
63cb03a55b Fix for issue 9922: private registry search with auth returns 401
Signed-off-by: Don Kjer <don.kjer@gmail.com>
(cherry picked from commit 6b2eeaf896)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-26 13:03:50 -07:00
Arnaud Porterie
49b6f23696 Remove unused runconfig.Config.SecurityOpt field
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
(cherry picked from commit e39646d2e1)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-25 16:22:34 -07:00
Michael Crosby
299ae6a2e6 Update libcontainer to a6044b701c166fe538fc760f9e2
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-25 12:23:08 -07:00
Vincent Batts
97b521bf10 make.sh: leave around the generated version
For posterity (largely for packagers), let's leave around the generated
version files produced during the build.
They're already ignored in git, and recreated on every build.

Signed-off-by: Vincent Batts <vbatts@redhat.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-25 11:34:55 -07:00
Vincent Batts
7f5937d46c btrfs: #ifdef for build version
We removed it, because upstream removed it. But now it will be coming
back, so work with it either way.

Signed-off-by: Vincent Batts <vbatts@redhat.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-25 11:34:55 -07:00
Jessica Frazelle
b6166b9496 btrfs_noversion: including what was in merge commit from 8fc9e40086 (diff-479b910834cf0e4daea2e02767fd5dc9R1) pr #11417
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-24 22:15:08 -07:00
Jessica Frazelle
b596d025f5 fix 2 integration tests on lxc
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-24 21:47:42 -07:00
Jessica Frazelle
ca32446950 Get rid of panic in stats for lxc
Fix containers dir

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-24 21:47:42 -07:00
Michal Fojtik
5328d6d620 Fix lxc-start in lxc>1.1.0 where containers start daemonized by default
Signed-off-by: Michal Fojtik <mfojtik@redhat.com>

Docker-DCO-1.1-Signed-off-by: Michal Fojtik <mfojtik@redhat.com> (github: jfrazelle)
2015-03-24 21:38:01 -07:00
Michael Crosby
d0023242ab Mkdir for lxc root dir before setup of symlink
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-24 16:30:38 -07:00
Alexander Morozov
3ff002aa1a Use /var/run/docker as root for execdriver
Signed-off-by: Alexander Morozov <lk4d4@docker.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-24 15:43:08 -07:00
Dan Walsh
ea9b357be2 Btrfs has eliminated the BTRFS_BUILD_VERSION in the latest version
They say we should only use the BTRFS_LIB_VERSION.

They will no longer support BTRFS_BUILD_VERSION, since it had to be managed manually.

Docker-DCO-1.1-Signed-off-by: Dan Walsh <dwalsh@redhat.com> (github: rhatdan)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-24 13:12:49 -07:00
Harald Albers
bf1829459f restrict bash completion for hostdir arg to directories
The previous state assumed that the HOSTPATH argument referred to a
file. As clarified by moxiegirl in PR #11305, it is a directory.
Adjusted completion to reflect this.

Signed-off-by: Harald Albers <github@albersweb.de>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-24 13:12:49 -07:00
Vincent Batts
4f744ca781 pkg/archive: ignore mtime changes on directories
On overlay fs, the mtime of directories changes in a container when new
files are added in an upper layer (e.g. '/etc'). This flags the
directory as a change where there was none.

Closes #9874

Signed-off-by: Vincent Batts <vbatts@redhat.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-23 15:29:10 -07:00
Michael Crosby
7dab04383b Update libcontainer to fd0087d3acdc4c5865de1829d4a
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-23 15:05:44 -07:00
Brian Goff
8a003c8134 Improve err message when parsing kernel port range
Signed-off-by: Brian Goff <cpuguy83@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-23 14:26:42 -07:00
Ahmet Alp Balkan
208178c799 Disable ANSI emulation in certain windows shells
This disables the recently added ANSI emulation feature in certain Windows
shells (like ConEmu) where ANSI output is already emulated by default with builtin
functionality in the shell.

MSYS (mingw) runs in a cmd.exe window and doesn't support emulation.

Cygwin doesn't even pass terminal handles to docker.exe as far as I can
tell; the stdin/stdout/stderr handles behave like non-TTY handles, so it is
not even included in the check.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-23 13:32:40 -07:00
sidharthamani
03b36f3451 add syslog driver
Signed-off-by: wlan0 <sid@rancher.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-23 11:41:35 -07:00
Doug Davis
7758553239 Fix some escaping around env var processing
Clarify in the docs that ENV is not recursive

Closes #10391

Signed-off-by: Doug Davis <dug@us.ibm.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-23 11:37:10 -07:00
Arnaud Porterie
10fb5ce6d0 Restore TestPullVerified test
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-23 11:33:08 -07:00
Brian Goff
0959aec1a9 Allow normal volume to overwrite in start Binds
Fixes #9981
Allows a volume which was created by docker (i.e., in
/var/lib/docker/vfs/dir) to be used as a Bind argument via the container
start API and overwrite an existing volume.

For example:

```bash
docker create -v /foo --name one
docker create -v /foo --name two
```

This allows the volume from `one` to be passed into the container start
API as a bind to `two`, and it will overwrite it.

This was possible before 7107898d5c

Signed-off-by: Brian Goff <cpuguy83@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-23 11:04:27 -07:00
Mabin
773f74eb71 Fix hang when starting and attaching multiple containers
Signed-off-by: Mabin <bin.ma@huawei.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-23 11:04:27 -07:00
unclejack
7070d9255a pkg/ioutils: add tests for BufReader
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-23 11:04:27 -07:00
unclejack
2cb4b7f65c pkg/ioutils: avoid huge Buffer growth in bufreader
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-23 11:04:27 -07:00
Mitch Capper
2d80652d8a Change windows default permissions to 755, not 711; read access for all poses little security risk and prevents breaking existing Dockerfiles
Signed-off-by: Mitch Capper <mitch.capper@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-23 11:04:27 -07:00
Jessica Frazelle
81b4691406 Merge origin/master into origin/release
Signed-off-by: Jessica Frazelle <jess@docker.com>

Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-22 23:45:58 -07:00
Arnaud Porterie
4bae33ef9f Merge pull request #10286 from icecrime/bump_v1.5.0
Bump to version v1.5.0
2015-02-10 10:50:09 -08:00
Arnaud Porterie
a8a31eff10 Bump to version v1.5.0
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
2015-02-10 08:14:37 -08:00
Sven Dowideit
68a8fd5c4e updates from review
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-10 08:14:37 -08:00
unclejack
8387c5ab65 update kernel reqs doc; recommend updates on RHEL
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
2015-02-10 08:14:37 -08:00
Sven Dowideit
69498943c3 remove the text-indent and increase the font size
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-10 08:14:37 -08:00
Sven Dowideit
1aeb78c2ae Simplify the sidebar html and css, and then allow the text to wrap
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-10 08:14:36 -08:00
Tibor Vass
331d37f35d Minor nits
Signed-off-by: Tibor Vass <tibor@docker.com>
2015-02-10 08:14:36 -08:00
Tibor Vass
edf3bf7f33 Clarify docs review role
Signed-off-by: Tibor Vass <teabee89@gmail.com>
2015-02-10 08:14:36 -08:00
Tibor Vass
9ee8dca246 A few fixes
Signed-off-by: Tibor Vass <teabee89@gmail.com>
2015-02-10 08:14:36 -08:00
Tibor Vass
aa98bb6c13 New pull request workflow
Signed-off-by: Tibor Vass <teabee89@gmail.com>
2015-02-10 08:14:36 -08:00
Zhang Wei
2aba3c69f9 docs: fix a typo in registry_mirror.md
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
2015-02-10 08:14:36 -08:00
Steve Koch
71a44c769e Add link to user guide to end of 14.04 section
Adding instructions to exit the test shell and a link to the user guide (as is done in the following sections for 12.04 and 13.04/10).

Signed-off-by: Steven Koch <sjkoch@unm.edu>
2015-02-10 08:14:36 -08:00
Wei-Ting Kuo
d8381fad2b Update certificates.md
`openssl req -new -x509 -text -key client.key -out client.cert` creates a self-signed certificate, not a certificate request.

Signed-off-by: Wei-Ting Kuo <waitingkuo0527@gmail.com>
2015-02-09 08:49:05 -08:00
Chen Hanxiao
be379580d0 docs: fix a typo in Dockerfile.5.md
s/Mutliple/Multiple

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
2015-02-09 08:49:05 -08:00
Alexander Morozov
7ea8513479 Fix example about ps and linked containers
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
2015-02-09 08:49:05 -08:00
Sven Dowideit
3b2fe01c78 Documentation on boolean flags is wrong #10517
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:05 -08:00
Sven Dowideit
e8afc22b1f Do some major rearranging of the fedora/centos/rhel installation docs
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:05 -08:00
Lokesh Mandvekar
4e407e6b77 update fedora docs to reflect latest rpm changes
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2015-02-09 08:49:05 -08:00
unclejack
23f1c2ea9e docs/articles/systemd: correct --storage-driver
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
2015-02-09 08:49:05 -08:00
Katie McLaughlin
788047cafb Format awsconfig sample config correctly
Reflow change in commit 195f3a3f removed newlines in the config format.

This change reverts the sample config to the original formatting, which
matches the actual config format of an `awsconfig` file.

Signed-off-by: Katie McLaughlin <katie@glasnt.com>
2015-02-09 08:49:05 -08:00
Sven Dowideit
0c0e7b1b60 Fix a small spelling error in the dm.blkdiscard docs
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:05 -08:00
Sven Dowideit
09d41529a0 Add an initial list of new features in Docker Engine 1.5.0
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:04 -08:00
Sven Dowideit
cb288fefee remove swarm, machine and compose from the 1.5.0 release docs
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:04 -08:00
Sven Dowideit
f7636796c5 The DHE documentation will not be published with 1.5.0
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:04 -08:00
Sven Dowideit
cb5af83444 For now, docker stats appears to be libcontainer only
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:04 -08:00
Mihai Borobocea
96feaf1920 docs: fix typo
There are 2, not 3, RUN instructions in the userguide's Dockerfile.

Signed-off-by: Mihai Borobocea <MihaiBorobocea@gmail.com>
2015-02-09 08:49:04 -08:00
Vincent Giersch
1f03944950 Documents build API "remote" parameter
Introduced in Docker v0.4.5 / Remote API v1.1 (#848), the remote
parameter of the API method POST /build allows specifying a buildable
remote URL (HTTPS, HTTP or Git).

Signed-off-by: Vincent Giersch <vincent.giersch@ovh.net>
2015-02-09 08:49:04 -08:00
Sven Dowideit
6060eedf9c The Hub build webhooks now list the images that have been built
And fix some spelling - repo isn't really a word :)

Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:04 -08:00
Sven Dowideit
d217da854a Spelling mistake in dockerlinks
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:04 -08:00
Victor Vieux
d74d6d981b add crosbymichael and Github -> GitHub
Signed-off-by: Victor Vieux <vieux@docker.com>
2015-02-09 08:49:04 -08:00
Victor Vieux
0205ac33d2 update MAINTAINERS file
Signed-off-by: Victor Vieux <vieux@docker.com>
2015-02-09 08:49:04 -08:00
Sven Dowideit
dbb9d47bdc The reference menu is too big to list more than the latest API docs, so the others can be hidden - they're still linked from the API summary
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:03 -08:00
Sven Dowideit
ddd1d081d7 use the same paths as in the swarm repo, so that their links magically work
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:03 -08:00
Sven Dowideit
d6ac36d929 Docker attach documentation didn't make sense to me
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:03 -08:00
Michael A. Smith
715b94f664 Distinguish ENV from setting environment inline
It's ambiguous to say that `ENV` is _functionally equivalent to prefixing the command with `<key>=<value>`_. `ENV` sets the environment for all future commands, but `RUN` can take chained commands like `RUN foo=bar bash -c 'echo $foo' && bash -c 'echo $foo $bar'`. Users with a solid understanding of `exec` may grok this without confusion, but less experienced users may need this distinction.

Signed-off-by: Michael A. Smith <msmith3@ebay.com>

Improve Environment Handling Descriptions

- Link `ENV` and `Environment Replacement`
- Improve side-effects of `ENV` text
- Rearrange avoiding side effects text

Signed-off-by: Michael A. Smith <msmith3@ebay.com>
2015-02-09 08:49:03 -08:00
Chen Hanxiao
16baca9277 docs: change events --since to fit RFC3339Nano
PR6931 changed the time format to RFC3339Nano,
but the example in cli.md was not changed.

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
2015-02-09 08:49:03 -08:00
J Bruni
627f8a6cd5 Remove File List
This list is outdated. It could be updated instead of removed... but why should it be maintained? I do not see a reason.

Signed-off-by: João Bruni <contato@jbruni.com.br>
2015-02-09 08:49:03 -08:00
Sebastiaan van Stijn
a8a7df203a Fix broken link to project/MAINTAINERS.md
The link to project/MAINTAINERS.md was broken; in
addition, /MAINTAINERS contains more relevant
information on the LGTM process and info
about maintainers of all subsystems.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2015-02-09 08:49:03 -08:00
Thell 'Bo' Fowler
580cbcefd3 Update dockerfile_best-practices.md
Signed-off-by: Thell Fowler <Thell@tbfowler.name>
2015-02-09 08:49:03 -08:00
Yihang Ho
d9c5ce6e97 Fix a tiny typo.
'saving', not 'saveing'

Signed-off-by: Yihang Ho <hoyihang5@gmail.com>
2015-02-09 08:49:03 -08:00
Bradley Cicenas
0fe9b95415 fix project url in readme to point to the correct location,
https://github.com/docker/docker/tree/master/project

Signed-off-by: Bradley Cicenas <bradley.cicenas@gmail.com>
2015-02-09 08:49:03 -08:00
Alexandr Morozov
41d0e4293e Update events format in man page
Signed-off-by: Alexandr Morozov <lk4d4@docker.com>
2015-02-09 08:49:03 -08:00
Doug Davis
26fe640da1 Add builder folks to the top-level maintainers file
Signed-off-by: Doug Davis <dug@us.ibm.com>
2015-02-09 08:49:02 -08:00
Jessica Frazelle
198ca26969 Added tianon's info and changed a typo.
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-02-09 08:49:02 -08:00
Phil Estes
d5365f6fc4 Fix incorrect IPv6 addresses/subnet notations in docs
Fixes a few typos in IPv6 addresses. Will make it easier for users who
actually try and copy/paste or use the example addresses directly.

Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
2015-02-09 08:49:02 -08:00
Brian Goff
5f7e814ee7 Update go-md2man
The update fixes some rendering issues, including improper escaping of '$' in
blocks, and actual parsing of blockcode.

`ID=$(sudo docker run -d fedora /usr/bin/top -b)` was being converted to
`ID=do docker run -d fedora/usr/bin/top -b)`

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2015-02-09 08:49:02 -08:00
Doug Davis
a84aca0985 Fix docs so WORKDIR mentions it works for COPY and ADD too
The docs around COPY/ADD already mentioned that it will do a relative
copy/add based on WORKDIR, so that part is already ok.  Just needed to
tweak the WORKDIR section since w/o mentioning COPY/ADD it can be misleading.

Noticed by @phemmer

Signed-off-by: Doug Davis <dug@us.ibm.com>
2015-02-09 08:49:02 -08:00
Solomon Hykes
68ec22876a Proposal for an improved project structure.
Note: this deprecates the fine-grained, high-overlap cascading MAINTAINERS files,
and replaces them with a single top-level file, using a new structure:

* More coarse grained subsystems with dedicated teams of maintainers
* Core maintainers with a better-defined role and a wider scope (if it's
not in a subsystem, it's up to the core maintainers to figure it out)
* Architects
* Operators

This is a work in progress; the goal is to start a conversation

Signed-off-by: Solomon Hykes <solomon@docker.com>
Signed-off-by: Erik Hollensbe <github@hollensbe.org>
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
Signed-off-by: Tibor Vass <teabee89@gmail.com>
Signed-off-by: Victor Vieux <vieux@docker.com>
Signed-off-by: Vincent Batts <vbatts@redhat.com>
2015-02-09 08:49:02 -08:00
Josh Hawn
0dcc3559e9 Updated image spec docs to clarify image JSON
The title `Image JSON Schema` was used as a header in the section
which describes the layout and fields of the image metadata JSON
file. It was pointed out that `JSON Schema` is its own term for
describing JSON in a machine-and-human-readable format, while the
word "Schema" in this context was used more generically to say that
the section is meant to be an example and outline of the Image JSON.

http://spacetelescope.github.io/understanding-json-schema/

This section now has the title `Image JSON Description` in order
to not cause this confusion.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-02-09 08:49:02 -08:00
Derek McGowan
d4c731ecd6 Limit push and pull to v2 official registry
No longer push to the official v2 registry when it is available. This allows pulling images from the v2 registry without defaulting pushes to it. Only pull official images from the v2 official registry.

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-04 10:05:16 -08:00
Arnaud Porterie
2dba4e1386 Fix client-side validation of Dockerfile path
Arguments to `filepath.Rel` were reversed, causing all builder tests to
fail.

Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
2015-02-03 09:07:17 -08:00
Tibor Vass
06a7f471e0 builder: prevent Dockerfile from leaving the build context
Signed-off-by: Tibor Vass <teabee89@gmail.com>
2015-02-03 09:07:17 -08:00
Doug Davis
4683d01691 Add an API test for docker build -f Dockerfile
I noticed that while we have tests to make sure that people don't
specify a Dockerfile (via -f) that's outside of the build context
when using the docker cli, we don't check on the server side to make
sure that API users have the same check done. This would be a security
risk.

While in there I had to add a new util func for the tests to allow us to
send content to the server that isn't json encoded - in this case a tarball

Signed-off-by: Doug Davis <dug@us.ibm.com>
2015-02-03 09:07:17 -08:00
Michael Crosby
6020a06399 Print zeros for initial stats collection on stopped container
When calling stats on a stopped container, print out zeros for all of the
values to populate the initial table. This signals to the user that the
operation completed and will not block.

Closes #10504

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-02-03 08:50:47 -08:00
Sebastiaan van Stijn
cc0bfccdf4 Replace "base" with "ubuntu" in documentation
The API documentation uses the "base" image in various
places. The "base" image is deprecated and it is no longer
possible to download this image.

This changes the API documentation to use "ubuntu" instead.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2015-02-02 13:34:12 -08:00
Chen Hanxiao
0c18ec62f3 docs: fix another typo in docker-build man page
s/arbtrary/arbitrary

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
2015-02-02 13:30:11 -08:00
John Tims
a9825c9bd8 Fix documentation typo
Signed-off-by: John Tims <john.k.tims@gmail.com>
2015-02-02 13:30:11 -08:00
Josh Hawn
908be50c44 Handle gorilla/mux route url bug
When getting the URL from a v2 registry url builder, it does not
honor the scheme from the endpoint object and will cause an https
endpoint to return urls starting with http.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-02-02 13:30:11 -08:00
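For illustration, a minimal Go sketch of the kind of workaround described above: force the scheme (and host) from the known endpoint onto the URL a router builds. The function name and example URLs are hypothetical, not Docker's actual code.

```go
package main

import (
	"fmt"
	"net/url"
)

// fixScheme copies the endpoint's scheme and host onto a router-built URL,
// so an https endpoint never ends up with http:// links.
func fixScheme(built, endpoint *url.URL) *url.URL {
	u := *built // copy; leave the router's URL untouched
	u.Scheme = endpoint.Scheme
	u.Host = endpoint.Host
	return &u
}

func main() {
	built, _ := url.Parse("http://registry.example.com/v2/library/ubuntu/manifests/latest")
	endpoint, _ := url.Parse("https://registry.example.com")
	fmt.Println(fixScheme(built, endpoint)) // prints an https:// URL
}
```
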
Josh Hawn
2a82dba34d Fix token basic auth header issue
When requesting a token, the basic auth header is always being set even
if there is no username value. This patch corrects this and does not set
the basic auth header if the username is empty.

Also fixes an issue where pulling all tags from a v2 registry succeeds
when the image does not actually exist on the registry.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-02-02 13:30:11 -08:00
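A small sketch of the guard described above, assuming a plain net/http token request; the URL and function name are illustrative only.

```go
package main

import (
	"fmt"
	"net/http"
)

// newTokenRequest only attaches a basic auth header when a username is
// present, so anonymous token requests go out without Authorization.
func newTokenRequest(tokenURL, username, password string) (*http.Request, error) {
	req, err := http.NewRequest("GET", tokenURL, nil)
	if err != nil {
		return nil, err
	}
	if username != "" {
		req.SetBasicAuth(username, password)
	}
	return req, nil
}

func main() {
	req, _ := newTokenRequest("https://auth.example.com/token", "", "")
	_, set := req.Header["Authorization"]
	fmt.Println("Authorization header set:", set) // false for anonymous requests
}
```
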
Erik Hollensbe
13fd2a908c Remove "OMG IPV6" log message
Signed-off-by: Erik Hollensbe <erik+github@hollensbe.org>
2015-02-02 13:30:11 -08:00
Arnaud Porterie
464891aaf8 Fix race in test registry setup
Wait for the local registry-v2 test instance to become available to
avoid random test failures.

Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
2015-02-02 13:30:10 -08:00
Phil Estes
9974663ed7 Add missing $HOST in a couple places in HTTPS/TLS setup docs
Fix typos in setup docs where tcp://:2376 is used without the $HOST
parameter.

Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com>
2015-02-02 13:30:10 -08:00
Derek McGowan
76269e5c9d Add push fallback to v1 for the official registry
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-02 13:30:10 -08:00
Jessica Frazelle
1121d7c4fd Validate toml
Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-02-02 13:30:10 -08:00
Josh Hawn
7e197575a2 Remove Checksum field from image.Image struct
The checksum is now being stored in a separate file beside the image
JSON file.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-02-02 13:30:10 -08:00
Derek McGowan
3dc3059d94 Store tar checksum in separate file
Fixes #10432

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-02 13:30:10 -08:00
Derek McGowan
7b6de74c9a Revert client signature
Supports multiple tag push with daemon signature

Fixes #10444

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-02 13:30:10 -08:00
Phil Estes
cad8adacb8 Setup TCP keep-alive on hijacked HTTP(S) client <--> daemon sessions
Fixes #10387

Without TCP keep-alive set on socket connections to the daemon, any
long-running container with std{out,err,in} attached that doesn't
read/write for a minute or longer will end in ECONNTIMEDOUT (depending
on network settings/OS defaults, etc.), leaving the docker client side
believing it is still waiting on data with no actual underlying socket
connection.

This patch turns on TCP keep-alive for the underlying TCP connection
for both TLS and standard HTTP hijacked daemon connections from the
docker client, with a keep-alive timeout of 30 seconds.

Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com>
2015-02-02 13:30:10 -08:00
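Roughly, the keep-alive setup looks like the following sketch (illustrative only, not the actual client code): the raw TCP connection gets keep-alive with a 30-second period before it is handed to TLS or hijacked.

```go
package main

import (
	"log"
	"net"
	"time"
)

// dialWithKeepAlive dials addr and enables TCP keep-alive probes every
// 30 seconds on the underlying connection.
func dialWithKeepAlive(addr string) (net.Conn, error) {
	conn, err := net.Dial("tcp", addr)
	if err != nil {
		return nil, err
	}
	if tcp, ok := conn.(*net.TCPConn); ok {
		tcp.SetKeepAlive(true)
		tcp.SetKeepAlivePeriod(30 * time.Second)
	}
	return conn, nil
}

func main() {
	conn, err := dialWithKeepAlive("example.com:80")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Println("connected with keep-alive to", conn.RemoteAddr())
}
```
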
Srini Brahmaroutu
6226deeaf4 Removing the check on Architecture to build and run Docker on IBM Power and Z platforms
Signed-off-by: Srini Brahmaroutu <srbrahma@us.ibm.com>
2015-02-02 13:30:10 -08:00
gdi2290
3ec19f56cf Update AUTHORS file and .mailmap
added `LC_ALL=C.UTF-8` due to osx
http://www.inmotionhosting.com/support/website/ssh/speed-up-grep-searches-with-lc-all

Signed-off-by: Patrick Stapleton <github@gdi2290.com>
2015-02-02 13:30:10 -08:00
Josh Hawn
48c71787ed No longer compute checksum when installing images.
While checksums are verified when a layer is pulled from v2 registries,
there are known issues where the checksum may change when the layer diff
is computed again. To avoid these issues, the checksum should no longer
be computed and stored until after it has been extracted to the docker
storage driver. The checksums are instead computed lazily before they
are pushed to a v2 registry.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-02-02 13:30:10 -08:00
Jessica Frazelle
604731a930 Some small updates to the dev env docs.
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-02-02 13:30:09 -08:00
Derek McGowan
e8650e01f8 Defer creation of trust key file until needed
Fixes #10442

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-02 13:30:09 -08:00
Sven Dowideit
817d04d992 DHE documentation placeholder and Navbar changes
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-02 13:30:09 -08:00
Sven Dowideit
cdff91a01c comment out the docker and curl lines we'll run later
Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@docker.com> (github: SvenDowideit)
2015-02-02 13:30:09 -08:00
Mehul Kar
6f26bd0e16 Improve explanation of port mapping from containers
Signed-off-by: Mehul Kar <mehul.kar@gmail.com>
2015-02-02 13:30:09 -08:00
Tianon Gravi
3c090db4e9 Update .deb version numbers to be more sane
Example output:
```console
root@906b21a861fb:/go/src/github.com/docker/docker# ./hack/make.sh binary ubuntu
bundles/1.4.1-dev already exists. Removing.

---> Making bundle: binary (in bundles/1.4.1-dev/binary)
Created binary: /go/src/github.com/docker/docker/bundles/1.4.1-dev/binary/docker-1.4.1-dev

---> Making bundle: ubuntu (in bundles/1.4.1-dev/ubuntu)
Created package {:path=>"lxc-docker-1.4.1-dev_1.4.1~dev~git20150128.182847.0.17e840a_amd64.deb"}
Created package {:path=>"lxc-docker_1.4.1~dev~git20150128.182847.0.17e840a_amd64.deb"}

```

As noted in a comment in the code here, this sums up the reasoning for this change: (which is how APT and reprepro compare versions)
```console
$ dpkg --compare-versions 1.5.0 gt 1.5.0~rc1 && echo true || echo false
true
$ dpkg --compare-versions 1.5.0~rc1 gt 1.5.0~git20150128.112847.17e840a && echo true || echo false
true
$ dpkg --compare-versions 1.5.0~git20150128.112847.17e840a gt 1.5.0~dev~git20150128.112847.17e840a && echo true || echo false
true
```

ie, `1.5.0` > `1.5.0~rc1` > `1.5.0~git20150128.112847.17e840a` > `1.5.0~dev~git20150128.112847.17e840a`

Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
2015-01-28 13:51:12 -08:00
Arnaud Porterie
b7c3fdfd0d Update fish completion for 1.5.0
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
2015-01-28 10:29:29 -08:00
Phil Estes
aa682a845b Fix bridge initialization for IPv6 if IPv4-only docker0 exists
This fixes the daemon's failure to start when setting --ipv6=true for
the first time without deleting `docker0` bridge from a prior use with
only IPv4 addressing.

The addition of the IPv6 bridge address is factored out into a separate
initialization routine which is called even if the bridge exists but no
IPv6 addresses are found.

Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
2015-01-28 10:29:29 -08:00
Jonathan Rudenberg
218d0dcc9d Fix missing err assignment in bridge creation
Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>
2015-01-28 10:29:29 -08:00
Stephen J Day
510d8f8634 Open up v2 http status code checks for put and head checks
In certain cases, such as when putting a manifest or checking for the existence
of a layer, the status code checks in session_v2.go were too narrow for their
purpose. In the case of putting a manifest, the handler only cares that an
error is not returned. Whether it is a 304 or 202 does not matter, as long as
the server reports success. Having the client only accept specific http codes
inhibits future protocol evolution.

Signed-off-by: Stephen J Day <stephen.day@docker.com>
2015-01-28 08:48:03 -08:00
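The looser check amounts to something like the sketch below (a simplified illustration, not the code in session_v2.go): accept any success or not-modified response instead of pinning exact codes.

```go
package main

import (
	"fmt"
	"net/http"
)

// statusOK treats any 2xx or 3xx response (e.g. 202 Accepted, 304 Not
// Modified) as success instead of requiring one specific status code.
func statusOK(code int) bool {
	return code >= http.StatusOK && code < http.StatusBadRequest
}

func main() {
	for _, code := range []int{http.StatusAccepted, http.StatusNotModified, http.StatusNotFound} {
		fmt.Println(code, statusOK(code)) // 202 true, 304 true, 404 false
	}
}
```
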
Derek McGowan
b65600f6b6 Buffer tar file on v2 push
fixes #10312
fixes #10306

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-28 08:48:03 -08:00
Jessica Frazelle
79dcea718c Add completion for stats.
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-01-28 08:45:57 -08:00
Sven Dowideit
072b09c45d Add the registry mirror document to the menu
Signed-off-by: Sven Dowideit <SvenDowideit@docker.com>

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@docker.com> (github: SvenDowideit)
2015-01-27 19:35:26 -08:00
Derek McGowan
c2d9837745 Use layer checksum if calculated during manifest creation
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:26 -08:00
Josh Hawn
fa5dfbb18b Fix premature close of build output on pull
The build job will sometimes trigger a pull job when the base image
does not exist. Now that engine jobs properly close their output by default
the pull job would also close the build job's stdout in a cascading close
upon completion of the pull.

This patch corrects this by wrapping the `pull` job's stdout with a
nopCloseWriter which will not close the stdout of the `build` job.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-01-27 19:35:26 -08:00
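The nopCloseWriter pattern mentioned above looks roughly like this sketch (the type name comes from the commit message; the body is illustrative): Close becomes a no-op, so closing the pull job's output does not close the build job's stdout.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// nopCloseWriter wraps a writer and makes Close a no-op, so a job can
// "close" its output without closing the shared underlying stream.
type nopCloseWriter struct {
	io.Writer
}

func (nopCloseWriter) Close() error { return nil }

func main() {
	var out io.WriteCloser = nopCloseWriter{os.Stdout}
	fmt.Fprintln(out, "pull progress...")
	out.Close() // os.Stdout is still open
	fmt.Println("stdout still usable")
}
```
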
Sven Dowideit
6532a075f3 tell users what IP range Hub webhooks can come from so they can filter
Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@docker.com> (github: SvenDowideit)

Signed-off-by: Sven Dowideit <SvenDowideit@docker.com>
2015-01-27 19:35:26 -08:00
Chen Hanxiao
3b4a4bf809 docs: fix a typo in docker-build man page
s/Dockefile/Dockerfile

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
2015-01-27 19:35:26 -08:00
Tony Miller
4602909566 fix /etc/host typo in remote API docs
Signed-off-by: Tony Miller <mcfiredrill@gmail.com>
2015-01-27 19:35:25 -08:00
Sven Dowideit
588f350b61 as we're not using the search suggestion feature, only load the search_content when we have a search ?q= param
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au> (github: SvenDowideit)
2015-01-27 19:35:25 -08:00
Sven Dowideit
6e5ff509b2 set the content-type for the search_content.json.gz
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au> (github: SvenDowideit)
2015-01-27 19:35:25 -08:00
Sven Dowideit
61d341c2ca Change to load the json.gz file
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au> (github: SvenDowideit)
2015-01-27 19:35:25 -08:00
unclejack
b996d379a1 docs: compress search_content.json for release
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>

Docker-DCO-1.1-Signed-off-by: unclejack <unclejacksons@gmail.com> (github: SvenDowideit)
2015-01-27 19:35:25 -08:00
Derek McGowan
b0935ea730 Better error messaging and logging for v2 registry requests
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:25 -08:00
Derek McGowan
96fe13b49b Add file path to errors loading the key file
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:25 -08:00
Brian Goff
12ccde442a Do not return err on symlink eval
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2015-01-27 19:35:25 -08:00
Michael Crosby
4262cfe41f Remove omitempty json tags from structs
When unmarshaling the json response from the API into a dynamic object in
other languages, having the omitempty field tag on types such as float64
causes the key to be omitted for 0.0 values. Various languages will
interpret this as a null when 0.0 is the actual value.

This patch removes the omitempty tags on fields that are not structs
where they can be safely omitted.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-01-27 19:35:25 -08:00
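The effect being removed can be seen in a toy example like the one below (field names are made up): with `omitempty`, a legitimate 0.0 disappears from the marshaled JSON.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Stats shows how omitempty drops a zero float64 from the output,
// which dynamic-language clients then see as null/undefined.
type Stats struct {
	WithOmit    float64 `json:"with_omit,omitempty"`
	WithoutOmit float64 `json:"without_omit"`
}

func main() {
	b, _ := json.Marshal(Stats{WithOmit: 0.0, WithoutOmit: 0.0})
	fmt.Println(string(b)) // {"without_omit":0} -- the omitempty key is gone
}
```
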
Brian Goff
ddc2e25546 Fix bind-mounts only partially removed
When calling delete on a bind-mount volume, the config file was being
removed, but it was not actually being removed from the volume index.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2015-01-27 19:35:25 -08:00
unclejack
6646cff646 docs: shrink sprites-small_360.png
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
2015-01-27 19:35:25 -08:00
Euan
ac8fd856c0 Allow empty layer configs in manifests
Before the V2 registry changes, images with no config could be pushed.
This change fixes a regression that made those images not able to be
pushed to a registry.

Signed-off-by: Euan Kemp <euank@euank.com>
2015-01-27 19:35:25 -08:00
DiuDiugirl
48754d673c Fix a minor typo
Docker inspect can also be used on images; this patch fixes the
minor typo in docker/flags.go and docs/man/docker.1.md

Signed-off-by:  DiuDiugirl <sophia.wang@pku.edu.cn>
2015-01-27 19:35:25 -08:00
unclejack
723684525a pkg/archive: remove tar autodetection log line
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
2015-01-27 19:35:24 -08:00
Derek McGowan
32aceadbe6 Revert progressreader to not defer close
When the progress reader closes, it overwrites the progress line with the full progress bar, replacing the completed message.

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:24 -08:00
Derek McGowan
c67d3e159c Use filepath instead of path
Currently loading the trust key uses path instead of filepath. This creates problems on some operating systems such as Windows.

Fixes #10319

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:24 -08:00
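As a quick illustration of why this matters (not the trust-key code itself): the path package always uses forward slashes, while path/filepath uses the host OS separator.

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// path.Join always produces slash-separated paths.
	fmt.Println(path.Join("docker", "key.json")) // docker/key.json everywhere
	// filepath.Join uses the OS separator: docker\key.json on Windows,
	// docker/key.json on Linux and macOS.
	fmt.Println(filepath.Join("docker", "key.json"))
}
```
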
Jessica Frazelle
a080e2add7 Make debug logs suck less.
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-01-27 19:35:24 -08:00
Josh Hawn
24d81b0ddb Always store images with tarsum.v1 checksum added
Updates `image.StoreImage()` to always ensure that images
that are installed in Docker have a tarsum.v1 checksum.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-01-27 19:35:24 -08:00
Tony Miller
08f2fad40b document the ExtraHosts parameter for /containers/create for the remote API
I think this was added in version 1.15.

Signed-off-by: Tony Miller <mcfiredrill@gmail.com>
2015-01-27 19:35:24 -08:00
GennadySpb
f91fbe39ce Update using_supervisord.md
Fix factual error

change made by: GennadySpb <lipenkov@gmail.com>

Signed-off-by: Sven Dowideit <SvenDowideit@docker.com>
2015-01-27 19:35:24 -08:00
Tianon Gravi
018ab080bb Remove windows from the list of supported platforms
Since it can still be tested natively without this, this won't cause any harm while we fix the tests to actually work on Windows.

Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
2015-01-27 19:35:24 -08:00
Tibor Vass
fe94ecb2c1 integration-cli: wait for container before sending ^D
Signed-off-by: Tibor Vass <teabee89@gmail.com>
2015-01-27 19:35:24 -08:00
Lorenz Leutgeb
7b2e67036f Fix inconsistent formatting
Colon was bold, but regular at other occurrences.

Blame cf27b310c4

Signed-off-by: Lorenz Leutgeb <lorenz.leutgeb@gmail.com>
2015-01-27 19:35:24 -08:00
Lorenz Leutgeb
e130faea1b doc: Minor semantical/editorial fixes in HTTPS article
"read-only" vs. "only readable by you"

Refer to:
https://github.com/docker/docker/pull/9952#discussion_r22690266

Signed-off-by: Lorenz Leutgeb <lorenz.leutgeb@gmail.com>
2015-01-27 19:35:24 -08:00
Lorenz Leutgeb
38f09de334 doc: Editorial changes as suggested by @fredlf
Refer to:
 * https://github.com/docker/docker/pull/9952#discussion_r22686652
 * https://github.com/docker/docker/pull/9952#discussion_r22686804

Signed-off-by: Lorenz Leutgeb <lorenz.leutgeb@gmail.com>
2015-01-27 19:35:24 -08:00
Lorenz Leutgeb
f9ba68ddfb doc: Improve article on HTTPS
* Adjust header to match _page_title
 * Add instructions on deletion of CSRs and setting permissions
 * Simplify some path expressions and commands
 * Consequently use ~ instead of ${HOME}
 * Precise formulation ('key' vs. 'public key')
 * Fix wrong indentation of output of `openssl req`
 * Use dash ('--') instead of minus ('-')

Remark on permissions:

It's not a problem to `chmod 0400` the private keys, because the
Docker daemon runs as root (can read the file anyway) and the Docker
client runs as user.

Signed-off-by: Lorenz Leutgeb <lorenz.leutgeb@gmail.com>
2015-01-27 19:35:23 -08:00
Abin Shahab
16913455bd Fixes apparmor regression
Signed-off-by: Abin Shahab <ashahab@altiscale.com> (github: ashahab-altiscale)
Docker-DCO-1.1-Signed-off-by: Abin Shahab <ashahab@altiscale.com> (github: ashahab-altiscale)
2015-01-27 19:35:23 -08:00
Andrew C. Bodine
32f189cd08 Adds docs for /containers/(id)/attach/ws api endpoint
Signed-off-by: Andrew C. Bodine <acbodine@us.ibm.com>
2015-01-27 19:35:23 -08:00
Josh Hawn
526ca42282 Split API Version header when checking for v2
Since the Docker-Distribution-API-Version header value may contain multiple
space delimited versions as well as many instances of the header key, the
header value is now split on whitespace characters to iterate over all versions
that may be listed in one instance of the header.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-01-27 19:35:23 -08:00
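A sketch of that header handling, assuming the standard net/http header type and the `registry/2.0` version string (illustrative, not the exact registry client code):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// supportsV2 walks every instance of the version header and splits each
// value on whitespace, so all listed versions are seen.
func supportsV2(h http.Header) bool {
	for _, value := range h[http.CanonicalHeaderKey("Docker-Distribution-API-Version")] {
		for _, version := range strings.Fields(value) {
			if version == "registry/2.0" {
				return true
			}
		}
	}
	return false
}

func main() {
	h := http.Header{}
	h.Add("Docker-Distribution-API-Version", "registry/2.0 registry/2.1")
	fmt.Println(supportsV2(h)) // true
}
```
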
Harald Albers
b98b42d843 Add bash completions for daemon flags, simplify with extglob
Implementing the daemon flags the traditional way introduced even more
redundancy than usual because the same list of options with flags
had to be added twice.

This can be avoided by using variables in the case statements when
using the extglob shell option.

Signed-off-by: Harald Albers <github@albersweb.de>
2015-01-27 19:35:23 -08:00
imre Fitos
7bf03dd132 fix typo 'setup/set up'
Signed-off-by: imre Fitos <imre.fitos+github@gmail.com>
2015-01-27 19:35:23 -08:00
imre Fitos
034aa3b2c4 start docker before checking for updated NAT rule
Signed-off-by: imre Fitos <imre.fitos+github@gmail.com>
2015-01-27 19:35:23 -08:00
imre Fitos
6da1e01e6c docs: remove NAT rule when removing bridge
Signed-off-by: imre Fitos <imre.fitos+github@gmail.com>
2015-01-27 19:35:23 -08:00
1788 changed files with 117722 additions and 277392 deletions

.gitignore

@@ -2,7 +2,6 @@
# if you want to ignore files created by your editor/tools,
# please consider a global .gitignore https://help.github.com/articles/ignoring-files
*.exe
*.exe~
*.orig
*.rej
*.test
@@ -14,6 +13,7 @@
.git/
.gopath/
.hg/
.idea
.vagrant*
Vagrantfile
a.out

.mailmap

@@ -142,30 +142,3 @@ Jessica Frazelle <jess@docker.com> Jessie Frazelle <jfrazelle@users.noreply.gith
Thomas LEVEIL <thomasleveil@gmail.com> Thomas LÉVEIL <thomasleveil@users.noreply.github.com>
<oi@truffles.me.uk> <timruffles@googlemail.com>
<Vincent.Bernat@exoscale.ch> <bernat@luffy.cx>
Antonio Murdaca <antonio.murdaca@gmail.com> <me@runcom.ninja>
Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@linux.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@users.noreply.github.com>
Darren Shepherd <darren.s.shepherd@gmail.com> <darren@rancher.com>
Deshi Xiao <dxiao@redhat.com> <dsxiao@dataman-inc.com>
Deshi Xiao <dxiao@redhat.com> <xiaods@gmail.com>
Doug Davis <dug@us.ibm.com> <duglin@users.noreply.github.com>
Jacob Atzen <jacob@jacobatzen.dk> <jatzen@gmail.com>
Jeff Nickoloff <jeff.nickoloff@gmail.com> <jeff@allingeek.com>
<jess@docker.com> <princess@docker.com>
John Howard (VM) <John.Howard@microsoft.com> John Howard <jhoward@microsoft.com>
Madhu Venugopal <madhu@socketplane.io> <madhu@docker.com>
Mary Anthony <mary.anthony@docker.com> <mary@docker.com>
Mary Anthony <mary.anthony@docker.com> moxiegirl <mary@docker.com>
Mary Anthony <mary.anthony@docker.com> <moxieandmore@gmail.com>
mattyw <mattyw@me.com> <gh@mattyw.net>
resouer <resouer@163.com> <resouer@gmail.com>
AJ Bowen <aj@gandi.net> soulshake <amy@gandi.net>
AJ Bowen <aj@gandi.net> soulshake <aj@gandi.net>
Tibor Vass <teabee89@gmail.com> <tibor@docker.com>
Tibor Vass <teabee89@gmail.com> <tiborvass@users.noreply.github.com>
Vincent Bernat <bernat@luffy.cx> <Vincent.Bernat@exoscale.ch>
Yestin Sun <sunyi0804@gmail.com> <yestin.sun@polyera.com>
bin liu <liubin0329@users.noreply.github.com> <liubin0329@gmail.com>
John Howard (VM) <John.Howard@microsoft.com> jhowardmsft <jhoward@microsoft.com>
Ankush Agarwal <ankushagarwal11@gmail.com> <ankushagarwal@users.noreply.github.com>
Tangi COLIN <tangicolin@gmail.com> tangicolin <tangicolin@gmail.com>

AUTHORS

@@ -2,18 +2,14 @@
# For how it is generated, see `hack/generate-authors.sh`.
Aanand Prasad <aanand.prasad@gmail.com>
Aaron Davidson <aaron@databricks.com>
Aaron Feng <aaron.feng@gmail.com>
Aaron Huslage <huslage@gmail.com>
Aaron Welch <welch@packet.net>
Abel Muiño <amuino@gmail.com>
Abhinav Ajgaonkar <abhinav316@gmail.com>
Abhishek Chanda <abhishek.becs@gmail.com>
Abin Shahab <ashahab@altiscale.com>
Adam Miller <admiller@redhat.com>
Adam Singer <financeCoding@gmail.com>
Aditya <aditya@netroy.in>
Adria Casas <adriacasas88@gmail.com>
Adrian Mouat <adrian.mouat@gmail.com>
Adrien Folie <folie.adrien@gmail.com>
Ahmed Kamal <email.ahmedkamal@googlemail.com>
@@ -27,9 +23,6 @@ Albert Callarisa <shark234@gmail.com>
Albert Zhang <zhgwenming@gmail.com>
Aleksa Sarai <cyphar@cyphar.com>
Aleksandrs Fadins <aleks@s-ko.net>
Alena Prokharchyk <alena@rancher.com>
Alessandro Boch <aboch@docker.com>
Alessio Biancalana <dottorblaster@gmail.com>
Alex Gaynor <alex.gaynor@gmail.com>
Alex Warhawk <ax.warhawk@gmail.com>
Alexander Boyd <alex@opengroove.org>
@@ -37,20 +30,14 @@ Alexander Larsson <alexl@redhat.com>
Alexander Morozov <lk4d4@docker.com>
Alexander Shopov <ash@kambanaria.org>
Alexandr Morozov <lk4d4@docker.com>
Alexey Guskov <lexag@mail.ru>
Alexey Kotlyarov <alexey@infoxchange.net.au>
Alexey Shamrin <shamrin@gmail.com>
Alexis THOMAS <fr.alexisthomas@gmail.com>
Allen Madsen <blatyo@gmail.com>
almoehi <almoehi@users.noreply.github.com>
Alvin Richards <alvin.richards@docker.com>
amangoel <amangoel@gmail.com>
Amit Bakshi <ambakshi@gmail.com>
Amy Lindburg <amy.lindburg@docker.com>
Anand Patil <anand.prabhakar.patil@gmail.com>
AnandkumarPatel <anandkumarpatel@gmail.com>
Anchal Agrawal <aagrawa4@illinois.edu>
Anders Janmyr <anders@janmyr.com>
Andre Dublin <81dublin@gmail.com>
Andrea Luzzardi <aluzzardi@gmail.com>
Andrea Turli <andrea.turli@gmail.com>
@@ -58,19 +45,15 @@ Andreas Köhler <andi5.py@gmx.net>
Andreas Savvides <andreas@editd.com>
Andreas Tiefenthaler <at@an-ti.eu>
Andrew C. Bodine <acbodine@us.ibm.com>
Andrew Clay Shafer <andrewcshafer@gmail.com>
Andrew Duckworth <grillopress@gmail.com>
Andrew France <andrew@avito.co.uk>
Andrew Kuklewicz <kookster@gmail.com>
Andrew Macgregor <andrew.macgregor@agworld.com.au>
Andrew Martin <sublimino@gmail.com>
Andrew Munsell <andrew@wizardapps.net>
Andrew Weiss <andrew.weiss@outlook.com>
Andrew Williams <williams.andrew@gmail.com>
Andrews Medina <andrewsmedina@gmail.com>
Andrey Petrov <andrey.petrov@shazow.net>
Andrey Stolbovsky <andrey.stolbovsky@gmail.com>
André Martins <martins@noironetworks.com>
Andy Chambers <anchambers@paypal.com>
andy diller <dillera@gmail.com>
Andy Goldstein <agoldste@redhat.com>
@@ -78,23 +61,18 @@ Andy Kipp <andy@rstudio.com>
Andy Rothfusz <github@developersupport.net>
Andy Smith <github@anarkystic.com>
Andy Wilson <wilson.andrew.j+github@gmail.com>
Anes Hasicic <anes.hasicic@gmail.com>
Ankush Agarwal <ankushagarwal11@gmail.com>
Anthony Baire <Anthony.Baire@irisa.fr>
Anthony Bishopric <git@anthonybishopric.com>
Anton Löfgren <anton.lofgren@gmail.com>
Anton Nikitin <anton.k.nikitin@gmail.com>
Anton Tiurin <noxiouz@yandex.ru>
Antonio Murdaca <antonio.murdaca@gmail.com>
Antony Messerli <amesserl@rackspace.com>
apocas <petermdias@gmail.com>
ArikaChen <eaglesora@gmail.com>
Arnaud Porterie <arnaud.porterie@docker.com>
Arthur Barr <arthur.barr@uk.ibm.com>
Arthur Gautier <baloo@gandi.net>
Asbjørn Enge <asbjorn@hanafjedle.net>
averagehuman <averagehuman@users.noreply.github.com>
Avi Das <andas222@gmail.com>
Avi Miller <avi.miller@oracle.com>
Barnaby Gray <barnaby@pickle.me.uk>
Barry Allard <barry.allard@gmail.com>
@@ -102,31 +80,22 @@ Bartłomiej Piotrowski <b@bpiotrowski.pl>
bdevloed <boris.de.vloed@gmail.com>
Ben Firshman <ben@firshman.co.uk>
Ben Sargent <ben@brokendigits.com>
Ben Severson <BenSeverson@users.noreply.github.com>
Ben Toews <mastahyeti@gmail.com>
Ben Wiklund <ben@daisyowl.com>
Benjamin Atkin <ben@benatkin.com>
Benoit Chesneau <bchesneau@gmail.com>
Bernerd Schaefer <bj.schaefer@gmail.com>
Bert Goethals <bert@bertg.be>
Bharath Thiruveedula <bharath_ves@hotmail.com>
Bhiraj Butala <abhiraj.butala@gmail.com>
bin liu <liubin0329@users.noreply.github.com>
Blake Geno <blakegeno@gmail.com>
bobby abbott <ttobbaybbob@gmail.com>
boucher <rboucher@gmail.com>
Bouke Haarsma <bouke@webatoom.nl>
Boyd Hemphill <boyd@feedmagnet.com>
Bradley Cicenas <bradley.cicenas@gmail.com>
Bradley Wright <brad@intranation.com>
Brandon Liu <bdon@bdon.org>
Brandon Philips <brandon@ifup.org>
Brandon Rhodes <brandon@rhodesmill.org>
Brendan Dixon <brendand@microsoft.com>
Brent Salisbury <brent.salisbury@docker.com>
Brett Kochendorfer <brett.kochendorfer@gmail.com>
Brian (bex) Exelbierd <bexelbie@redhat.com>
Brian DeHamer <brian@dehamer.com>
Brian Dorsey <brian@dorseys.org>
Brian Flad <bflad417@gmail.com>
Brian Goff <cpuguy83@gmail.com>
@@ -137,67 +106,48 @@ Brice Jaglin <bjaglin@teads.tv>
Briehan Lombaard <briehan.lombaard@gmail.com>
Bruno Bigras <bigras.bruno@gmail.com>
Bruno Binet <bruno.binet@gmail.com>
Bruno Gazzera <bgazzera@paginar.com>
Bruno Renié <brutasse@gmail.com>
Bryan Bess <squarejaw@bsbess.com>
Bryan Boreham <bjboreham@gmail.com>
Bryan Matsuo <bryan.matsuo@gmail.com>
Bryan Murphy <bmurphy1976@gmail.com>
buddhamagnet <buddhamagnet@gmail.com>
Burke Libbey <burke@libbey.me>
Byung Kang <byung.kang.ctr@amrdec.army.mil>
Caleb Spare <cespare@gmail.com>
Calen Pennington <cale@edx.org>
Cameron Boehmer <cameron.boehmer@gmail.com>
Carl X. Su <bcbcarl@gmail.com>
Cary <caryhartline@users.noreply.github.com>
Casey Bisson <casey.bisson@joyent.com>
Charles Hooper <charles.hooper@dotcloud.com>
Charles Lindsay <chaz@chazomatic.us>
Charles Merriam <charles.merriam@gmail.com>
Charlie Lewis <charliel@lab41.org>
Chen Chao <cc272309126@gmail.com>
Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
cheney90 <cheney-90@hotmail.com>
Chewey <prosto-chewey@users.noreply.github.com>
Chia-liang Kao <clkao@clkao.org>
chli <chli@freewheel.tv>
Chris Alfonso <calfonso@redhat.com>
Chris Armstrong <chris@opdemand.com>
Chris Khoo <chris.khoo@gmail.com>
Chris Snow <chsnow123@gmail.com>
Chris St. Pierre <chris.a.st.pierre@gmail.com>
Chris Stivers <chris@stivers.us>
Chris Wahl <github@wahlnetwork.com>
chrismckinnel <chris.mckinnel@tangentlabs.co.uk>
Christian Berendt <berendt@b1-systems.de>
Christian Simon <simon@swine.de>
Christian Stefanescu <st.chris@gmail.com>
ChristoperBiscardi <biscarch@sketcht.com>
Christophe Troestler <christophe.Troestler@umons.ac.be>
Christopher Currie <codemonkey+github@gmail.com>
Christopher Latham <sudosurootdev@gmail.com>
Christopher Rigor <crigor@gmail.com>
Christy Perez <christy@linux.vnet.ibm.com>
Chun Chen <chenchun.feed@gmail.com>
Ciro S. Costa <ciro.costa@usp.br>
Clayton Coleman <ccoleman@redhat.com>
Coenraad Loubser <coenraad@wish.org.za>
Colin Dunklau <colin.dunklau@gmail.com>
Colin Rice <colin@daedrum.net>
Colin Walters <walters@verbum.org>
Colm Hally <colmhally@gmail.com>
Cory Forsyth <cory.forsyth@gmail.com>
cressie176 <github@stephen-cresswell.net>
Cristian Staretu <cristian.staretu@gmail.com>
Cruceru Calin-Cristian <crucerucalincristian@gmail.com>
Cyril F <cyrilf7x@gmail.com>
Daan van Berkel <daan.v.berkel.1980@gmail.com>
Daehyeok Mun <daehyeok@gmail.com>
Dafydd Crosby <dtcrsby@gmail.com>
dalanlan <dalanlan925@gmail.com>
Damjan Georgievski <gdamjan@gmail.com>
Dan Anolik <dan@anolik.net>
Dan Buch <d.buch@modcloth.com>
Dan Cotora <dan@bluevision.ro>
Dan Griffin <dgriffin@peer1.com>
@@ -207,46 +157,35 @@ Dan McPherson <dmcphers@redhat.com>
Dan Stine <sw@stinemail.com>
Dan Walsh <dwalsh@redhat.com>
Dan Williams <me@deedubs.com>
Daniel Antlinger <d.antlinger@gmx.at>
Daniel Exner <dex@dragonslave.de>
Daniel Farrell <dfarrell@redhat.com>
Daniel Garcia <daniel@danielgarcia.info>
Daniel Gasienica <daniel@gasienica.ch>
Daniel Menet <membership@sontags.ch>
Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com>
Daniel Nephin <dnephin@gmail.com>
Daniel Norberg <dano@spotify.com>
Daniel Nordberg <dnordberg@gmail.com>
Daniel Robinson <gottagetmac@gmail.com>
Daniel S <dan.streby@gmail.com>
Daniel Von Fange <daniel@leancoder.com>
Daniel YC Lin <dlin.tw@gmail.com>
Daniel Zhang <jmzwcn@gmail.com>
Daniel, Dao Quang Minh <dqminh89@gmail.com>
Danny Berger <dpb587@gmail.com>
Danny Yates <danny@codeaholics.org>
Darren Coxall <darren@darrencoxall.com>
Darren Shepherd <darren.s.shepherd@gmail.com>
Dave Henderson <Dave.Henderson@ca.ibm.com>
David Anderson <dave@natulte.net>
David Calavera <david.calavera@gmail.com>
David Corking <dmc-source@dcorking.com>
David Davis <daviddavis@redhat.com>
David Gageot <david@gageot.net>
David Gebler <davidgebler@gmail.com>
David Mackey <tdmackey@booleanhaiku.com>
David Mat <david@davidmat.com>
David Mcanulty <github@hellspark.com>
David Pelaez <pelaez89@gmail.com>
David R. Jenni <david.r.jenni@gmail.com>
David Röthlisberger <david@rothlis.net>
David Sissitka <me@dsissitka.com>
David Xia <dxia@spotify.com>
David Young <yangboh@cn.ibm.com>
Davide Ceretti <davide.ceretti@hogarthww.com>
Dawn Chen <dawnchen@google.com>
decadent <decadent@users.noreply.github.com>
Deng Guangxing <dengguangxing@huawei.com>
Deni Bertovic <deni@kset.org>
Derek <crq@kernel.org>
Derek <crquan@gmail.com>
@@ -254,53 +193,39 @@ Derek McGowan <derek@mcgstyle.net>
Deric Crago <deric.crago@gmail.com>
Deshi Xiao <dxiao@redhat.com>
Dinesh Subhraveti <dineshs@altiscale.com>
DiuDiugirl <sophia.wang@pku.edu.cn>
Djibril Koné <kone.djibril@gmail.com>
dkumor <daniel@dkumor.com>
Dmitry Demeshchuk <demeshchuk@gmail.com>
Dmitry Gusev <dmitry.gusev@gmail.com>
Dmitry V. Krivenok <krivenok.dmitry@gmail.com>
Dolph Mathews <dolph.mathews@gmail.com>
Dominik Finkbeiner <finkes93@gmail.com>
Dominik Honnef <dominik@honnef.co>
Don Kirkby <donkirkby@users.noreply.github.com>
Don Kjer <don.kjer@gmail.com>
Don Spaulding <donspauldingii@gmail.com>
Doug Davis <dug@us.ibm.com>
Doug MacEachern <dougm@vmware.com>
doug tangren <d.tangren@gmail.com>
Dr Nic Williams <drnicwilliams@gmail.com>
dragon788 <dragon788@users.noreply.github.com>
Dražen Lučanin <kermit666@gmail.com>
Dustin Sallings <dustin@spy.net>
Ed Costello <epc@epcostello.com>
Edmund Wagner <edmund-wagner@web.de>
Eiichi Tsukata <devel@etsukata.com>
Eike Herzbach <eike@herzbach.net>
Eivind Uggedal <eivind@uggedal.com>
Elias Probst <mail@eliasprobst.eu>
Elijah Zupancic <elijah@zupancic.name>
eluck <mail@eluck.me>
Emil Hernvall <emil@quench.at>
Emily Maier <emily@emilymaier.net>
Emily Rose <emily@contactvibe.com>
Emir Ozer <emirozer@yandex.com>
Enguerran <engcolson@gmail.com>
Eohyung Lee <liquidnuker@gmail.com>
Eric Hanchrow <ehanchrow@ine.com>
Eric Lee <thenorthsecedes@gmail.com>
Eric Myhre <hash@exultant.us>
Eric Paris <eparis@redhat.com>
Eric Rafaloff <erafaloff@gmail.com>
Eric Windisch <ewindisch@docker.com>
Eric-Olivier Lamey <eo@lamey.me>
Erik Dubbelboer <erik@dubbelboer.com>
Erik Hollensbe <github@hollensbe.org>
Erik Inge Bolsø <knan@redpill-linpro.com>
Erik Kristensen <erik@erikkristensen.com>
Erno Hopearuoho <erno.hopearuoho@gmail.com>
Erwin van der Koogh <info@erronis.nl>
Euan <euank@amazon.com>
Eugene Yakubovich <eugene.yakubovich@coreos.com>
eugenkrizo <eugen.krizo@gmail.com>
Evan Carmi <carmi@users.noreply.github.com>
@@ -308,78 +233,58 @@ Evan Hazlett <ejhazlett@gmail.com>
Evan Krall <krall@yelp.com>
Evan Phoenix <evan@fallingsnow.net>
Evan Wies <evan@neomantra.net>
Evgeny Vereshchagin <evvers@ya.ru>
Eystein Måløy Stenberg <eystein.maloy.stenberg@cfengine.com>
ezbercih <cem.ezberci@gmail.com>
Fabiano Rosas <farosas@br.ibm.com>
Fabio Falci <fabiofalci@gmail.com>
Fabio Rehm <fgrehm@gmail.com>
Fabrizio Regini <freegenie@gmail.com>
Faiz Khan <faizkhan00@gmail.com>
falmp <chico.lopes@gmail.com>
Fareed Dudhia <fareeddudhia@googlemail.com>
Felix Rabe <felix@rabe.io>
Felix Schindler <fschindler@weluse.de>
Ferenc Szabo <pragmaticfrank@gmail.com>
Fernando <fermayo@gmail.com>
Filipe Brandenburger <filbranden@google.com>
Flavio Castelli <fcastelli@suse.com>
FLGMwt <ryan.stelly@live.com>
Florian Weingarten <flo@hackvalue.de>
Francisco Carriedo <fcarriedo@gmail.com>
Francisco Souza <f@souza.cc>
Frank Herrmann <fgh@4gh.tv>
Frank Macreery <frank@macreery.com>
Frank Rosquin <frank.rosquin+github@gmail.com>
Fred Lifton <fred.lifton@docker.com>
Frederick F. Kautz IV <fkautz@alumni.cmu.edu>
Frederik Loeffert <frederik@zitrusmedia.de>
Freek Kalter <freek@kalteronline.org>
Félix Baylac-Jacqué <baylac.felix@gmail.com>
Gabe Rosenhouse <gabe@missionst.com>
Gabor Nagy <mail@aigeruth.hu>
Gabriel Monroy <gabriel@opdemand.com>
Galen Sampson <galen.sampson@gmail.com>
Gareth Rushgrove <gareth@morethanseven.net>
Gaurav <gaurav.gosec@gmail.com>
gautam, prasanna <prasannagautam@gmail.com>
GennadySpb <lipenkov@gmail.com>
Geoffrey Bachelet <grosfrais@gmail.com>
George MacRorie <gmacr31@gmail.com>
George Xie <georgexsh@gmail.com>
Gereon Frey <gereon.frey@dynport.de>
German DZ <germ@ndz.com.ar>
Gert van Valkenhoef <g.h.m.van.valkenhoef@rug.nl>
Gianluca Borello <g.borello@gmail.com>
Giuseppe Mazzotta <gdm85@users.noreply.github.com>
Gleb Fotengauer-Malinovskiy <glebfm@altlinux.org>
Gleb M Borisov <borisov.gleb@gmail.com>
Glyn Normington <gnormington@gopivotal.com>
Goffert van Gool <goffert@phusion.nl>
golubbe <ben.golub@dotcloud.com>
Gosuke Miyashita <gosukenator@gmail.com>
Graydon Hoare <graydon@pobox.com>
Greg Fausak <greg@tacodata.com>
Greg Thornton <xdissent@me.com>
grossws <grossws@gmail.com>
grunny <mwgrunny@gmail.com>
Guilherme Salgado <gsalgado@gmail.com>
Guillaume Dufour <gdufour.prestataire@voyages-sncf.com>
Guillaume J. Charmes <guillaume.charmes@docker.com>
guoxiuyan <guoxiuyan@huawei.com>
Gurjeet Singh <gurjeet@singh.im>
Guruprasad <lgp171188@gmail.com>
Günter Zöchbauer <guenter@gzoechbauer.com>
Hans Rødtang <hansrodtang@gmail.com>
Harald Albers <github@albersweb.de>
Harley Laue <losinggeneration@gmail.com>
Harry Zhang <harryzhang@zju.edu.cn>
He Simei <hesimei@zju.edu.cn>
Hector Castro <hectcastro@gmail.com>
Henning Sprang <henning.sprang@gmail.com>
Hobofan <goisser94@gmail.com>
Hollie Teal <hollie@docker.com>
Hong Xu <hong@topbug.net>
Hu Keping <hukeping@huawei.com>
Hu Tao <hutao@cn.fujitsu.com>
Huayi Zhang <irachex@gmail.com>
@@ -387,28 +292,21 @@ Hugo Duncan <hugo@hugoduncan.org>
Hunter Blanks <hunter@twilio.com>
Huu Nguyen <huu@prismskylabs.com>
hyeongkyu.lee <hyeongkyu.lee@navercorp.com>
hyp3rdino <markus.kortlang@lhsystems.com>
Ian Babrou <ibobrik@gmail.com>
Ian Bishop <ianbishop@pace7.com>
Ian Bull <irbull@gmail.com>
Ian Calvert <ianjcalvert@gmail.com>
Ian Main <imain@redhat.com>
Ian Truslove <ian.truslove@gmail.com>
Iavael <iavaelooeyt@gmail.com>
Igor Dolzhikov <bluesriverz@gmail.com>
ILYA Khlopotov <ilya.khlopotov@gmail.com>
imre Fitos <imre.fitos+github@gmail.com>
inglesp <peter.inglesby@gmail.com>
Isaac Dupree <antispam@idupree.com>
Isabel Jimenez <contact.isabeljimenez@gmail.com>
Isao Jonas <isao.jonas@gmail.com>
Ivan Fraixedes <ifcdev@gmail.com>
J Bruni <joaohbruni@yahoo.com.br>
J. Nunn <jbnunn@gmail.com>
Jack Danger Canty <jackdanger@squareup.com>
Jacob Atzen <jacob@jacobatzen.dk>
Jacob Edelman <edelman.jd@gmail.com>
Jake Champlin <jake.champlin.27@gmail.com>
Jake Moshenko <jake@devtable.com>
jakedt <jake@devtable.com>
James Allen <jamesallen0108@gmail.com>
@@ -416,67 +314,41 @@ James Carr <james.r.carr@gmail.com>
James DeFelice <james.defelice@ishisystems.com>
James Harrison Fisher <jameshfisher@gmail.com>
James Kyle <james@jameskyle.org>
James Lal <james@lightsofapollo.com>
James Mills <prologic@shortcircuit.net.au>
James Turnbull <james@lovedthanlost.net>
Jamie Hannaford <jamie.hannaford@rackspace.com>
Jamshid Afshar <jafshar@yahoo.com>
Jan Keromnes <janx@linux.com>
Jan Koprowski <jan.koprowski@gmail.com>
Jan Pazdziora <jpazdziora@redhat.com>
Jan Toebes <jan@toebes.info>
Jan-Jaap Driessen <janjaapdriessen@gmail.com>
Jana Radhakrishnan <mrjana@docker.com>
Jared Biel <jared.biel@bolderthinking.com>
Jaroslaw Zabiello <hipertracker@gmail.com>
jaseg <jaseg@jaseg.net>
Jason Divock <jdivock@gmail.com>
Jason Giedymin <jasong@apache.org>
Jason Hall <imjasonh@gmail.com>
Jason Livesay <ithkuil@gmail.com>
Jason McVetta <jason.mcvetta@gmail.com>
Jason Plum <jplum@devonit.com>
Jason Shepherd <jason@jasonshepherd.net>
Jason Smith <jasonrichardsmith@gmail.com>
Jason Sommer <jsdirv@gmail.com>
Jason Stangroome <jason@codeassassin.com>
Jay <teguhwpurwanto@gmail.com>
Jean-Baptiste Barth <jeanbaptiste.barth@gmail.com>
Jean-Baptiste Dalido <jeanbaptiste@appgratis.com>
Jean-Paul Calderone <exarkun@twistedmatrix.com>
Jean-Tiare Le Bigot <jt@yadutaf.fr>
Jeff Anderson <jeff@docker.com>
Jeff Lindsay <progrium@gmail.com>
Jeff Nickoloff <jeff.nickoloff@gmail.com>
Jeff Welch <whatthejeff@gmail.com>
Jeffrey Bolle <jeffreybolle@gmail.com>
Jeffrey Morgan <jmorganca@gmail.com>
Jeffrey van Gogh <jvg@google.com>
Jeremy Grosser <jeremy@synack.me>
Jesse Dearing <jesse.dearing@gmail.com>
Jesse Dubay <jesse@thefortytwo.net>
Jessica Frazelle <jess@docker.com>
Jezeniel Zapanta <jpzapanta22@gmail.com>
jianbosun <wonderflow.sun@gmail.com>
Jilles Oldenbeuving <ojilles@gmail.com>
Jim Alateras <jima@comware.com.au>
Jim Perrin <jperrin@centos.org>
Jimmy Cuadra <jimmy@jimmycuadra.com>
Jimmy Puckett <jimmy.puckett@spinen.com>
jimmyxian <jimmyxian2004@yahoo.com.cn>
Jinsoo Park <cellpjs@gmail.com>
Jiri Popelka <jpopelka@redhat.com>
Jiří Župka <jzupka@redhat.com>
jjy <jiangjinyang@outlook.com>
jmzwcn <jmzwcn@gmail.com>
Joe Beda <joe.github@bedafamily.com>
Joe Ferguson <joe@infosiftr.com>
Joe Gordon <joe.gordon0@gmail.com>
Joe Shaw <joe@joeshaw.org>
Joe Van Dyk <joe@tanga.com>
Joel Friedly <joelfriedly@gmail.com>
Joel Handwell <joelhandwell@gmail.com>
Joey Gibson <joey@joeygibson.com>
Joffrey F <joffrey@docker.com>
Johan Euphrosine <proppy@google.com>
Johan Rydberg <johan.rydberg@gmail.com>
@@ -485,17 +357,13 @@ John Costa <john.costa@gmail.com>
John Feminella <jxf@jxf.me>
John Gardiner Myers <jgmyers@proofpoint.com>
John Gossman <johngos@microsoft.com>
John Howard (VM) <John.Howard@microsoft.com>
John OBrien III <jobrieniii@yahoo.com>
John Tims <john.k.tims@gmail.com>
John Warwick <jwarwick@gmail.com>
John Willis <john.willis@docker.com>
Jon Wedaman <jweede@gmail.com>
Jonas Pfenniger <jonas@pfenniger.name>
Jonathan A. Sternberg <jonathansternberg@gmail.com>
Jonathan Boulle <jonathanboulle@gmail.com>
Jonathan Camp <jonathan@irondojo.com>
Jonathan Dowland <jon+github@alcopop.org>
Jonathan McCrohan <jmccrohan@gmail.com>
Jonathan Mueller <j.mueller@apoveda.ch>
Jonathan Pares <jonathanpa@users.noreply.github.com>
@@ -505,18 +373,15 @@ Jordan Arentsen <blissdev@gmail.com>
Jordan Sissel <jls@semicomplete.com>
Joseph Anthony Pasquale Holsten <joseph@josephholsten.com>
Joseph Hager <ajhager@gmail.com>
Joseph Kern <jkern@semafour.net>
Josh <jokajak@gmail.com>
Josh Hawn <josh.hawn@docker.com>
Josh Poimboeuf <jpoimboe@redhat.com>
Josiah Kiehl <jkiehl@riotgames.com>
José Tomás Albornoz <jojo@eljojo.net>
JP <jpellerin@leapfrogonline.com>
Julian Taylor <jtaylor.debian@googlemail.com>
Julien Barbier <write0@gmail.com>
Julien Bordellier <julienbordellier@gmail.com>
Julien Dubois <julien.dubois@gmail.com>
Jun-Ru Chang <jrjang@gmail.com>
Justin Force <justin.force@gmail.com>
Justin Plock <jplock@users.noreply.github.com>
Justin Simonelis <justin.p.simonelis@gmail.com>
@@ -525,28 +390,22 @@ Jérôme Petazzoni <jerome.petazzoni@dotcloud.com>
Jörg Thalheim <joerg@higgsboson.tk>
Kamil Domanski <kamil@domanski.co>
Karan Lyons <karan@karanlyons.com>
kargakis <kargakis@users.noreply.github.com>
Karl Grzeszczak <karlgrz@gmail.com>
Katie McLaughlin <katie@glasnt.com>
Kato Kazuyoshi <kato.kazuyoshi@gmail.com>
Katrina Owen <katrina.owen@gmail.com>
Kawsar Saiyeed <kawsar.saiyeed@projiris.com>
Keli Hu <dev@keli.hu>
Ken Cochrane <kencochrane@gmail.com>
Ken ICHIKAWA <ichikawa.ken@jp.fujitsu.com>
Kent Johnson <kentoj@gmail.com>
Kevin "qwazerty" Houdebert <kevin.houdebert@gmail.com>
Kevin Clark <kevin.clark@gmail.com>
Kevin J. Lynagh <kevin@keminglabs.com>
Kevin Menard <kevin@nirvdrum.com>
Kevin Wallace <kevin@pentabarf.net>
Kevin Yap <me@kevinyap.ca>
Keyvan Fatehi <keyvanfatehi@gmail.com>
kies <lleelm@gmail.com>
Kim BKC Carlbacker <kim.carlbacker@gmail.com>
Kimbro Staken <kstaken@kstaken.com>
Kiran Gangadharan <kiran.daredevil@gmail.com>
Kirill SIbirev <l0kix2@gmail.com>
knappe <tyler.knappe@gmail.com>
Kohei Tsuruta <coheyxyz@gmail.com>
Konrad Kleine <konrad.wilhelm.kleine@gmail.com>
@@ -560,8 +419,6 @@ Lajos Papp <lajos.papp@sequenceiq.com>
Lakshan Perera <lakshan@laktek.com>
lalyos <lalyos@yahoo.com>
Lance Chen <cyen0312@gmail.com>
Lance Kinley <lkinley@loyaltymethods.com>
Lars Kellogg-Stedman <lars@redhat.com>
Lars R. Damerow <lars@pixar.com>
Laurie Voss <github@seldo.com>
leeplay <hyeongkyu.lee@navercorp.com>
@@ -571,26 +428,17 @@ Leszek Kowalski <github@leszekkowalski.pl>
Levi Gross <levi@levigross.com>
Lewis Marshall <lewis@lmars.net>
Lewis Peckover <lew+github@lew.io>
Liana Lo <liana.lixia@gmail.com>
Liang-Chi Hsieh <viirya@gmail.com>
limsy <seongyeol37@gmail.com>
Liu Hua <sdu.liu@huawei.com>
Lloyd Dewolf <foolswisdom@gmail.com>
Lokesh Mandvekar <lsm5@fedoraproject.org>
Lorenz Leutgeb <lorenz.leutgeb@gmail.com>
Lorenzo Fontana <fontanalorenzo@me.com>
Louis Opter <kalessin@kalessin.fr>
Luis Martínez de Bartolomé Izquierdo <lmartinez@biicode.com>
lukaspustina <lukas.pustina@centerdevice.com>
lukemarsden <luke@digital-crocus.com>
Lénaïc Huard <lhuard@amadeus.com>
Ma Shimiao <mashimiao.fnst@cn.fujitsu.com>
Mabin <bin.ma@huawei.com>
Madhu Venugopal <madhu@socketplane.io>
Mahesh Tiyyagura <tmahesh@gmail.com>
malnick <malnick@gmail..com>
Malte Janduda <mail@janduda.net>
Manfred Touron <m@42.am>
Manfred Zabarauskas <manfredas@zabarauskas.com>
Manuel Meurer <manuel@krautcomputing.com>
Manuel Woelker <github@manuel.woelker.org>
@@ -602,52 +450,38 @@ Marcus Farkas <toothlessgear@finitebox.com>
Marcus Linke <marcus.linke@gmx.de>
Marcus Ramberg <marcus@nordaaker.com>
Marek Goldmann <marek.goldmann@gmail.com>
Marian Marinov <mm@yuhu.biz>
Marianna <mtesselh@gmail.com>
Marius Voila <marius.voila@gmail.com>
Mark Allen <mrallen1@yahoo.com>
Mark McGranaghan <mmcgrana@gmail.com>
Mark West <markewest@gmail.com>
Marko Mikulicic <mmikulicic@gmail.com>
Marko Tibold <marko@tibold.nl>
Markus Fix <lispmeister@gmail.com>
Martijn Dwars <ikben@martijndwars.nl>
Martijn van Oosterhout <kleptog@svana.org>
Martin Honermeyer <maze@strahlungsfrei.de>
Martin Redmond <martin@tinychat.com>
Mary Anthony <mary.anthony@docker.com>
Masahito Zembutsu <zembutsu@users.noreply.github.com>
Mary Anthony <moxieandmore@gmail.com>
Mason Malone <mason.malone@gmail.com>
Mateusz Sulima <sulima.mateusz@gmail.com>
Mathias Monnerville <mathias@monnerville.com>
Mathieu Le Marec - Pasquet <kiorky@cryptelium.net>
Matt Apperson <me@mattapperson.com>
Matt Bachmann <bachmann.matt@gmail.com>
Matt Bentley <mbentley@mbentley.net>
Matt Haggard <haggardii@gmail.com>
Matt McCormick <matt.mccormick@kitware.com>
Matthew Heon <mheon@redhat.com>
Matthew Mayer <matthewkmayer@gmail.com>
Matthew Mueller <mattmuelle@gmail.com>
Matthew Riley <mattdr@google.com>
Matthias Klumpp <matthias@tenstral.net>
Matthias Kühnle <git.nivoc@neverbox.com>
mattymo <raytrac3r@gmail.com>
mattyw <mattyw@me.com>
mauriyouth <mauriyouth@gmail.com>
Max Shytikov <mshytikov@gmail.com>
Maxim Kulkin <mkulkin@mirantis.com>
Maxim Treskin <zerthurd@gmail.com>
Maxime Petazzoni <max@signalfuse.com>
Meaglith Ma <genedna@gmail.com>
meejah <meejah@meejah.ca>
Megan Kostick <mkostick@us.ibm.com>
Mehul Kar <mehul.kar@gmail.com>
Mengdi Gao <usrgdd@gmail.com>
Mert Yazıcıoğlu <merty@users.noreply.github.com>
Michael A. Smith <michael@smith-li.com>
Michael Brown <michael@netdirect.ca>
Michael Chiang <mchiang@docker.com>
Michael Crosby <michael@docker.com>
Michael Gorsuch <gorsuch@github.com>
Michael Hudson-Doyle <michael.hudson@linaro.org>
@@ -657,35 +491,26 @@ Michael Scharf <github@scharf.gr>
Michael Stapelberg <michael+gh@stapelberg.de>
Michael Steinert <mike.steinert@gmail.com>
Michael Thies <michaelthies78@gmail.com>
Michael West <mwest@mdsol.com>
Michal Fojtik <mfojtik@redhat.com>
Michal Jemala <michal.jemala@gmail.com>
Michal Minar <miminar@redhat.com>
Michaël Pailloncy <mpapo.dev@gmail.com>
Michiel@unhosted <michiel@unhosted.org>
Miguel Angel Fernández <elmendalerenda@gmail.com>
Mihai Borobocea <MihaiBorobocea@gmail.com>
Mike Chelen <michael.chelen@gmail.com>
Mike Dillon <mike@embody.org>
Mike Gaffney <mike@uberu.com>
Mike Leone <mleone896@gmail.com>
Mike MacCana <mike.maccana@gmail.com>
Mike Naberezny <mike@naberezny.com>
Mike Snitzer <snitzer@redhat.com>
Mikhail Sobolev <mss@mawhrin.net>
Mingzhen Feng <fmzhen@zju.edu.cn>
Mitch Capper <mitch.capper@gmail.com>
Mohit Soni <mosoni@ebay.com>
Morgante Pell <morgante.pell@morgante.net>
Morten Siebuhr <sbhr@sbhr.dk>
Moysés Borges <moyses.furtado@wplex.com.br>
Mrunal Patel <mrunalp@gmail.com>
mschurenko <matt.schurenko@gmail.com>
Mustafa Akın <mustafa91@gmail.com>
Médi-Rémi Hashim <medimatrix@users.noreply.github.com>
Nan Monnand Deng <monnand@gmail.com>
Naoki Orii <norii@cs.cmu.edu>
Natalie Parker <nparker@omnifone.com>
Nate Eagleson <nate@nateeag.com>
Nate Jones <nate@endot.org>
Nathan Hsieh <hsieh.nathan@gmail.com>
@@ -693,11 +518,8 @@ Nathan Kleyn <nathan@nathankleyn.com>
Nathan LeClaire <nathan.leclaire@docker.com>
Neal McBurnett <neal@mcburnett.org>
Nelson Chen <crazysim@gmail.com>
Nghia Tran <nghia@google.com>
Niall O'Higgins <niallo@unworkable.org>
Nicholas E. Rabenau <nerab@gmx.at>
Nick Irvine <nfirvine@nfirvine.com>
Nick Parker <nikaios@gmail.com>
Nick Payne <nick@kurai.co.uk>
Nick Stenning <nick.stenning@digital.cabinet-office.gov.uk>
Nick Stinemates <nick@stinemates.org>
@@ -706,11 +528,8 @@ Nicolas Dudebout <nicolas.dudebout@gatech.edu>
Nicolas Goy <kuon@goyman.com>
Nicolas Kaiser <nikai@nikai.net>
NikolaMandic <mn080202@gmail.com>
nikolas <nnyby@columbia.edu>
noducks <onemannoducks@gmail.com>
Nolan Darilek <nolan@thewordnerd.info>
nponeccop <andy.melnikov@gmail.com>
Nuutti Kotivuori <naked@iki.fi>
nzwsch <hi@nzwsch.com>
O.S. Tezer <ostezer@gmail.com>
OddBloke <daniel@daniel-watkins.co.uk>
@@ -723,14 +542,11 @@ pandrew <letters@paulnotcom.se>
panticz <mail@konczalski.de>
Pascal Borreli <pascal@borreli.com>
Pascal Hartig <phartig@rdrei.net>
Patrick Devine <patrick.devine@docker.com>
Patrick Hemmer <patrick.hemmer@gmail.com>
Patrick Stapleton <github@gdi2290.com>
pattichen <craftsbear@gmail.com>
Paul <paul9869@gmail.com>
paul <paul@inkling.com>
Paul Annesley <paul@annesley.cc>
Paul Bellamy <paul.a.bellamy@gmail.com>
Paul Bowsher <pbowsher@globalpersonals.co.uk>
Paul Hammond <paul@paulhammond.org>
Paul Jimenez <pj@place.org>
@@ -738,18 +554,11 @@ Paul Lietar <paul@lietar.net>
Paul Morie <pmorie@gmail.com>
Paul Nasrat <pnasrat@gmail.com>
Paul Weaver <pauweave@cisco.com>
Pavel Lobashov <ShockwaveNN@gmail.com>
Pavel Tikhomirov <ptikhomirov@parallels.com>
Pavlos Ratis <dastergon@gentoo.org>
Peggy Li <peggyli.224@gmail.com>
Peter Bourgon <peter@bourgon.org>
Peter Braden <peterbraden@peterbraden.co.uk>
Peter Choi <reikani@Peters-MacBook-Pro.local>
Peter Dave Hello <PeterDaveHello@users.noreply.github.com>
Peter Ericson <pdericson@gmail.com>
Peter Esbensen <pkesbensen@gmail.com>
Peter Salvatore <peter@psftw.com>
Peter Volpe <petervo@redhat.com>
Peter Waller <p@pwaller.net>
Phil <underscorephil@gmail.com>
Phil Estes <estesp@linux.vnet.ibm.com>
@@ -763,7 +572,6 @@ Pierre-Alain RIVIERE <pariviere@ippon.fr>
Piotr Bogdan <ppbogdan@gmail.com>
pixelistik <pixelistik@users.noreply.github.com>
Porjo <porjo38@yahoo.com.au>
Pradeep Chhetri <pradeep@indix.com>
Prasanna Gautam <prasannagautam@gmail.com>
Przemek Hejman <przemyslaw.hejman@gmail.com>
pysqz <randomq@126.com>
@@ -772,7 +580,6 @@ Quentin Brossard <qbrossard@gmail.com>
r0n22 <cameron.regan@gmail.com>
Rafal Jeczalik <rjeczalik@gmail.com>
Rafe Colton <rafael.colton@gmail.com>
Raghuram Devarakonda <draghuram@gmail.com>
Rajat Pandit <rp@rajatpandit.com>
Rajdeep Dua <dua_rajdeep@yahoo.com>
Ralph Bean <rbean@redhat.com>
@@ -781,19 +588,13 @@ Ramon van Alteren <ramon@vanalteren.nl>
Recursive Madman <recursive.madman@gmx.de>
Remi Rampin <remirampin@gmail.com>
Renato Riccieri Santos Zannon <renato.riccieri@gmail.com>
resouer <resouer@163.com>
rgstephens <greg@udon.org>
Rhys Hiltner <rhys@twitch.tv>
Rich Seymour <rseymour@gmail.com>
Richard <richard.scothern@gmail.com>
Richard Burnison <rburnison@ebay.com>
Richard Harvey <richard@squarecows.com>
Richard Metzler <richard@paadee.com>
Richo Healey <richo@psych0tik.net>
Rick Bradley <rick@users.noreply.github.com>
Rick van de Loo <rickvandeloo@gmail.com>
Rick Wieman <git@rickw.nl>
Rik Nijessen <rik@keefo.nl>
Robert Bachmann <rb@robertbachmann.at>
Robert Bittle <guywithnose@gmail.com>
Robert Obryk <robryk@gmail.com>
@@ -807,7 +608,6 @@ Rohit Jnagal <jnagal@google.com>
Roland Huß <roland@jolokia.org>
Roland Moriz <rmoriz@users.noreply.github.com>
Ron Smits <ron.smits@gmail.com>
root <docker-dummy@example.com>
Rovanion Luckey <rovanion.luckey@gmail.com>
Rudolph Gottesheim <r.gottesheim@loot.at>
Ryan Anderson <anderson.ryanc@gmail.com>
@@ -818,11 +618,6 @@ Ryan O'Donnell <odonnellryanc@gmail.com>
Ryan Seto <ryanseto@yak.net>
Ryan Thomas <rthomas@atlassian.com>
Rémy Greinhofer <remy.greinhofer@livelovely.com>
s. rannou <mxs@sbrk.org>
s00318865 <sunyuan3@huawei.com>
Sabin Basyal <sabin.basyal@gmail.com>
Sachin Joshi <sachin_jayant_joshi@hotmail.com>
Sam Abed <sam.abed@gmail.com>
Sam Alba <sam.alba@gmail.com>
Sam Bailey <cyprix@cyprix.com.au>
Sam J Sharpe <sam.sharpe@digital.cabinet-office.gov.uk>
@@ -831,9 +626,6 @@ Sam Rijs <srijs@airpost.net>
Sami Wagiaalla <swagiaal@redhat.com>
Samuel Andaya <samuel@andaya.net>
Samuel PHAN <samuel-phan@users.noreply.github.com>
Sankar சங்கர் <sankar.curiosity@gmail.com>
Sanket Saurav <sanketsaurav@gmail.com>
sapphiredev <se.imas.kr@gmail.com>
Satnam Singh <satnam@raintown.org>
satoru <satorulogic@gmail.com>
Satoshi Amemiya <satoshi_amemiya@voyagegroup.com>
@@ -842,35 +634,26 @@ Scott Collier <emailscottcollier@gmail.com>
Scott Johnston <scott@docker.com>
Scott Stamp <scottstamp851@gmail.com>
Scott Walls <sawalls@umich.edu>
sdreyesg <sdreyesg@gmail.com>
Sean Cronin <seancron@gmail.com>
Sean P. Kane <skane@newrelic.com>
Sebastiaan van Steenis <mail@superseb.nl>
Sebastiaan van Stijn <github@gone.nl>
Senthil Kumar Selvaraj <senthil.thecoder@gmail.com>
SeongJae Park <sj38.park@gmail.com>
Seongyeol Lim <seongyeol37@gmail.com>
Sergey Alekseev <sergey.alekseev.minsk@gmail.com>
Sergey Evstifeev <sergey.evstifeev@gmail.com>
Shane Canon <scanon@lbl.gov>
shaunol <shaunol@gmail.com>
Shawn Landden <shawn@churchofgit.com>
Shawn Siefkas <shawn.siefkas@meredith.com>
Shih-Yuan Lee <fourdollars@gmail.com>
Shijiang Wei <mountkin@gmail.com>
Shishir Mahajan <shishir.mahajan@redhat.com>
shuai-z <zs.broccoli@gmail.com>
sidharthamani <sid@rancher.com>
Silas Sewell <silas@sewell.org>
Simei He <hesimei@zju.edu.cn>
Simon Eskildsen <sirup@sirupsen.com>
Simon Leinen <simon.leinen@gmail.com>
Simon Taranto <simon.taranto@gmail.com>
Sindhu S <sindhus@live.in>
Sjoerd Langkemper <sjoerd-github@linuxonly.nl>
Solomon Hykes <solomon@docker.com>
Song Gao <song@gao.io>
Soulou <leo@unbekandt.eu>
soulshake <amy@gandi.net>
Sridatta Thatipamala <sthatipamala@gmail.com>
Sridhar Ratnakumar <sridharr@activestate.com>
Srini Brahmaroutu <sbrahma@us.ibm.com>
@@ -878,30 +661,20 @@ Srini Brahmaroutu <srbrahma@us.ibm.com>
Steeve Morin <steeve.morin@gmail.com>
Stefan Praszalowicz <stefan@greplin.com>
Stephen Crosby <stevecrozz@gmail.com>
Stephen J Day <stephen.day@docker.com>
Steve Francia <steve.francia@gmail.com>
Steve Koch <stevekochscience@gmail.com>
Steven Burgess <steven.a.burgess@hotmail.com>
Steven Merrill <steven.merrill@gmail.com>
Steven Richards <steven@axiomzen.co>
Steven Taylor <steven.taylor@me.com>
Sven Dowideit <SvenDowideit@home.org.au>
Swapnil Daingade <swapnil.daingade@gmail.com>
Sylvain Baubeau <sbaubeau@redhat.com>
Sylvain Bellemare <sylvain.bellemare@ezeep.com>
Sébastien <sebastien@yoozio.com>
Sébastien Luttringer <seblu@seblu.net>
Sébastien Stormacq <sebsto@users.noreply.github.com>
tang0th <tang0th@gmx.com>
Tangi COLIN <tangicolin@gmail.com>
Tatsuki Sugiura <sugi@nemui.org>
Tatsushi Inagaki <e29253@jp.ibm.com>
Ted M. Young <tedyoung@gmail.com>
Tehmasp Chaudhri <tehmasp@gmail.com>
Tejesh Mehta <tejesh.mehta@gmail.com>
Thatcher Peskens <thatcher@docker.com>
theadactyl <thea.lamkin@gmail.com>
Thell 'Bo' Fowler <thell@tbfowler.name>
Thermionix <bond711@gmail.com>
Thijs Terlouw <thijsterlouw@gmail.com>
Thomas Bikeev <thomas.bikeev@mac.com>
@@ -910,11 +683,8 @@ Thomas Hansen <thomas.hansen@gmail.com>
Thomas LEVEIL <thomasleveil@gmail.com>
Thomas Orozco <thomas@orozco.fr>
Thomas Schroeter <thomas@cliqz.com>
Thomas Sjögren <konstruktoid@users.noreply.github.com>
Thomas Texier <sharkone@en-mousse.org>
Tianon Gravi <admwiggin@gmail.com>
Tibor Vass <teabee89@gmail.com>
Tiffany Low <tiffany@box.com>
Tim Bosse <taim@bosboot.org>
Tim Hockin <thockin@google.com>
Tim Ruffles <oi@truffles.me.uk>
@@ -928,7 +698,6 @@ Tobias Gesellchen <tobias@gesellix.de>
Tobias Schmidt <ts@soundcloud.com>
Tobias Schwab <tobias.schwab@dynport.de>
Todd Lunter <tlunter@gmail.com>
Todd Whiteman <todd.whiteman@joyent.com>
Tom Fotherby <tom+github@peopleperhour.com>
Tom Hulihan <hulihan.tom159@gmail.com>
Tom Maaswinkel <tom.maaswinkel@12wiki.eu>
@@ -936,22 +705,16 @@ Tomas Tomecek <ttomecek@redhat.com>
Tomasz Lipinski <tlipinski@users.noreply.github.com>
Tomasz Nurkiewicz <nurkiewicz@gmail.com>
Tommaso Visconti <tommaso.visconti@gmail.com>
Tomáš Hrčka <thrcka@redhat.com>
Tonis Tiigi <tonistiigi@gmail.com>
Tonny Xu <tonny.xu@gmail.com>
Tony Daws <tony@daws.ca>
Tony Miller <mcfiredrill@gmail.com>
Torstein Husebø <torstein@huseboe.net>
tpng <benny.tpng@gmail.com>
Travis Cline <travis.cline@gmail.com>
Travis Thieman <travis.thieman@gmail.com>
Trent Ogren <tedwardo2@gmail.com>
Tristan Carel <tristan.carel@gmail.com>
Tyler Brock <tyler.brock@gmail.com>
Tzu-Jung Lee <roylee17@gmail.com>
Ulysse Carion <ulyssecarion@gmail.com>
unknown <sebastiaan@ws-key-sebas3.dpi1.dpi>
vagrant <vagrant@ubuntu-14.04-amd64-vbox>
Vaidas Jablonskis <jablonskis@gmail.com>
vgeta <gopikannan.venugopalsamy@gmail.com>
Victor Coisne <victor.coisne@dotcloud.com>
@@ -962,7 +725,6 @@ Viktor Vojnovski <viktor.vojnovski@amadeus.com>
Vincent Batts <vbatts@redhat.com>
Vincent Bernat <bernat@luffy.cx>
Vincent Bernat <Vincent.Bernat@exoscale.ch>
Vincent Demeester <vincent@sbr.pm>
Vincent Giersch <vincent.giersch@ovh.net>
Vincent Mayers <vincent.mayers@inbloom.org>
Vincent Woo <me@vincentwoo.com>
@@ -976,7 +738,6 @@ Vivek Goyal <vgoyal@redhat.com>
Vladimir Bulyga <xx@ccxx.cc>
Vladimir Kirillov <proger@wilab.org.ua>
Vladimir Rutsky <altsysrq@gmail.com>
VladimirAus <v_roudakov@yahoo.com>
Vojtech Vitek (V-Teq) <vvitek@redhat.com>
waitingkuo <waitingkuo0527@gmail.com>
Walter Leibbrandt <github@wrl.co.za>
@@ -984,46 +745,25 @@ Walter Stanish <walter@pratyeka.org>
Ward Vandewege <ward@jhvc.com>
WarheadsSE <max@warheads.net>
Wayne Chang <wayne@neverfear.org>
Wei-Ting Kuo <waitingkuo0527@gmail.com>
Wes Morgan <cap10morgan@gmail.com>
Will Dietz <w@wdtz.org>
Will Rouesnel <w.rouesnel@gmail.com>
Will Weaver <monkey@buildingbananas.com>
willhf <willhf@gmail.com>
William Delanoue <william.delanoue@gmail.com>
William Henry <whenry@redhat.com>
William Riancho <wr.wllm@gmail.com>
William Thurston <thurstw@amazon.com>
WiseTrem <shepelyov.g@gmail.com>
wlan0 <sidharthamn@gmail.com>
Wolfgang Powisch <powo@powo.priv.at>
wonderflow <wonderflow.sun@gmail.com>
xamyzhao <x.amy.zhao@gmail.com>
XiaoBing Jiang <s7v7nislands@gmail.com>
Xinzi Zhou <imdreamrunner@gmail.com>
Xiuming Chen <cc@cxm.cc>
xuzhaokui <cynicholas@gmail.com>
y00277921 <yuchangchun1@huawei.com>
Yahya <ya7yaz@gmail.com>
YAMADA Tsuyoshi <tyamada@minimum2scp.org>
Yan Feng <yanfeng2@huawei.com>
Yang Bai <hamo.by@gmail.com>
Yasunori Mahata <nori@mahata.net>
Yestin Sun <sunyi0804@gmail.com>
Yihang Ho <hoyihang5@gmail.com>
Yohei Ueda <yohei@jp.ibm.com>
Yongzhi Pan <panyongzhi@gmail.com>
Yuan Sun <sunyuan3@huawei.com>
Yurii Rashkovskii <yrashk@gmail.com>
Zac Dover <zdover@redhat.com>
Zach Borboa <zachborboa@gmail.com>
Zain Memon <zain@inzain.net>
Zaiste! <oh@zaiste.net>
Zane DeGraffenried <zane.deg@gmail.com>
Zefan Li <lizefan@huawei.com>
Zen Lin(Zhinan Lin) <linzhinan@huawei.com>
Zhang Wei <zhangwei555@huawei.com>
Zhang Wentao <zhangwentao234@huawei.com>
Zilin Du <zilin.du@gmail.com>
zimbatm <zimbatm@zimbatm.com>
Zoltan Tombol <zoltan.tombol@gmail.com>

View File

@@ -1,155 +1,5 @@
# Changelog
## 1.8.2 (2015-09-10)
### Distribution:
- Fixes rare edge case of handling GNU LongLink and LongName entries.
- Fix ^C on docker pull.
- Fix docker pull issues on client disconnection.
- Fix issue that caused the daemon to panic when loggers weren't configured properly.
- Fix goroutine leak pulling images from registry V2.
### Runtime:
- Fix a bug mounting cgroups for docker daemons running inside docker containers.
- Initialize log configuration properly.
### Client:
- Handle `-q` flag in `docker ps` properly when there is a default format.
### Networking:
- Fix several corner cases with netlink.
### Contrib:
- Fix several issues with bash completion.
## 1.8.1 (2015-08-12)
### Distribution
- Fix a bug where pushing multiple tags would result in invalid images
## 1.8.0 (2015-08-11)
### Distribution
+ Trusted pull, push and build, disabled by default
* Make tar layers deterministic between registries
* Don't allow deleting the image of running containers
* Check if a tag name to load is a valid digest
* Allow one character repository names
* Add a more accurate error description for invalid tag name
* Make build cache ignore mtime
### Cli
+ Add support for DOCKER_CONFIG/--config to specify config file dir
+ Add --type flag for docker inspect command
+ Add formatting options to `docker ps` with `--format`
+ Replace `docker -d` with new subcommand `docker daemon`
* Zsh completion updates and improvements
* Add some missing events to bash completion
* Support daemon urls with base paths in `docker -H`
* Validate status= filter to docker ps
* Display when a container is in --net=host in docker ps
* Extend docker inspect to export image metadata related to graph driver
* Restore --default-gateway{,-v6} daemon options
* Add missing unpublished ports in docker ps
* Allow duration strings in `docker events` as --since/--until
* Expose more mounts information in `docker inspect`
### Runtime
+ Add new Fluentd logging driver
+ Allow `docker import` to load from local files
+ Add logging driver for GELF via UDP
+ Allow copying files from the host to containers with `docker cp`
+ Promote volume drivers from experimental to master
+ Add rollover log driver, and --log-driver-opts flag
+ Add memory swappiness tuning options
* Remove cgroup read-only flag when privileged
* Make /proc, /sys, & /dev readonly for readonly containers
* Add cgroup bind mount by default
* Overlay: Export metadata for container and image in `docker inspect`
* Devicemapper: external device activation
* Devicemapper: Compare uuid of base device on startup
* Remove RC4 from the list of registry cipher suites
* Add syslog-facility option
* LXC execdriver compatibility with recent LXC versions
* Mark LXC execdriver as deprecated (to be removed with the migration to runc)
### Plugins
* Separate plugin sockets and specs locations
* Allow TLS connections to plugins
### Bug fixes
- Add missing 'Names' field to /containers/json API output
- Make `docker rmi --dangling` safe when pulling
- Devicemapper: Change default basesize to 100G
- Go Scheduler issue with sync.Mutex and gcc
- Fix issue where Search API endpoint would panic due to empty AuthConfig
- Set image canonical names correctly
- Check dockerinit only if lxc driver is used
- Fix ulimit usage of nproc
- Always attach STDIN if -i,--interactive is specified
- Show error messages when saving container state fails
- Fix incorrect assumption that `--bridge=none` disables networking
- Check for invalid port specifications in host configuration
- Fix endpoint leave failure for --net=host mode
- Fix goroutine leak in the stats API if the container is not running
- Check for apparmor file before reading it
- Fix DOCKER_TLS_VERIFY being ignored
- Set umask to the default on startup
- Correct the message of pause and unpause a non-running container
- Adjust disallowed CpuShares in container creation
- ZFS: correctly apply selinux context
- Display empty string instead of <nil> when IP opt is nil
- `docker kill` returns error when container is not running
- Fix COPY/ADD quoted/json form
- Fix goroutine leak on logs -f with no output
- Remove panic in nat package on invalid hostport
- Fix container linking in Fedora 22
- Fix error caused using default gateways outside of the allocated range
- Format times in inspect command with a template as RFC3339Nano
- Make the registry client accept 2xx and 3xx HTTP status responses as successful
- Fix race issue that caused the daemon to crash when certain layer downloads failed in a specific order.
- Fix error when the docker ps format was not valid.
- Remove redundant ip forward check.
- Fix issue trying to push images to repository mirrors.
- Fix error cleaning up network entrypoints when there is an initialization issue.
## 1.7.1 (2015-07-14)
#### Runtime
- Fix default user spawning exec process with `docker exec`
- Make `--bridge=none` not configure the network bridge
- Publish networking stats properly
- Fix implicit devicemapper selection with static binaries
- Fix socket connections that hung intermittently
- Fix bridge interface creation on CentOS/RHEL 6.6
- Fix local dns lookups added to resolv.conf
- Fix copy command mounting volumes
- Fix read/write privileges in volumes mounted with --volumes-from
#### Remote API
- Fix unmarshalling of Command and Entrypoint
- Set limit for minimum client version supported
- Validate port specification
- Return proper errors when attach/reattach fail
#### Distribution
- Fix pulling private images
- Fix fallback between registry V2 and V1
## 1.7.0 (2015-06-16)
#### Runtime
@@ -197,7 +47,7 @@
- Prohibit mount of /sys
#### Runtime
- Update AppArmor policy to not allow mounts
- Update Apparmor policy to not allow mounts
## 1.6.0 (2015-04-07)
@@ -491,7 +341,7 @@
#### Hack
* Clean up "go test" output from "make test" to be much more readable/scannable.
* Exclude more "definitely not unit tested Go source code" directories from hack/make/test.
* Excluse more "definitely not unit tested Go source code" directories from hack/make/test.
+ Generate md5 and sha256 hashes when building, and upload them via hack/release.sh.
- Include contributed completions in Ubuntu PPA.
+ Add cli integration tests.
@@ -771,7 +621,7 @@ With the ongoing changes to the networking and execution subsystems of docker te
* The ADD instruction now supports caching, which avoids unnecessarily re-uploading the same source content again and again when it hasnt changed
* The new ONBUILD instruction adds to your image a “trigger” instruction to be executed at a later time, when the image is used as the base for another build
* Docker now ships with an experimental storage driver which uses the BTRFS filesystem for copy-on-write
* Docker is officially supported on Mac OS X
* Docker is officially supported on Mac OSX
* The Docker daemon supports systemd socket activation
## 0.7.6 (2014-01-14)
@@ -825,12 +675,12 @@ With the ongoing changes to the networking and execution subsystems of docker te
- Fix ADD caching issue with . prefixed path
- Fix docker build on devicemapper by reverting sparse file tar option
- Fix issue with file caching and prevent wrong cache hit
* Use same error handling while unmarshalling CMD and ENTRYPOINT
#### Documentation
* Simplify and streamline Amazon Quickstart
* Install instructions use unprefixed Fedora image
* Install instructions use unprefixed fedora image
* Update instructions for mtu flag for Docker on GCE
+ Add Ubuntu Saucy to installation
- Fix for wrong version warning on master instead of latest
@@ -1005,7 +855,7 @@ With the ongoing changes to the networking and execution subsystems of docker te
* Improve unit tests
* The test suite now runs all tests even if one fails
* Refactor C in Go (Devmapper)
- Fix OS X compilation
- Fix OSX compilation
## 0.7.0 (2013-11-25)
@@ -1023,7 +873,7 @@ With the ongoing changes to the networking and execution subsystems of docker te
#### Runtime
* Improve stability, fixes some race conditions
* Improve stability, fixes some race conditons
* Skip the volumes mounted when deleting the volumes of container.
* Fix layer size computation: handle hard links correctly
* Use the work Path for docker cp CONTAINER:PATH

View File

@@ -28,8 +28,8 @@ Please **DO NOT** file a public issue, instead send your report privately to
[security@docker.com](mailto:security@docker.com).
Security reports are greatly appreciated and we will publicly thank you for it.
We also like to send gifts&mdash;if you're into Docker schwag, make sure to let
us know. We currently do not offer a paid security bounty program, but are not
We also like to send gifts&mdash;if you're into Docker schwag make sure to let
us know We currently do not offer a paid security bounty program, but are not
ruling it out in the future.
@@ -220,7 +220,7 @@ set of patches that should be reviewed together: for example, upgrading the
version of a vendored dependency and taking advantage of its now available new
feature constitute two separate units of work. Implementing a new function and
calling it in another file constitute a single logical unit of work. The very
high majority of submissions should have a single commit, so if in doubt: squash
high majory of submissions should have a single commit, so if in doubt: squash
down to one.
After every commit, [make sure the test suite passes]
@@ -234,8 +234,6 @@ close an issue. Including references automatically closes the issue on a merge.
Please do not add yourself to the `AUTHORS` file, as it is regenerated regularly
from the Git history.
Please see the [Coding Style](#coding-style) for further guidelines.
### Merge approval
Docker maintainers use LGTM (Looks Good To Me) in comments on the code review to
@@ -319,7 +317,7 @@ maintainer to make a difference on the project!
### IRC meetings
There are two monthly meetings taking place on #docker-dev IRC to accommodate all
There are two monthly meetings taking place on #docker-dev IRC to accomodate all
timezones. Anybody can propose a topic for discussion prior to the meeting.
If you feel the conversation is going off-topic, feel free to point it out.
@@ -387,49 +385,3 @@ do need a fair way to deal with people who are making our community suck.
appeals, we know that mistakes happen, and we'll work with you to come up with a
fair solution if there has been a misunderstanding.
## Coding Style
Unless explicitly stated, we follow all coding guidelines from the Go
community. While some of these standards may seem arbitrary, they somehow seem
to result in a solid, consistent codebase.
It is possible that the code base does not currently comply with these
guidelines. We are not looking for a massive PR that fixes this, since that
goes against the spirit of the guidelines. All new contributions should make a
best effort to clean up and make the code base better than they left it.
Obviously, apply your best judgement. Remember, the goal here is to make the
code base easier for humans to navigate and understand. Always keep that in
mind when nudging others to comply.
The rules:
1. All code should be formatted with `gofmt -s`.
2. All code should pass the default levels of
[`golint`](https://github.com/golang/lint).
3. All code should follow the guidelines covered in [Effective
Go](http://golang.org/doc/effective_go.html) and [Go Code Review
Comments](https://github.com/golang/go/wiki/CodeReviewComments).
4. Comment the code. Tell us the why, the history and the context.
5. Document _all_ declarations and methods, even private ones. Declare
expectations, caveats and anything else that may be important. If a type
gets exported, having the comments already there will ensure it's ready.
6. Variable name length should be proportional to its context and no longer.
`noCommaALongVariableNameLikeThisIsNotMoreClearWhenASimpleCommentWouldDo`.
In practice, short methods will have short variable names and globals will
have longer names.
7. No underscores in package names. If you need a compound name, step back,
and re-examine why you need a compound name. If you still think you need a
compound name, lose the underscore.
8. No utils or helpers packages. If a function is not general enough to
warrant its own package, it has not been written generally enough to be a
part of a util package. Just leave it unexported and well-documented.
9. All tests should run with `go test` and outside tooling should not be
required. No, we don't need another unit testing framework. Assertion
packages are acceptable if they provide _real_ incremental value.
10. Even though we call these "rules" above, they are actually just
guidelines. Since you've read all the rules, you now know that.
If you are having trouble getting into the mood of idiomatic Go, we recommend
reading through [Effective Go](http://golang.org/doc/effective_go.html). The
[Go Blog](http://blog.golang.org/) is also a great resource. Drinking the
kool-aid is a lot easier than going thirsty.
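As a purely illustrative sketch (the package, constant, and function below are invented for this example and are not part of the Docker code base), code that follows the guidelines above looks roughly like this: every exported declaration carries a doc comment, and variable names stay short because their context is short.

```go
// Package ping is a hypothetical example illustrating the style guidelines
// above: documented declarations and names proportional to their context.
package ping

import (
	"fmt"
	"net"
	"time"
)

// Timeout is the default amount of time a single probe may take.
const Timeout = 3 * time.Second

// Reachable reports whether addr accepts TCP connections within Timeout.
// Short names (addr, c, err) are fine here because the body is short and
// their meaning is obvious in context.
func Reachable(addr string) (bool, error) {
	c, err := net.DialTimeout("tcp", addr, Timeout)
	if err != nil {
		return false, fmt.Errorf("probe %s: %v", addr, err)
	}
	c.Close()
	return true, nil
}
```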

View File

@@ -19,14 +19,14 @@
# -e GPG_PASSPHRASE=gloubiboulga \
# docker hack/release.sh
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# Note: Apparmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#
FROM ubuntu:14.04
MAINTAINER Tianon Gravi <admwiggin@gmail.com> (@tianon)
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61
RUN echo deb http://ppa.launchpad.net/zfs-native/stable/ubuntu trusty main > /etc/apt/sources.list.d/zfs.list
# Packaged dependencies
@@ -37,7 +37,6 @@ RUN apt-get update && apt-get install -y \
bash-completion \
btrfs-tools \
build-essential \
createrepo \
curl \
dpkg-sig \
git \
@@ -70,7 +69,7 @@ RUN cd /usr/local/lvm2 \
# see https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL
# Install lxc
ENV LXC_VERSION 1.1.2
ENV LXC_VERSION 1.0.7
RUN mkdir -p /usr/src/lxc \
&& curl -sSL https://linuxcontainers.org/downloads/lxc/lxc-${LXC_VERSION}.tar.gz | tar -v -C /usr/src/lxc/ -xz --strip-components=1
RUN cd /usr/src/lxc \
@@ -117,37 +116,21 @@ RUN git clone https://github.com/golang/tools.git /go/src/golang.org/x/tools \
&& (cd /go/src/golang.org/x/tools && git checkout -q $GO_TOOLS_COMMIT) \
&& go install -v golang.org/x/tools/cmd/cover \
&& go install -v golang.org/x/tools/cmd/vet
# Grab Go's lint tool
ENV GO_LINT_COMMIT f42f5c1c440621302702cb0741e9d2ca547ae80f
RUN git clone https://github.com/golang/lint.git /go/src/github.com/golang/lint \
&& (cd /go/src/github.com/golang/lint && git checkout -q $GO_LINT_COMMIT) \
&& go install -v github.com/golang/lint/golint
# TODO replace FPM with some very minimal debhelper stuff
RUN gem install --no-rdoc --no-ri fpm --version 1.3.2
# Install registry
ENV REGISTRY_COMMIT ec87e9b6971d831f0eff752ddb54fb64693e51cd
ENV REGISTRY_COMMIT d957768537c5af40e4f4cd96871f7b2bde9e2923
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2 github.com/docker/distribution/cmd/registry \
&& rm -rf "$GOPATH"
# Install notary server
ENV NOTARY_COMMIT 8e8122eb5528f621afcd4e2854c47302f17392f7
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_COMMIT") \
&& GOPATH="$GOPATH/src/github.com/docker/notary/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
&& rm -rf "$GOPATH"
&& git clone https://github.com/docker/distribution.git /go/src/github.com/docker/distribution \
&& (cd /go/src/github.com/docker/distribution && git checkout -q $REGISTRY_COMMIT) \
&& GOPATH=/go/src/github.com/docker/distribution/Godeps/_workspace:/go \
go build -o /go/bin/registry-v2 github.com/docker/distribution/cmd/registry \
&& rm -rf /go/src/github.com/docker/distribution/
# Get the "docker-py" source so we can run their integration tests
ENV DOCKER_PY_COMMIT 8a87001d09852058f08a807ab6e8491d57ca1e88
ENV DOCKER_PY_COMMIT 91985b239764fe54714fa0a93d52aa362357d251
RUN git clone https://github.com/docker/docker-py.git /docker-py \
&& cd /docker-py \
&& git checkout -q $DOCKER_PY_COMMIT
@@ -179,36 +162,26 @@ RUN ln -sv $PWD/contrib/completion/bash/docker /etc/bash_completion.d/docker
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image.sh /go/src/github.com/docker/docker/contrib/
RUN ./contrib/download-frozen-image.sh /docker-frozen-images \
busybox:latest@8c2e06607696bd4afb3d03b687e361cc43cf8ec1a4a725bc96e39f05ba97dd55 \
hello-world:frozen@91c95931e552b11604fea91c2f537284149ec32fff0f700a4769cfd31d7696ae \
busybox:latest@4986bf8c15363d1c5d15512d5266f8777bfba4974ac56e3270e7760f6f0a8125 \
hello-world:frozen@e45a5af57b00862e5ef5782a9925979a02ba2b12dff832fd0991335f4a11e5c5 \
jess/unshare@5c9f6ea50341a2a8eb6677527f2bdedbf331ae894a41714fda770fb130f3314d
# see also "hack/make/.ensure-frozen-images" (which needs to be updated any time this list is)
# Download man page generator
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone -b v1.0.3 https://github.com/cpuguy83/go-md2man.git "$GOPATH/src/github.com/cpuguy83/go-md2man" \
&& git clone -b v1.2 https://github.com/russross/blackfriday.git "$GOPATH/src/github.com/russross/blackfriday" \
&& go get -v -d github.com/cpuguy83/go-md2man \
&& go build -v -o /usr/local/bin/go-md2man github.com/cpuguy83/go-md2man \
&& rm -rf "$GOPATH"
&& git clone -b v1.0.1 https://github.com/cpuguy83/go-md2man.git /go/src/github.com/cpuguy83/go-md2man \
&& git clone -b v1.2 https://github.com/russross/blackfriday.git /go/src/github.com/russross/blackfriday
# Download toml validator
ENV TOMLV_COMMIT 9baf8a8a9f2ed20a8e54160840c492f937eeaf9a
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/BurntSushi/toml.git "$GOPATH/src/github.com/BurntSushi/toml" \
&& (cd "$GOPATH/src/github.com/BurntSushi/toml" && git checkout -q "$TOMLV_COMMIT") \
&& go build -v -o /usr/local/bin/tomlv github.com/BurntSushi/toml/cmd/tomlv \
&& rm -rf "$GOPATH"
&& git clone https://github.com/BurntSushi/toml.git /go/src/github.com/BurntSushi/toml \
&& (cd /go/src/github.com/BurntSushi/toml && git checkout -q $TOMLV_COMMIT)
# Build/install the tool for embedding resources in Windows binaries
ENV RSRC_COMMIT e48dbf1b7fc464a9e85fcec450dddf80816b76e0
RUN set -x \
&& git clone https://github.com/akavel/rsrc.git /go/src/github.com/akavel/rsrc \
&& cd /go/src/github.com/akavel/rsrc \
&& git checkout -q $RSRC_COMMIT \
&& go install -v
# copy vendor/ because go-md2man needs golang.org/x/net
COPY vendor /go/src/github.com/docker/docker/vendor
RUN go install -v github.com/cpuguy83/go-md2man \
github.com/BurntSushi/toml/cmd/tomlv
# Wrap all commands in the "docker-in-docker" script to allow nested containers
ENTRYPOINT ["hack/dind"]

View File

@@ -124,8 +124,98 @@ the relevant operators.
* If the change affects the governance, philosophy, goals or principles of the project,
it must be approved by BDFL.
* A pull request can be in 1 of 5 distinct states, for each of which there is a corresponding label
that needs to be applied. `Rules.review.states` contains the list of states with possible targets
for each.
"""
# Triage
[Rules.review.states.0-needs-triage]
# Maintainers are expected to triage new incoming pull requests by removing
# the `0-triage` label and adding the correct labels (e.g. `1-design-review`)
# potentially skipping some steps depending on the kind of pull request.
# Use common sense for judging.
#
# Checking for DCO should be done at this stage.
#
# If an owner, responsible for closing or merging, can be assigned to the PR,
# all the better.
close = "e.g. unresponsive contributor without DCO"
3-docs-review = "non-proposal documentation-only change"
2-code-review = "e.g. trivial bugfix"
1-design-review = "general case"
# Design review
[Rules.review.states.1-needs-design-review]
# Maintainers are expected to comment on the design of the pull request.
# Review of documentation is expected only in the context of design validation,
# not for stylistic changes.
#
# Ideally, documentation should reflect the expected behavior of the code.
# No code review should take place in this step.
#
# Once design is approved, a maintainer should make sure to remove this label
# and add the next one.
close = "design rejected"
3-docs-review = "proposals with only documentation changes"
2-code-review = "general case"
# Code review
[Rules.review.states.2-needs-code-review]
# Maintainers are expected to review the code and ensure that it is good
# quality and in accordance with the documentation in the PR.
#
# If documentation is absent but expected, maintainers should ask for documentation.
#
# All tests should pass.
#
# Once code is approved according to the rules of the subsystem, a maintainer
# should make sure to remove this label and add the next one.
close = ""
1-design-review = "raises design concerns"
4-merge = "trivial change not impacting documentation"
3-docs-review = "general case"
# Docs review
[Rules.review.states.3-needs-docs-review]
# Maintainers are expected to review the documentation in its bigger context,
# ensuring consistency, completeness, validity, and breadth of coverage across
# all extant and new documentation.
#
# They should ask for any editorial change that makes the documentation more
# consistent and easier to understand.
#
# Changes and additions to docs must be reviewed and approved (LGTM'd) by a minimum of
# two docs sub-project maintainers. If the docs change originates with a docs
# maintainer, only one additional LGTM is required (since we assume a docs maintainer
# approves of their own PR).
#
# Once documentation is approved (see below), a maintainer should make sure to remove this
# label and add the next one.
close = ""
2-code-review = "requires more code changes"
1-design-review = "raises design concerns"
4-merge = "general case"
# Merge
[Rules.review.states.4-needs-merge]
# Maintainers are expected to merge this pull request as soon as possible.
# They can ask for a rebase, or carry the pull request themselves.
# These should be the easy PRs to merge.
close = "carry PR"
merge = ""
[Rules.DCO]
title = "Helping contributors with the DCO"
@@ -224,7 +314,7 @@ made through a pull request.
"jfrazelle",
"crosbymichael"
]
[Org.Operators.community]
people = [
"theadactyl"
@@ -236,7 +326,7 @@ made through a pull request.
# day of a new maintainer, the best advice should be "follow the C.M.'s example and you'll
# be fine".
"Chief Maintainer" = "crosbymichael"
# The community manager is responsible for serving the project community, including users,
# contributors and partners. This involves:
# - facilitating communication between maintainers, contributors and users
@@ -245,7 +335,7 @@ made through a pull request.
# - anything the project community needs to be successful
#
# The community manager is a point of contact for any contributor who has questions, concerns
# or feedback about project operations.
"Community Manager" = "theadactyl"
[Org."Core maintainers"]
@@ -267,7 +357,6 @@ made through a pull request.
people = [
"calavera",
"crosbymichael",
"erikh",
"estesp",
@@ -447,11 +536,6 @@ made through a pull request.
Email = "ben@firshman.co.uk"
GitHub = "bfirsh"
[people.calavera]
Name = "David Calavera"
Email = "david.calavera@gmail.com"
GitHub = "calavera"
[people.cpuguy83]
Name = "Brian Goff"
Email = "cpuguy83@gmail.com"
@@ -566,7 +650,7 @@ made through a pull request.
Name = "Sebastiaan van Stijn"
Email = "github@gone.nl"
GitHub = "thaJeztah"
[people.theadactyl]
Name = "Thea Lamkin"
Email = "thea@docker.com"

View File

@@ -6,7 +6,6 @@
DOCKER_ENVS := \
-e BUILDFLAGS \
-e DOCKER_CLIENTONLY \
-e DOCKER_DEBUG \
-e DOCKER_EXECDRIVER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GRAPHDRIVER \
@@ -46,11 +45,6 @@ binary: build
cross: build
$(DOCKER_RUN_DOCKER) hack/make.sh binary cross
deb: build
$(DOCKER_RUN_DOCKER) hack/make.sh binary build-deb
rpm: build
$(DOCKER_RUN_DOCKER) hack/make.sh binary build-rpm
test: build
$(DOCKER_RUN_DOCKER) hack/make.sh binary cross test-unit test-integration-cli test-docker-py
@@ -65,7 +59,7 @@ test-docker-py: build
$(DOCKER_RUN_DOCKER) hack/make.sh binary test-docker-py
validate: build
$(DOCKER_RUN_DOCKER) hack/make.sh validate-dco validate-gofmt validate-pkg validate-lint validate-test validate-toml validate-vet
$(DOCKER_RUN_DOCKER) hack/make.sh validate-dco validate-gofmt validate-test validate-toml validate-vet
shell: build
$(DOCKER_RUN_DOCKER) bash
@@ -75,6 +69,3 @@ build: bundles
bundles:
mkdir bundles
docs:
$(MAKE) -C docs docs

View File

@@ -13,7 +13,7 @@ databases, and backend services without depending on a particular stack
or provider.
Docker began as an open-source implementation of the deployment engine which
powers [dotCloud](https://www.dotcloud.com), a popular Platform-as-a-Service.
powers [dotCloud](https://dotcloud.com), a popular Platform-as-a-Service.
It benefits directly from the experience accumulated over several years
of large-scale operation and support of hundreds of thousands of
applications and databases.
@@ -30,7 +30,7 @@ security@docker.com and not by creating a github issue.
A common method for distributing applications and sandboxing their
execution is to use virtual machines, or VMs. Typical VM formats are
VMware's vmdk, Oracle VirtualBox's vdi, and Amazon EC2's ami. In theory
VMware's vmdk, Oracle Virtualbox's vdi, and Amazon EC2's ami. In theory
these formats should allow every developer to automatically package
their application into a "machine" for easy distribution and deployment.
In practice, that almost never happens, for a few reasons:
@@ -58,7 +58,7 @@ takes place at the kernel level. Most modern operating system kernels
now support the primitives necessary for containerization, including
Linux with [openvz](https://openvz.org),
[vserver](http://linux-vserver.org) and more recently
[lxc](https://linuxcontainers.org/), Solaris with
[lxc](http://lxc.sourceforge.net), Solaris with
[zones](https://docs.oracle.com/cd/E26502_01/html/E29024/preface-1.html#scrolltoc),
and FreeBSD with
[Jails](https://www.freebsd.org/doc/handbook/jails.html).
@@ -168,9 +168,9 @@ Under the hood
Under the hood, Docker is built on the following components:
* The
[cgroups](https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt)
[cgroup](http://blog.dotcloud.com/kernel-secrets-from-the-paas-garage-part-24-c)
and
[namespaces](http://man7.org/linux/man-pages/man7/namespaces.7.html)
[namespacing](http://blog.dotcloud.com/under-the-hood-linux-kernels-on-dotcloud-part)
capabilities of the Linux kernel
* The [Go](https://golang.org) programming language
* The [Docker Image Specification](https://github.com/docker/docker/blob/master/image/spec/v1.md)
@@ -183,7 +183,7 @@ Contributing to Docker
[![Jenkins Build Status](https://jenkins.dockerproject.org/job/Docker%20Master/badge/icon)](https://jenkins.dockerproject.org/job/Docker%20Master/)
Want to hack on Docker? Awesome! We have [instructions to help you get
started contributing code or documentation](https://docs.docker.com/project/who-written-for/).
started contributing code or documentation.](https://docs.docker.com/project/who-written-for/).
These instructions are probably not perfect, please let us know if anything
feels wrong or incomplete. Better yet, submit a PR and improve them yourself.
@@ -289,7 +289,7 @@ system
* [Docker Compose](https://github.com/docker/compose) (formerly Fig):
Define and run multi-container apps
* [Kitematic](https://github.com/kitematic/kitematic): The easiest way to use
Docker on Mac and Windows
Docker on a Mac
If you know of another project underway that should be listed here, please help
us keep this list up-to-date by submitting a PR.

View File

@@ -1,183 +0,0 @@
Docker Engine Roadmap
=====================
### How should I use this document?
This document provides a description of the items that the project has decided to prioritize. This should
serve as a reference point for Docker contributors to understand where the project is going, and
help determine whether a contribution could conflict with longer-term plans.
The fact that a feature isn't listed here doesn't mean that a patch for it will automatically be
refused (except for those mentioned as "frozen features" below)! We are always happy to receive
patches for cool new features we haven't thought about, or didn't judge to be a priority. Please
understand, however, that such patches might take longer for us to review.
### How can I help?
Short term objectives are listed in the [wiki](https://github.com/docker/docker/wiki) and described
in [Issues](https://github.com/docker/docker/issues?q=is%3Aopen+is%3Aissue+label%3Aroadmap). Our
goal is to split the workload in such a way that anybody can jump in and help. Please comment on an
issue if you want to take it, to avoid duplicating effort! Similarly, if a maintainer is already
assigned to an issue you'd like to participate in, pinging them on IRC or GitHub to offer your help is
the best way to go.
### How can I add something to the roadmap?
The roadmap process is new to the Docker Engine: we are only beginning to structure and document the
project objectives. Our immediate goal is to be more transparent, and work with our community to
focus our efforts on fewer prioritized topics.
We hope to offer in the near future a process allowing anyone to propose a topic to the roadmap, but
we are not quite there yet. For the time being, the BDFL remains the keeper of the roadmap, and we
won't be accepting pull requests adding or removing items from this file.
# 1. Features and refactoring
## 1.1 Security
Security is a top objective for the Docker Engine. The most notable items we intend to provide in
the near future are:
- Trusted distribution of images: the effort is driven by the [distribution](https://github.com/docker/distribution)
group but will have significant impact on the Engine
- [User namespaces](https://github.com/docker/docker/pull/12648)
- [Seccomp support](https://github.com/docker/libcontainer/pull/613)
## 1.2 Plumbing project
We define a plumbing tool as a standalone piece of software usable and meaningful on its own. In
the current state of the Docker Engine, most subsystems provide independent functionality (such as
the builder, pushing and pulling images, and running applications in a containerized environment),
but all of it is coupled in a single binary. We want to offer users the flexibility to use only the
pieces they need, and we will also gain in maintainability by splitting the project among multiple
repositories.
As it currently stands, the rough design outline is to have:
- Low level plumbing tools, each dealing with one responsibility (e.g., [runC](https://runc.io))
- Docker subsystems services, each exposing an elementary concept over an API, and relying on one or
multiple lower level plumbing tools for their implementation (e.g., network management)
- Docker Engine to expose higher level actions (e.g., create a container with volume `V` and network
`N`), while still providing pass-through access to the individual subsystems.
The architectural details are still being worked on, but one thing we know for sure is that we need
to technically decouple the pieces.
### 1.2.1 Runtime
A Runtime tool already exists today in the form of [runC](https://github.com/opencontainers/runc).
We intend to modify the Engine to directly call out to a binary implementing the Open Containers
Specification such as runC rather than relying on libcontainer to set the container runtime up.
This plan will deprecate the existing [`execdriver`](https://github.com/docker/docker/tree/master/daemon/execdriver)
as different runtime backends will be implemented as separated binaries instead of being compiled
into the Engine.
### 1.2.2 Builder
The Builder (i.e., the ability to build an image from a Dockerfile) is already nicely decoupled,
but would benefit from being entirely separated from the Engine, and rely on the standard Engine
API for its operations.
### 1.2.3 Distribution
Distribution already has a [dedicated repository](https://github.com/docker/distribution) which
holds the implementation for Registry v2 and client libraries. We could imagine going further by
having the Engine call out to a binary providing image distribution related functionalities.
There are two short term goals related to image distribution. The first is stabilize and simplify
the push/pull code. Following that is the conversion to the more secure Registry V2 protocol.
### 1.2.4 Networking
Most of networking related code was already decoupled today in [libnetwork](https://github.com/docker/libnetwork).
As with other ingredients, we might want to take it a step further and make it a meaningful utility
that the Engine would call out to instead of a library.
## 1.3 Plugins
An initiative around plugins started with Docker 1.7.0, with the goal of allowing for out of
process extensibility of some Docker functionalities, starting with volumes and networking. The
approach is to provide specific extension points rather than generic hooking facilities. We also
deliberately keep the extension API as simple as possible, expanding it as we discover valid use
cases that cannot otherwise be implemented.
At the time of writing:
- Plugin support is merged as an experimental feature: real world use cases and user feedback will
help us refine the UX to make the feature more user friendly.
- There are no immediate plans to expand on the number of pluggable subsystems.
- Golang 1.5 might add language support for [plugins](https://docs.google.com/document/d/1nr-TQHw_er6GOQRsF6T43GGhFDelrAP0NqSS_00RgZQ)
which we consider supporting as an alternative to JSON/HTTP.
## 1.4 Volume management
Volumes are not a first class citizen in the Engine today: we would like better volume management,
similar to the way networks are managed in the new [CNM](https://github.com/docker/docker/issues/9983).
## 1.5 Better API implementation
The current Engine API is insufficiently typed, versioned, and ultimately hard to maintain. We
also suffer from the lack of a common implementation with [Swarm](https://github.com/docker/swarm).
## 1.6 Checkpoint/restore
Support for checkpoint/restore was [merged](https://github.com/docker/libcontainer/pull/479) in
[libcontainer](https://github.com/docker/libcontainer) and made available through [runC](https://runc.io):
we intend to take advantage of it in the Engine.
# 2 Frozen features
## 2.1 Docker exec
We won't accept patches expanding the surface of `docker exec`, which we intend to keep as a
*debugging* feature, as well as being strongly dependent on the Runtime ingredient effort.
## 2.2 Dockerfile syntax
The Dockerfile syntax as we know it is simple, and has proven successful in supporting all our
[official images](https://github.com/docker-library/official-images). Although this is *not* a
definitive move, we temporarily won't accept more patches to the Dockerfile syntax for several
reasons:
- The long-term impact of syntax changes is a sensitive matter that requires an amount of attention
that the volume of the Engine codebase and activity today doesn't allow us to provide.
- Allowing the Builder to be implemented as a separate utility consuming the Engine's API will
open the door for many possibilities, such as offering alternate syntaxes or DSL for existing
languages without cluttering the Engine's codebase.
- A standalone Builder will also offer the opportunity for a better dedicated group of maintainers
to own the Dockerfile syntax and decide collectively on the direction to give it.
- Our experience with official images tends to show that no new instruction or syntax expansion is
*strictly* necessary for the majority of use cases, and although we are aware many things are still
lacking for many, we cannot make it a priority yet for the above reasons.
Again, this is not about saying that the Dockerfile syntax is done, it's about making choices about
what we want to do first!
## 2.3 Remote Registry Operations
A large amount of work is ongoing in the area of image distribution and
provenance. This includes moving to the V2 Registry API and heavily
refactoring the code that powers these features. The desired result is more
secure, reliable and easier to use image distribution.
Part of the problem with this part of the code base is the lack of a stable
and flexible interface. If new features are added that access the registry
without solidifying these interfaces, achieving feature parity will continue
to be elusive. While we get a handle on this situation, we are imposing a
moratorium on new code that accesses the Registry API in commands that don't
already make remote calls.
Currently, only the following commands cause interaction with a remote
registry:
- push
- pull
- run
- build
- search
- login
In the interest of stabilizing the registry access model during this ongoing
work, we are not accepting additions to other commands that will cause remote
interaction with the Registry API. This moratorium will lift when the goals of
the distribution project have been met.

View File

@@ -1 +1 @@
1.8.2
1.7.0

View File

@@ -8,7 +8,6 @@ import (
"github.com/Sirupsen/logrus"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/signal"
)
@@ -17,23 +16,23 @@ import (
//
// Usage: docker attach [OPTIONS] CONTAINER
func (cli *DockerCli) CmdAttach(args ...string) error {
cmd := Cli.Subcmd("attach", []string{"CONTAINER"}, "Attach to a running container", true)
noStdin := cmd.Bool([]string{"#nostdin", "-no-stdin"}, false, "Do not attach STDIN")
proxy := cmd.Bool([]string{"#sig-proxy", "-sig-proxy"}, true, "Proxy all received signals to the process")
var (
cmd = cli.Subcmd("attach", "CONTAINER", "Attach to a running container", true)
noStdin = cmd.Bool([]string{"#nostdin", "-no-stdin"}, false, "Do not attach STDIN")
proxy = cmd.Bool([]string{"#sig-proxy", "-sig-proxy"}, true, "Proxy all received signals to the process")
)
cmd.Require(flag.Exact, 1)
cmd.ParseFlags(args, true)
name := cmd.Arg(0)
serverResp, err := cli.call("GET", "/containers/"+cmd.Arg(0)+"/json", nil, nil)
stream, _, err := cli.call("GET", "/containers/"+name+"/json", nil, nil)
if err != nil {
return err
}
defer serverResp.body.Close()
var c types.ContainerJSON
if err := json.NewDecoder(serverResp.body).Decode(&c); err != nil {
if err := json.NewDecoder(stream).Decode(&c); err != nil {
return err
}
@@ -77,7 +76,7 @@ func (cli *DockerCli) CmdAttach(args ...string) error {
return err
}
if status != 0 {
return Cli.StatusError{StatusCode: status}
return StatusError{StatusCode: status}
}
return nil

View File

@@ -1,7 +1,6 @@
package client
import (
"archive/tar"
"bufio"
"encoding/base64"
"encoding/json"
@@ -14,25 +13,20 @@ import (
"os/exec"
"path"
"path/filepath"
"regexp"
"runtime"
"strconv"
"strings"
"github.com/docker/docker/api"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/graph/tags"
"github.com/docker/docker/opts"
"github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/fileutils"
"github.com/docker/docker/pkg/httputils"
"github.com/docker/docker/pkg/jsonmessage"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/parsers"
"github.com/docker/docker/pkg/progressreader"
"github.com/docker/docker/pkg/streamformatter"
"github.com/docker/docker/pkg/symlink"
"github.com/docker/docker/pkg/ulimit"
"github.com/docker/docker/pkg/units"
"github.com/docker/docker/pkg/urlutil"
"github.com/docker/docker/registry"
@@ -49,7 +43,7 @@ const (
//
// Usage: docker build [OPTIONS] PATH | URL | -
func (cli *DockerCli) CmdBuild(args ...string) error {
cmd := Cli.Subcmd("build", []string{"PATH | URL | -"}, "Build a new image from the source code at PATH", true)
cmd := cli.Subcmd("build", "PATH | URL | -", "Build a new image from the source code at PATH", true)
tag := cmd.String([]string{"t", "-tag"}, "", "Repository name (and optionally a tag) for the image")
suppressOutput := cmd.Bool([]string{"q", "-quiet"}, false, "Suppress the verbose output generated by the containers")
noCache := cmd.Bool([]string{"#no-cache", "-no-cache"}, false, "Do not use cache when building the image")
@@ -66,117 +60,154 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
flCPUSetMems := cmd.String([]string{"-cpuset-mems"}, "", "MEMs in which to allow execution (0-3, 0,1)")
flCgroupParent := cmd.String([]string{"-cgroup-parent"}, "", "Optional parent cgroup for the container")
ulimits := make(map[string]*ulimit.Ulimit)
flUlimits := opts.NewUlimitOpt(&ulimits)
cmd.Var(flUlimits, []string{"-ulimit"}, "Ulimit options")
cmd.Require(flag.Exact, 1)
// For trusted pull on "FROM <image>" instruction.
addTrustedFlags(cmd, true)
cmd.ParseFlags(args, true)
var (
context io.ReadCloser
context archive.Archive
isRemote bool
err error
)
_, err = exec.LookPath("git")
hasGit := err == nil
if cmd.Arg(0) == "-" {
// As a special case, 'docker build -' will build from either an empty context with the
// contents of stdin as a Dockerfile, or a tar-ed context from stdin.
buf := bufio.NewReader(cli.in)
magic, err := buf.Peek(tarHeaderSize)
if err != nil && err != io.EOF {
return fmt.Errorf("failed to peek context header from STDIN: %v", err)
}
if !archive.IsArchive(magic) {
dockerfile, err := ioutil.ReadAll(buf)
if err != nil {
return fmt.Errorf("failed to read Dockerfile from STDIN: %v", err)
}
specifiedContext := cmd.Arg(0)
// -f option has no meaning when we're reading it from stdin,
// so just use our default Dockerfile name
*dockerfileName = api.DefaultDockerfileName
context, err = archive.Generate(*dockerfileName, string(dockerfile))
} else {
context = ioutil.NopCloser(buf)
}
} else if urlutil.IsURL(cmd.Arg(0)) && (!urlutil.IsGitURL(cmd.Arg(0)) || !hasGit) {
isRemote = true
} else {
root := cmd.Arg(0)
if urlutil.IsGitURL(root) {
root, err = utils.GitClone(root)
if err != nil {
return err
}
defer os.RemoveAll(root)
}
if _, err := os.Stat(root); err != nil {
return err
}
var (
contextDir string
tempDir string
relDockerfile string
)
absRoot, err := filepath.Abs(root)
if err != nil {
return err
}
switch {
case specifiedContext == "-":
tempDir, relDockerfile, err = getContextFromReader(cli.in, *dockerfileName)
case urlutil.IsGitURL(specifiedContext) && hasGit:
tempDir, relDockerfile, err = getContextFromGitURL(specifiedContext, *dockerfileName)
case urlutil.IsURL(specifiedContext):
tempDir, relDockerfile, err = getContextFromURL(cli.out, specifiedContext, *dockerfileName)
default:
contextDir, relDockerfile, err = getContextFromLocalDir(specifiedContext, *dockerfileName)
filename := *dockerfileName // path to Dockerfile
if *dockerfileName == "" {
// No -f/--file was specified so use the default
*dockerfileName = api.DefaultDockerfileName
filename = filepath.Join(absRoot, *dockerfileName)
// Just to be nice ;-) look for 'dockerfile' too but only
// use it if we found it, otherwise ignore this check
if _, err = os.Lstat(filename); os.IsNotExist(err) {
tmpFN := path.Join(absRoot, strings.ToLower(*dockerfileName))
if _, err = os.Lstat(tmpFN); err == nil {
*dockerfileName = strings.ToLower(*dockerfileName)
filename = tmpFN
}
}
}
origDockerfile := *dockerfileName // used for error msg
if filename, err = filepath.Abs(filename); err != nil {
return err
}
// Verify that 'filename' is within the build context
filename, err = symlink.FollowSymlinkInScope(filename, absRoot)
if err != nil {
return fmt.Errorf("The Dockerfile (%s) must be within the build context (%s)", origDockerfile, root)
}
// Now reset the dockerfileName to be relative to the build context
*dockerfileName, err = filepath.Rel(absRoot, filename)
if err != nil {
return err
}
// And canonicalize dockerfile name to a platform-independent one
*dockerfileName, err = archive.CanonicalTarNameForPath(*dockerfileName)
if err != nil {
return fmt.Errorf("Cannot canonicalize dockerfile path %s: %v", *dockerfileName, err)
}
if _, err = os.Lstat(filename); os.IsNotExist(err) {
return fmt.Errorf("Cannot locate Dockerfile: %s", origDockerfile)
}
var includes = []string{"."}
excludes, err := utils.ReadDockerIgnore(path.Join(root, ".dockerignore"))
if err != nil {
return err
}
// If .dockerignore mentions .dockerignore or the Dockerfile
// then make sure we send both files over to the daemon
// because Dockerfile is, obviously, needed no matter what, and
// .dockerignore is needed to know if either one needs to be
// removed. The daemon will remove them for us, if needed, after it
// parses the Dockerfile.
keepThem1, _ := fileutils.Matches(".dockerignore", excludes)
keepThem2, _ := fileutils.Matches(*dockerfileName, excludes)
if keepThem1 || keepThem2 {
includes = append(includes, ".dockerignore", *dockerfileName)
}
if err := utils.ValidateContextDirectory(root, excludes); err != nil {
return fmt.Errorf("Error checking context is accessible: '%s'. Please check permissions and try again.", err)
}
options := &archive.TarOptions{
Compression: archive.Uncompressed,
ExcludePatterns: excludes,
IncludeFiles: includes,
}
context, err = archive.TarWithOptions(root, options)
if err != nil {
return err
}
}
if err != nil {
return fmt.Errorf("unable to prepare context: %s", err)
// windows: show error message about modified file permissions
// FIXME: this is not a valid warning when the daemon is running windows. should be removed once docker engine for windows can build.
if runtime.GOOS == "windows" {
fmt.Fprintln(cli.err, `SECURITY WARNING: You are building a Docker image from Windows against a Linux Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.`)
}
if tempDir != "" {
defer os.RemoveAll(tempDir)
contextDir = tempDir
}
// Resolve the FROM lines in the Dockerfile to trusted digest references
// using Notary. On a successful build, we must tag the resolved digests
// to the original name specified in the Dockerfile.
newDockerfile, resolvedTags, err := rewriteDockerfileFrom(filepath.Join(contextDir, relDockerfile), cli.trustedReference)
if err != nil {
return fmt.Errorf("unable to process Dockerfile: %v", err)
}
defer newDockerfile.Close()
// And canonicalize dockerfile name to a platform-independent one
relDockerfile, err = archive.CanonicalTarNameForPath(relDockerfile)
if err != nil {
return fmt.Errorf("cannot canonicalize dockerfile path %s: %v", relDockerfile, err)
}
var includes = []string{"."}
excludes, err := utils.ReadDockerIgnore(path.Join(contextDir, ".dockerignore"))
if err != nil {
return err
}
if err := utils.ValidateContextDirectory(contextDir, excludes); err != nil {
return fmt.Errorf("Error checking context: '%s'.", err)
}
// If .dockerignore mentions .dockerignore or the Dockerfile
// then make sure we send both files over to the daemon
// because Dockerfile is, obviously, needed no matter what, and
// .dockerignore is needed to know if either one needs to be
// removed. The daemon will remove them for us, if needed, after it
// parses the Dockerfile. Ignore errors here, as they will have been
// caught by ValidateContextDirectory above.
keepThem1, _ := fileutils.Matches(".dockerignore", excludes)
keepThem2, _ := fileutils.Matches(relDockerfile, excludes)
if keepThem1 || keepThem2 {
includes = append(includes, ".dockerignore", relDockerfile)
}
context, err = archive.TarWithOptions(contextDir, &archive.TarOptions{
Compression: archive.Uncompressed,
ExcludePatterns: excludes,
IncludeFiles: includes,
})
if err != nil {
return err
}
// Wrap the tar archive to replace the Dockerfile entry with the rewritten
// Dockerfile which uses trusted pulls.
context = replaceDockerfileTarWrapper(context, newDockerfile, relDockerfile)
var body io.Reader
// Setup an upload progress bar
// FIXME: ProgressReader shouldn't be this annoying to use
sf := streamformatter.NewStreamFormatter()
var body io.Reader = progressreader.New(progressreader.Config{
In: context,
Out: cli.out,
Formatter: sf,
NewLines: true,
ID: "",
Action: "Sending build context to Docker daemon",
})
if context != nil {
sf := streamformatter.NewStreamFormatter()
body = progressreader.New(progressreader.Config{
In: context,
Out: cli.out,
Formatter: sf,
NewLines: true,
ID: "",
Action: "Sending build context to Docker daemon",
})
}
var memory int64
if *flMemoryString != "" {
@@ -249,14 +280,7 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
v.Set("memswap", strconv.FormatInt(memorySwap, 10))
v.Set("cgroupparent", *flCgroupParent)
v.Set("dockerfile", relDockerfile)
ulimitsVar := flUlimits.GetList()
ulimitsJson, err := json.Marshal(ulimitsVar)
if err != nil {
return err
}
v.Set("ulimits", string(ulimitsJson))
v.Set("dockerfile", *dockerfileName)
headers := http.Header(make(map[string][]string))
buf, err := json.Marshal(cli.configFile.AuthConfigs)
@@ -264,371 +288,23 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
return err
}
headers.Add("X-Registry-Config", base64.URLEncoding.EncodeToString(buf))
headers.Set("Content-Type", "application/tar")
if context != nil {
headers.Set("Content-Type", "application/tar")
}
sopts := &streamOpts{
rawTerminal: true,
in: body,
out: cli.out,
headers: headers,
}
serverResp, err := cli.stream("POST", fmt.Sprintf("/build?%s", v.Encode()), sopts)
// Windows: show error message about modified file permissions.
if runtime.GOOS == "windows" {
h, err := httputils.ParseServerHeader(serverResp.header.Get("Server"))
if err == nil {
if h.OS != "windows" {
fmt.Fprintln(cli.err, `SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.`)
}
}
}
err = cli.stream("POST", fmt.Sprintf("/build?%s", v.Encode()), sopts)
if jerr, ok := err.(*jsonmessage.JSONError); ok {
// If no error code is set, default to 1
if jerr.Code == 0 {
jerr.Code = 1
}
return Cli.StatusError{Status: jerr.Message, StatusCode: jerr.Code}
return StatusError{Status: jerr.Message, StatusCode: jerr.Code}
}
if err != nil {
return err
}
// Since the build was successful, now we must tag any of the resolved
// images from the above Dockerfile rewrite.
for _, resolved := range resolvedTags {
if err := cli.tagTrusted(resolved.repoInfo, resolved.digestRef, resolved.tagRef); err != nil {
return err
}
}
return nil
}
// getDockerfileRelPath uses the given context directory for a `docker build`
// and returns the absolute path to the context directory, the relative path of
// the dockerfile in that context directory, and a non-nil error on failure.
func getDockerfileRelPath(givenContextDir, givenDockerfile string) (absContextDir, relDockerfile string, err error) {
if absContextDir, err = filepath.Abs(givenContextDir); err != nil {
return "", "", fmt.Errorf("unable to get absolute context directory: %v", err)
}
// The context dir might be a symbolic link, so follow it to the actual
// target directory.
absContextDir, err = filepath.EvalSymlinks(absContextDir)
if err != nil {
return "", "", fmt.Errorf("unable to evaluate symlinks in context path: %v", err)
}
stat, err := os.Lstat(absContextDir)
if err != nil {
return "", "", fmt.Errorf("unable to stat context directory %q: %v", absContextDir, err)
}
if !stat.IsDir() {
return "", "", fmt.Errorf("context must be a directory: %s", absContextDir)
}
absDockerfile := givenDockerfile
if absDockerfile == "" {
// No -f/--file was specified so use the default relative to the
// context directory.
absDockerfile = filepath.Join(absContextDir, api.DefaultDockerfileName)
// Just to be nice ;-) look for 'dockerfile' too but only
// use it if we found it, otherwise ignore this check
if _, err = os.Lstat(absDockerfile); os.IsNotExist(err) {
altPath := filepath.Join(absContextDir, strings.ToLower(api.DefaultDockerfileName))
if _, err = os.Lstat(altPath); err == nil {
absDockerfile = altPath
}
}
}
// If not already an absolute path, the Dockerfile path should be joined to
// the base directory.
if !filepath.IsAbs(absDockerfile) {
absDockerfile = filepath.Join(absContextDir, absDockerfile)
}
// Verify that 'filename' is within the build context
absDockerfile, err = symlink.FollowSymlinkInScope(absDockerfile, absContextDir)
if err != nil {
return "", "", fmt.Errorf("The Dockerfile (%s) must be within the build context (%s)", givenDockerfile, givenContextDir)
}
if _, err := os.Lstat(absDockerfile); err != nil {
if os.IsNotExist(err) {
return "", "", fmt.Errorf("Cannot locate Dockerfile: absDockerfile: %q", absDockerfile)
}
return "", "", fmt.Errorf("unable to stat Dockerfile: %v", err)
}
if relDockerfile, err = filepath.Rel(absContextDir, absDockerfile); err != nil {
return "", "", fmt.Errorf("unable to get relative Dockerfile path: %v", err)
}
return absContextDir, relDockerfile, nil
}
// writeToFile copies from the given reader and writes it to a file with the
// given filename.
func writeToFile(r io.Reader, filename string) error {
file, err := os.OpenFile(filename, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(0600))
if err != nil {
return fmt.Errorf("unable to create file: %v", err)
}
defer file.Close()
if _, err := io.Copy(file, r); err != nil {
return fmt.Errorf("unable to write file: %v", err)
}
return nil
}
// getContextFromReader will read the contents of the given reader as either a
// Dockerfile or tar archive to be extracted to a temporary directory used as
// the context directory. Returns the absolute path to the temporary context
// directory, the relative path of the dockerfile in that context directory,
// and a non-nil error on failure.
func getContextFromReader(r io.Reader, dockerfileName string) (absContextDir, relDockerfile string, err error) {
buf := bufio.NewReader(r)
magic, err := buf.Peek(tarHeaderSize)
if err != nil && err != io.EOF {
return "", "", fmt.Errorf("failed to peek context header from STDIN: %v", err)
}
if absContextDir, err = ioutil.TempDir("", "docker-build-context-"); err != nil {
return "", "", fmt.Errorf("unable to create temporary context directory: %v", err)
}
defer func(d string) {
if err != nil {
os.RemoveAll(d)
}
}(absContextDir)
if !archive.IsArchive(magic) { // Input should be read as a Dockerfile.
// -f option has no meaning when we're reading it from stdin,
// so just use our default Dockerfile name
relDockerfile = api.DefaultDockerfileName
return absContextDir, relDockerfile, writeToFile(buf, filepath.Join(absContextDir, relDockerfile))
}
if err := archive.Untar(buf, absContextDir, nil); err != nil {
return "", "", fmt.Errorf("unable to extract stdin to temporary context directory: %v", err)
}
return getDockerfileRelPath(absContextDir, dockerfileName)
}
// getContextFromGitURL uses a Git URL as context for a `docker build`. The
// git repo is cloned into a temporary directory used as the context directory.
// Returns the absolute path to the temporary context directory, the relative
// path of the dockerfile in that context directory, and a non-nil error on
// failure.
func getContextFromGitURL(gitURL, dockerfileName string) (absContextDir, relDockerfile string, err error) {
if absContextDir, err = utils.GitClone(gitURL); err != nil {
return "", "", fmt.Errorf("unable to 'git clone' to temporary context directory: %v", err)
}
return getDockerfileRelPath(absContextDir, dockerfileName)
}
// getContextFromURL uses a remote URL as context for a `docker build`. The
// remote resource is downloaded as either a Dockerfile or a context tar
// archive and stored in a temporary directory used as the context directory.
// Returns the absolute path to the temporary context directory, the relative
// path of the dockerfile in that context directory, and a non-nil error on
// failure.
func getContextFromURL(out io.Writer, remoteURL, dockerfileName string) (absContextDir, relDockerfile string, err error) {
response, err := httputils.Download(remoteURL)
if err != nil {
return "", "", fmt.Errorf("unable to download remote context %s: %v", remoteURL, err)
}
defer response.Body.Close()
// Pass the response body through a progress reader.
progReader := &progressreader.Config{
In: response.Body,
Out: out,
Formatter: streamformatter.NewStreamFormatter(),
Size: int(response.ContentLength),
NewLines: true,
ID: "",
Action: fmt.Sprintf("Downloading build context from remote url: %s", remoteURL),
}
return getContextFromReader(progReader, dockerfileName)
}
// getContextFromLocalDir uses the given local directory as context for a
// `docker build`. Returns the absolute path to the local context directory,
// the relative path of the dockerfile in that context directory, and a non-nil
// error on failure.
func getContextFromLocalDir(localDir, dockerfileName string) (absContextDir, relDockerfile string, err error) {
// When using a local context directory, when the Dockerfile is specified
// with the `-f/--file` option then it is considered relative to the
// current directory and not the context directory.
if dockerfileName != "" {
if dockerfileName, err = filepath.Abs(dockerfileName); err != nil {
return "", "", fmt.Errorf("unable to get absolute path to Dockerfile: %v", err)
}
}
return getDockerfileRelPath(localDir, dockerfileName)
}
var dockerfileFromLinePattern = regexp.MustCompile(`(?i)^[\s]*FROM[ \f\r\t\v]+(?P<image>[^ \f\r\t\v\n#]+)`)
type trustedDockerfile struct {
*os.File
size int64
}
func (td *trustedDockerfile) Close() error {
td.File.Close()
return os.Remove(td.File.Name())
}
// resolvedTag records the repository, tag, and resolved digest reference
// from a Dockerfile rewrite.
type resolvedTag struct {
repoInfo *registry.RepositoryInfo
digestRef, tagRef registry.Reference
}
// rewriteDockerfileFrom rewrites the given Dockerfile by resolving images in
// "FROM <image>" instructions to a digest reference. `translator` is a
// function that takes a repository name and tag reference and returns a
// trusted digest reference.
func rewriteDockerfileFrom(dockerfileName string, translator func(string, registry.Reference) (registry.Reference, error)) (newDockerfile *trustedDockerfile, resolvedTags []*resolvedTag, err error) {
dockerfile, err := os.Open(dockerfileName)
if err != nil {
return nil, nil, fmt.Errorf("unable to open Dockerfile: %v", err)
}
defer dockerfile.Close()
scanner := bufio.NewScanner(dockerfile)
// Make a tempfile to store the rewritten Dockerfile.
tempFile, err := ioutil.TempFile("", "trusted-dockerfile-")
if err != nil {
return nil, nil, fmt.Errorf("unable to make temporary trusted Dockerfile: %v", err)
}
trustedFile := &trustedDockerfile{
File: tempFile,
}
defer func() {
if err != nil {
// Close the tempfile if there was an error during Notary lookups.
// Otherwise the caller should close it.
trustedFile.Close()
}
}()
// Scan the lines of the Dockerfile, looking for a "FROM" line.
for scanner.Scan() {
line := scanner.Text()
matches := dockerfileFromLinePattern.FindStringSubmatch(line)
if matches != nil && matches[1] != "scratch" {
// Replace the line with a resolved "FROM repo@digest"
repo, tag := parsers.ParseRepositoryTag(matches[1])
if tag == "" {
tag = tags.DEFAULTTAG
}
repoInfo, err := registry.ParseRepositoryInfo(repo)
if err != nil {
return nil, nil, fmt.Errorf("unable to parse repository info: %v", err)
}
ref := registry.ParseReference(tag)
if !ref.HasDigest() && isTrusted() {
trustedRef, err := translator(repo, ref)
if err != nil {
return nil, nil, err
}
line = dockerfileFromLinePattern.ReplaceAllLiteralString(line, fmt.Sprintf("FROM %s", trustedRef.ImageName(repo)))
resolvedTags = append(resolvedTags, &resolvedTag{
repoInfo: repoInfo,
digestRef: trustedRef,
tagRef: ref,
})
}
}
n, err := fmt.Fprintln(tempFile, line)
if err != nil {
return nil, nil, err
}
trustedFile.size += int64(n)
}
tempFile.Seek(0, os.SEEK_SET)
return trustedFile, resolvedTags, scanner.Err()
}
// replaceDockerfileTarWrapper wraps the given input tar archive stream and
// replaces the entry with the given Dockerfile name with the contents of the
// new Dockerfile. Returns a new tar archive stream with the replaced
// Dockerfile.
func replaceDockerfileTarWrapper(inputTarStream io.ReadCloser, newDockerfile *trustedDockerfile, dockerfileName string) io.ReadCloser {
pipeReader, pipeWriter := io.Pipe()
go func() {
tarReader := tar.NewReader(inputTarStream)
tarWriter := tar.NewWriter(pipeWriter)
defer inputTarStream.Close()
for {
hdr, err := tarReader.Next()
if err == io.EOF {
// Signals end of archive.
tarWriter.Close()
pipeWriter.Close()
return
}
if err != nil {
pipeWriter.CloseWithError(err)
return
}
var content io.Reader = tarReader
if hdr.Name == dockerfileName {
// This entry is the Dockerfile. Since the tar archive was
// generated from a directory on the local filesystem, the
// Dockerfile will only appear once in the archive.
hdr.Size = newDockerfile.size
content = newDockerfile
}
if err := tarWriter.WriteHeader(hdr); err != nil {
pipeWriter.CloseWithError(err)
return
}
if _, err := io.Copy(tarWriter, content); err != nil {
pipeWriter.CloseWithError(err)
return
}
}
}()
return pipeReader
return err
}

View File

@@ -2,34 +2,30 @@ package client
import (
"crypto/tls"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"os"
"path/filepath"
"reflect"
"strings"
"text/template"
"github.com/docker/docker/cli"
"github.com/docker/docker/cliconfig"
"github.com/docker/docker/opts"
"github.com/docker/docker/pkg/sockets"
"github.com/docker/docker/pkg/homedir"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/term"
"github.com/docker/docker/pkg/tlsconfig"
"github.com/docker/docker/utils"
)
// DockerCli represents the docker command line client.
// Instances of the client can be returned from NewDockerCli.
type DockerCli struct {
// initializing closure
init func() error
// proto holds the client protocol i.e. unix.
proto string
// addr holds the client address.
addr string
// basePath holds the path to prepend to the requests
basePath string
// configFile has the client configuration file
configFile *cliconfig.ConfigFile
@@ -58,11 +54,83 @@ type DockerCli struct {
transport *http.Transport
}
func (cli *DockerCli) Initialize() error {
if cli.init == nil {
return nil
var funcMap = template.FuncMap{
"json": func(v interface{}) string {
a, _ := json.Marshal(v)
return string(a)
},
}
func (cli *DockerCli) Out() io.Writer {
return cli.out
}
func (cli *DockerCli) Err() io.Writer {
return cli.err
}
func (cli *DockerCli) getMethod(args ...string) (func(...string) error, bool) {
camelArgs := make([]string, len(args))
for i, s := range args {
if len(s) == 0 {
return nil, false
}
camelArgs[i] = strings.ToUpper(s[:1]) + strings.ToLower(s[1:])
}
return cli.init()
methodName := "Cmd" + strings.Join(camelArgs, "")
method := reflect.ValueOf(cli).MethodByName(methodName)
if !method.IsValid() {
return nil, false
}
return method.Interface().(func(...string) error), true
}
// Cmd executes the specified command.
func (cli *DockerCli) Cmd(args ...string) error {
if len(args) > 1 {
method, exists := cli.getMethod(args[:2]...)
if exists {
return method(args[2:]...)
}
}
if len(args) > 0 {
method, exists := cli.getMethod(args[0])
if !exists {
return fmt.Errorf("docker: '%s' is not a docker command.\nSee 'docker --help'.", args[0])
}
return method(args[1:]...)
}
return cli.CmdHelp()
}
// Subcmd is a subcommand of the main "docker" command.
// A subcommand represents an action that can be performed
// from the Docker command line client.
//
// To see all available subcommands, run "docker --help".
func (cli *DockerCli) Subcmd(name, signature, description string, exitOnError bool) *flag.FlagSet {
var errorHandling flag.ErrorHandling
if exitOnError {
errorHandling = flag.ExitOnError
} else {
errorHandling = flag.ContinueOnError
}
flags := flag.NewFlagSet(name, errorHandling)
if signature != "" {
signature = " " + signature
}
flags.Usage = func() {
flags.ShortUsage()
flags.PrintDefaults()
}
flags.ShortUsage = func() {
options := ""
if flags.FlagCountUndeprecated() > 0 {
options = " [OPTIONS]"
}
fmt.Fprintf(flags.Out(), "\nUsage: docker %s%s%s\n\n%s\n", name, options, signature, description)
}
return flags
}
// CheckTtyInput checks if we are trying to attach to a container tty
@@ -77,86 +145,59 @@ func (cli *DockerCli) CheckTtyInput(attachStdin, ttyMode bool) error {
return nil
}
func (cli *DockerCli) PsFormat() string {
return cli.configFile.PsFormat
}
// NewDockerCli returns a DockerCli instance with IO output and error streams set by in, out and err.
// The key file, protocol (i.e. unix) and address are passed in as strings, along with the tls.Config. If the tls.Config
// is set the client scheme will be set to https.
// The client will be given a 32-second timeout (see https://github.com/docker/docker/pull/8035).
func NewDockerCli(in io.ReadCloser, out, err io.Writer, clientFlags *cli.ClientFlags) *DockerCli {
cli := &DockerCli{
in: in,
out: out,
err: err,
keyFile: clientFlags.Common.TrustKey,
func NewDockerCli(in io.ReadCloser, out, err io.Writer, keyFile string, proto, addr string, tlsConfig *tls.Config) *DockerCli {
var (
inFd uintptr
outFd uintptr
isTerminalIn = false
isTerminalOut = false
scheme = "http"
)
if tlsConfig != nil {
scheme = "https"
}
if in != nil {
inFd, isTerminalIn = term.GetFdInfo(in)
}
cli.init = func() error {
clientFlags.PostParse()
hosts := clientFlags.Common.Hosts
switch len(hosts) {
case 0:
defaultHost := os.Getenv("DOCKER_HOST")
if defaultHost == "" {
defaultHost = opts.DefaultHost
}
defaultHost, err := opts.ValidateHost(defaultHost)
if err != nil {
return err
}
hosts = []string{defaultHost}
case 1:
// only accept one host to talk to
default:
return errors.New("Please specify only one -H")
}
protoAddrParts := strings.SplitN(hosts[0], "://", 2)
cli.proto, cli.addr = protoAddrParts[0], protoAddrParts[1]
if cli.proto == "tcp" {
// error is checked in pkg/parsers already
parsed, _ := url.Parse("tcp://" + cli.addr)
cli.addr = parsed.Host
cli.basePath = parsed.Path
}
if clientFlags.Common.TLSOptions != nil {
cli.scheme = "https"
var e error
cli.tlsConfig, e = tlsconfig.Client(*clientFlags.Common.TLSOptions)
if e != nil {
return e
}
} else {
cli.scheme = "http"
}
if cli.in != nil {
cli.inFd, cli.isTerminalIn = term.GetFdInfo(cli.in)
}
if cli.out != nil {
cli.outFd, cli.isTerminalOut = term.GetFdInfo(cli.out)
}
// The transport is created here for reuse during the client session.
cli.transport = &http.Transport{
TLSClientConfig: cli.tlsConfig,
}
sockets.ConfigureTCPTransport(cli.transport, cli.proto, cli.addr)
configFile, e := cliconfig.Load(cliconfig.ConfigDir())
if e != nil {
fmt.Fprintf(cli.err, "WARNING: Error loading config file:%v\n", e)
}
cli.configFile = configFile
return nil
if out != nil {
outFd, isTerminalOut = term.GetFdInfo(out)
}
return cli
if err == nil {
err = out
}
// The transport is created here for reuse during the client session.
tr := &http.Transport{
TLSClientConfig: tlsConfig,
}
utils.ConfigureTCPTransport(tr, proto, addr)
configFile, e := cliconfig.Load(filepath.Join(homedir.Get(), ".docker"))
if e != nil {
fmt.Fprintf(err, "WARNING: Error loading config file:%v\n", e)
}
return &DockerCli{
proto: proto,
addr: addr,
configFile: configFile,
in: in,
out: out,
err: err,
keyFile: keyFile,
inFd: inFd,
outFd: outFd,
isTerminalIn: isTerminalIn,
isTerminalOut: isTerminalOut,
tlsConfig: tlsConfig,
scheme: scheme,
transport: tr,
}
}

View File

@@ -3,3 +3,15 @@
// Run "docker help SUBCOMMAND" or "docker SUBCOMMAND --help" to see more information on any Docker subcommand, including the full list of options supported for the subcommand.
// See https://docs.docker.com/installation/ for instructions on installing Docker.
package client
import "fmt"
// A StatusError reports an unsuccessful exit by a command.
type StatusError struct {
Status string
StatusCode int
}
func (e StatusError) Error() string {
return fmt.Sprintf("Status: %s, Code: %d", e.Status, e.StatusCode)
}

View File

@@ -6,7 +6,6 @@ import (
"net/url"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/parsers"
@@ -18,7 +17,7 @@ import (
//
// Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
func (cli *DockerCli) CmdCommit(args ...string) error {
cmd := Cli.Subcmd("commit", []string{"CONTAINER [REPOSITORY[:TAG]]"}, "Create a new image from a container's changes", true)
cmd := cli.Subcmd("commit", "CONTAINER [REPOSITORY[:TAG]]", "Create a new image from a container's changes", true)
flPause := cmd.Bool([]string{"p", "-pause"}, true, "Pause container during commit")
flComment := cmd.String([]string{"m", "-message"}, "", "Commit message")
flAuthor := cmd.String([]string{"a", "#author", "-author"}, "", "Author (e.g., \"John Hannibal Smith <hannibal@a-team.com>\")")
@@ -28,7 +27,6 @@ func (cli *DockerCli) CmdCommit(args ...string) error {
flConfig := cmd.String([]string{"#run", "#-run"}, "", "This option is deprecated and will be removed in a future version in favor of inline Dockerfile-compatible commands")
cmd.Require(flag.Max, 2)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var (
@@ -68,14 +66,12 @@ func (cli *DockerCli) CmdCommit(args ...string) error {
return err
}
}
serverResp, err := cli.call("POST", "/commit?"+v.Encode(), config, nil)
stream, _, err := cli.call("POST", "/commit?"+v.Encode(), config, nil)
if err != nil {
return err
}
defer serverResp.body.Close()
if err := json.NewDecoder(serverResp.body).Decode(&response); err != nil {
if err := json.NewDecoder(stream).Decode(&response); err != nil {
return err
}

View File

@@ -1,324 +1,57 @@
package client
import (
"encoding/base64"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"os"
"path/filepath"
"strings"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/pkg/archive"
flag "github.com/docker/docker/pkg/mflag"
)
type copyDirection int
const (
fromContainer copyDirection = (1 << iota)
toContainer
acrossContainers = fromContainer | toContainer
)
// CmdCp copies files/folders to or from a path in a container.
// CmdCp copies files/folders from a path on the container to a directory on the host running the command.
//
// When copying from a container, if LOCALPATH is '-' the data is written as a
// tar archive file to STDOUT.
// If HOSTDIR is '-', the data is written as a tar file to STDOUT.
//
// When copying to a container, if LOCALPATH is '-' the data is read as a tar
// archive file from STDIN, and the destination CONTAINER:PATH, must specify
// a directory.
//
// Usage:
// docker cp CONTAINER:PATH LOCALPATH|-
// docker cp LOCALPATH|- CONTAINER:PATH
// Usage: docker cp CONTAINER:PATH HOSTDIR
func (cli *DockerCli) CmdCp(args ...string) error {
cmd := Cli.Subcmd(
"cp",
[]string{"CONTAINER:PATH LOCALPATH|-", "LOCALPATH|- CONTAINER:PATH"},
strings.Join([]string{
"Copy files/folders between a container and your host.\n",
"Use '-' as the source to read a tar archive from stdin\n",
"and extract it to a directory destination in a container.\n",
"Use '-' as the destination to stream a tar archive of a\n",
"container source to stdout.",
}, ""),
true,
)
cmd := cli.Subcmd("cp", "CONTAINER:PATH HOSTDIR|-", "Copy files/folders from a PATH on the container to a HOSTDIR on the host\nrunning the command. Use '-' to write the data as a tar file to STDOUT.", true)
cmd.Require(flag.Exact, 2)
cmd.ParseFlags(args, true)
if cmd.Arg(0) == "" {
return fmt.Errorf("source can not be empty")
}
if cmd.Arg(1) == "" {
return fmt.Errorf("destination can not be empty")
// deal with path name with `:`
info := strings.SplitN(cmd.Arg(0), ":", 2)
if len(info) != 2 {
return fmt.Errorf("Error: Path not specified")
}
srcContainer, srcPath := splitCpArg(cmd.Arg(0))
dstContainer, dstPath := splitCpArg(cmd.Arg(1))
var direction copyDirection
if srcContainer != "" {
direction |= fromContainer
cfg := &types.CopyConfig{
Resource: info[1],
}
if dstContainer != "" {
direction |= toContainer
stream, statusCode, err := cli.call("POST", "/containers/"+info[0]+"/copy", cfg, nil)
if stream != nil {
defer stream.Close()
}
switch direction {
case fromContainer:
return cli.copyFromContainer(srcContainer, srcPath, dstPath)
case toContainer:
return cli.copyToContainer(srcPath, dstContainer, dstPath)
case acrossContainers:
// Copying between containers isn't supported.
return fmt.Errorf("copying between containers is not supported")
default:
// User didn't specify any container.
return fmt.Errorf("must specify at least one container source")
if statusCode == 404 {
return fmt.Errorf("No such container: %v", info[0])
}
}
// We use `:` as a delimiter between CONTAINER and PATH, but `:` could also be
// in a valid LOCALPATH, like `file:name.txt`. We can resolve this ambiguity by
// requiring a LOCALPATH with a `:` to be made explicit with a relative or
// absolute path:
// `/path/to/file:name.txt` or `./file:name.txt`
//
// This is apparently how `scp` handles this as well:
// http://www.cyberciti.biz/faq/rsync-scp-file-name-with-colon-punctuation-in-it/
//
// We can't simply check for a filepath separator because container names may
// have a separator, e.g., "host0/cname1" if container is in a Docker cluster,
// so we have to check for a `/` or `.` prefix. Also, in the case of a Windows
// client, a `:` could be part of an absolute Windows path, in which case it
// is immediately followed by a backslash.
func splitCpArg(arg string) (container, path string) {
if filepath.IsAbs(arg) {
// Explicit local absolute path, e.g., `C:\foo` or `/foo`.
return "", arg
}
parts := strings.SplitN(arg, ":", 2)
if len(parts) == 1 || strings.HasPrefix(parts[0], ".") {
// Either there's no `:` in the arg
// OR it's an explicit local relative path like `./file:name.txt`.
return "", arg
}
return parts[0], parts[1]
}
func (cli *DockerCli) statContainerPath(containerName, path string) (types.ContainerPathStat, error) {
var stat types.ContainerPathStat
query := make(url.Values, 1)
query.Set("path", filepath.ToSlash(path)) // Normalize the paths used in the API.
urlStr := fmt.Sprintf("/containers/%s/archive?%s", containerName, query.Encode())
response, err := cli.call("HEAD", urlStr, nil, nil)
if err != nil {
return stat, err
}
defer response.body.Close()
if response.statusCode != http.StatusOK {
return stat, fmt.Errorf("unexpected status code from daemon: %d", response.statusCode)
}
return getContainerPathStatFromHeader(response.header)
}
func getContainerPathStatFromHeader(header http.Header) (types.ContainerPathStat, error) {
var stat types.ContainerPathStat
encodedStat := header.Get("X-Docker-Container-Path-Stat")
statDecoder := base64.NewDecoder(base64.StdEncoding, strings.NewReader(encodedStat))
err := json.NewDecoder(statDecoder).Decode(&stat)
if err != nil {
err = fmt.Errorf("unable to decode container path stat header: %s", err)
}
return stat, err
}
func resolveLocalPath(localPath string) (absPath string, err error) {
if absPath, err = filepath.Abs(localPath); err != nil {
return
}
return archive.PreserveTrailingDotOrSeparator(absPath, localPath), nil
}
func (cli *DockerCli) copyFromContainer(srcContainer, srcPath, dstPath string) (err error) {
if dstPath != "-" {
// Get an absolute destination path.
dstPath, err = resolveLocalPath(dstPath)
if err != nil {
return err
}
}
query := make(url.Values, 1)
query.Set("path", filepath.ToSlash(srcPath)) // Normalize the paths used in the API.
urlStr := fmt.Sprintf("/containers/%s/archive?%s", srcContainer, query.Encode())
response, err := cli.call("GET", urlStr, nil, nil)
if err != nil {
return err
}
defer response.body.Close()
if response.statusCode != http.StatusOK {
return fmt.Errorf("unexpected status code from daemon: %d", response.statusCode)
}
if dstPath == "-" {
// Send the response to STDOUT.
_, err = io.Copy(os.Stdout, response.body)
return err
}
// In order to get the copy behavior right, we need to know information
// about both the source and the destination. The response headers include
// stat info about the source that we can use in deciding exactly how to
// copy it locally. Along with the stat info about the local destination,
// we have everything we need to handle the multiple possibilities there
// can be when copying a file/dir from one location to another file/dir.
stat, err := getContainerPathStatFromHeader(response.header)
if err != nil {
return fmt.Errorf("unable to get resource stat from response: %s", err)
}
// Prepare source copy info.
srcInfo := archive.CopyInfo{
Path: srcPath,
Exists: true,
IsDir: stat.Mode.IsDir(),
}
// See comments in the implementation of `archive.CopyTo` for exactly what
// goes into deciding how and whether the source archive needs to be
// altered for the correct copy behavior.
return archive.CopyTo(response.body, srcInfo, dstPath)
}
func (cli *DockerCli) copyToContainer(srcPath, dstContainer, dstPath string) (err error) {
if srcPath != "-" {
// Get an absolute source path.
srcPath, err = resolveLocalPath(srcPath)
hostPath := cmd.Arg(1)
if statusCode == 200 {
if hostPath == "-" {
_, err = io.Copy(cli.out, stream)
} else {
err = archive.Untar(stream, hostPath, &archive.TarOptions{NoLchown: true})
}
if err != nil {
return err
}
}
// In order to get the copy behavior right, we need to know information
// about both the source and destination. The API is a simple tar
// archive/extract API but we can use the stat info header about the
// destination to be more informed about exactly what the destination is.
// Prepare destination copy info by stat-ing the container path.
dstInfo := archive.CopyInfo{Path: dstPath}
dstStat, err := cli.statContainerPath(dstContainer, dstPath)
// If the destination is a symbolic link, we should evaluate it.
if err == nil && dstStat.Mode&os.ModeSymlink != 0 {
linkTarget := dstStat.LinkTarget
if !filepath.IsAbs(linkTarget) {
// Join with the parent directory.
dstParent, _ := archive.SplitPathDirEntry(dstPath)
linkTarget = filepath.Join(dstParent, linkTarget)
}
dstInfo.Path = linkTarget
dstStat, err = cli.statContainerPath(dstContainer, linkTarget)
}
// Ignore any error and assume that the parent directory of the destination
// path exists, in which case the copy may still succeed. If there is any
// type of conflict (e.g., non-directory overwriting an existing directory
// or vice versa) the extraction will fail. If the destination simply did
// not exist, but the parent directory does, the extraction will still
// succeed.
if err == nil {
dstInfo.Exists, dstInfo.IsDir = true, dstStat.Mode.IsDir()
}
var (
content io.Reader
resolvedDstPath string
)
if srcPath == "-" {
// Use STDIN.
content = os.Stdin
resolvedDstPath = dstInfo.Path
if !dstInfo.IsDir {
return fmt.Errorf("destination %q must be a directory", fmt.Sprintf("%s:%s", dstContainer, dstPath))
}
} else {
// Prepare source copy info.
srcInfo, err := archive.CopyInfoSourcePath(srcPath)
if err != nil {
return err
}
srcArchive, err := archive.TarResource(srcInfo)
if err != nil {
return err
}
defer srcArchive.Close()
// With the stat info about the local source as well as the
// destination, we have enough information to know whether we need to
// alter the archive that we upload so that when the server extracts
// it to the specified directory in the container we get the desired
// copy behavior.
// See comments in the implementation of `archive.PrepareArchiveCopy`
// for exactly what goes into deciding how and whether the source
// archive needs to be altered for the correct copy behavior when it is
// extracted. This function also infers from the source and destination
// info which directory to extract to, which may be the parent of the
// destination that the user specified.
dstDir, preparedArchive, err := archive.PrepareArchiveCopy(srcArchive, srcInfo, dstInfo)
if err != nil {
return err
}
defer preparedArchive.Close()
resolvedDstPath = dstDir
content = preparedArchive
}
query := make(url.Values, 2)
query.Set("path", filepath.ToSlash(resolvedDstPath)) // Normalize the paths used in the API.
// Do not allow for an existing directory to be overwritten by a non-directory and vice versa.
query.Set("noOverwriteDirNonDir", "true")
urlStr := fmt.Sprintf("/containers/%s/archive?%s", dstContainer, query.Encode())
response, err := cli.stream("PUT", urlStr, &streamOpts{in: content})
if err != nil {
return err
}
defer response.body.Close()
if response.statusCode != http.StatusOK {
return fmt.Errorf("unexpected status code from daemon: %d", response.statusCode)
}
return nil
}

View File

@@ -10,11 +10,11 @@ import (
"strings"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/graph/tags"
"github.com/docker/docker/pkg/parsers"
"github.com/docker/docker/registry"
"github.com/docker/docker/runconfig"
"github.com/docker/docker/utils"
)
func (cli *DockerCli) pullImage(image string) error {
@@ -52,7 +52,7 @@ func (cli *DockerCli) pullImageCustomOut(image string, out io.Writer) error {
out: out,
headers: map[string][]string{"X-Registry-Auth": registryAuthHeader},
}
if _, err := cli.stream("POST", "/images/create?"+v.Encode(), sopts); err != nil {
if err := cli.stream("POST", "/images/create?"+v.Encode(), sopts); err != nil {
return err
}
return nil
@@ -94,54 +94,30 @@ func (cli *DockerCli) createContainer(config *runconfig.Config, hostConfig *runc
defer containerIDFile.Close()
}
repo, tag := parsers.ParseRepositoryTag(config.Image)
if tag == "" {
tag = tags.DEFAULTTAG
}
ref := registry.ParseReference(tag)
var trustedRef registry.Reference
if isTrusted() && !ref.HasDigest() {
var err error
trustedRef, err = cli.trustedReference(repo, ref)
if err != nil {
return nil, err
}
config.Image = trustedRef.ImageName(repo)
}
//create the container
serverResp, err := cli.call("POST", "/containers/create?"+containerValues.Encode(), mergedConfig, nil)
stream, statusCode, err := cli.call("POST", "/containers/create?"+containerValues.Encode(), mergedConfig, nil)
//if image not found try to pull it
if serverResp.statusCode == 404 && strings.Contains(err.Error(), config.Image) {
fmt.Fprintf(cli.err, "Unable to find image '%s' locally\n", ref.ImageName(repo))
if statusCode == 404 && strings.Contains(err.Error(), config.Image) {
repo, tag := parsers.ParseRepositoryTag(config.Image)
if tag == "" {
tag = tags.DEFAULTTAG
}
fmt.Fprintf(cli.err, "Unable to find image '%s' locally\n", utils.ImageReference(repo, tag))
// we don't want to write to stdout anything apart from container.ID
if err = cli.pullImageCustomOut(config.Image, cli.err); err != nil {
return nil, err
}
if trustedRef != nil && !ref.HasDigest() {
repoInfo, err := registry.ParseRepositoryInfo(repo)
if err != nil {
return nil, err
}
if err := cli.tagTrusted(repoInfo, trustedRef, ref); err != nil {
return nil, err
}
}
// Retry
if serverResp, err = cli.call("POST", "/containers/create?"+containerValues.Encode(), mergedConfig, nil); err != nil {
if stream, _, err = cli.call("POST", "/containers/create?"+containerValues.Encode(), mergedConfig, nil); err != nil {
return nil, err
}
} else if err != nil {
return nil, err
}
defer serverResp.body.Close()
var response types.ContainerCreateResponse
if err := json.NewDecoder(serverResp.body).Decode(&response); err != nil {
if err := json.NewDecoder(stream).Decode(&response); err != nil {
return nil, err
}
for _, warning := range response.Warnings {
@@ -159,8 +135,7 @@ func (cli *DockerCli) createContainer(config *runconfig.Config, hostConfig *runc
//
// Usage: docker create [OPTIONS] IMAGE [COMMAND] [ARG...]
func (cli *DockerCli) CmdCreate(args ...string) error {
cmd := Cli.Subcmd("create", []string{"IMAGE [COMMAND] [ARG...]"}, "Create a new container", true)
addTrustedFlags(cmd, true)
cmd := cli.Subcmd("create", "IMAGE [COMMAND] [ARG...]", "Create a new container", true)
// These are flags not stored in Config/HostConfig
var (

View File

@@ -5,7 +5,6 @@ import (
"fmt"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/pkg/archive"
flag "github.com/docker/docker/pkg/mflag"
)
@@ -18,24 +17,21 @@ import (
//
// Usage: docker diff CONTAINER
func (cli *DockerCli) CmdDiff(args ...string) error {
cmd := Cli.Subcmd("diff", []string{"CONTAINER"}, "Inspect changes on a container's filesystem", true)
cmd := cli.Subcmd("diff", "CONTAINER", "Inspect changes on a container's filesystem", true)
cmd.Require(flag.Exact, 1)
cmd.ParseFlags(args, true)
if cmd.Arg(0) == "" {
return fmt.Errorf("Container name cannot be empty")
}
serverResp, err := cli.call("GET", "/containers/"+cmd.Arg(0)+"/changes", nil, nil)
rdr, _, err := cli.call("GET", "/containers/"+cmd.Arg(0)+"/changes", nil, nil)
if err != nil {
return err
}
defer serverResp.body.Close()
changes := []types.ContainerChange{}
if err := json.NewDecoder(serverResp.body).Decode(&changes); err != nil {
if err := json.NewDecoder(rdr).Decode(&changes); err != nil {
return err
}

View File

@@ -2,9 +2,7 @@ package client
import (
"net/url"
"time"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/parsers/filters"
@@ -15,7 +13,7 @@ import (
//
// Usage: docker events [OPTIONS]
func (cli *DockerCli) CmdEvents(args ...string) error {
cmd := Cli.Subcmd("events", nil, "Get real time events from the server", true)
cmd := cli.Subcmd("events", "", "Get real time events from the server", true)
since := cmd.String([]string{"#since", "-since"}, "", "Show all events created since timestamp")
until := cmd.String([]string{"-until"}, "", "Stream events until this timestamp")
flFilter := opts.NewListOpts(nil)
@@ -38,12 +36,11 @@ func (cli *DockerCli) CmdEvents(args ...string) error {
return err
}
}
ref := time.Now()
if *since != "" {
v.Set("since", timeutils.GetTimestamp(*since, ref))
v.Set("since", timeutils.GetTimestamp(*since))
}
if *until != "" {
v.Set("until", timeutils.GetTimestamp(*until, ref))
v.Set("until", timeutils.GetTimestamp(*until))
}
if len(eventFilterArgs) > 0 {
filterJSON, err := filters.ToParam(eventFilterArgs)
@@ -56,7 +53,7 @@ func (cli *DockerCli) CmdEvents(args ...string) error {
rawTerminal: true,
out: cli.out,
}
if _, err := cli.stream("GET", "/events?"+v.Encode(), sopts); err != nil {
if err := cli.stream("GET", "/events?"+v.Encode(), sopts); err != nil {
return err
}
return nil

View File

@@ -7,7 +7,6 @@ import (
"github.com/Sirupsen/logrus"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/pkg/promise"
"github.com/docker/docker/runconfig"
)
@@ -16,23 +15,21 @@ import (
//
// Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
func (cli *DockerCli) CmdExec(args ...string) error {
cmd := Cli.Subcmd("exec", []string{"CONTAINER COMMAND [ARG...]"}, "Run a command in a running container", true)
cmd := cli.Subcmd("exec", "CONTAINER COMMAND [ARG...]", "Run a command in a running container", true)
execConfig, err := runconfig.ParseExec(cmd, args)
// just in case the ParseExec does not exit
if execConfig.Container == "" || err != nil {
return Cli.StatusError{StatusCode: 1}
return StatusError{StatusCode: 1}
}
serverResp, err := cli.call("POST", "/containers/"+execConfig.Container+"/exec", execConfig, nil)
stream, _, err := cli.call("POST", "/containers/"+execConfig.Container+"/exec", execConfig, nil)
if err != nil {
return err
}
defer serverResp.body.Close()
var response types.ContainerExecCreateResponse
if err := json.NewDecoder(serverResp.body).Decode(&response); err != nil {
if err := json.NewDecoder(stream).Decode(&response); err != nil {
return err
}
@@ -127,7 +124,7 @@ func (cli *DockerCli) CmdExec(args ...string) error {
}
if status != 0 {
return Cli.StatusError{StatusCode: status}
return StatusError{StatusCode: status}
}
return nil

View File

@@ -5,7 +5,6 @@ import (
"io"
"os"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
@@ -15,7 +14,7 @@ import (
//
// Usage: docker export [OPTIONS] CONTAINER
func (cli *DockerCli) CmdExport(args ...string) error {
cmd := Cli.Subcmd("export", []string{"CONTAINER"}, "Export the contents of a container's filesystem as a tar archive", true)
cmd := cli.Subcmd("export", "CONTAINER", "Export a filesystem as a tar archive (streamed to STDOUT by default)", true)
outfile := cmd.String([]string{"o", "-output"}, "", "Write to a file, instead of STDOUT")
cmd.Require(flag.Exact, 1)
@@ -39,7 +38,7 @@ func (cli *DockerCli) CmdExport(args ...string) error {
rawTerminal: true,
out: output,
}
if _, err := cli.stream("GET", "/containers/"+image+"/export", sopts); err != nil {
if err := cli.stream("GET", "/containers/"+image+"/export", sopts); err != nil {
return err
}

api/client/help.go (new file, +34 lines)
View File

@@ -0,0 +1,34 @@
package client
import (
"fmt"
flag "github.com/docker/docker/pkg/mflag"
)
// CmdHelp displays information on a Docker command.
//
// If more than one command is specified, information is only shown for the first command.
//
// Usage: docker help COMMAND or docker COMMAND --help
func (cli *DockerCli) CmdHelp(args ...string) error {
if len(args) > 1 {
method, exists := cli.getMethod(args[:2]...)
if exists {
method("--help")
return nil
}
}
if len(args) > 0 {
method, exists := cli.getMethod(args[0])
if !exists {
return fmt.Errorf("docker: '%s' is not a docker command. See 'docker --help'.", args[0])
}
method("--help")
return nil
}
flag.Usage()
return nil
}

View File

@@ -138,18 +138,18 @@ func (cli *DockerCli) hijack(method, path string, setRawTerminal bool, in io.Rea
if err != nil {
return err
}
req, err := http.NewRequest(method, fmt.Sprintf("%s/v%s%s", cli.basePath, api.Version, path), params)
req, err := http.NewRequest(method, fmt.Sprintf("/v%s%s", api.APIVERSION, path), params)
if err != nil {
return err
}
// Add CLI Config's HTTP Headers BEFORE we set the Docker headers
// then the user can't change OUR headers
for k, v := range cli.configFile.HTTPHeaders {
for k, v := range cli.configFile.HttpHeaders {
req.Header.Set(k, v)
}
req.Header.Set("User-Agent", "Docker-Client/"+dockerversion.VERSION+" ("+runtime.GOOS+")")
req.Header.Set("User-Agent", "Docker-Client/"+dockerversion.VERSION)
req.Header.Set("Content-Type", "text/plain")
req.Header.Set("Connection", "Upgrade")
req.Header.Set("Upgrade", "tcp")

View File

@@ -7,7 +7,6 @@ import (
"time"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/stringid"
"github.com/docker/docker/pkg/stringutils"
@@ -18,23 +17,20 @@ import (
//
// Usage: docker history [OPTIONS] IMAGE
func (cli *DockerCli) CmdHistory(args ...string) error {
cmd := Cli.Subcmd("history", []string{"IMAGE"}, "Show the history of an image", true)
cmd := cli.Subcmd("history", "IMAGE", "Show the history of an image", true)
human := cmd.Bool([]string{"H", "-human"}, true, "Print sizes and dates in human readable format")
quiet := cmd.Bool([]string{"q", "-quiet"}, false, "Only show numeric IDs")
noTrunc := cmd.Bool([]string{"#notrunc", "-no-trunc"}, false, "Don't truncate output")
cmd.Require(flag.Exact, 1)
cmd.ParseFlags(args, true)
serverResp, err := cli.call("GET", "/images/"+cmd.Arg(0)+"/history", nil, nil)
rdr, _, err := cli.call("GET", "/images/"+cmd.Arg(0)+"/history", nil, nil)
if err != nil {
return err
}
defer serverResp.body.Close()
history := []types.ImageHistory{}
if err := json.NewDecoder(serverResp.body).Decode(&history); err != nil {
if err := json.NewDecoder(rdr).Decode(&history); err != nil {
return err
}

View File

@@ -8,7 +8,6 @@ import (
"time"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/parsers"
@@ -22,7 +21,7 @@ import (
//
// Usage: docker images [OPTIONS] [REPOSITORY]
func (cli *DockerCli) CmdImages(args ...string) error {
cmd := Cli.Subcmd("images", []string{"[REPOSITORY]"}, "List images", true)
cmd := cli.Subcmd("images", "[REPOSITORY]", "List images", true)
quiet := cmd.Bool([]string{"q", "-quiet"}, false, "Only show numeric IDs")
all := cmd.Bool([]string{"a", "-all"}, false, "Show all images (default hides intermediate images)")
noTrunc := cmd.Bool([]string{"#notrunc", "-no-trunc"}, false, "Don't truncate output")
@@ -31,7 +30,6 @@ func (cli *DockerCli) CmdImages(args ...string) error {
flFilter := opts.NewListOpts(nil)
cmd.Var(&flFilter, []string{"f", "-filter"}, "Filter output based on conditions provided")
cmd.Require(flag.Max, 1)
cmd.ParseFlags(args, true)
// Consolidate all filter flags, and sanity check them early.
@@ -63,15 +61,13 @@ func (cli *DockerCli) CmdImages(args ...string) error {
v.Set("all", "1")
}
serverResp, err := cli.call("GET", "/images/json?"+v.Encode(), nil, nil)
rdr, _, err := cli.call("GET", "/images/json?"+v.Encode(), nil, nil)
if err != nil {
return err
}
defer serverResp.body.Close()
images := []types.Image{}
if err := json.NewDecoder(serverResp.body).Decode(&images); err != nil {
if err := json.NewDecoder(rdr).Decode(&images); err != nil {
return err
}

View File

@@ -4,23 +4,20 @@ import (
"fmt"
"io"
"net/url"
"os"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/parsers"
"github.com/docker/docker/pkg/urlutil"
"github.com/docker/docker/registry"
)
// CmdImport creates an empty filesystem image, imports the contents of the tarball into the image, and optionally tags the image.
//
// The URL argument is the address of a tarball (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) file or a path to local file relative to docker client. If the URL is '-', then the tar file is read from STDIN.
// The URL argument is the address of a tarball (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) file. If the URL is '-', then the tar file is read from STDIN.
//
// Usage: docker import [OPTIONS] file|URL|- [REPOSITORY[:TAG]]
// Usage: docker import [OPTIONS] URL [REPOSITORY[:TAG]]
func (cli *DockerCli) CmdImport(args ...string) error {
cmd := Cli.Subcmd("import", []string{"file|URL|- [REPOSITORY[:TAG]]"}, "Create an empty filesystem image and import the contents of the\ntarball (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) into it, then\noptionally tag it.", true)
cmd := cli.Subcmd("import", "URL|- [REPOSITORY[:TAG]]", "Create an empty filesystem image and import the contents of the\ntarball (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) into it, then\noptionally tag it.", true)
flChanges := opts.NewListOpts(nil)
cmd.Var(&flChanges, []string{"c", "-change"}, "Apply Dockerfile instruction to the created image")
cmd.Require(flag.Min, 1)
@@ -39,7 +36,7 @@ func (cli *DockerCli) CmdImport(args ...string) error {
v.Add("changes", change)
}
if cmd.NArg() == 3 {
fmt.Fprintf(cli.err, "[DEPRECATED] The format 'file|URL|- [REPOSITORY [TAG]]' has been deprecated. Please use file|URL|- [REPOSITORY[:TAG]]\n")
fmt.Fprintf(cli.err, "[DEPRECATED] The format 'URL|- [REPOSITORY [TAG]]' has been deprecated. Please use URL|- [REPOSITORY[:TAG]]\n")
v.Set("tag", cmd.Arg(2))
}
@@ -55,15 +52,6 @@ func (cli *DockerCli) CmdImport(args ...string) error {
if src == "-" {
in = cli.in
} else if !urlutil.IsURL(src) {
v.Set("fromSrc", "-")
file, err := os.Open(src)
if err != nil {
return err
}
defer file.Close()
in = file
}
sopts := &streamOpts{
@@ -72,6 +60,5 @@ func (cli *DockerCli) CmdImport(args ...string) error {
out: cli.out,
}
_, err := cli.stream("POST", "/images/create?"+v.Encode(), sopts)
return err
return cli.stream("POST", "/images/create?"+v.Encode(), sopts)
}
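
The block removed above chose where the import source comes from: "-" means STDIN, a plain path is opened on the client, and URLs are left for the daemon to fetch. A rough standalone sketch of that selection, with urlCheck as a simplified stand-in for urlutil.IsURL:

package main

import (
    "fmt"
    "io"
    "os"
    "strings"
)

// urlCheck is a simplified stand-in for urlutil.IsURL.
func urlCheck(s string) bool {
    return strings.HasPrefix(s, "http://") || strings.HasPrefix(s, "https://")
}

// openImportSource picks the reader the way the removed block did:
// "-" reads STDIN, a URL is handed to the daemon to fetch, anything else
// is opened as a local file on the client side.
func openImportSource(src string) (io.ReadCloser, error) {
    switch {
    case src == "-":
        return os.Stdin, nil
    case urlCheck(src):
        return nil, nil // daemon-side fetch; nothing to stream from the client
    default:
        return os.Open(src)
    }
}

func main() {
    in, err := openImportSource("-")
    if err != nil {
        panic(err)
    }
    fmt.Println("reading from STDIN:", in == os.Stdin)
}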

View File

@@ -5,8 +5,6 @@ import (
"fmt"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/pkg/httputils"
"github.com/docker/docker/pkg/ioutils"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/units"
@@ -16,20 +14,17 @@ import (
//
// Usage: docker info
func (cli *DockerCli) CmdInfo(args ...string) error {
cmd := Cli.Subcmd("info", nil, "Display system-wide information", true)
cmd := cli.Subcmd("info", "", "Display system-wide information", true)
cmd.Require(flag.Exact, 0)
cmd.ParseFlags(args, true)
serverResp, err := cli.call("GET", "/info", nil, nil)
rdr, _, err := cli.call("GET", "/info", nil, nil)
if err != nil {
return err
}
defer serverResp.body.Close()
info := &types.Info{}
if err := json.NewDecoder(serverResp.body).Decode(info); err != nil {
if err := json.NewDecoder(rdr).Decode(info); err != nil {
return fmt.Errorf("Error reading remote info: %v", err)
}
@@ -72,27 +67,15 @@ func (cli *DockerCli) CmdInfo(args ...string) error {
fmt.Fprintf(cli.out, "Registry: %v\n", info.IndexServerAddress)
}
}
// Only output these warnings if the server supports these features
if h, err := httputils.ParseServerHeader(serverResp.header.Get("Server")); err == nil {
if h.OS != "windows" {
if !info.MemoryLimit {
fmt.Fprintf(cli.err, "WARNING: No memory limit support\n")
}
if !info.SwapLimit {
fmt.Fprintf(cli.err, "WARNING: No swap limit support\n")
}
if !info.IPv4Forwarding {
fmt.Fprintf(cli.err, "WARNING: IPv4 forwarding is disabled.\n")
}
if !info.BridgeNfIptables {
fmt.Fprintf(cli.err, "WARNING: bridge-nf-call-iptables is disabled\n")
}
if !info.BridgeNfIp6tables {
fmt.Fprintf(cli.err, "WARNING: bridge-nf-call-ip6tables is disabled\n")
}
}
if !info.MemoryLimit {
fmt.Fprintf(cli.err, "WARNING: No memory limit support\n")
}
if !info.SwapLimit {
fmt.Fprintf(cli.err, "WARNING: No swap limit support\n")
}
if !info.IPv4Forwarding {
fmt.Fprintf(cli.err, "WARNING: IPv4 forwarding is disabled.\n")
}
if info.Labels != nil {
fmt.Fprintln(cli.out, "Labels:")
for _, attribute := range info.Labels {

View File

@@ -9,84 +9,51 @@ import (
"text/template"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
var funcMap = template.FuncMap{
"json": func(v interface{}) string {
a, _ := json.Marshal(v)
return string(a)
},
}
// CmdInspect displays low-level information on one or more containers or images.
//
// Usage: docker inspect [OPTIONS] CONTAINER|IMAGE [CONTAINER|IMAGE...]
func (cli *DockerCli) CmdInspect(args ...string) error {
cmd := Cli.Subcmd("inspect", []string{"CONTAINER|IMAGE [CONTAINER|IMAGE...]"}, "Return low-level information on a container or image", true)
cmd := cli.Subcmd("inspect", "CONTAINER|IMAGE [CONTAINER|IMAGE...]", "Return low-level information on a container or image", true)
tmplStr := cmd.String([]string{"f", "#format", "-format"}, "", "Format the output using the given go template")
inspectType := cmd.String([]string{"-type"}, "", "Return JSON for specified type, (e.g image or container)")
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var tmpl *template.Template
var err error
var obj []byte
if *tmplStr != "" {
var err error
if tmpl, err = template.New("").Funcs(funcMap).Parse(*tmplStr); err != nil {
return Cli.StatusError{StatusCode: 64,
return StatusError{StatusCode: 64,
Status: "Template parsing error: " + err.Error()}
}
}
if *inspectType != "" && *inspectType != "container" && *inspectType != "image" {
return fmt.Errorf("%q is not a valid value for --type", *inspectType)
}
indented := new(bytes.Buffer)
indented.WriteString("[\n")
status := 0
isImage := false
for _, name := range cmd.Args() {
if *inspectType == "" || *inspectType == "container" {
obj, _, err = readBody(cli.call("GET", "/containers/"+name+"/json", nil, nil))
if err != nil && *inspectType == "container" {
if strings.Contains(err.Error(), "No such") {
fmt.Fprintf(cli.err, "Error: No such container: %s\n", name)
} else {
fmt.Fprintf(cli.err, "%s", err)
}
status = 1
continue
}
}
if obj == nil && (*inspectType == "" || *inspectType == "image") {
obj, _, err := readBody(cli.call("GET", "/containers/"+name+"/json", nil, nil))
if err != nil {
obj, _, err = readBody(cli.call("GET", "/images/"+name+"/json", nil, nil))
isImage = true
if err != nil {
if strings.Contains(err.Error(), "No such") {
if *inspectType == "" {
fmt.Fprintf(cli.err, "Error: No such image or container: %s\n", name)
} else {
fmt.Fprintf(cli.err, "Error: No such image: %s\n", name)
}
fmt.Fprintf(cli.err, "Error: No such image or container: %s\n", name)
} else {
fmt.Fprintf(cli.err, "%s", err)
}
status = 1
continue
}
}
if tmpl == nil {
if err := json.Indent(indented, obj, "", " "); err != nil {
if err = json.Indent(indented, obj, "", " "); err != nil {
fmt.Fprintf(cli.err, "%s\n", err)
status = 1
continue
@@ -142,16 +109,13 @@ func (cli *DockerCli) CmdInspect(args ...string) error {
indented.WriteString("]\n")
if tmpl == nil {
// Note that we will always write "[]" when "-f" isn't specified,
// to make sure the output is always an array, see
// https://github.com/docker/docker/pull/9500#issuecomment-65846734
if _, err := io.Copy(cli.out, indented); err != nil {
return err
}
}
if status != 0 {
return Cli.StatusError{StatusCode: status}
return StatusError{StatusCode: status}
}
return nil
}
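
The funcMap above is the standard text/template trick for exposing a {{json .Field}} helper to --format strings. A self-contained sketch of the same pattern, using invented sample data:

package main

import (
    "encoding/json"
    "os"
    "text/template"
)

// funcMap mirrors the pattern above: expose a "json" helper so templates
// can render any field as compact JSON, e.g. {{json .Ports}}.
var funcMap = template.FuncMap{
    "json": func(v interface{}) string {
        a, _ := json.Marshal(v)
        return string(a)
    },
}

func main() {
    data := map[string]interface{}{
        "Name":  "web",
        "Ports": []int{80, 443},
    }
    tmpl, err := template.New("").Funcs(funcMap).Parse("{{.Name}} -> {{json .Ports}}\n")
    if err != nil {
        panic(err)
    }
    if err := tmpl.Execute(os.Stdout, data); err != nil {
        panic(err)
    }
    // Prints: web -> [80,443]
}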

View File

@@ -3,7 +3,6 @@ package client
import (
"fmt"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
@@ -11,7 +10,7 @@ import (
//
// Usage: docker kill [OPTIONS] CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdKill(args ...string) error {
cmd := Cli.Subcmd("kill", []string{"CONTAINER [CONTAINER...]"}, "Kill a running container using SIGKILL or a specified signal", true)
cmd := cli.Subcmd("kill", "CONTAINER [CONTAINER...]", "Kill a running container using SIGKILL or a specified signal", true)
signal := cmd.String([]string{"s", "-signal"}, "KILL", "Signal to send to the container")
cmd.Require(flag.Min, 1)

View File

@@ -4,7 +4,6 @@ import (
"io"
"os"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
@@ -14,7 +13,7 @@ import (
//
// Usage: docker load [OPTIONS]
func (cli *DockerCli) CmdLoad(args ...string) error {
cmd := Cli.Subcmd("load", nil, "Load an image from a tar archive or STDIN", true)
cmd := cli.Subcmd("load", "", "Load an image from a tar archive on STDIN", true)
infile := cmd.String([]string{"i", "-input"}, "", "Read from a tar archive file, instead of STDIN")
cmd.Require(flag.Exact, 0)
@@ -35,7 +34,7 @@ func (cli *DockerCli) CmdLoad(args ...string) error {
in: input,
out: cli.out,
}
if _, err := cli.stream("POST", "/images/load", sopts); err != nil {
if err := cli.stream("POST", "/images/load", sopts); err != nil {
return err
}
return nil

View File

@@ -9,7 +9,6 @@ import (
"strings"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/cliconfig"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/term"
@@ -22,7 +21,7 @@ import (
//
// Usage: docker login SERVER
func (cli *DockerCli) CmdLogin(args ...string) error {
cmd := Cli.Subcmd("login", []string{"[SERVER]"}, "Register or log in to a Docker registry server, if no server is\nspecified \""+registry.IndexServer+"\" is the default.", true)
cmd := cli.Subcmd("login", "[SERVER]", "Register or log in to a Docker registry server, if no server is\nspecified \""+registry.IndexServerAddress()+"\" is the default.", true)
cmd.Require(flag.Max, 1)
var username, password, email string
@@ -33,7 +32,7 @@ func (cli *DockerCli) CmdLogin(args ...string) error {
cmd.ParseFlags(args, true)
serverAddress := registry.IndexServer
serverAddress := registry.IndexServerAddress()
if len(cmd.Args()) > 0 {
serverAddress = cmd.Arg(0)
}
@@ -114,8 +113,8 @@ func (cli *DockerCli) CmdLogin(args ...string) error {
authconfig.ServerAddress = serverAddress
cli.configFile.AuthConfigs[serverAddress] = authconfig
serverResp, err := cli.call("POST", "/auth", cli.configFile.AuthConfigs[serverAddress], nil)
if serverResp.statusCode == 401 {
stream, statusCode, err := cli.call("POST", "/auth", cli.configFile.AuthConfigs[serverAddress], nil)
if statusCode == 401 {
delete(cli.configFile.AuthConfigs, serverAddress)
if err2 := cli.configFile.Save(); err2 != nil {
fmt.Fprintf(cli.out, "WARNING: could not save config file: %v\n", err2)
@@ -126,10 +125,8 @@ func (cli *DockerCli) CmdLogin(args ...string) error {
return err
}
defer serverResp.body.Close()
var response types.AuthResponse
if err := json.NewDecoder(serverResp.body).Decode(&response); err != nil {
if err := json.NewDecoder(stream).Decode(&response); err != nil {
// Upon error, remove entry
delete(cli.configFile.AuthConfigs, serverAddress)
return err

View File

@@ -3,7 +3,6 @@ package client
import (
"fmt"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/registry"
)
@@ -14,12 +13,11 @@ import (
//
// Usage: docker logout [SERVER]
func (cli *DockerCli) CmdLogout(args ...string) error {
cmd := Cli.Subcmd("logout", []string{"[SERVER]"}, "Log out from a Docker registry, if no server is\nspecified \""+registry.IndexServer+"\" is the default.", true)
cmd := cli.Subcmd("logout", "[SERVER]", "Log out from a Docker registry, if no server is\nspecified \""+registry.IndexServerAddress()+"\" is the default.", true)
cmd.Require(flag.Max, 1)
cmd.ParseFlags(args, true)
serverAddress := registry.IndexServer
serverAddress := registry.IndexServerAddress()
if len(cmd.Args()) > 0 {
serverAddress = cmd.Arg(0)
}

View File

@@ -4,10 +4,8 @@ import (
"encoding/json"
"fmt"
"net/url"
"time"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/timeutils"
)
@@ -16,24 +14,26 @@ import (
//
// docker logs [OPTIONS] CONTAINER
func (cli *DockerCli) CmdLogs(args ...string) error {
cmd := Cli.Subcmd("logs", []string{"CONTAINER"}, "Fetch the logs of a container", true)
follow := cmd.Bool([]string{"f", "-follow"}, false, "Follow log output")
since := cmd.String([]string{"-since"}, "", "Show logs since timestamp")
times := cmd.Bool([]string{"t", "-timestamps"}, false, "Show timestamps")
tail := cmd.String([]string{"-tail"}, "all", "Number of lines to show from the end of the logs")
var (
cmd = cli.Subcmd("logs", "CONTAINER", "Fetch the logs of a container", true)
follow = cmd.Bool([]string{"f", "-follow"}, false, "Follow log output")
since = cmd.String([]string{"-since"}, "", "Show logs since timestamp")
times = cmd.Bool([]string{"t", "-timestamps"}, false, "Show timestamps")
tail = cmd.String([]string{"-tail"}, "all", "Number of lines to show from the end of the logs")
)
cmd.Require(flag.Exact, 1)
cmd.ParseFlags(args, true)
name := cmd.Arg(0)
serverResp, err := cli.call("GET", "/containers/"+name+"/json", nil, nil)
stream, _, err := cli.call("GET", "/containers/"+name+"/json", nil, nil)
if err != nil {
return err
}
var c types.ContainerJSON
if err := json.NewDecoder(serverResp.body).Decode(&c); err != nil {
if err := json.NewDecoder(stream).Decode(&c); err != nil {
return err
}
@@ -46,7 +46,7 @@ func (cli *DockerCli) CmdLogs(args ...string) error {
v.Set("stderr", "1")
if *since != "" {
v.Set("since", timeutils.GetTimestamp(*since, time.Now()))
v.Set("since", timeutils.GetTimestamp(*since))
}
if *times {
@@ -64,6 +64,5 @@ func (cli *DockerCli) CmdLogs(args ...string) error {
err: cli.err,
}
_, err = cli.stream("GET", "/containers/"+name+"/logs?"+v.Encode(), sopts)
return err
return cli.stream("GET", "/containers/"+name+"/logs?"+v.Encode(), sopts)
}
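
Both the old and the new CmdLogs build their query string with net/url's Values before hitting the API. A small standalone sketch of that step; the parameter names follow the hunk, the values are invented:

package main

import (
    "fmt"
    "net/url"
)

func main() {
    v := url.Values{}
    v.Set("stdout", "1")
    v.Set("stderr", "1")
    v.Set("follow", "1")
    v.Set("tail", "100") // invented value for illustration

    // Encode sorts the keys and escapes the values, ready to append after "?".
    fmt.Println("/containers/mycontainer/logs?" + v.Encode())
    // Prints: /containers/mycontainer/logs?follow=1&stderr=1&stdout=1&tail=100
}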

View File

@@ -1,15 +0,0 @@
// +build experimental
package client
import (
"os"
nwclient "github.com/docker/libnetwork/client"
)
func (cli *DockerCli) CmdNetwork(args ...string) error {
nCli := nwclient.NewNetworkCli(cli.out, cli.err, nwclient.CallFunc(cli.callWrapper))
args = append([]string{"network"}, args...)
return nCli.Cmd(os.Args[0], args...)
}

View File

@@ -3,7 +3,6 @@ package client
import (
"fmt"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
@@ -11,9 +10,8 @@ import (
//
// Usage: docker pause CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdPause(args ...string) error {
cmd := Cli.Subcmd("pause", []string{"CONTAINER [CONTAINER...]"}, "Pause all processes within a container", true)
cmd := cli.Subcmd("pause", "CONTAINER [CONTAINER...]", "Pause all processes within a container", true)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var errNames []string

View File

@@ -5,9 +5,8 @@ import (
"fmt"
"strings"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/nat"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/nat"
)
// CmdPort lists port mappings for a container.
@@ -15,25 +14,22 @@ import (
//
// Usage: docker port CONTAINER [PRIVATE_PORT[/PROTO]]
func (cli *DockerCli) CmdPort(args ...string) error {
cmd := Cli.Subcmd("port", []string{"CONTAINER [PRIVATE_PORT[/PROTO]]"}, "List port mappings for the CONTAINER, or lookup the public-facing port that\nis NAT-ed to the PRIVATE_PORT", true)
cmd := cli.Subcmd("port", "CONTAINER [PRIVATE_PORT[/PROTO]]", "List port mappings for the CONTAINER, or lookup the public-facing port that\nis NAT-ed to the PRIVATE_PORT", true)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
serverResp, err := cli.call("GET", "/containers/"+cmd.Arg(0)+"/json", nil, nil)
stream, _, err := cli.call("GET", "/containers/"+cmd.Arg(0)+"/json", nil, nil)
if err != nil {
return err
}
defer serverResp.body.Close()
var c struct {
NetworkSettings struct {
Ports nat.PortMap
}
}
if err := json.NewDecoder(serverResp.body).Decode(&c); err != nil {
if err := json.NewDecoder(stream).Decode(&c); err != nil {
return err
}
@@ -49,13 +45,9 @@ func (cli *DockerCli) CmdPort(args ...string) error {
proto = parts[1]
}
natPort := port + "/" + proto
newP, err := nat.NewPort(proto, port)
if err != nil {
return err
}
if frontends, exists := c.NetworkSettings.Ports[newP]; exists && frontends != nil {
if frontends, exists := c.NetworkSettings.Ports[nat.Port(port+"/"+proto)]; exists && frontends != nil {
for _, frontend := range frontends {
fmt.Fprintf(cli.out, "%s:%s\n", frontend.HostIP, frontend.HostPort)
fmt.Fprintf(cli.out, "%s:%s\n", frontend.HostIp, frontend.HostPort)
}
return nil
}
@@ -64,7 +56,7 @@ func (cli *DockerCli) CmdPort(args ...string) error {
for from, frontends := range c.NetworkSettings.Ports {
for _, frontend := range frontends {
fmt.Fprintf(cli.out, "%s -> %s:%s\n", from, frontend.HostIP, frontend.HostPort)
fmt.Fprintf(cli.out, "%s -> %s:%s\n", from, frontend.HostIp, frontend.HostPort)
}
}

View File

@@ -2,15 +2,21 @@ package client
import (
"encoding/json"
"fmt"
"net/url"
"strconv"
"strings"
"text/tabwriter"
"time"
"github.com/docker/docker/api/client/ps"
"github.com/docker/docker/api"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/parsers/filters"
"github.com/docker/docker/pkg/stringid"
"github.com/docker/docker/pkg/stringutils"
"github.com/docker/docker/pkg/units"
)
// CmdPs outputs a list of Docker containers.
@@ -23,7 +29,7 @@ func (cli *DockerCli) CmdPs(args ...string) error {
psFilterArgs = filters.Args{}
v = url.Values{}
cmd = Cli.Subcmd("ps", nil, "List containers", true)
cmd = cli.Subcmd("ps", "", "List containers", true)
quiet = cmd.Bool([]string{"q", "-quiet"}, false, "Only display numeric IDs")
size = cmd.Bool([]string{"s", "-size"}, false, "Display total file sizes")
all = cmd.Bool([]string{"a", "-all"}, false, "Show all containers (default shows just running)")
@@ -32,7 +38,6 @@ func (cli *DockerCli) CmdPs(args ...string) error {
since = cmd.String([]string{"#sinceId", "#-since-id", "-since"}, "", "Show created since Id or Name, include non-running")
before = cmd.String([]string{"#beforeId", "#-before-id", "-before"}, "", "Show only container created before Id or Name")
last = cmd.Int([]string{"n"}, -1, "Show n last created containers, include non-running")
format = cmd.String([]string{"-format"}, "", "Pretty-print containers using a Go template")
flFilter = opts.NewListOpts(nil)
)
cmd.Require(flag.Exact, 0)
@@ -81,36 +86,90 @@ func (cli *DockerCli) CmdPs(args ...string) error {
v.Set("filters", filterJSON)
}
serverResp, err := cli.call("GET", "/containers/json?"+v.Encode(), nil, nil)
rdr, _, err := cli.call("GET", "/containers/json?"+v.Encode(), nil, nil)
if err != nil {
return err
}
defer serverResp.body.Close()
containers := []types.Container{}
if err := json.NewDecoder(serverResp.body).Decode(&containers); err != nil {
if err := json.NewDecoder(rdr).Decode(&containers); err != nil {
return err
}
f := *format
if len(f) == 0 {
if len(cli.PsFormat()) > 0 && !*quiet {
f = cli.PsFormat()
w := tabwriter.NewWriter(cli.out, 20, 1, 3, ' ', 0)
if !*quiet {
fmt.Fprint(w, "CONTAINER ID\tIMAGE\tCOMMAND\tCREATED\tSTATUS\tPORTS\tNAMES")
if *size {
fmt.Fprintln(w, "\tSIZE")
} else {
f = "table"
fmt.Fprint(w, "\n")
}
}
psCtx := ps.Context{
Output: cli.out,
Format: f,
Quiet: *quiet,
Size: *size,
Trunc: !*noTrunc,
stripNamePrefix := func(ss []string) []string {
for i, s := range ss {
ss[i] = s[1:]
}
return ss
}
ps.Format(psCtx, containers)
for _, container := range containers {
ID := container.ID
if !*noTrunc {
ID = stringid.TruncateID(ID)
}
if *quiet {
fmt.Fprintln(w, ID)
continue
}
var (
names = stripNamePrefix(container.Names)
command = strconv.Quote(container.Command)
)
if !*noTrunc {
command = stringutils.Truncate(command, 20)
// only display the default name for the container unless --no-trunc is passed
for _, name := range names {
if len(strings.Split(name, "/")) == 1 {
names = []string{name}
break
}
}
}
image := container.Image
if image == "" {
image = "<no image>"
}
fmt.Fprintf(w, "%s\t%s\t%s\t%s ago\t%s\t%s\t%s\t", ID, image, command,
units.HumanDuration(time.Now().UTC().Sub(time.Unix(int64(container.Created), 0))),
container.Status, api.DisplayablePorts(container.Ports), strings.Join(names, ","))
if *size {
if container.SizeRootFs > 0 {
fmt.Fprintf(w, "%s (virtual %s)\n", units.HumanSize(float64(container.SizeRw)), units.HumanSize(float64(container.SizeRootFs)))
} else {
fmt.Fprintf(w, "%s\n", units.HumanSize(float64(container.SizeRw)))
}
continue
}
fmt.Fprint(w, "\n")
}
if !*quiet {
w.Flush()
}
return nil
}
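
The inlined loop above formats its rows with text/tabwriter so the columns line up once Flush is called. A minimal sketch of that writer on its own, with invented rows:

package main

import (
    "fmt"
    "os"
    "text/tabwriter"
)

// Write tab-separated cells, then Flush once so the columns align;
// the writer parameters match the ones used in the hunk above.
func main() {
    w := tabwriter.NewWriter(os.Stdout, 20, 1, 3, ' ', 0)
    fmt.Fprintln(w, "CONTAINER ID\tIMAGE\tSTATUS")
    fmt.Fprintln(w, "4c01db0b339c\tubuntu:14.04\tUp 2 hours")        // sample rows,
    fmt.Fprintln(w, "d7886598dbe2\tcrosbymichael/redis\tExited (0)") // invented for illustration
    w.Flush()
}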

View File

@@ -1,220 +0,0 @@
package ps
import (
"bytes"
"fmt"
"strconv"
"strings"
"text/tabwriter"
"text/template"
"time"
"github.com/docker/docker/api"
"github.com/docker/docker/api/types"
"github.com/docker/docker/pkg/stringid"
"github.com/docker/docker/pkg/stringutils"
"github.com/docker/docker/pkg/units"
)
const (
tableKey = "table"
idHeader = "CONTAINER ID"
imageHeader = "IMAGE"
namesHeader = "NAMES"
commandHeader = "COMMAND"
createdAtHeader = "CREATED AT"
runningForHeader = "CREATED"
statusHeader = "STATUS"
portsHeader = "PORTS"
sizeHeader = "SIZE"
labelsHeader = "LABELS"
)
type containerContext struct {
trunc bool
header []string
c types.Container
}
func (c *containerContext) ID() string {
c.addHeader(idHeader)
if c.trunc {
return stringid.TruncateID(c.c.ID)
}
return c.c.ID
}
func (c *containerContext) Names() string {
c.addHeader(namesHeader)
names := stripNamePrefix(c.c.Names)
if c.trunc {
for _, name := range names {
if len(strings.Split(name, "/")) == 1 {
names = []string{name}
break
}
}
}
return strings.Join(names, ",")
}
func (c *containerContext) Image() string {
c.addHeader(imageHeader)
if c.c.Image == "" {
return "<no image>"
}
return c.c.Image
}
func (c *containerContext) Command() string {
c.addHeader(commandHeader)
command := c.c.Command
if c.trunc {
command = stringutils.Truncate(command, 20)
}
return strconv.Quote(command)
}
func (c *containerContext) CreatedAt() string {
c.addHeader(createdAtHeader)
return time.Unix(int64(c.c.Created), 0).String()
}
func (c *containerContext) RunningFor() string {
c.addHeader(runningForHeader)
createdAt := time.Unix(int64(c.c.Created), 0)
return units.HumanDuration(time.Now().UTC().Sub(createdAt))
}
func (c *containerContext) Ports() string {
c.addHeader(portsHeader)
return api.DisplayablePorts(c.c.Ports)
}
func (c *containerContext) Status() string {
c.addHeader(statusHeader)
return c.c.Status
}
func (c *containerContext) Size() string {
c.addHeader(sizeHeader)
srw := units.HumanSize(float64(c.c.SizeRw))
sv := units.HumanSize(float64(c.c.SizeRootFs))
sf := srw
if c.c.SizeRootFs > 0 {
sf = fmt.Sprintf("%s (virtual %s)", srw, sv)
}
return sf
}
func (c *containerContext) Labels() string {
c.addHeader(labelsHeader)
if c.c.Labels == nil {
return ""
}
var joinLabels []string
for k, v := range c.c.Labels {
joinLabels = append(joinLabels, fmt.Sprintf("%s=%s", k, v))
}
return strings.Join(joinLabels, ",")
}
func (c *containerContext) Label(name string) string {
n := strings.Split(name, ".")
r := strings.NewReplacer("-", " ", "_", " ")
h := r.Replace(n[len(n)-1])
c.addHeader(h)
if c.c.Labels == nil {
return ""
}
return c.c.Labels[name]
}
func (c *containerContext) fullHeader() string {
if c.header == nil {
return ""
}
return strings.Join(c.header, "\t")
}
func (c *containerContext) addHeader(header string) {
if c.header == nil {
c.header = []string{}
}
c.header = append(c.header, strings.ToUpper(header))
}
func customFormat(ctx Context, containers []types.Container) {
var (
table bool
header string
format = ctx.Format
buffer = bytes.NewBufferString("")
)
if strings.HasPrefix(ctx.Format, tableKey) {
table = true
format = format[len(tableKey):]
}
format = strings.Trim(format, " ")
r := strings.NewReplacer(`\t`, "\t", `\n`, "\n")
format = r.Replace(format)
if table && ctx.Size {
format += "\t{{.Size}}"
}
tmpl, err := template.New("").Parse(format)
if err != nil {
buffer.WriteString(fmt.Sprintf("Template parsing error: %v\n", err))
buffer.WriteTo(ctx.Output)
return
}
for _, container := range containers {
containerCtx := &containerContext{
trunc: ctx.Trunc,
c: container,
}
if err := tmpl.Execute(buffer, containerCtx); err != nil {
buffer = bytes.NewBufferString(fmt.Sprintf("Template parsing error: %v\n", err))
buffer.WriteTo(ctx.Output)
return
}
if table && len(header) == 0 {
header = containerCtx.fullHeader()
}
buffer.WriteString("\n")
}
if table {
if len(header) == 0 {
// if we still don't have a header, we didn't have any containers so we need to fake it to get the right headers from the template
containerCtx := &containerContext{}
tmpl.Execute(bytes.NewBufferString(""), containerCtx)
header = containerCtx.fullHeader()
}
t := tabwriter.NewWriter(ctx.Output, 20, 1, 3, ' ', 0)
t.Write([]byte(header))
t.Write([]byte("\n"))
buffer.WriteTo(t)
t.Flush()
} else {
buffer.WriteTo(ctx.Output)
}
}
func stripNamePrefix(ss []string) []string {
for i, s := range ss {
ss[i] = s[1:]
}
return ss
}
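
The deleted package above renders rows through a context type whose template-visible methods both return a cell and record the matching header, so a format like "table {{.ID}}\t{{.Status}}" can print its own column titles. A compact standalone sketch of that idea; rowContext and its fields are invented stand-ins for the real containerContext:

package main

import (
    "fmt"
    "os"
    "strings"
    "text/tabwriter"
    "text/template"
)

// rowContext sketches the containerContext idea: each template-visible
// method returns a cell value and records its header as a side effect.
type rowContext struct {
    header []string
    id     string
    status string
}

func (r *rowContext) addHeader(h string)  { r.header = append(r.header, strings.ToUpper(h)) }
func (r *rowContext) ID() string          { r.addHeader("container id"); return r.id }
func (r *rowContext) Status() string      { r.addHeader("status"); return r.status }
func (r *rowContext) fullHeader() string  { return strings.Join(r.header, "\t") }

func main() {
    tmpl := template.Must(template.New("").Parse("{{.ID}}\t{{.Status}}"))

    rows := []*rowContext{ // invented sample data
        {id: "4c01db0b339c", status: "Up 2 hours"},
        {id: "d7886598dbe2", status: "Exited (0)"},
    }

    w := tabwriter.NewWriter(os.Stdout, 20, 1, 3, ' ', 0)
    for i, r := range rows {
        var buf strings.Builder
        if err := tmpl.Execute(&buf, r); err != nil {
            panic(err)
        }
        if i == 0 {
            fmt.Fprintln(w, r.fullHeader()) // headers learned from the first row's method calls
        }
        fmt.Fprintln(w, buf.String())
    }
    w.Flush()
}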

View File

@@ -1,102 +0,0 @@
package ps
import (
"bytes"
"reflect"
"strings"
"testing"
"time"
"github.com/docker/docker/api/types"
"github.com/docker/docker/pkg/stringid"
)
func TestContainerPsContext(t *testing.T) {
containerId := stringid.GenerateRandomID()
unix := time.Now().Unix()
var ctx containerContext
cases := []struct {
container types.Container
trunc bool
expValue string
expHeader string
call func() string
}{
{types.Container{ID: containerId}, true, stringid.TruncateID(containerId), idHeader, ctx.ID},
{types.Container{Names: []string{"/foobar_baz"}}, true, "foobar_baz", namesHeader, ctx.Names},
{types.Container{Image: "ubuntu"}, true, "ubuntu", imageHeader, ctx.Image},
{types.Container{Image: ""}, true, "<no image>", imageHeader, ctx.Image},
{types.Container{Command: "sh -c 'ls -la'"}, true, `"sh -c 'ls -la'"`, commandHeader, ctx.Command},
{types.Container{Created: int(unix)}, true, time.Unix(unix, 0).String(), createdAtHeader, ctx.CreatedAt},
{types.Container{Ports: []types.Port{{PrivatePort: 8080, PublicPort: 8080, Type: "tcp"}}}, true, "8080/tcp", portsHeader, ctx.Ports},
{types.Container{Status: "RUNNING"}, true, "RUNNING", statusHeader, ctx.Status},
{types.Container{SizeRw: 10}, true, "10 B", sizeHeader, ctx.Size},
{types.Container{SizeRw: 10, SizeRootFs: 20}, true, "10 B (virtual 20 B)", sizeHeader, ctx.Size},
{types.Container{Labels: map[string]string{"cpu": "6", "storage": "ssd"}}, true, "cpu=6,storage=ssd", labelsHeader, ctx.Labels},
}
for _, c := range cases {
ctx = containerContext{c: c.container, trunc: c.trunc}
v := c.call()
if strings.Contains(v, ",") {
// comma-separated values means probably a map input, which won't
// be guaranteed to have the same order as our expected value
// We'll create maps and use reflect.DeepEquals to check instead:
entriesMap := make(map[string]string)
expMap := make(map[string]string)
entries := strings.Split(v, ",")
expectedEntries := strings.Split(c.expValue, ",")
for _, entry := range entries {
keyval := strings.Split(entry, "=")
entriesMap[keyval[0]] = keyval[1]
}
for _, expected := range expectedEntries {
keyval := strings.Split(expected, "=")
expMap[keyval[0]] = keyval[1]
}
if !reflect.DeepEqual(expMap, entriesMap) {
t.Fatalf("Expected entries: %v, got: %v", c.expValue, v)
}
} else if v != c.expValue {
t.Fatalf("Expected %s, was %s\n", c.expValue, v)
}
h := ctx.fullHeader()
if h != c.expHeader {
t.Fatalf("Expected %s, was %s\n", c.expHeader, h)
}
}
c := types.Container{Labels: map[string]string{"com.docker.swarm.swarm-id": "33", "com.docker.swarm.node_name": "ubuntu"}}
ctx = containerContext{c: c, trunc: true}
sid := ctx.Label("com.docker.swarm.swarm-id")
node := ctx.Label("com.docker.swarm.node_name")
if sid != "33" {
t.Fatalf("Expected 33, was %s\n", sid)
}
if node != "ubuntu" {
t.Fatalf("Expected ubuntu, was %s\n", node)
}
h := ctx.fullHeader()
if h != "SWARM ID\tNODE NAME" {
t.Fatalf("Expected %s, was %s\n", "SWARM ID\tNODE NAME", h)
}
}
func TestContainerPsFormatError(t *testing.T) {
out := bytes.NewBufferString("")
ctx := Context{
Format: "{{InvalidFunction}}",
Output: out,
}
customFormat(ctx, make([]types.Container, 0))
if out.String() != "Template parsing error: template: :1: function \"InvalidFunction\" not defined\n" {
t.Fatalf("Expected format error, got `%v`\n", out.String())
}
}

View File

@@ -1,65 +0,0 @@
package ps
import (
"io"
"github.com/docker/docker/api/types"
)
const (
tableFormatKey = "table"
rawFormatKey = "raw"
defaultTableFormat = "table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.RunningFor}} ago\t{{.Status}}\t{{.Ports}}\t{{.Names}}"
defaultQuietFormat = "{{.ID}}"
)
type Context struct {
Output io.Writer
Format string
Size bool
Quiet bool
Trunc bool
}
func Format(ctx Context, containers []types.Container) {
switch ctx.Format {
case tableFormatKey:
tableFormat(ctx, containers)
case rawFormatKey:
rawFormat(ctx, containers)
default:
customFormat(ctx, containers)
}
}
func rawFormat(ctx Context, containers []types.Container) {
if ctx.Quiet {
ctx.Format = `container_id: {{.ID}}`
} else {
ctx.Format = `container_id: {{.ID}}
image: {{.Image}}
command: {{.Command}}
created_at: {{.CreatedAt}}
status: {{.Status}}
names: {{.Names}}
labels: {{.Labels}}
ports: {{.Ports}}
`
if ctx.Size {
ctx.Format += `size: {{.Size}}
`
}
}
customFormat(ctx, containers)
}
func tableFormat(ctx Context, containers []types.Container) {
ctx.Format = defaultTableFormat
if ctx.Quiet {
ctx.Format = defaultQuietFormat
}
customFormat(ctx, containers)
}

View File

@@ -4,34 +4,37 @@ import (
"fmt"
"net/url"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/graph/tags"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/parsers"
"github.com/docker/docker/registry"
"github.com/docker/docker/utils"
)
// CmdPull pulls an image or a repository from the registry.
//
// Usage: docker pull [OPTIONS] IMAGENAME[:TAG|@DIGEST]
func (cli *DockerCli) CmdPull(args ...string) error {
cmd := Cli.Subcmd("pull", []string{"NAME[:TAG|@DIGEST]"}, "Pull an image or a repository from a registry", true)
cmd := cli.Subcmd("pull", "NAME[:TAG|@DIGEST]", "Pull an image or a repository from the registry", true)
allTags := cmd.Bool([]string{"a", "-all-tags"}, false, "Download all tagged images in the repository")
addTrustedFlags(cmd, true)
cmd.Require(flag.Exact, 1)
cmd.ParseFlags(args, true)
remote := cmd.Arg(0)
var (
v = url.Values{}
remote = cmd.Arg(0)
newRemote = remote
)
taglessRemote, tag := parsers.ParseRepositoryTag(remote)
if tag == "" && !*allTags {
tag = tags.DEFAULTTAG
fmt.Fprintf(cli.out, "Using default tag: %s\n", tag)
} else if tag != "" && *allTags {
newRemote = utils.ImageReference(taglessRemote, tags.DEFAULTTAG)
}
if tag != "" && *allTags {
return fmt.Errorf("tag can't be used with --all-tags/-a")
}
ref := registry.ParseReference(tag)
v.Set("fromImage", newRemote)
// Resolve the Repository name from fqn to RepositoryInfo
repoInfo, err := registry.ParseRepositoryInfo(taglessRemote)
@@ -39,15 +42,6 @@ func (cli *DockerCli) CmdPull(args ...string) error {
return err
}
if isTrusted() && !ref.HasDigest() {
// Check if tag is digest
authConfig := registry.ResolveAuthConfig(cli.configFile, repoInfo.Index)
return cli.trustedPull(repoInfo, ref, authConfig)
}
v := url.Values{}
v.Set("fromImage", ref.ImageName(taglessRemote))
_, _, err = cli.clientRequestAttemptLogin("POST", "/images/create?"+v.Encode(), nil, cli.out, repoInfo.Index, "pull")
return err
}
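
The pull path above splits NAME[:TAG] and, when no tag is given, falls back to a default. A simplified standalone approximation; splitRepoTag is a stand-in for parsers.ParseRepositoryTag, deliberately ignores digest references, and "latest" stands in for tags.DEFAULTTAG:

package main

import (
    "fmt"
    "strings"
)

// splitRepoTag is a simplified stand-in for parsers.ParseRepositoryTag:
// split on the last colon unless it belongs to a registry host:port.
func splitRepoTag(remote string) (name, tag string) {
    i := strings.LastIndex(remote, ":")
    if i < 0 {
        return remote, ""
    }
    if t := remote[i+1:]; !strings.Contains(t, "/") {
        return remote[:i], t
    }
    return remote, "" // the colon was part of host:port, not a tag
}

func main() {
    for _, remote := range []string{"ubuntu", "ubuntu:14.04", "myregistry:5000/app"} {
        name, tag := splitRepoTag(remote)
        if tag == "" {
            tag = "latest" // mirrors the "Using default tag" branch above
        }
        fmt.Printf("%-22s -> name=%s tag=%s\n", remote, name, tag)
    }
}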

View File

@@ -4,7 +4,6 @@ import (
"fmt"
"net/url"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/parsers"
"github.com/docker/docker/registry"
@@ -14,13 +13,14 @@ import (
//
// Usage: docker push NAME[:TAG]
func (cli *DockerCli) CmdPush(args ...string) error {
cmd := Cli.Subcmd("push", []string{"NAME[:TAG]"}, "Push an image or a repository to a registry", true)
addTrustedFlags(cmd, false)
cmd := cli.Subcmd("push", "NAME[:TAG]", "Push an image or a repository to the registry", true)
cmd.Require(flag.Exact, 1)
cmd.ParseFlags(args, true)
remote, tag := parsers.ParseRepositoryTag(cmd.Arg(0))
name := cmd.Arg(0)
remote, tag := parsers.ParseRepositoryTag(name)
// Resolve the Repository name from fqn to RepositoryInfo
repoInfo, err := registry.ParseRepositoryInfo(remote)
@@ -41,10 +41,6 @@ func (cli *DockerCli) CmdPush(args ...string) error {
return fmt.Errorf("You cannot push a \"root\" repository. Please rename your repository to <user>/<repo> (ex: %s/%s)", username, repoInfo.LocalName)
}
if isTrusted() {
return cli.trustedPush(repoInfo, tag, authConfig)
}
v := url.Values{}
v.Set("tag", tag)

View File

@@ -3,7 +3,6 @@ package client
import (
"fmt"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
@@ -11,9 +10,8 @@ import (
//
// Usage: docker rename OLD_NAME NEW_NAME
func (cli *DockerCli) CmdRename(args ...string) error {
cmd := Cli.Subcmd("rename", []string{"OLD_NAME NEW_NAME"}, "Rename a container", true)
cmd := cli.Subcmd("rename", "OLD_NAME NEW_NAME", "Rename a container", true)
cmd.Require(flag.Exact, 2)
cmd.ParseFlags(args, true)
oldName := cmd.Arg(0)

View File

@@ -5,7 +5,6 @@ import (
"net/url"
"strconv"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
@@ -13,7 +12,7 @@ import (
//
// Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdRestart(args ...string) error {
cmd := Cli.Subcmd("restart", []string{"CONTAINER [CONTAINER...]"}, "Restart a running container", true)
cmd := cli.Subcmd("restart", "CONTAINER [CONTAINER...]", "Restart a running container", true)
nSeconds := cmd.Int([]string{"t", "-time"}, 10, "Seconds to wait for stop before killing the container")
cmd.Require(flag.Min, 1)

View File

@@ -5,7 +5,6 @@ import (
"net/url"
"strings"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
@@ -13,7 +12,7 @@ import (
//
// Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdRm(args ...string) error {
cmd := Cli.Subcmd("rm", []string{"CONTAINER [CONTAINER...]"}, "Remove one or more containers", true)
cmd := cli.Subcmd("rm", "CONTAINER [CONTAINER...]", "Remove one or more containers", true)
v := cmd.Bool([]string{"v", "-volumes"}, false, "Remove the volumes associated with the container")
link := cmd.Bool([]string{"l", "#link", "-link"}, false, "Remove the specified link")
force := cmd.Bool([]string{"f", "-force"}, false, "Force the removal of a running container (uses SIGKILL)")

View File

@@ -6,7 +6,6 @@ import (
"net/url"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
@@ -14,11 +13,12 @@ import (
//
// Usage: docker rmi [OPTIONS] IMAGE [IMAGE...]
func (cli *DockerCli) CmdRmi(args ...string) error {
cmd := Cli.Subcmd("rmi", []string{"IMAGE [IMAGE...]"}, "Remove one or more images", true)
force := cmd.Bool([]string{"f", "-force"}, false, "Force removal of the image")
noprune := cmd.Bool([]string{"-no-prune"}, false, "Do not delete untagged parents")
var (
cmd = cli.Subcmd("rmi", "IMAGE [IMAGE...]", "Remove one or more images", true)
force = cmd.Bool([]string{"f", "-force"}, false, "Force removal of the image")
noprune = cmd.Bool([]string{"-no-prune"}, false, "Do not delete untagged parents")
)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
v := url.Values{}
@@ -31,15 +31,13 @@ func (cli *DockerCli) CmdRmi(args ...string) error {
var errNames []string
for _, name := range cmd.Args() {
serverResp, err := cli.call("DELETE", "/images/"+name+"?"+v.Encode(), nil, nil)
rdr, _, err := cli.call("DELETE", "/images/"+name+"?"+v.Encode(), nil, nil)
if err != nil {
fmt.Fprintf(cli.err, "%s\n", err)
errNames = append(errNames, name)
} else {
defer serverResp.body.Close()
dels := []types.ImageDelete{}
if err := json.NewDecoder(serverResp.body).Decode(&dels); err != nil {
if err := json.NewDecoder(rdr).Decode(&dels); err != nil {
fmt.Fprintf(cli.err, "%s\n", err)
errNames = append(errNames, name)
continue

View File

@@ -5,10 +5,8 @@ import (
"io"
"net/url"
"os"
"runtime"
"github.com/Sirupsen/logrus"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
"github.com/docker/docker/pkg/promise"
"github.com/docker/docker/pkg/signal"
@@ -40,8 +38,7 @@ func (cid *cidFile) Write(id string) error {
//
// Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
func (cli *DockerCli) CmdRun(args ...string) error {
cmd := Cli.Subcmd("run", []string{"IMAGE [COMMAND] [ARG...]"}, "Run a command in a new container", true)
addTrustedFlags(cmd, true)
cmd := cli.Subcmd("run", "IMAGE [COMMAND] [ARG...]", "Run a command in a new container", true)
// These are flags not stored in Config/HostConfig
var (
@@ -106,13 +103,6 @@ func (cli *DockerCli) CmdRun(args ...string) error {
sigProxy = false
}
// Telling the Windows daemon the initial size of the tty during start makes
// a far better user experience rather than relying on subsequent resizes
// to cause things to catch up.
if runtime.GOOS == "windows" {
hostConfig.ConsoleSize[0], hostConfig.ConsoleSize[1] = cli.getTtySize()
}
createResponse, err := cli.createContainer(config, hostConfig, hostConfig.ContainerIDFile, *flName)
if err != nil {
return err
@@ -251,7 +241,7 @@ func (cli *DockerCli) CmdRun(args ...string) error {
}
}
if status != 0 {
return Cli.StatusError{StatusCode: status}
return StatusError{StatusCode: status}
}
return nil
}

View File

@@ -6,7 +6,6 @@ import (
"net/url"
"os"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
@@ -16,7 +15,7 @@ import (
//
// Usage: docker save [OPTIONS] IMAGE [IMAGE...]
func (cli *DockerCli) CmdSave(args ...string) error {
cmd := Cli.Subcmd("save", []string{"IMAGE [IMAGE...]"}, "Save an image(s) to a tar archive (streamed to STDOUT by default)", true)
cmd := cli.Subcmd("save", "IMAGE [IMAGE...]", "Save an image(s) to a tar archive (streamed to STDOUT by default)", true)
outfile := cmd.String([]string{"o", "-output"}, "", "Write to an file, instead of STDOUT")
cmd.Require(flag.Min, 1)
@@ -42,7 +41,7 @@ func (cli *DockerCli) CmdSave(args ...string) error {
if len(cmd.Args()) == 1 {
image := cmd.Arg(0)
if _, err := cli.stream("GET", "/images/"+image+"/get", sopts); err != nil {
if err := cli.stream("GET", "/images/"+image+"/get", sopts); err != nil {
return err
}
} else {
@@ -50,7 +49,7 @@ func (cli *DockerCli) CmdSave(args ...string) error {
for _, arg := range cmd.Args() {
v.Add("names", arg)
}
if _, err := cli.stream("GET", "/images/get?"+v.Encode(), sopts); err != nil {
if err := cli.stream("GET", "/images/get?"+v.Encode(), sopts); err != nil {
return err
}
}

View File

@@ -8,7 +8,6 @@ import (
"strings"
"text/tabwriter"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/parsers"
"github.com/docker/docker/pkg/stringutils"
@@ -26,7 +25,7 @@ func (r ByStars) Less(i, j int) bool { return r[i].StarCount < r[j].StarCount }
//
// Usage: docker search [OPTIONS] TERM
func (cli *DockerCli) CmdSearch(args ...string) error {
cmd := Cli.Subcmd("search", []string{"TERM"}, "Search the Docker Hub for images", true)
cmd := cli.Subcmd("search", "TERM", "Search the Docker Hub for images", true)
noTrunc := cmd.Bool([]string{"#notrunc", "-no-trunc"}, false, "Don't truncate output")
trusted := cmd.Bool([]string{"#t", "#trusted", "#-trusted"}, false, "Only show trusted builds")
automated := cmd.Bool([]string{"-automated"}, false, "Only show automated builds")
@@ -51,8 +50,6 @@ func (cli *DockerCli) CmdSearch(args ...string) error {
return err
}
defer rdr.Close()
results := ByStars{}
if err := json.NewDecoder(rdr).Decode(&results); err != nil {
return err

View File

@@ -1,15 +0,0 @@
// +build experimental
package client
import (
"os"
nwclient "github.com/docker/libnetwork/client"
)
func (cli *DockerCli) CmdService(args ...string) error {
nCli := nwclient.NewNetworkCli(cli.out, cli.err, nwclient.CallFunc(cli.callWrapper))
args = append([]string{"service"}, args...)
return nCli.Cmd(os.Args[0], args...)
}

View File

@@ -9,7 +9,6 @@ import (
"github.com/Sirupsen/logrus"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/promise"
"github.com/docker/docker/pkg/signal"
@@ -45,32 +44,30 @@ func (cli *DockerCli) forwardAllSignals(cid string) chan os.Signal {
//
// Usage: docker start [OPTIONS] CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdStart(args ...string) error {
cmd := Cli.Subcmd("start", []string{"CONTAINER [CONTAINER...]"}, "Start one or more stopped containers", true)
attach := cmd.Bool([]string{"a", "-attach"}, false, "Attach STDOUT/STDERR and forward signals")
openStdin := cmd.Bool([]string{"i", "-interactive"}, false, "Attach container's STDIN")
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var (
cErr chan error
tty bool
cmd = cli.Subcmd("start", "CONTAINER [CONTAINER...]", "Start one or more stopped containers", true)
attach = cmd.Bool([]string{"a", "-attach"}, false, "Attach STDOUT/STDERR and forward signals")
openStdin = cmd.Bool([]string{"i", "-interactive"}, false, "Attach container's STDIN")
)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
if *attach || *openStdin {
if cmd.NArg() > 1 {
return fmt.Errorf("You cannot start and attach multiple containers at once.")
}
serverResp, err := cli.call("GET", "/containers/"+cmd.Arg(0)+"/json", nil, nil)
stream, _, err := cli.call("GET", "/containers/"+cmd.Arg(0)+"/json", nil, nil)
if err != nil {
return err
}
defer serverResp.body.Close()
var c types.ContainerJSON
if err := json.NewDecoder(serverResp.body).Decode(&c); err != nil {
if err := json.NewDecoder(stream).Decode(&c); err != nil {
return err
}
@@ -163,7 +160,7 @@ func (cli *DockerCli) CmdStart(args ...string) error {
return err
}
if status != 0 {
return Cli.StatusError{StatusCode: status}
return StatusError{StatusCode: status}
}
}
return nil

View File

@@ -12,7 +12,6 @@ import (
"time"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/units"
)
@@ -36,20 +35,18 @@ func (s *containerStats) Collect(cli *DockerCli, streamStats bool) {
} else {
v.Set("stream", "0")
}
serverResp, err := cli.call("GET", "/containers/"+s.Name+"/stats?"+v.Encode(), nil, nil)
stream, _, err := cli.call("GET", "/containers/"+s.Name+"/stats?"+v.Encode(), nil, nil)
if err != nil {
s.mu.Lock()
s.err = err
s.mu.Unlock()
return
}
defer serverResp.body.Close()
defer stream.Close()
var (
previousCPU uint64
previousSystem uint64
dec = json.NewDecoder(serverResp.body)
dec = json.NewDecoder(stream)
u = make(chan error, 1)
)
go func() {
@@ -125,10 +122,9 @@ func (s *containerStats) Display(w io.Writer) error {
//
// Usage: docker stats CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdStats(args ...string) error {
cmd := Cli.Subcmd("stats", []string{"CONTAINER [CONTAINER...]"}, "Display a live stream of one or more containers' resource usage statistics", true)
cmd := cli.Subcmd("stats", "CONTAINER [CONTAINER...]", "Display a live stream of one or more containers' resource usage statistics", true)
noStream := cmd.Bool([]string{"-no-stream"}, false, "Disable streaming stats and only pull the first result")
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
names := cmd.Args()
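
The stats collector above decodes a continuous JSON stream in a goroutine and reports errors over a channel. A standalone sketch of that loop shape; statsSample and the in-memory stream are invented for illustration:

package main

import (
    "encoding/json"
    "fmt"
    "io"
    "strings"
)

// statsSample is an invented stand-in for the decoded stats payload.
type statsSample struct {
    Read string `json:"read"`
}

// collect mirrors the shape of the loop above: keep decoding JSON objects
// from the stream and report the terminating error on a channel.
func collect(stream io.Reader, u chan<- error) {
    dec := json.NewDecoder(stream)
    for {
        var s statsSample
        if err := dec.Decode(&s); err != nil {
            u <- err // io.EOF ends the stream
            return
        }
        fmt.Println("sample at", s.Read)
    }
}

func main() {
    // Two back-to-back JSON documents stand in for a live stats stream.
    stream := strings.NewReader(`{"read":"t0"} {"read":"t1"}`)
    u := make(chan error, 1)
    go collect(stream, u)
    if err := <-u; err != io.EOF {
        panic(err)
    }
}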

View File

@@ -5,7 +5,6 @@ import (
"net/url"
"strconv"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
@@ -15,7 +14,7 @@ import (
//
// Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdStop(args ...string) error {
cmd := Cli.Subcmd("stop", []string{"CONTAINER [CONTAINER...]"}, "Stop a running container by sending SIGTERM and then SIGKILL after a\ngrace period", true)
cmd := cli.Subcmd("stop", "CONTAINER [CONTAINER...]", "Stop a running container by sending SIGTERM and then SIGKILL after a\ngrace period", true)
nSeconds := cmd.Int([]string{"t", "-time"}, 10, "Seconds to wait for stop before killing it")
cmd.Require(flag.Min, 1)

View File

@@ -3,7 +3,6 @@ package client
import (
"net/url"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/parsers"
"github.com/docker/docker/registry"
@@ -13,7 +12,7 @@ import (
//
// Usage: docker tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]
func (cli *DockerCli) CmdTag(args ...string) error {
cmd := Cli.Subcmd("tag", []string{"IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]"}, "Tag an image into a repository", true)
cmd := cli.Subcmd("tag", "IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]", "Tag an image into a repository", true)
force := cmd.Bool([]string{"f", "#force", "-force"}, false, "Force")
cmd.Require(flag.Exact, 2)

View File

@@ -8,7 +8,6 @@ import (
"text/tabwriter"
"github.com/docker/docker/api/types"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
@@ -16,7 +15,7 @@ import (
//
// Usage: docker top CONTAINER
func (cli *DockerCli) CmdTop(args ...string) error {
cmd := Cli.Subcmd("top", []string{"CONTAINER [ps OPTIONS]"}, "Display the running processes of a container", true)
cmd := cli.Subcmd("top", "CONTAINER [ps OPTIONS]", "Display the running processes of a container", true)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
@@ -26,15 +25,13 @@ func (cli *DockerCli) CmdTop(args ...string) error {
val.Set("ps_args", strings.Join(cmd.Args()[1:], " "))
}
serverResp, err := cli.call("GET", "/containers/"+cmd.Arg(0)+"/top?"+val.Encode(), nil, nil)
stream, _, err := cli.call("GET", "/containers/"+cmd.Arg(0)+"/top?"+val.Encode(), nil, nil)
if err != nil {
return err
}
defer serverResp.body.Close()
procList := types.ContainerProcessList{}
if err := json.NewDecoder(serverResp.body).Decode(&procList); err != nil {
if err := json.NewDecoder(stream).Decode(&procList); err != nil {
return err
}

View File

@@ -1,454 +0,0 @@
package client
import (
"bufio"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"io"
"net"
"net/http"
"net/url"
"os"
"path/filepath"
"regexp"
"sort"
"strconv"
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/docker/distribution/digest"
"github.com/docker/distribution/registry/client/auth"
"github.com/docker/distribution/registry/client/transport"
"github.com/docker/docker/cliconfig"
"github.com/docker/docker/pkg/ansiescape"
"github.com/docker/docker/pkg/ioutils"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/tlsconfig"
"github.com/docker/docker/registry"
"github.com/docker/notary/client"
"github.com/docker/notary/pkg/passphrase"
"github.com/docker/notary/trustmanager"
"github.com/endophage/gotuf/data"
)
var untrusted bool
func addTrustedFlags(fs *flag.FlagSet, verify bool) {
var trusted bool
if e := os.Getenv("DOCKER_CONTENT_TRUST"); e != "" {
if t, err := strconv.ParseBool(e); t || err != nil {
// treat any other value as true
trusted = true
}
}
message := "Skip image signing"
if verify {
message = "Skip image verification"
}
fs.BoolVar(&untrusted, []string{"-disable-content-trust"}, !trusted, message)
}
func isTrusted() bool {
return !untrusted
}
var targetRegexp = regexp.MustCompile(`([\S]+): digest: ([\S]+) size: ([\d]+)`)
type target struct {
reference registry.Reference
digest digest.Digest
size int64
}
func (cli *DockerCli) trustDirectory() string {
return filepath.Join(cliconfig.ConfigDir(), "trust")
}
// certificateDirectory returns the directory containing
// TLS certificates for the given server. An error is
// returned if there was an error parsing the server string.
func (cli *DockerCli) certificateDirectory(server string) (string, error) {
u, err := url.Parse(server)
if err != nil {
return "", err
}
return filepath.Join(cliconfig.ConfigDir(), "tls", u.Host), nil
}
func trustServer(index *registry.IndexInfo) string {
if s := os.Getenv("DOCKER_CONTENT_TRUST_SERVER"); s != "" {
if !strings.HasPrefix(s, "https://") {
return "https://" + s
}
return s
}
if index.Official {
return registry.NotaryServer
}
return "https://" + index.Name
}
type simpleCredentialStore struct {
auth cliconfig.AuthConfig
}
func (scs simpleCredentialStore) Basic(u *url.URL) (string, string) {
return scs.auth.Username, scs.auth.Password
}
func (cli *DockerCli) getNotaryRepository(repoInfo *registry.RepositoryInfo, authConfig cliconfig.AuthConfig) (*client.NotaryRepository, error) {
server := trustServer(repoInfo.Index)
if !strings.HasPrefix(server, "https://") {
return nil, errors.New("unsupported scheme: https required for trust server")
}
var cfg = tlsconfig.ClientDefault
cfg.InsecureSkipVerify = !repoInfo.Index.Secure
// Get certificate base directory
certDir, err := cli.certificateDirectory(server)
if err != nil {
return nil, err
}
logrus.Debugf("reading certificate directory: %s", certDir)
if err := registry.ReadCertsDirectory(&cfg, certDir); err != nil {
return nil, err
}
base := &http.Transport{
Proxy: http.ProxyFromEnvironment,
Dial: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
DualStack: true,
}).Dial,
TLSHandshakeTimeout: 10 * time.Second,
TLSClientConfig: &cfg,
DisableKeepAlives: true,
}
// Skip configuration headers since request is not going to Docker daemon
modifiers := registry.DockerHeaders(http.Header{})
authTransport := transport.NewTransport(base, modifiers...)
pingClient := &http.Client{
Transport: authTransport,
Timeout: 5 * time.Second,
}
endpointStr := server + "/v2/"
req, err := http.NewRequest("GET", endpointStr, nil)
if err != nil {
return nil, err
}
resp, err := pingClient.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
challengeManager := auth.NewSimpleChallengeManager()
if err := challengeManager.AddResponse(resp); err != nil {
return nil, err
}
creds := simpleCredentialStore{auth: authConfig}
tokenHandler := auth.NewTokenHandler(authTransport, creds, repoInfo.CanonicalName, "push", "pull")
basicHandler := auth.NewBasicHandler(creds)
modifiers = append(modifiers, transport.RequestModifier(auth.NewAuthorizer(challengeManager, tokenHandler, basicHandler)))
tr := transport.NewTransport(base, modifiers...)
return client.NewNotaryRepository(cli.trustDirectory(), repoInfo.CanonicalName, server, tr, cli.getPassphraseRetriever())
}
func convertTarget(t client.Target) (target, error) {
h, ok := t.Hashes["sha256"]
if !ok {
return target{}, errors.New("no valid hash, expecting sha256")
}
return target{
reference: registry.ParseReference(t.Name),
digest: digest.NewDigestFromHex("sha256", hex.EncodeToString(h)),
size: t.Length,
}, nil
}
func (cli *DockerCli) getPassphraseRetriever() passphrase.Retriever {
aliasMap := map[string]string{
"root": "offline",
"snapshot": "tagging",
"targets": "tagging",
}
baseRetriever := passphrase.PromptRetrieverWithInOut(cli.in, cli.out, aliasMap)
env := map[string]string{
"root": os.Getenv("DOCKER_CONTENT_TRUST_OFFLINE_PASSPHRASE"),
"snapshot": os.Getenv("DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE"),
"targets": os.Getenv("DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE"),
}
return func(keyName string, alias string, createNew bool, numAttempts int) (string, bool, error) {
if v := env[alias]; v != "" {
return v, numAttempts > 1, nil
}
return baseRetriever(keyName, alias, createNew, numAttempts)
}
}
func (cli *DockerCli) trustedReference(repo string, ref registry.Reference) (registry.Reference, error) {
repoInfo, err := registry.ParseRepositoryInfo(repo)
if err != nil {
return nil, err
}
// Resolve the Auth config relevant for this server
authConfig := registry.ResolveAuthConfig(cli.configFile, repoInfo.Index)
notaryRepo, err := cli.getNotaryRepository(repoInfo, authConfig)
if err != nil {
fmt.Fprintf(cli.out, "Error establishing connection to trust repository: %s\n", err)
return nil, err
}
t, err := notaryRepo.GetTargetByName(ref.String())
if err != nil {
return nil, err
}
r, err := convertTarget(*t)
if err != nil {
return nil, err
}
return registry.DigestReference(r.digest), nil
}
func (cli *DockerCli) tagTrusted(repoInfo *registry.RepositoryInfo, trustedRef, ref registry.Reference) error {
fullName := trustedRef.ImageName(repoInfo.LocalName)
fmt.Fprintf(cli.out, "Tagging %s as %s\n", fullName, ref.ImageName(repoInfo.LocalName))
tv := url.Values{}
tv.Set("repo", repoInfo.LocalName)
tv.Set("tag", ref.String())
tv.Set("force", "1")
if _, _, err := readBody(cli.call("POST", "/images/"+fullName+"/tag?"+tv.Encode(), nil, nil)); err != nil {
return err
}
return nil
}
func notaryError(err error) error {
switch err.(type) {
case *json.SyntaxError:
logrus.Debugf("Notary syntax error: %s", err)
return errors.New("no trust data available for remote repository")
case client.ErrExpired:
return fmt.Errorf("remote repository out-of-date: %v", err)
case trustmanager.ErrKeyNotFound:
return fmt.Errorf("signing keys not found: %v", err)
}
return err
}
func (cli *DockerCli) trustedPull(repoInfo *registry.RepositoryInfo, ref registry.Reference, authConfig cliconfig.AuthConfig) error {
var (
v = url.Values{}
refs = []target{}
)
notaryRepo, err := cli.getNotaryRepository(repoInfo, authConfig)
if err != nil {
fmt.Fprintf(cli.out, "Error establishing connection to trust repository: %s\n", err)
return err
}
if ref.String() == "" {
// List all targets
targets, err := notaryRepo.ListTargets()
if err != nil {
return notaryError(err)
}
for _, tgt := range targets {
t, err := convertTarget(*tgt)
if err != nil {
fmt.Fprintf(cli.out, "Skipping target for %q\n", repoInfo.LocalName)
continue
}
refs = append(refs, t)
}
} else {
t, err := notaryRepo.GetTargetByName(ref.String())
if err != nil {
return notaryError(err)
}
r, err := convertTarget(*t)
if err != nil {
return err
}
refs = append(refs, r)
}
v.Set("fromImage", repoInfo.LocalName)
for i, r := range refs {
displayTag := r.reference.String()
if displayTag != "" {
displayTag = ":" + displayTag
}
fmt.Fprintf(cli.out, "Pull (%d of %d): %s%s@%s\n", i+1, len(refs), repoInfo.LocalName, displayTag, r.digest)
v.Set("tag", r.digest.String())
_, _, err = cli.clientRequestAttemptLogin("POST", "/images/create?"+v.Encode(), nil, cli.out, repoInfo.Index, "pull")
if err != nil {
return err
}
// If reference is not trusted, tag by trusted reference
if !r.reference.HasDigest() {
if err := cli.tagTrusted(repoInfo, registry.DigestReference(r.digest), r.reference); err != nil {
return err
}
}
}
return nil
}
func selectKey(keys map[string]string) string {
if len(keys) == 0 {
return ""
}
keyIDs := []string{}
for k := range keys {
keyIDs = append(keyIDs, k)
}
// TODO(dmcgowan): let user choose if multiple keys, now pick consistently
sort.Strings(keyIDs)
return keyIDs[0]
}
func targetStream(in io.Writer) (io.WriteCloser, <-chan []target) {
r, w := io.Pipe()
out := io.MultiWriter(in, w)
targetChan := make(chan []target)
go func() {
targets := []target{}
scanner := bufio.NewScanner(r)
scanner.Split(ansiescape.ScanANSILines)
for scanner.Scan() {
line := scanner.Bytes()
if matches := targetRegexp.FindSubmatch(line); len(matches) == 4 {
dgst, err := digest.ParseDigest(string(matches[2]))
if err != nil {
// Line does not match what is expected, continue looking for valid lines
logrus.Debugf("Bad digest value %q in matched line, ignoring\n", string(matches[2]))
continue
}
s, err := strconv.ParseInt(string(matches[3]), 10, 64)
if err != nil {
// Line does not match what is expected, continue looking for valid lines
logrus.Debugf("Bad size value %q in matched line, ignoring\n", string(matches[3]))
continue
}
targets = append(targets, target{
reference: registry.ParseReference(string(matches[1])),
digest: dgst,
size: s,
})
}
}
targetChan <- targets
}()
return ioutils.NewWriteCloserWrapper(out, w.Close), targetChan
}
func (cli *DockerCli) trustedPush(repoInfo *registry.RepositoryInfo, tag string, authConfig cliconfig.AuthConfig) error {
streamOut, targetChan := targetStream(cli.out)
v := url.Values{}
v.Set("tag", tag)
_, _, err := cli.clientRequestAttemptLogin("POST", "/images/"+repoInfo.LocalName+"/push?"+v.Encode(), nil, streamOut, repoInfo.Index, "push")
// Close stream channel to finish target parsing
if err := streamOut.Close(); err != nil {
return err
}
// Check error from request
if err != nil {
return err
}
// Get target results
targets := <-targetChan
if tag == "" {
fmt.Fprintf(cli.out, "No tag specified, skipping trust metadata push\n")
return nil
}
if len(targets) == 0 {
fmt.Fprintf(cli.out, "No targets found, skipping trust metadata push\n")
return nil
}
fmt.Fprintf(cli.out, "Signing and pushing trust metadata\n")
repo, err := cli.getNotaryRepository(repoInfo, authConfig)
if err != nil {
fmt.Fprintf(cli.out, "Error establishing connection to notary repository: %s\n", err)
return err
}
for _, target := range targets {
h, err := hex.DecodeString(target.digest.Hex())
if err != nil {
return err
}
t := &client.Target{
Name: target.reference.String(),
Hashes: data.Hashes{
string(target.digest.Algorithm()): h,
},
Length: int64(target.size),
}
if err := repo.AddTarget(t); err != nil {
return err
}
}
err = repo.Publish()
if _, ok := err.(*client.ErrRepoNotInitialized); !ok {
return notaryError(err)
}
ks := repo.KeyStoreManager
keys := ks.RootKeyStore().ListKeys()
rootKey := selectKey(keys)
if rootKey == "" {
rootKey, err = ks.GenRootKey("ecdsa")
if err != nil {
return err
}
}
cryptoService, err := ks.GetRootCryptoService(rootKey)
if err != nil {
return err
}
if err := repo.Initialize(cryptoService); err != nil {
return notaryError(err)
}
fmt.Fprintf(cli.out, "Finished initializing %q\n", repoInfo.CanonicalName)
return notaryError(repo.Publish())
}

View File

@@ -3,7 +3,6 @@ package client
import (
"fmt"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
@@ -11,9 +10,8 @@ import (
//
// Usage: docker unpause CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdUnpause(args ...string) error {
cmd := Cli.Subcmd("unpause", []string{"CONTAINER [CONTAINER...]"}, "Unpause all processes within a container", true)
cmd := cli.Subcmd("unpause", "CONTAINER [CONTAINER...]", "Unpause all processes within a container", true)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var errNames []string

View File

@@ -33,13 +33,7 @@ var (
errConnectionRefused = errors.New("Cannot connect to the Docker daemon. Is 'docker -d' running on this host?")
)
type serverResponse struct {
body io.ReadCloser
header http.Header
statusCode int
}
// HTTPClient creates a new HTTP client with the cli's client transport instance.
// HTTPClient creates a new HTP client with the cli's client transport instance.
func (cli *DockerCli) HTTPClient() *http.Client {
return &http.Client{Transport: cli.transport}
}
@@ -54,29 +48,23 @@ func (cli *DockerCli) encodeData(data interface{}) (*bytes.Buffer, error) {
return params, nil
}
func (cli *DockerCli) clientRequest(method, path string, in io.Reader, headers map[string][]string) (*serverResponse, error) {
serverResp := &serverResponse{
body: nil,
statusCode: -1,
}
func (cli *DockerCli) clientRequest(method, path string, in io.Reader, headers map[string][]string) (io.ReadCloser, string, int, error) {
expectedPayload := (method == "POST" || method == "PUT")
if expectedPayload && in == nil {
in = bytes.NewReader([]byte{})
}
req, err := http.NewRequest(method, fmt.Sprintf("%s/v%s%s", cli.basePath, api.Version, path), in)
req, err := http.NewRequest(method, fmt.Sprintf("/v%s%s", api.APIVERSION, path), in)
if err != nil {
return serverResp, err
return nil, "", -1, err
}
// Add CLI Config's HTTP Headers BEFORE we set the Docker headers
// so that the user can't override OUR headers
for k, v := range cli.configFile.HTTPHeaders {
for k, v := range cli.configFile.HttpHeaders {
req.Header.Set(k, v)
}
req.Header.Set("User-Agent", "Docker-Client/"+dockerversion.VERSION+" ("+runtime.GOOS+")")
req.Header.Set("User-Agent", "Docker-Client/"+dockerversion.VERSION)
req.URL.Host = cli.addr
req.URL.Scheme = cli.scheme
@@ -91,38 +79,33 @@ func (cli *DockerCli) clientRequest(method, path string, in io.Reader, headers m
}
resp, err := cli.HTTPClient().Do(req)
statusCode := -1
if resp != nil {
serverResp.statusCode = resp.StatusCode
statusCode = resp.StatusCode
}
if err != nil {
if strings.Contains(err.Error(), "connection refused") {
return serverResp, errConnectionRefused
return nil, "", statusCode, errConnectionRefused
}
if cli.tlsConfig == nil {
return serverResp, fmt.Errorf("%v.\n* Are you trying to connect to a TLS-enabled daemon without TLS?\n* Is your docker daemon up and running?", err)
return nil, "", statusCode, fmt.Errorf("%v. Are you trying to connect to a TLS-enabled daemon without TLS?", err)
}
if cli.tlsConfig != nil && strings.Contains(err.Error(), "remote error: bad certificate") {
return serverResp, fmt.Errorf("The server probably has client authentication (--tlsverify) enabled. Please check your TLS client certification settings: %v", err)
}
return serverResp, fmt.Errorf("An error occurred trying to connect: %v", err)
return nil, "", statusCode, fmt.Errorf("An error occurred trying to connect: %v", err)
}
if serverResp.statusCode < 200 || serverResp.statusCode >= 400 {
if statusCode < 200 || statusCode >= 400 {
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return serverResp, err
return nil, "", statusCode, err
}
if len(body) == 0 {
return serverResp, fmt.Errorf("Error: request returned %s for API route and version %s, check if the server supports the requested API version", http.StatusText(serverResp.statusCode), req.URL)
return nil, "", statusCode, fmt.Errorf("Error: request returned %s for API route and version %s, check if the server supports the requested API version", http.StatusText(statusCode), req.URL)
}
return serverResp, fmt.Errorf("Error response from daemon: %s", bytes.TrimSpace(body))
return nil, "", statusCode, fmt.Errorf("Error response from daemon: %s", bytes.TrimSpace(body))
}
serverResp.body = resp.Body
serverResp.header = resp.Header
return serverResp, nil
return resp.Body, resp.Header.Get("Content-Type"), statusCode, nil
}
func (cli *DockerCli) clientRequestAttemptLogin(method, path string, in io.Reader, out io.Writer, index *registry.IndexInfo, cmdName string) (io.ReadCloser, int, error) {
@@ -136,25 +119,24 @@ func (cli *DockerCli) clientRequestAttemptLogin(method, path string, in io.Reade
}
// begin the request
serverResp, err := cli.clientRequest(method, path, in, map[string][]string{
body, contentType, statusCode, err := cli.clientRequest(method, path, in, map[string][]string{
"X-Registry-Auth": registryAuthHeader,
})
if err == nil && out != nil {
// If we are streaming output, complete the stream since
// errors may not appear until later.
err = cli.streamBody(serverResp.body, serverResp.header.Get("Content-Type"), true, out, nil)
err = cli.streamBody(body, contentType, true, out, nil)
}
if err != nil {
// Since errors in a stream appear after status 200 has been written,
// we may need to change the status code.
if strings.Contains(err.Error(), "Authentication is required") ||
strings.Contains(err.Error(), "Status 401") ||
strings.Contains(err.Error(), "401 Unauthorized") ||
strings.Contains(err.Error(), "status code 401") {
serverResp.statusCode = http.StatusUnauthorized
statusCode = http.StatusUnauthorized
}
}
return serverResp.body, serverResp.statusCode, err
return body, statusCode, err
}
// Resolve the Auth config relevant for this server
@@ -171,20 +153,10 @@ func (cli *DockerCli) clientRequestAttemptLogin(method, path string, in io.Reade
return body, statusCode, err
}
func (cli *DockerCli) callWrapper(method, path string, data interface{}, headers map[string][]string) (io.ReadCloser, http.Header, int, error) {
sr, err := cli.call(method, path, data, headers)
return sr.body, sr.header, sr.statusCode, err
}
func (cli *DockerCli) call(method, path string, data interface{}, headers map[string][]string) (*serverResponse, error) {
func (cli *DockerCli) call(method, path string, data interface{}, headers map[string][]string) (io.ReadCloser, int, error) {
params, err := cli.encodeData(data)
if err != nil {
sr := &serverResponse{
body: nil,
header: nil,
statusCode: -1,
}
return sr, nil
return nil, -1, err
}
if data != nil {
@@ -194,8 +166,8 @@ func (cli *DockerCli) call(method, path string, data interface{}, headers map[st
headers["Content-Type"] = []string{"application/json"}
}
serverResp, err := cli.clientRequest(method, path, params, headers)
return serverResp, err
body, _, statusCode, err := cli.clientRequest(method, path, params, headers)
return body, statusCode, err
}
type streamOpts struct {
@@ -206,12 +178,12 @@ type streamOpts struct {
headers map[string][]string
}
func (cli *DockerCli) stream(method, path string, opts *streamOpts) (*serverResponse, error) {
serverResp, err := cli.clientRequest(method, path, opts.in, opts.headers)
func (cli *DockerCli) stream(method, path string, opts *streamOpts) error {
body, contentType, _, err := cli.clientRequest(method, path, opts.in, opts.headers)
if err != nil {
return serverResp, err
return err
}
return serverResp, cli.streamBody(serverResp.body, serverResp.header.Get("Content-Type"), opts.rawTerminal, opts.out, opts.err)
return cli.streamBody(body, contentType, opts.rawTerminal, opts.out, opts.err)
}
func (cli *DockerCli) streamBody(body io.ReadCloser, contentType string, rawTerminal bool, stdout, stderr io.Writer) error {
@@ -256,15 +228,13 @@ func (cli *DockerCli) resizeTty(id string, isExec bool) {
}
func waitForExit(cli *DockerCli, containerID string) (int, error) {
serverResp, err := cli.call("POST", "/containers/"+containerID+"/wait", nil, nil)
stream, _, err := cli.call("POST", "/containers/"+containerID+"/wait", nil, nil)
if err != nil {
return -1, err
}
defer serverResp.body.Close()
var res types.ContainerWaitResponse
if err := json.NewDecoder(serverResp.body).Decode(&res); err != nil {
if err := json.NewDecoder(stream).Decode(&res); err != nil {
return -1, err
}
@@ -274,7 +244,7 @@ func waitForExit(cli *DockerCli, containerID string) (int, error) {
// getExitCode performs an inspect on the container. It returns
// the running state and the exit code.
func getExitCode(cli *DockerCli, containerID string) (bool, int, error) {
serverResp, err := cli.call("GET", "/containers/"+containerID+"/json", nil, nil)
stream, _, err := cli.call("GET", "/containers/"+containerID+"/json", nil, nil)
if err != nil {
// If we can't connect, then the daemon probably died.
if err != errConnectionRefused {
@@ -283,10 +253,8 @@ func getExitCode(cli *DockerCli, containerID string) (bool, int, error) {
return false, -1, nil
}
defer serverResp.body.Close()
var c types.ContainerJSON
if err := json.NewDecoder(serverResp.body).Decode(&c); err != nil {
if err := json.NewDecoder(stream).Decode(&c); err != nil {
return false, -1, err
}
@@ -296,7 +264,7 @@ func getExitCode(cli *DockerCli, containerID string) (bool, int, error) {
// getExecExitCode performs an inspect on the exec command. It returns
// the running state and the exit code.
func getExecExitCode(cli *DockerCli, execID string) (bool, int, error) {
serverResp, err := cli.call("GET", "/exec/"+execID+"/json", nil, nil)
stream, _, err := cli.call("GET", "/exec/"+execID+"/json", nil, nil)
if err != nil {
// If we can't connect, then the daemon probably died.
if err != errConnectionRefused {
@@ -305,8 +273,6 @@ func getExecExitCode(cli *DockerCli, execID string) (bool, int, error) {
return false, -1, nil
}
defer serverResp.body.Close()
// TODO: Should we reconsider having a type in api/types?
// this is a response to exec/id/json, not container
var c struct {
@@ -314,7 +280,7 @@ func getExecExitCode(cli *DockerCli, execID string) (bool, int, error) {
ExitCode int
}
if err := json.NewDecoder(serverResp.body).Decode(&c); err != nil {
if err := json.NewDecoder(stream).Decode(&c); err != nil {
return false, -1, err
}
@@ -364,16 +330,16 @@ func (cli *DockerCli) getTtySize() (int, int) {
return int(ws.Height), int(ws.Width)
}
func readBody(serverResp *serverResponse, err error) ([]byte, int, error) {
if serverResp.body != nil {
defer serverResp.body.Close()
func readBody(stream io.ReadCloser, statusCode int, err error) ([]byte, int, error) {
if stream != nil {
defer stream.Close()
}
if err != nil {
return nil, serverResp.statusCode, err
return nil, statusCode, err
}
body, err := ioutil.ReadAll(serverResp.body)
body, err := ioutil.ReadAll(stream)
if err != nil {
return nil, -1, err
}
return body, serverResp.statusCode, nil
return body, statusCode, nil
}

View File

@@ -2,95 +2,60 @@ package client
import (
"encoding/json"
"fmt"
"runtime"
"text/template"
"github.com/docker/docker/api"
"github.com/docker/docker/api/types"
"github.com/docker/docker/autogen/dockerversion"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/utils"
)
var VersionTemplate = `Client:
Version: {{.Client.Version}}
API version: {{.Client.ApiVersion}}
Go version: {{.Client.GoVersion}}
Git commit: {{.Client.GitCommit}}
Built: {{.Client.BuildTime}}
OS/Arch: {{.Client.Os}}/{{.Client.Arch}}{{if .Client.Experimental}}
Experimental: {{.Client.Experimental}}{{end}}{{if .ServerOK}}
Server:
Version: {{.Server.Version}}
API version: {{.Server.ApiVersion}}
Go version: {{.Server.GoVersion}}
Git commit: {{.Server.GitCommit}}
Built: {{.Server.BuildTime}}
OS/Arch: {{.Server.Os}}/{{.Server.Arch}}{{if .Server.Experimental}}
Experimental: {{.Server.Experimental}}{{end}}{{end}}`
type VersionData struct {
Client types.Version
ServerOK bool
Server types.Version
}
// CmdVersion shows Docker version information.
//
// Available version information is shown for: client Docker version, client API version, client Go version, client Git commit, client OS/Arch, server Docker version, server API version, server Go version, server Git commit, and server OS/Arch.
//
// Usage: docker version
func (cli *DockerCli) CmdVersion(args ...string) (err error) {
cmd := Cli.Subcmd("version", nil, "Show the Docker version information.", true)
tmplStr := cmd.String([]string{"f", "#format", "-format"}, "", "Format the output using the given go template")
func (cli *DockerCli) CmdVersion(args ...string) error {
cmd := cli.Subcmd("version", "", "Show the Docker version information.", true)
cmd.Require(flag.Exact, 0)
cmd.ParseFlags(args, true)
if *tmplStr == "" {
*tmplStr = VersionTemplate
if dockerversion.VERSION != "" {
fmt.Fprintf(cli.out, "Client version: %s\n", dockerversion.VERSION)
}
fmt.Fprintf(cli.out, "Client API version: %s\n", api.APIVERSION)
fmt.Fprintf(cli.out, "Go version (client): %s\n", runtime.Version())
if dockerversion.GITCOMMIT != "" {
fmt.Fprintf(cli.out, "Git commit (client): %s\n", dockerversion.GITCOMMIT)
}
fmt.Fprintf(cli.out, "OS/Arch (client): %s/%s\n", runtime.GOOS, runtime.GOARCH)
if utils.ExperimentalBuild() {
fmt.Fprintf(cli.out, "Experimental (client): true\n")
}
var tmpl *template.Template
if tmpl, err = template.New("").Funcs(funcMap).Parse(*tmplStr); err != nil {
return Cli.StatusError{StatusCode: 64,
Status: "Template parsing error: " + err.Error()}
}
vd := VersionData{
Client: types.Version{
Version: dockerversion.VERSION,
ApiVersion: api.Version,
GoVersion: runtime.Version(),
GitCommit: dockerversion.GITCOMMIT,
BuildTime: dockerversion.BUILDTIME,
Os: runtime.GOOS,
Arch: runtime.GOARCH,
Experimental: utils.ExperimentalBuild(),
},
}
defer func() {
if err2 := tmpl.Execute(cli.out, vd); err2 != nil && err == nil {
err = err2
}
cli.out.Write([]byte{'\n'})
}()
serverResp, err := cli.call("GET", "/version", nil, nil)
stream, _, err := cli.call("GET", "/version", nil, nil)
if err != nil {
return err
}
defer serverResp.body.Close()
if err = json.NewDecoder(serverResp.body).Decode(&vd.Server); err != nil {
return Cli.StatusError{StatusCode: 1,
Status: "Error reading remote version: " + err.Error()}
var v types.Version
if err := json.NewDecoder(stream).Decode(&v); err != nil {
fmt.Fprintf(cli.err, "Error reading remote version: %s\n", err)
return err
}
vd.ServerOK = true
return
fmt.Fprintf(cli.out, "Server version: %s\n", v.Version)
if v.ApiVersion != "" {
fmt.Fprintf(cli.out, "Server API version: %s\n", v.ApiVersion)
}
fmt.Fprintf(cli.out, "Go version (server): %s\n", v.GoVersion)
fmt.Fprintf(cli.out, "Git commit (server): %s\n", v.GitCommit)
fmt.Fprintf(cli.out, "OS/Arch (server): %s/%s\n", v.Os, v.Arch)
if v.Experimental {
fmt.Fprintf(cli.out, "Experimental (server): true\n")
}
return nil
}

View File

@@ -3,7 +3,6 @@ package client
import (
"fmt"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
@@ -13,7 +12,7 @@ import (
//
// Usage: docker wait CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdWait(args ...string) error {
cmd := Cli.Subcmd("wait", []string{"CONTAINER [CONTAINER...]"}, "Block until a container stops, then print its exit code.", true)
cmd := cli.Subcmd("wait", "CONTAINER [CONTAINER...]", "Block until a container stops, then print its exit code.", true)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)

View File

@@ -16,14 +16,8 @@ import (
// Common constants for daemon and client.
const (
// Current REST API version
Version version.Version = "1.20"
// Minimum REST API version supported
MinVersion version.Version = "1.12"
// Default filename with Docker commands, read by docker build
DefaultDockerfileName string = "Dockerfile"
APIVERSION version.Version = "1.19" // Current REST API version
DefaultDockerfileName string = "Dockerfile" // Default filename with Docker commands, read by docker build
)
type ByPrivatePort []types.Port

View File

@@ -1,7 +1,6 @@
package server
import (
"fmt"
"net/http"
"strconv"
"strings"
@@ -28,29 +27,3 @@ func int64ValueOrZero(r *http.Request, k string) int64 {
}
return val
}
type archiveOptions struct {
name string
path string
}
func archiveFormValues(r *http.Request, vars map[string]string) (archiveOptions, error) {
if vars == nil {
return archiveOptions{}, fmt.Errorf("Missing parameter")
}
if err := parseForm(r); err != nil {
return archiveOptions{}, err
}
name := vars["name"]
path := r.Form.Get("path")
switch {
case name == "":
return archiveOptions{}, fmt.Errorf("bad parameter: 'name' cannot be empty")
case path == "":
return archiveOptions{}, fmt.Errorf("bad parameter: 'path' cannot be empty")
}
return archiveOptions{name, path}, nil
}

View File

@@ -1,7 +1,6 @@
package server
import (
"crypto/tls"
"encoding/base64"
"encoding/json"
"fmt"
@@ -14,8 +13,8 @@ import (
"strings"
"time"
"code.google.com/p/go.net/websocket"
"github.com/gorilla/mux"
"golang.org/x/net/websocket"
"github.com/Sirupsen/logrus"
"github.com/docker/docker/api"
@@ -34,7 +33,6 @@ import (
"github.com/docker/docker/pkg/sockets"
"github.com/docker/docker/pkg/stdcopy"
"github.com/docker/docker/pkg/streamformatter"
"github.com/docker/docker/pkg/ulimit"
"github.com/docker/docker/pkg/version"
"github.com/docker/docker/runconfig"
"github.com/docker/docker/utils"
@@ -46,7 +44,11 @@ type ServerConfig struct {
CorsHeaders string
Version string
SocketGroup string
TLSConfig *tls.Config
Tls bool
TlsVerify bool
TlsCa string
TlsCert string
TlsKey string
}
type Server struct {
@@ -245,12 +247,11 @@ func (s *Server) postAuth(version version.Version, w http.ResponseWriter, r *htt
func (s *Server) getVersion(version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v := &types.Version{
Version: dockerversion.VERSION,
ApiVersion: api.Version,
ApiVersion: api.APIVERSION,
GitCommit: dockerversion.GITCOMMIT,
GoVersion: runtime.Version(),
Os: runtime.GOOS,
Arch: runtime.GOARCH,
BuildTime: dockerversion.BUILDTIME,
}
if version.GreaterThanOrEqualTo("1.19") {
@@ -298,13 +299,7 @@ func (s *Server) postContainersKill(version version.Version, w http.ResponseWrit
}
if err := s.daemon.ContainerKill(name, sig); err != nil {
_, isStopped := err.(daemon.ErrContainerNotRunning)
// Return error that's not caused because the container is stopped.
// Return error if the container is not running and the api is >= 1.20
// to keep backwards compatibility.
if version.GreaterThanOrEqualTo("1.20") || !isStopped {
return fmt.Errorf("Cannot kill container %s: %v", name, err)
}
return err
}
w.WriteHeader(http.StatusNoContent)
@@ -417,9 +412,6 @@ func (s *Server) getEvents(version version.Version, w http.ResponseWriter, r *ht
}
isFiltered := func(field string, filter []string) bool {
if len(field) == 0 {
return false
}
if len(filter) == 0 {
return false
}
@@ -440,9 +432,7 @@ func (s *Server) getEvents(version version.Version, w http.ResponseWriter, r *ht
d := s.daemon
es := d.EventsService
w.Header().Set("Content-Type", "application/json")
outStream := ioutils.NewWriteFlusher(w)
outStream.Write(nil) // make sure response is sent immediately
enc := json.NewEncoder(outStream)
enc := json.NewEncoder(ioutils.NewWriteFlusher(w))
getContainerId := func(cn string) string {
c, err := d.Get(cn)
@@ -458,8 +448,8 @@ func (s *Server) getEvents(version version.Version, w http.ResponseWriter, r *ht
ef["container"][i] = getContainerId(cn)
}
if isFiltered(ev.Status, ef["event"]) || (isFiltered(ev.ID, ef["image"]) &&
isFiltered(ev.From, ef["image"])) || isFiltered(ev.ID, ef["container"]) {
if isFiltered(ev.Status, ef["event"]) || isFiltered(ev.From, ef["image"]) ||
isFiltered(ev.ID, ef["container"]) {
return nil
}
@@ -594,18 +584,7 @@ func (s *Server) getContainersStats(version version.Version, w http.ResponseWrit
out = ioutils.NewWriteFlusher(w)
}
var closeNotifier <-chan bool
if notifier, ok := w.(http.CloseNotifier); ok {
closeNotifier = notifier.CloseNotify()
}
config := &daemon.ContainerStatsConfig{
Stream: stream,
OutStream: out,
Stop: closeNotifier,
}
return s.daemon.ContainerStats(vars["name"], config)
return s.daemon.ContainerStats(vars["name"], stream, out)
}
func (s *Server) getContainersLogs(version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -631,22 +610,6 @@ func (s *Server) getContainersLogs(version version.Version, w http.ResponseWrite
since = time.Unix(s, 0)
}
var closeNotifier <-chan bool
if notifier, ok := w.(http.CloseNotifier); ok {
closeNotifier = notifier.CloseNotify()
}
c, err := s.daemon.Get(vars["name"])
if err != nil {
return err
}
outStream := ioutils.NewWriteFlusher(w)
// write an empty chunk of data (this is to ensure that the
// HTTP Response is sent immediately, even if the container has
// not yet produced any data)
outStream.Write(nil)
logsConfig := &daemon.ContainerLogsConfig{
Follow: boolValue(r, "follow"),
Timestamps: boolValue(r, "timestamps"),
@@ -654,11 +617,10 @@ func (s *Server) getContainersLogs(version version.Version, w http.ResponseWrite
Tail: r.Form.Get("tail"),
UseStdout: stdout,
UseStderr: stderr,
OutStream: outStream,
Stop: closeNotifier,
OutStream: ioutils.NewWriteFlusher(w),
}
if err := s.daemon.ContainerLogs(c, logsConfig); err != nil {
if err := s.daemon.ContainerLogs(vars["name"], logsConfig); err != nil {
fmt.Fprintf(w, "Error running logs job: %s\n", err)
}
@@ -694,7 +656,7 @@ func (s *Server) postCommit(version version.Version, w http.ResponseWriter, r *h
return err
}
cname := r.Form.Get("container")
cont := r.Form.Get("container")
pause := boolValue(r, "pause")
if r.FormValue("pause") == "" && version.GreaterThanOrEqualTo("1.13") {
@@ -706,7 +668,7 @@ func (s *Server) postCommit(version version.Version, w http.ResponseWriter, r *h
return err
}
commitCfg := &builder.CommitConfig{
containerCommitConfig := &daemon.ContainerCommitConfig{
Pause: pause,
Repo: r.Form.Get("repo"),
Tag: r.Form.Get("tag"),
@@ -716,7 +678,7 @@ func (s *Server) postCommit(version version.Version, w http.ResponseWriter, r *h
Config: c,
}
imgID, err := builder.Commit(cname, s.daemon, commitCfg)
imgID, err := builder.Commit(s.daemon, cont, containerCommitConfig)
if err != nil {
return err
}
@@ -836,7 +798,7 @@ func (s *Server) getImagesSearch(version version.Version, w http.ResponseWriter,
if err != nil {
return err
}
return writeJSON(w, http.StatusOK, query.Results)
return json.NewEncoder(w).Encode(query.Results)
}
func (s *Server) postImagesPush(version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -1139,11 +1101,6 @@ func (s *Server) postContainersAttach(version version.Version, w http.ResponseWr
return fmt.Errorf("Missing parameter")
}
cont, err := s.daemon.Get(vars["name"])
if err != nil {
return err
}
inStream, outStream, err := hijackServer(w)
if err != nil {
return err
@@ -1164,9 +1121,10 @@ func (s *Server) postContainersAttach(version version.Version, w http.ResponseWr
UseStderr: boolValue(r, "stderr"),
Logs: boolValue(r, "logs"),
Stream: boolValue(r, "stream"),
Multiplex: version.GreaterThanOrEqualTo("1.6"),
}
if err := s.daemon.ContainerAttachWithLogs(cont, attachWithLogsConfig); err != nil {
if err := s.daemon.ContainerAttachWithLogs(vars["name"], attachWithLogsConfig); err != nil {
fmt.Fprintf(outStream, "Error attaching: %s\n", err)
}
@@ -1181,11 +1139,6 @@ func (s *Server) wsContainersAttach(version version.Version, w http.ResponseWrit
return fmt.Errorf("Missing parameter")
}
cont, err := s.daemon.Get(vars["name"])
if err != nil {
return err
}
h := websocket.Handler(func(ws *websocket.Conn) {
defer ws.Close()
@@ -1197,7 +1150,7 @@ func (s *Server) wsContainersAttach(version version.Version, w http.ResponseWrit
Stream: boolValue(r, "stream"),
}
if err := s.daemon.ContainerWsAttachWithLogs(cont, wsAttachWithLogsConfig); err != nil {
if err := s.daemon.ContainerWsAttachWithLogs(vars["name"], wsAttachWithLogsConfig); err != nil {
logrus.Errorf("Error attaching websocket: %s", err)
}
})
@@ -1211,8 +1164,8 @@ func (s *Server) getContainersByName(version version.Version, w http.ResponseWri
return fmt.Errorf("Missing parameter")
}
if version.LessThan("1.20") {
containerJSONRaw, err := s.daemon.ContainerInspectPre120(vars["name"])
if version.LessThan("1.19") {
containerJSONRaw, err := s.daemon.ContainerInspectRaw(vars["name"])
if err != nil {
return err
}
@@ -1254,17 +1207,18 @@ func (s *Server) getImagesByName(version version.Version, w http.ResponseWriter,
func (s *Server) postBuild(version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var (
authConfigs = map[string]cliconfig.AuthConfig{}
authConfigsEncoded = r.Header.Get("X-Registry-Config")
buildConfig = builder.NewBuildConfig()
authConfig = &cliconfig.AuthConfig{}
configFileEncoded = r.Header.Get("X-Registry-Config")
configFile = &cliconfig.ConfigFile{}
buildConfig = builder.NewBuildConfig()
)
if authConfigsEncoded != "" {
authConfigsJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authConfigsEncoded))
if err := json.NewDecoder(authConfigsJSON).Decode(&authConfigs); err != nil {
if configFileEncoded != "" {
configFileJson := base64.NewDecoder(base64.URLEncoding, strings.NewReader(configFileEncoded))
if err := json.NewDecoder(configFileJson).Decode(configFile); err != nil {
// for a pull it is not an error if no auth was given
// to increase compatibility with the existing api it is defaulting
// to be empty.
// to increase compatibility with the existing api it is defaulting to be empty
configFile = &cliconfig.ConfigFile{}
}
}
@@ -1291,25 +1245,17 @@ func (s *Server) postBuild(version version.Version, w http.ResponseWriter, r *ht
buildConfig.SuppressOutput = boolValue(r, "q")
buildConfig.NoCache = boolValue(r, "nocache")
buildConfig.ForceRemove = boolValue(r, "forcerm")
buildConfig.AuthConfigs = authConfigs
buildConfig.AuthConfig = authConfig
buildConfig.ConfigFile = configFile
buildConfig.MemorySwap = int64ValueOrZero(r, "memswap")
buildConfig.Memory = int64ValueOrZero(r, "memory")
buildConfig.CPUShares = int64ValueOrZero(r, "cpushares")
buildConfig.CPUPeriod = int64ValueOrZero(r, "cpuperiod")
buildConfig.CPUQuota = int64ValueOrZero(r, "cpuquota")
buildConfig.CPUSetCpus = r.FormValue("cpusetcpus")
buildConfig.CPUSetMems = r.FormValue("cpusetmems")
buildConfig.CpuShares = int64ValueOrZero(r, "cpushares")
buildConfig.CpuPeriod = int64ValueOrZero(r, "cpuperiod")
buildConfig.CpuQuota = int64ValueOrZero(r, "cpuquota")
buildConfig.CpuSetCpus = r.FormValue("cpusetcpus")
buildConfig.CpuSetMems = r.FormValue("cpusetmems")
buildConfig.CgroupParent = r.FormValue("cgroupparent")
var buildUlimits = []*ulimit.Ulimit{}
ulimitsJson := r.FormValue("ulimits")
if ulimitsJson != "" {
if err := json.NewDecoder(strings.NewReader(ulimitsJson)).Decode(&buildUlimits); err != nil {
return err
}
buildConfig.Ulimits = buildUlimits
}
// Job cancellation. Note: not all job types support this.
if closeNotifier, ok := w.(http.CloseNotifier); ok {
finished := make(chan struct{})
@@ -1336,7 +1282,6 @@ func (s *Server) postBuild(version version.Version, w http.ResponseWriter, r *ht
return nil
}
// postContainersCopy is deprecated in favor of getContainersArchivePath.
func (s *Server) postContainersCopy(version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
@@ -1376,74 +1321,10 @@ func (s *Server) postContainersCopy(version version.Version, w http.ResponseWrit
return nil
}
// Encode the stat to JSON, base64 encode, and place in a header.
func setContainerPathStatHeader(stat *types.ContainerPathStat, header http.Header) error {
statJSON, err := json.Marshal(stat)
if err != nil {
return err
}
header.Set(
"X-Docker-Container-Path-Stat",
base64.StdEncoding.EncodeToString(statJSON),
)
return nil
}
func (s *Server) headContainersArchive(version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v, err := archiveFormValues(r, vars)
if err != nil {
return err
}
stat, err := s.daemon.ContainerStatPath(v.name, v.path)
if err != nil {
return err
}
return setContainerPathStatHeader(stat, w.Header())
}
func (s *Server) getContainersArchive(version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v, err := archiveFormValues(r, vars)
if err != nil {
return err
}
tarArchive, stat, err := s.daemon.ContainerArchivePath(v.name, v.path)
if err != nil {
return err
}
defer tarArchive.Close()
if err := setContainerPathStatHeader(stat, w.Header()); err != nil {
return err
}
w.Header().Set("Content-Type", "application/x-tar")
_, err = io.Copy(w, tarArchive)
return err
}
func (s *Server) putContainersArchive(version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v, err := archiveFormValues(r, vars)
if err != nil {
return err
}
noOverwriteDirNonDir := boolValue(r, "noOverwriteDirNonDir")
return s.daemon.ContainerExtractToDir(v.name, v.path, noOverwriteDirNonDir, r.Body)
}
func (s *Server) postContainerExecCreate(version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
if err := checkForJson(r); err != nil {
return err
}
name := vars["name"]
execConfig := &runconfig.ExecConfig{}
@@ -1558,15 +1439,22 @@ func (s *Server) ping(version version.Version, w http.ResponseWriter, r *http.Re
}
func (s *Server) initTcpSocket(addr string) (l net.Listener, err error) {
if s.cfg.TLSConfig == nil || s.cfg.TLSConfig.ClientAuth != tls.RequireAndVerifyClientCert {
if !s.cfg.TlsVerify {
logrus.Warn("/!\\ DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING /!\\")
}
if l, err = sockets.NewTcpSocket(addr, s.cfg.TLSConfig, s.start); err != nil {
var c *sockets.TlsConfig
if s.cfg.Tls || s.cfg.TlsVerify {
c = sockets.NewTlsConfig(s.cfg.TlsCert, s.cfg.TlsKey, s.cfg.TlsCa, s.cfg.TlsVerify)
}
if l, err = sockets.NewTcpSocket(addr, c, s.start); err != nil {
return nil, err
}
if err := allocateDaemonPort(addr); err != nil {
return nil, err
}
return
}
@@ -1581,35 +1469,22 @@ func makeHttpHandler(logging bool, localMethod string, localRoute string, handle
if strings.Contains(r.Header.Get("User-Agent"), "Docker-Client/") {
userAgent := strings.Split(r.Header.Get("User-Agent"), "/")
// v1.20 onwards includes the GOOS of the client after the version
// such as Docker/1.7.0 (linux)
if len(userAgent) == 2 && strings.Contains(userAgent[1], " ") {
userAgent[1] = strings.Split(userAgent[1], " ")[0]
}
if len(userAgent) == 2 && !dockerVersion.Equal(version.Version(userAgent[1])) {
logrus.Debugf("Warning: client and server don't have the same version (client: %s, server: %s)", userAgent[1], dockerVersion)
}
}
version := version.Version(mux.Vars(r)["version"])
if version == "" {
version = api.Version
version = api.APIVERSION
}
if corsHeaders != "" {
writeCorsHeaders(w, r, corsHeaders)
}
if version.GreaterThan(api.Version) {
http.Error(w, fmt.Errorf("client is newer than server (client API version: %s, server API version: %s)", version, api.Version).Error(), http.StatusBadRequest)
if version.GreaterThan(api.APIVERSION) {
http.Error(w, fmt.Errorf("client and server don't have same version (client API version: %s, server API version: %s)", version, api.APIVERSION).Error(), http.StatusBadRequest)
return
}
if version.LessThan(api.MinVersion) {
http.Error(w, fmt.Errorf("client is too old, minimum supported API version is %s, please upgrade your client to a newer version", api.MinVersion).Error(), http.StatusBadRequest)
return
}
w.Header().Set("Server", "Docker/"+dockerversion.VERSION+" ("+runtime.GOOS+")")
if err := handlerFunc(version, w, r, mux.Vars(r)); err != nil {
logrus.Errorf("Handler for %s %s returned error: %s", localMethod, localRoute, err)
@@ -1625,9 +1500,6 @@ func createRouter(s *Server) *mux.Router {
ProfilerSetup(r, "/debug/")
}
m := map[string]map[string]HttpApiFunc{
"HEAD": {
"/containers/{name:.*}/archive": s.headContainersArchive,
},
"GET": {
"/_ping": s.ping,
"/events": s.getEvents,
@@ -1649,7 +1521,6 @@ func createRouter(s *Server) *mux.Router {
"/containers/{name:.*}/stats": s.getContainersStats,
"/containers/{name:.*}/attach/ws": s.wsContainersAttach,
"/exec/{id:.*}/json": s.getExecByID,
"/containers/{name:.*}/archive": s.getContainersArchive,
},
"POST": {
"/auth": s.postAuth,
@@ -1675,9 +1546,6 @@ func createRouter(s *Server) *mux.Router {
"/exec/{name:.*}/resize": s.postContainerExecResize,
"/containers/{name:.*}/rename": s.postContainerRename,
},
"PUT": {
"/containers/{name:.*}/archive": s.putContainersArchive,
},
"DELETE": {
"/containers/{name:.*}": s.deleteContainers,
"/images/{name:.*}": s.deleteImages,

View File

@@ -1,17 +0,0 @@
// +build experimental
package server
func (s *Server) registerSubRouter() {
httpHandler := s.daemon.NetworkApiRouter()
subrouter := s.router.PathPrefix("/v{version:[0-9.]+}/networks").Subrouter()
subrouter.Methods("GET", "POST", "PUT", "DELETE").HandlerFunc(httpHandler)
subrouter = s.router.PathPrefix("/networks").Subrouter()
subrouter.Methods("GET", "POST", "PUT", "DELETE").HandlerFunc(httpHandler)
subrouter = s.router.PathPrefix("/v{version:[0-9.]+}/services").Subrouter()
subrouter.Methods("GET", "POST", "PUT", "DELETE").HandlerFunc(httpHandler)
subrouter = s.router.PathPrefix("/services").Subrouter()
subrouter.Methods("GET", "POST", "PUT", "DELETE").HandlerFunc(httpHandler)
}

View File

@@ -70,7 +70,6 @@ func (s *Server) newServer(proto, addr string) ([]serverCloser, error) {
func (s *Server) AcceptConnections(d *daemon.Daemon) {
// Tell the init daemon we are accepting requests
s.daemon = d
s.registerSubRouter()
go systemd.SdNotify("READY=1")
// close the lock so the listeners start accepting connections
select {
@@ -109,7 +108,7 @@ func allocateDaemonPort(addr string) error {
func adjustCpuShares(version version.Version, hostConfig *runconfig.HostConfig) {
if version.LessThan("1.19") {
if hostConfig != nil && hostConfig.CpuShares > 0 {
if hostConfig.CpuShares > 0 {
// Handle unsupported CpuShares
if hostConfig.CpuShares < linuxMinCpuShares {
logrus.Warnf("Changing requested CpuShares of %d to minimum allowed of %d", hostConfig.CpuShares, linuxMinCpuShares)

View File

@@ -1,6 +0,0 @@
// +build !experimental
package server
func (s *Server) registerSubRouter() {
}

View File

@@ -8,44 +8,35 @@ import (
"net/http"
"github.com/docker/docker/daemon"
"github.com/docker/docker/pkg/version"
"github.com/docker/docker/runconfig"
)
// NewServer sets up the required Server and does protocol specific checking.
func (s *Server) newServer(proto, addr string) ([]serverCloser, error) {
func (s *Server) newServer(proto, addr string) (serverCloser, error) {
var (
ls []net.Listener
err error
l net.Listener
)
switch proto {
case "tcp":
l, err := s.initTcpSocket(addr)
l, err = s.initTcpSocket(addr)
if err != nil {
return nil, err
}
ls = append(ls, l)
default:
return nil, errors.New("Invalid protocol format. Windows only supports tcp.")
}
var res []serverCloser
for _, l := range ls {
res = append(res, &HttpServer{
&http.Server{
Addr: addr,
Handler: s.router,
},
l,
})
}
return res, nil
return &HttpServer{
&http.Server{
Addr: addr,
Handler: s.router,
},
l,
}, nil
}
func (s *Server) AcceptConnections(d *daemon.Daemon) {
s.daemon = d
s.registerSubRouter()
// close the lock so the listeners start accepting connections
select {
case <-s.start:

View File

@@ -48,7 +48,6 @@ type MemoryStats struct {
Limit uint64 `json:"limit"`
}
// TODO Windows: This can be factored out
type BlkioStatEntry struct {
Major uint64 `json:"major"`
Minor uint64 `json:"minor"`
@@ -56,7 +55,6 @@ type BlkioStatEntry struct {
Value uint64 `json:"value"`
}
// TODO Windows: This can be factored out
type BlkioStats struct {
// number of bytes transferred to and from the block device
IoServiceBytesRecursive []BlkioStatEntry `json:"io_service_bytes_recursive"`
@@ -69,7 +67,6 @@ type BlkioStats struct {
SectorsRecursive []BlkioStatEntry `json:"sectors_recursive"`
}
// TODO Windows: This will require refactoring
type Network struct {
RxBytes uint64 `json:"rx_bytes"`
RxPackets uint64 `json:"rx_packets"`

View File

@@ -1,7 +1,6 @@
package types
import (
"os"
"time"
"github.com/docker/docker/daemon/network"
@@ -76,17 +75,12 @@ type Image struct {
Labels map[string]string
}
type GraphDriverData struct {
Name string
Data map[string]string
}
// GET "/images/{name:.*}/json"
type ImageInspect struct {
Id string
Parent string
Comment string
Created string
Created time.Time
Container string
ContainerConfig *runconfig.Config
DockerVersion string
@@ -96,7 +90,6 @@ type ImageInspect struct {
Os string
Size int64
VirtualSize int64
GraphDriver GraphDriverData
}
// GET "/containers/json"
@@ -118,9 +111,6 @@ type Container struct {
SizeRootFs int `json:",omitempty"`
Labels map[string]string
Status string
HostConfig struct {
NetworkMode string `json:",omitempty"`
}
}
// POST "/containers/"+containerID+"/copy"
@@ -128,17 +118,6 @@ type CopyConfig struct {
Resource string
}
// ContainerPathStat is used to encode the header from
// GET /containers/{name:.*}/archive
// "name" is basename of the resource.
type ContainerPathStat struct {
Name string `json:"name"`
Size int64 `json:"size"`
Mode os.FileMode `json:"mode"`
Mtime time.Time `json:"mtime"`
LinkTarget string `json:"linkTarget"`
}
// GET "/containers/{name:.*}/top"
type ContainerProcessList struct {
Processes [][]string
@@ -154,7 +133,6 @@ type Version struct {
Arch string
KernelVersion string `json:",omitempty"`
Experimental bool `json:",omitempty"`
BuildTime string `json:",omitempty"`
}
// GET "/info"
@@ -169,8 +147,6 @@ type Info struct {
CpuCfsPeriod bool
CpuCfsQuota bool
IPv4Forwarding bool
BridgeNfIptables bool
BridgeNfIp6tables bool
Debug bool
NFd int
OomKillDisable bool
@@ -214,14 +190,14 @@ type ContainerState struct {
Pid int
ExitCode int
Error string
StartedAt string
FinishedAt string
StartedAt time.Time
FinishedAt time.Time
}
// GET "/containers/{name:.*}/json"
type ContainerJSONBase struct {
Id string
Created string
Created time.Time
Path string
Args []string
State *ContainerState
@@ -237,24 +213,22 @@ type ContainerJSONBase struct {
ExecDriver string
MountLabel string
ProcessLabel string
Volumes map[string]string
VolumesRW map[string]bool
AppArmorProfile string
ExecIDs []string
HostConfig *runconfig.HostConfig
GraphDriver GraphDriverData
}
type ContainerJSON struct {
*ContainerJSONBase
Mounts []MountPoint
Config *runconfig.Config
}
// backwards-compatibility struct along with ContainerConfig
type ContainerJSONPre120 struct {
type ContainerJSONRaw struct {
*ContainerJSONBase
Volumes map[string]string
VolumesRW map[string]bool
Config *ContainerConfig
Config *ContainerConfig
}
type ContainerConfig struct {
@@ -266,13 +240,3 @@ type ContainerConfig struct {
CpuShares int64
Cpuset string
}
// MountPoint represents a mount point configuration inside the container.
type MountPoint struct {
Name string `json:",omitempty"`
Source string
Destination string
Driver string `json:",omitempty"`
Mode string // this is internally named `Relabel`
RW bool
}

View File

@@ -5,7 +5,6 @@ import (
"strings"
)
// FlagType is the type of the build flag
type FlagType int
const (
@@ -13,32 +12,28 @@ const (
stringType
)
// BFlags contains all flags information for the builder
type BFlags struct {
type BuilderFlags struct {
Args []string // actual flags/args from cmd line
flags map[string]*Flag
used map[string]*Flag
Err error
}
// Flag contains all information for a flag
type Flag struct {
bf *BFlags
bf *BuilderFlags
name string
flagType FlagType
Value string
}
// NewBFlags returns the new BFlags struct
func NewBFlags() *BFlags {
return &BFlags{
func NewBuilderFlags() *BuilderFlags {
return &BuilderFlags{
flags: make(map[string]*Flag),
used: make(map[string]*Flag),
}
}
// AddBool adds a bool flag to BFlags
func (bf *BFlags) AddBool(name string, def bool) *Flag {
func (bf *BuilderFlags) AddBool(name string, def bool) *Flag {
flag := bf.addFlag(name, boolType)
if flag == nil {
return nil
@@ -51,8 +46,7 @@ func (bf *BFlags) AddBool(name string, def bool) *Flag {
return flag
}
// AddString adds a string flag to BFlags
func (bf *BFlags) AddString(name string, def string) *Flag {
func (bf *BuilderFlags) AddString(name string, def string) *Flag {
flag := bf.addFlag(name, stringType)
if flag == nil {
return nil
@@ -61,7 +55,7 @@ func (bf *BFlags) AddString(name string, def string) *Flag {
return flag
}
func (bf *BFlags) addFlag(name string, flagType FlagType) *Flag {
func (bf *BuilderFlags) addFlag(name string, flagType FlagType) *Flag {
if _, ok := bf.flags[name]; ok {
bf.Err = fmt.Errorf("Duplicate flag defined: %s", name)
return nil
@@ -77,7 +71,6 @@ func (bf *BFlags) addFlag(name string, flagType FlagType) *Flag {
return newFlag
}
// IsUsed checks if the flag is used
func (fl *Flag) IsUsed() bool {
if _, ok := fl.bf.used[fl.name]; ok {
return true
@@ -85,7 +78,6 @@ func (fl *Flag) IsUsed() bool {
return false
}
// IsTrue checks if a bool flag is true
func (fl *Flag) IsTrue() bool {
if fl.flagType != boolType {
// Should never get here
@@ -94,8 +86,7 @@ func (fl *Flag) IsTrue() bool {
return fl.Value == "true"
}
// Parse parses and checks if the BFlags is valid
func (bf *BFlags) Parse() error {
func (bf *BuilderFlags) Parse() error {
// If there was an error while defining the possible flags
// go ahead and bubble it back up here since we didn't do it
// earlier in the processing

View File

@@ -10,7 +10,7 @@ func TestBuilderFlags(t *testing.T) {
// ---
bf := NewBFlags()
bf := NewBuilderFlags()
bf.Args = []string{}
if err := bf.Parse(); err != nil {
t.Fatalf("Test1 of %q was supposed to work: %s", bf.Args, err)
@@ -18,7 +18,7 @@ func TestBuilderFlags(t *testing.T) {
// ---
bf = NewBFlags()
bf = NewBuilderFlags()
bf.Args = []string{"--"}
if err := bf.Parse(); err != nil {
t.Fatalf("Test2 of %q was supposed to work: %s", bf.Args, err)
@@ -26,7 +26,7 @@ func TestBuilderFlags(t *testing.T) {
// ---
bf = NewBFlags()
bf = NewBuilderFlags()
flStr1 := bf.AddString("str1", "")
flBool1 := bf.AddBool("bool1", false)
bf.Args = []string{}
@@ -43,7 +43,7 @@ func TestBuilderFlags(t *testing.T) {
// ---
bf = NewBFlags()
bf = NewBuilderFlags()
flStr1 = bf.AddString("str1", "HI")
flBool1 = bf.AddBool("bool1", false)
bf.Args = []string{}
@@ -67,7 +67,7 @@ func TestBuilderFlags(t *testing.T) {
// ---
bf = NewBFlags()
bf = NewBuilderFlags()
flStr1 = bf.AddString("str1", "HI")
bf.Args = []string{"--str1"}
@@ -77,7 +77,7 @@ func TestBuilderFlags(t *testing.T) {
// ---
bf = NewBFlags()
bf = NewBuilderFlags()
flStr1 = bf.AddString("str1", "HI")
bf.Args = []string{"--str1="}
@@ -92,7 +92,7 @@ func TestBuilderFlags(t *testing.T) {
// ---
bf = NewBFlags()
bf = NewBuilderFlags()
flStr1 = bf.AddString("str1", "HI")
bf.Args = []string{"--str1=BYE"}
@@ -107,7 +107,7 @@ func TestBuilderFlags(t *testing.T) {
// ---
bf = NewBFlags()
bf = NewBuilderFlags()
flBool1 = bf.AddBool("bool1", false)
bf.Args = []string{"--bool1"}
@@ -121,7 +121,7 @@ func TestBuilderFlags(t *testing.T) {
// ---
bf = NewBFlags()
bf = NewBuilderFlags()
flBool1 = bf.AddBool("bool1", false)
bf.Args = []string{"--bool1=true"}
@@ -135,7 +135,7 @@ func TestBuilderFlags(t *testing.T) {
// ---
bf = NewBFlags()
bf = NewBuilderFlags()
flBool1 = bf.AddBool("bool1", false)
bf.Args = []string{"--bool1=false"}
@@ -149,7 +149,7 @@ func TestBuilderFlags(t *testing.T) {
// ---
bf = NewBFlags()
bf = NewBuilderFlags()
flBool1 = bf.AddBool("bool1", false)
bf.Args = []string{"--bool1=false1"}
@@ -159,7 +159,7 @@ func TestBuilderFlags(t *testing.T) {
// ---
bf = NewBFlags()
bf = NewBuilderFlags()
flBool1 = bf.AddBool("bool1", false)
bf.Args = []string{"--bool2"}
@@ -169,7 +169,7 @@ func TestBuilderFlags(t *testing.T) {
// ---
bf = NewBFlags()
bf = NewBuilderFlags()
flStr1 = bf.AddString("str1", "HI")
flBool1 = bf.AddBool("bool1", false)
bf.Args = []string{"--bool1", "--str1=BYE"}

View File

@@ -1,7 +1,6 @@
// Package command contains the set of Dockerfile commands.
// This package contains the set of Dockerfile commands.
package command
// Define constants for the command strings
const (
Env = "env"
Label = "label"

View File

@@ -10,7 +10,6 @@ package builder
import (
"fmt"
"io/ioutil"
"path"
"path/filepath"
"regexp"
"runtime"
@@ -18,8 +17,8 @@ import (
"strings"
"github.com/Sirupsen/logrus"
"github.com/docker/docker/nat"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/nat"
"github.com/docker/docker/runconfig"
)
@@ -30,7 +29,7 @@ const (
)
// dispatch with no layer / parsing. This is effectively not a command.
func nullDispatch(b *builder, args []string, attributes map[string]bool, original string) error {
func nullDispatch(b *Builder, args []string, attributes map[string]bool, original string) error {
return nil
}
@@ -39,7 +38,10 @@ func nullDispatch(b *builder, args []string, attributes map[string]bool, origina
// Sets the environment variable foo to bar, also makes interpolation
// in the dockerfile available from the next statement on via ${foo}.
//
func env(b *builder, args []string, attributes map[string]bool, original string) error {
func env(b *Builder, args []string, attributes map[string]bool, original string) error {
if runtime.GOOS == "windows" {
return fmt.Errorf("ENV is not supported on Windows.")
}
if len(args) == 0 {
return fmt.Errorf("ENV requires at least one argument")
}
@@ -98,7 +100,7 @@ func env(b *builder, args []string, attributes map[string]bool, original string)
// MAINTAINER some text <maybe@an.email.address>
//
// Sets the maintainer metadata.
func maintainer(b *builder, args []string, attributes map[string]bool, original string) error {
func maintainer(b *Builder, args []string, attributes map[string]bool, original string) error {
if len(args) != 1 {
return fmt.Errorf("MAINTAINER requires exactly one argument")
}
@@ -115,7 +117,7 @@ func maintainer(b *builder, args []string, attributes map[string]bool, original
//
// Sets the Label variable foo to bar,
//
func label(b *builder, args []string, attributes map[string]bool, original string) error {
func label(b *Builder, args []string, attributes map[string]bool, original string) error {
if len(args) == 0 {
return fmt.Errorf("LABEL requires at least one argument")
}
@@ -151,7 +153,7 @@ func label(b *builder, args []string, attributes map[string]bool, original strin
// Add the file 'foo' to '/path'. Tarball and Remote URL (git, http) handling
// exists here. If you do not wish to have this automatic handling, use COPY.
//
func add(b *builder, args []string, attributes map[string]bool, original string) error {
func add(b *Builder, args []string, attributes map[string]bool, original string) error {
if len(args) < 2 {
return fmt.Errorf("ADD requires at least two arguments")
}
@@ -167,7 +169,7 @@ func add(b *builder, args []string, attributes map[string]bool, original string)
//
// Same as 'ADD' but without the tar and remote url handling.
//
func dispatchCopy(b *builder, args []string, attributes map[string]bool, original string) error {
func dispatchCopy(b *Builder, args []string, attributes map[string]bool, original string) error {
if len(args) < 2 {
return fmt.Errorf("COPY requires at least two arguments")
}
@@ -183,7 +185,7 @@ func dispatchCopy(b *builder, args []string, attributes map[string]bool, origina
//
// This sets the image the dockerfile will build on top of.
//
func from(b *builder, args []string, attributes map[string]bool, original string) error {
func from(b *Builder, args []string, attributes map[string]bool, original string) error {
if len(args) != 1 {
return fmt.Errorf("FROM requires one argument")
}
@@ -231,7 +233,7 @@ func from(b *builder, args []string, attributes map[string]bool, original string
// special cases. search for 'OnBuild' in internals.go for additional special
// cases.
//
func onbuild(b *builder, args []string, attributes map[string]bool, original string) error {
func onbuild(b *Builder, args []string, attributes map[string]bool, original string) error {
if len(args) == 0 {
return fmt.Errorf("ONBUILD requires at least one argument")
}
@@ -258,7 +260,7 @@ func onbuild(b *builder, args []string, attributes map[string]bool, original str
//
// Set the working directory for future RUN/CMD/etc statements.
//
func workdir(b *builder, args []string, attributes map[string]bool, original string) error {
func workdir(b *Builder, args []string, attributes map[string]bool, original string) error {
if len(args) != 1 {
return fmt.Errorf("WORKDIR requires exactly one argument")
}
@@ -267,39 +269,12 @@ func workdir(b *builder, args []string, attributes map[string]bool, original str
return err
}
// Note that workdir passed comes from the Dockerfile. Hence it is in
// Linux format using forward-slashes, even on Windows. However,
// b.Config.WorkingDir is in platform-specific notation (in other words
// on Windows will use `\`
workdir := args[0]
isAbs := false
if runtime.GOOS == "windows" {
// Alternate processing for Windows here is necessary as we can't call
// filepath.IsAbs(workDir) as that would verify Windows style paths,
// along with drive-letters (eg c:\pathto\file.txt). We (arguably
// correctly or not) check for both forward and back slashes as this
// is what the 1.4.2 GoLang implementation of IsAbs() does in the
// isSlash() function.
isAbs = workdir[0] == '\\' || workdir[0] == '/'
} else {
isAbs = filepath.IsAbs(workdir)
if !filepath.IsAbs(workdir) {
workdir = filepath.Join("/", b.Config.WorkingDir, workdir)
}
if !isAbs {
current := b.Config.WorkingDir
if runtime.GOOS == "windows" {
// Convert to Linux format before join
current = strings.Replace(current, "\\", "/", -1)
}
// Must use path.Join so works correctly on Windows, not filepath
workdir = path.Join("/", current, workdir)
}
// Convert to platform specific format
if runtime.GOOS == "windows" {
workdir = strings.Replace(workdir, "/", "\\", -1)
}
b.Config.WorkingDir = workdir
return b.commit("", b.Config.Cmd, fmt.Sprintf("WORKDIR %v", workdir))
@@ -315,7 +290,7 @@ func workdir(b *builder, args []string, attributes map[string]bool, original str
// RUN echo hi # cmd /S /C echo hi (Windows)
// RUN [ "echo", "hi" ] # echo hi
//
func run(b *builder, args []string, attributes map[string]bool, original string) error {
func run(b *Builder, args []string, attributes map[string]bool, original string) error {
if b.image == "" && !b.noBaseImage {
return fmt.Errorf("Please provide a source image with `from` prior to run")
}
@@ -324,7 +299,7 @@ func run(b *builder, args []string, attributes map[string]bool, original string)
return err
}
args = handleJSONArgs(args, attributes)
args = handleJsonArgs(args, attributes)
if !attributes["json"] {
if runtime.GOOS != "windows" {
@@ -386,12 +361,12 @@ func run(b *builder, args []string, attributes map[string]bool, original string)
// Set the default command to run in the container (which may be empty).
// Argument handling is the same as RUN.
//
func cmd(b *builder, args []string, attributes map[string]bool, original string) error {
func cmd(b *Builder, args []string, attributes map[string]bool, original string) error {
if err := b.BuilderFlags.Parse(); err != nil {
return err
}
cmdSlice := handleJSONArgs(args, attributes)
cmdSlice := handleJsonArgs(args, attributes)
if !attributes["json"] {
if runtime.GOOS != "windows" {
@@ -422,12 +397,12 @@ func cmd(b *builder, args []string, attributes map[string]bool, original string)
// Handles command processing similar to CMD and RUN, only b.Config.Entrypoint
// is initialized at NewBuilder time instead of through argument parsing.
//
func entrypoint(b *builder, args []string, attributes map[string]bool, original string) error {
func entrypoint(b *Builder, args []string, attributes map[string]bool, original string) error {
if err := b.BuilderFlags.Parse(); err != nil {
return err
}
parsed := handleJSONArgs(args, attributes)
parsed := handleJsonArgs(args, attributes)
switch {
case attributes["json"]:
@@ -463,7 +438,7 @@ func entrypoint(b *builder, args []string, attributes map[string]bool, original
// Expose ports for links and port mappings. This all ends up in
// b.Config.ExposedPorts for runconfig.
//
func expose(b *builder, args []string, attributes map[string]bool, original string) error {
func expose(b *Builder, args []string, attributes map[string]bool, original string) error {
portsTab := args
if len(args) == 0 {
@@ -478,11 +453,19 @@ func expose(b *builder, args []string, attributes map[string]bool, original stri
b.Config.ExposedPorts = make(nat.PortSet)
}
ports, _, err := nat.ParsePortSpecs(portsTab)
ports, bindingMap, err := nat.ParsePortSpecs(append(portsTab, b.Config.PortSpecs...))
if err != nil {
return err
}
for _, bindings := range bindingMap {
if bindings[0].HostIp != "" || bindings[0].HostPort != "" {
fmt.Fprintf(b.ErrStream, " ---> Using Dockerfile's EXPOSE instruction"+
" to map host ports to container ports (ip:hostPort:containerPort) is deprecated.\n"+
" Please use -p to publish the ports.\n")
}
}
// instead of using ports directly, we build a list of ports and sort it so
// the order is consistent. This prevents cache burst where map ordering
// changes between builds
@@ -496,6 +479,7 @@ func expose(b *builder, args []string, attributes map[string]bool, original stri
i++
}
sort.Strings(portList)
b.Config.PortSpecs = nil
return b.commit("", b.Config.Cmd, fmt.Sprintf("EXPOSE %s", strings.Join(portList, " ")))
}
@@ -504,9 +488,9 @@ func expose(b *builder, args []string, attributes map[string]bool, original stri
// Set the user to 'foo' for future commands and when running the
// ENTRYPOINT/CMD at container run time.
//
func user(b *builder, args []string, attributes map[string]bool, original string) error {
func user(b *Builder, args []string, attributes map[string]bool, original string) error {
if runtime.GOOS == "windows" {
return fmt.Errorf("USER is not supported on Windows")
return fmt.Errorf("USER is not supported on Windows.")
}
if len(args) != 1 {
@@ -525,9 +509,9 @@ func user(b *builder, args []string, attributes map[string]bool, original string
//
// Expose the volume /foo for use. Will also accept the JSON array form.
//
func volume(b *builder, args []string, attributes map[string]bool, original string) error {
func volume(b *Builder, args []string, attributes map[string]bool, original string) error {
if runtime.GOOS == "windows" {
return fmt.Errorf("VOLUME is not supported on Windows")
return fmt.Errorf("VOLUME is not supported on Windows.")
}
if len(args) == 0 {
return fmt.Errorf("VOLUME requires at least one argument")

View File

@@ -37,7 +37,6 @@ import (
"github.com/docker/docker/pkg/stringid"
"github.com/docker/docker/pkg/symlink"
"github.com/docker/docker/pkg/tarsum"
"github.com/docker/docker/pkg/ulimit"
"github.com/docker/docker/runconfig"
"github.com/docker/docker/utils"
)
@@ -54,10 +53,10 @@ var replaceEnvAllowed = map[string]struct{}{
command.User: {},
}
var evaluateTable map[string]func(*builder, []string, map[string]bool, string) error
var evaluateTable map[string]func(*Builder, []string, map[string]bool, string) error
func init() {
evaluateTable = map[string]func(*builder, []string, map[string]bool, string) error{
evaluateTable = map[string]func(*Builder, []string, map[string]bool, string) error{
command.Env: env,
command.Label: label,
command.Maintainer: maintainer,
@@ -75,9 +74,9 @@ func init() {
}
}
// builder is an internal struct, used to maintain configuration of the Dockerfile's
// internal struct, used to maintain configuration of the Dockerfile's
// processing as it evaluates the parsing result.
type builder struct {
type Builder struct {
Daemon *daemon.Daemon
// effectively stdio for the run. Because it is not stdio, I said
@@ -99,8 +98,8 @@ type builder struct {
// the final configs of the Dockerfile but don't want the layers
disableCommit bool
// Registry server auth configs used to pull images when handling `FROM`.
AuthConfigs map[string]cliconfig.AuthConfig
AuthConfig *cliconfig.AuthConfig
ConfigFile *cliconfig.ConfigFile
// Deprecated, original writer used for ImagePull. To be removed.
OutOld io.Writer
@@ -116,7 +115,7 @@ type builder struct {
image string // image name for commit processing
maintainer string // maintainer name. could probably be removed.
cmdSet bool // indicates if CMD was set in current Dockerfile
BuilderFlags *BFlags // current cmd's BuilderFlags - temporary
BuilderFlags *BuilderFlags // current cmd's BuilderFlags - temporary
context tarsum.TarSum // the context is a tarball that is uploaded by the client
contextPath string // the path of the temporary directory the local context is unpacked to (server side)
noBaseImage bool // indicates that this build does not start from any base image, but is being built from an empty file system.
@@ -130,12 +129,8 @@ type builder struct {
cgroupParent string
memory int64
memorySwap int64
ulimits []*ulimit.Ulimit
cancelled <-chan struct{} // When closed, job was cancelled.
activeImages []string
id string // Used to hold reference images
}
// Run the builder with the context. This is the lynchpin of this package. This
@@ -150,7 +145,7 @@ type builder struct {
// processing.
// * Print a happy message and return the image ID.
//
func (b *builder) Run(context io.Reader) (string, error) {
func (b *Builder) Run(context io.Reader) (string, error) {
if err := b.readContext(context); err != nil {
return "", err
}
@@ -201,7 +196,7 @@ func (b *builder) Run(context io.Reader) (string, error) {
// Reads a Dockerfile from the current context. It assumes that the
// 'filename' is a relative path from the root of the context
func (b *builder) readDockerfile() error {
func (b *Builder) readDockerfile() error {
// If no -f was specified then look for 'Dockerfile'. If we can't find
// that then look for 'dockerfile'. If neither are found then default
// back to 'Dockerfile' and use that in the error message.
@@ -279,7 +274,7 @@ func (b *builder) readDockerfile() error {
// such as `RUN` in ONBUILD RUN foo. There is special case logic in here to
// deal with that, at least until it becomes more of a general concern with new
// features.
func (b *builder) dispatch(stepN int, ast *parser.Node) error {
func (b *Builder) dispatch(stepN int, ast *parser.Node) error {
cmd := ast.Value
attrs := ast.Attributes
original := ast.Original
@@ -342,7 +337,7 @@ func (b *builder) dispatch(stepN int, ast *parser.Node) error {
// XXX yes, we skip any cmds that are not valid; the parser should have
// picked these out already.
if f, ok := evaluateTable[cmd]; ok {
b.BuilderFlags = NewBFlags()
b.BuilderFlags = NewBuilderFlags()
b.BuilderFlags.Args = flags
return f(b, strList, attrs, original)
}
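The Builder/builder rename above also changes the signature stored in evaluateTable. As a rough, self-contained sketch of the dispatch-table pattern this file uses (toy types only, not the real daemon-backed Builder), one might write:

package main

import (
	"fmt"
	"strings"
)

// toyBuilder stands in for the real Builder; it tracks just enough state
// to show how handlers mutate the build configuration.
type toyBuilder struct {
	env        []string
	maintainer string
}

// handler mirrors the shape of the functions stored in evaluateTable:
// the builder, the instruction arguments, parsed attributes, and the
// original source line.
type handler func(b *toyBuilder, args []string, attributes map[string]bool, original string) error

var table = map[string]handler{
	"env": func(b *toyBuilder, args []string, _ map[string]bool, _ string) error {
		if len(args)%2 != 0 {
			return fmt.Errorf("ENV needs key/value pairs")
		}
		for i := 0; i < len(args); i += 2 {
			b.env = append(b.env, args[i]+"="+args[i+1])
		}
		return nil
	},
	"maintainer": func(b *toyBuilder, _ []string, _ map[string]bool, original string) error {
		b.maintainer = strings.TrimSpace(strings.TrimPrefix(original, "MAINTAINER"))
		return nil
	},
}

// dispatch looks the instruction up in the table, much like Builder.dispatch,
// and reports instructions it does not know about.
func dispatch(b *toyBuilder, cmd string, args []string, original string) error {
	f, ok := table[strings.ToLower(cmd)]
	if !ok {
		return fmt.Errorf("unknown instruction %q", cmd)
	}
	return f(b, args, nil, original)
}

func main() {
	b := &toyBuilder{}
	_ = dispatch(b, "ENV", []string{"PATH", "/usr/bin"}, "ENV PATH /usr/bin")
	_ = dispatch(b, "MAINTAINER", nil, "MAINTAINER someone@example.com")
	fmt.Println(b.env, b.maintainer)
}

The real dispatch additionally substitutes environment variables for the commands listed in replaceEnvAllowed and wires up per-command BuilderFlags before calling the handler.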


@@ -12,8 +12,8 @@ import (
"net/http"
"net/url"
"os"
"path"
"path/filepath"
"runtime"
"sort"
"strings"
"syscall"
@@ -21,10 +21,9 @@ import (
"github.com/Sirupsen/logrus"
"github.com/docker/docker/builder/parser"
"github.com/docker/docker/cliconfig"
"github.com/docker/docker/daemon"
"github.com/docker/docker/graph"
"github.com/docker/docker/image"
imagepkg "github.com/docker/docker/image"
"github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/chrootarchive"
"github.com/docker/docker/pkg/httputils"
@@ -40,40 +39,30 @@ import (
"github.com/docker/docker/runconfig"
)
func (b *builder) readContext(context io.Reader) (err error) {
func (b *Builder) readContext(context io.Reader) error {
tmpdirPath, err := ioutil.TempDir("", "docker-build")
if err != nil {
return
return err
}
// Make sure we clean-up upon error. In the happy case the caller
// is expected to manage the clean-up
defer func() {
if err != nil {
if e := os.RemoveAll(tmpdirPath); e != nil {
logrus.Debugf("[BUILDER] failed to remove temporary context: %s", e)
}
}
}()
decompressedStream, err := archive.DecompressStream(context)
if err != nil {
return
return err
}
if b.context, err = tarsum.NewTarSum(decompressedStream, true, tarsum.Version1); err != nil {
return
if b.context, err = tarsum.NewTarSum(decompressedStream, true, tarsum.Version0); err != nil {
return err
}
if err = chrootarchive.Untar(b.context, tmpdirPath, nil); err != nil {
return
if err := chrootarchive.Untar(b.context, tmpdirPath, nil); err != nil {
return err
}
b.contextPath = tmpdirPath
return
return nil
}
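readContext above decompresses the uploaded build context, wraps it in a tarsum (the two sides of this comparison disagree on tarsum.Version1 versus tarsum.Version0), and unpacks it into a temporary directory with chrootarchive.Untar. A standard-library-only sketch of that unpack step, assuming a gzip-compressed tar stream and leaving out both the tarsum bookkeeping and the path-traversal protection that chrootarchive provides (the input filename in main is hypothetical):

package main

import (
	"archive/tar"
	"compress/gzip"
	"fmt"
	"io"
	"io/ioutil"
	"os"
	"path/filepath"
)

// unpackContext writes a gzip-compressed tar stream into a fresh
// temporary directory and returns that directory's path.
func unpackContext(context io.Reader) (string, error) {
	tmpdir, err := ioutil.TempDir("", "docker-build")
	if err != nil {
		return "", err
	}
	gz, err := gzip.NewReader(context)
	if err != nil {
		return "", err
	}
	defer gz.Close()

	tr := tar.NewReader(gz)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return "", err
		}
		// Note: a real implementation must reject names that escape tmpdir.
		target := filepath.Join(tmpdir, filepath.Clean(hdr.Name))
		switch hdr.Typeflag {
		case tar.TypeDir:
			if err := os.MkdirAll(target, os.FileMode(hdr.Mode)); err != nil {
				return "", err
			}
		case tar.TypeReg:
			if err := os.MkdirAll(filepath.Dir(target), 0755); err != nil {
				return "", err
			}
			f, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
			if err != nil {
				return "", err
			}
			if _, err := io.Copy(f, tr); err != nil {
				f.Close()
				return "", err
			}
			f.Close()
		}
	}
	return tmpdir, nil
}

func main() {
	f, err := os.Open("context.tar.gz") // hypothetical input file
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	fmt.Println(unpackContext(f))
}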
func (b *builder) commit(id string, autoCmd *runconfig.Command, comment string) error {
func (b *Builder) commit(id string, autoCmd *runconfig.Command, comment string) error {
if b.disableCommit {
return nil
}
@@ -83,11 +72,7 @@ func (b *builder) commit(id string, autoCmd *runconfig.Command, comment string)
b.Config.Image = b.image
if id == "" {
cmd := b.Config.Cmd
if runtime.GOOS != "windows" {
b.Config.Cmd = runconfig.NewCommand("/bin/sh", "-c", "#(nop) "+comment)
} else {
b.Config.Cmd = runconfig.NewCommand("cmd", "/S /C", "REM (nop) "+comment)
}
b.Config.Cmd = runconfig.NewCommand("/bin/sh", "-c", "#(nop) "+comment)
defer func(cmd *runconfig.Command) { b.Config.Cmd = cmd }(cmd)
hit, err := b.probeCache()
@@ -118,19 +103,11 @@ func (b *builder) commit(id string, autoCmd *runconfig.Command, comment string)
autoConfig := *b.Config
autoConfig.Cmd = autoCmd
commitCfg := &daemon.ContainerCommitConfig{
Author: b.maintainer,
Pause: true,
Config: &autoConfig,
}
// Commit the container
image, err := b.Daemon.Commit(container, commitCfg)
image, err := b.Daemon.Commit(container, "", "", "", b.maintainer, true, &autoConfig)
if err != nil {
return err
}
b.Daemon.Graph().Retain(b.id, image.ID)
b.activeImages = append(b.activeImages, image.ID)
b.image = image.ID
return nil
}
@@ -143,7 +120,7 @@ type copyInfo struct {
tmpDir string
}
func (b *builder) runContextCommand(args []string, allowRemote bool, allowDecompression bool, cmdName string) error {
func (b *Builder) runContextCommand(args []string, allowRemote bool, allowDecompression bool, cmdName string) error {
if b.context == nil {
return fmt.Errorf("No context given. Impossible to use %s", cmdName)
}
@@ -152,8 +129,7 @@ func (b *builder) runContextCommand(args []string, allowRemote bool, allowDecomp
return fmt.Errorf("Invalid %s format - at least two arguments required", cmdName)
}
// Work in daemon-specific filepath semantics
dest := filepath.FromSlash(args[len(args)-1]) // last one is always the dest
dest := args[len(args)-1] // last one is always the dest
copyInfos := []*copyInfo{}
@@ -188,7 +164,8 @@ func (b *builder) runContextCommand(args []string, allowRemote bool, allowDecomp
if len(copyInfos) == 0 {
return fmt.Errorf("No source files were specified")
}
if len(copyInfos) > 1 && !strings.HasSuffix(dest, string(os.PathSeparator)) {
if len(copyInfos) > 1 && !strings.HasSuffix(dest, "/") {
return fmt.Errorf("When using %s with more than one source file, the destination must be a directory and end with a /", cmdName)
}
@@ -214,11 +191,7 @@ func (b *builder) runContextCommand(args []string, allowRemote bool, allowDecomp
}
cmd := b.Config.Cmd
if runtime.GOOS != "windows" {
b.Config.Cmd = runconfig.NewCommand("/bin/sh", "-c", fmt.Sprintf("#(nop) %s %s in %s", cmdName, srcHash, dest))
} else {
b.Config.Cmd = runconfig.NewCommand("cmd", "/S /C", fmt.Sprintf("REM (nop) %s %s in %s", cmdName, srcHash, dest))
}
b.Config.Cmd = runconfig.NewCommand("/bin/sh", "-c", fmt.Sprintf("#(nop) %s %s in %s", cmdName, srcHash, dest))
defer func(cmd *runconfig.Command) { b.Config.Cmd = cmd }(cmd)
hit, err := b.probeCache()
@@ -241,59 +214,39 @@ func (b *builder) runContextCommand(args []string, allowRemote bool, allowDecomp
}
defer container.Unmount()
if err := container.PrepareStorage(); err != nil {
return err
}
for _, ci := range copyInfos {
if err := b.addContext(container, ci.origPath, ci.destPath, ci.decompress); err != nil {
return err
}
}
if err := container.CleanupStorage(); err != nil {
return err
}
if err := b.commit(container.ID, cmd, fmt.Sprintf("%s %s in %s", cmdName, origPaths, dest)); err != nil {
return err
}
return nil
}
func calcCopyInfo(b *builder, cmdName string, cInfos *[]*copyInfo, origPath string, destPath string, allowRemote bool, allowDecompression bool, allowWildcards bool) error {
func calcCopyInfo(b *Builder, cmdName string, cInfos *[]*copyInfo, origPath string, destPath string, allowRemote bool, allowDecompression bool, allowWildcards bool) error {
// Work in daemon-specific OS filepath semantics. However, we save
// the origPath passed in here, as it might also be a URL which
// we need to check for in this function.
passedInOrigPath := origPath
origPath = filepath.FromSlash(origPath)
destPath = filepath.FromSlash(destPath)
if origPath != "" && origPath[0] == os.PathSeparator && len(origPath) > 1 {
if origPath != "" && origPath[0] == '/' && len(origPath) > 1 {
origPath = origPath[1:]
}
origPath = strings.TrimPrefix(origPath, "."+string(os.PathSeparator))
origPath = strings.TrimPrefix(origPath, "./")
// Twiddle the destPath when it's a relative path - meaning, make it
// relative to the WORKINGDIR
if !filepath.IsAbs(destPath) {
hasSlash := strings.HasSuffix(destPath, string(os.PathSeparator))
destPath = filepath.Join(string(os.PathSeparator), filepath.FromSlash(b.Config.WorkingDir), destPath)
hasSlash := strings.HasSuffix(destPath, "/")
destPath = filepath.Join("/", b.Config.WorkingDir, destPath)
// Make sure we preserve any trailing slash
if hasSlash {
destPath += string(os.PathSeparator)
destPath += "/"
}
}
// In the remote/URL case, download it and gen its hashcode
if urlutil.IsURL(passedInOrigPath) {
// As it's a URL, we go back to processing on what was passed in
// to this function
origPath = passedInOrigPath
if urlutil.IsURL(origPath) {
if !allowRemote {
return fmt.Errorf("Source can't be a URL for %s", cmdName)
}
@@ -319,7 +272,7 @@ func calcCopyInfo(b *builder, cmdName string, cInfos *[]*copyInfo, origPath stri
ci.tmpDir = tmpDirName
// Create a tmp file within our tmp dir
tmpFileName := filepath.Join(tmpDirName, "tmp")
tmpFileName := path.Join(tmpDirName, "tmp")
tmpFile, err := os.OpenFile(tmpFileName, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0600)
if err != nil {
return err
@@ -359,19 +312,19 @@ func calcCopyInfo(b *builder, cmdName string, cInfos *[]*copyInfo, origPath stri
return err
}
ci.origPath = filepath.Join(filepath.Base(tmpDirName), filepath.Base(tmpFileName))
ci.origPath = path.Join(filepath.Base(tmpDirName), filepath.Base(tmpFileName))
// If the destination is a directory, figure out the filename.
if strings.HasSuffix(ci.destPath, string(os.PathSeparator)) {
if strings.HasSuffix(ci.destPath, "/") {
u, err := url.Parse(origPath)
if err != nil {
return err
}
path := u.Path
if strings.HasSuffix(path, string(os.PathSeparator)) {
if strings.HasSuffix(path, "/") {
path = path[:len(path)-1]
}
parts := strings.Split(path, string(os.PathSeparator))
parts := strings.Split(path, "/")
filename := parts[len(parts)-1]
if filename == "" {
return fmt.Errorf("cannot determine filename from url: %s", u)
@@ -384,7 +337,7 @@ func calcCopyInfo(b *builder, cmdName string, cInfos *[]*copyInfo, origPath stri
if err != nil {
return err
}
tarSum, err := tarsum.NewTarSum(r, true, tarsum.Version1)
tarSum, err := tarsum.NewTarSum(r, true, tarsum.Version0)
if err != nil {
return err
}
@@ -398,12 +351,12 @@ func calcCopyInfo(b *builder, cmdName string, cInfos *[]*copyInfo, origPath stri
}
// Deal with wildcards
if allowWildcards && containsWildcards(origPath) {
if allowWildcards && ContainsWildcards(origPath) {
for _, fileInfo := range b.context.GetSums() {
if fileInfo.Name() == "" {
continue
}
match, _ := filepath.Match(origPath, fileInfo.Name())
match, _ := path.Match(origPath, fileInfo.Name())
if !match {
continue
}
@@ -420,7 +373,7 @@ func calcCopyInfo(b *builder, cmdName string, cInfos *[]*copyInfo, origPath stri
if err := b.checkPathForAddition(origPath); err != nil {
return err
}
fi, _ := os.Stat(filepath.Join(b.contextPath, origPath))
fi, _ := os.Stat(path.Join(b.contextPath, origPath))
ci := copyInfo{}
ci.origPath = origPath
@@ -441,20 +394,20 @@ func calcCopyInfo(b *builder, cmdName string, cInfos *[]*copyInfo, origPath stri
// Must be a dir
var subfiles []string
absOrigPath := filepath.Join(b.contextPath, ci.origPath)
absOrigPath := path.Join(b.contextPath, ci.origPath)
// Add a trailing / to make sure we only pick up nested files under
// the dir and not sibling files of the dir that just happen to
// start with the same chars
if !strings.HasSuffix(absOrigPath, string(os.PathSeparator)) {
absOrigPath += string(os.PathSeparator)
if !strings.HasSuffix(absOrigPath, "/") {
absOrigPath += "/"
}
// Need path w/o slash too to find matching dir w/o trailing slash
// Need path w/o / too to find matching dir w/o trailing /
absOrigPathNoSlash := absOrigPath[:len(absOrigPath)-1]
for _, fileInfo := range b.context.GetSums() {
absFile := filepath.Join(b.contextPath, fileInfo.Name())
absFile := path.Join(b.contextPath, fileInfo.Name())
// Any file in the context that starts with the given path will be
// picked up and its hashcode used. However, we'll exclude the
// root dir itself. We do this for a couple of reasons:
@@ -475,7 +428,7 @@ func calcCopyInfo(b *builder, cmdName string, cInfos *[]*copyInfo, origPath stri
return nil
}
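When the source argument of ADD or COPY contains wildcards, calcCopyInfo matches every file recorded in the context tarsum against the pattern; the two sides of this comparison differ only in whether path.Match or filepath.Match is used. A minimal illustration of that matching behaviour:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	pattern := "src/*.go"
	for _, name := range []string{"src/main.go", "src/util_test.go", "src/sub/deep.go", "README.md"} {
		// filepath.Match treats the separator specially, so "*" does not
		// cross directory boundaries: "src/sub/deep.go" will not match.
		ok, err := filepath.Match(pattern, name)
		fmt.Println(name, ok, err)
	}
}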
func containsWildcards(name string) bool {
func ContainsWildcards(name string) bool {
for i := 0; i < len(name); i++ {
ch := name[i]
if ch == '\\' {
@@ -487,25 +440,21 @@ func containsWildcards(name string) bool {
return false
}
func (b *builder) pullImage(name string) (*image.Image, error) {
func (b *Builder) pullImage(name string) (*imagepkg.Image, error) {
remote, tag := parsers.ParseRepositoryTag(name)
if tag == "" {
tag = "latest"
}
pullRegistryAuth := &cliconfig.AuthConfig{}
if len(b.AuthConfigs) > 0 {
pullRegistryAuth := b.AuthConfig
if len(b.ConfigFile.AuthConfigs) > 0 {
// The request came with a full auth config file, we prefer to use that
repoInfo, err := b.Daemon.RegistryService.ResolveRepository(remote)
if err != nil {
return nil, err
}
resolvedConfig := registry.ResolveAuthConfig(
&cliconfig.ConfigFile{AuthConfigs: b.AuthConfigs},
repoInfo.Index,
)
pullRegistryAuth = &resolvedConfig
resolvedAuth := registry.ResolveAuthConfig(b.ConfigFile, repoInfo.Index)
pullRegistryAuth = &resolvedAuth
}
imagePullConfig := &graph.ImagePullConfig{
@@ -525,15 +474,14 @@ func (b *builder) pullImage(name string) (*image.Image, error) {
return image, nil
}
func (b *builder) processImageFrom(img *image.Image) error {
func (b *Builder) processImageFrom(img *imagepkg.Image) error {
b.image = img.ID
if img.Config != nil {
b.Config = img.Config
}
// The default path will be blank on Windows (set by HCS)
if len(b.Config.Env) == 0 && daemon.DefaultPathEnv != "" {
if len(b.Config.Env) == 0 {
b.Config.Env = append(b.Config.Env, "PATH="+daemon.DefaultPathEnv)
}
@@ -577,7 +525,7 @@ func (b *builder) processImageFrom(img *image.Image) error {
// in the current server `b.Daemon`. If an image is found, probeCache returns
// `(true, nil)`. If no image is found, it returns `(false, nil)`. If there
// is any error, it returns `(false, err)`.
func (b *builder) probeCache() (bool, error) {
func (b *Builder) probeCache() (bool, error) {
if !b.UtilizeCache || b.cacheBusted {
return false, nil
}
@@ -595,12 +543,10 @@ func (b *builder) probeCache() (bool, error) {
fmt.Fprintf(b.OutStream, " ---> Using cache\n")
logrus.Debugf("[BUILDER] Use cached version")
b.image = cache.ID
b.Daemon.Graph().Retain(b.id, cache.ID)
b.activeImages = append(b.activeImages, cache.ID)
return true, nil
}
func (b *builder) create() (*daemon.Container, error) {
func (b *Builder) create() (*daemon.Container, error) {
if b.image == "" && !b.noBaseImage {
return nil, fmt.Errorf("Please provide a source image with `from` prior to run")
}
@@ -615,7 +561,7 @@ func (b *builder) create() (*daemon.Container, error) {
CgroupParent: b.cgroupParent,
Memory: b.memory,
MemorySwap: b.memorySwap,
Ulimits: b.ulimits,
NetworkMode: "bridge",
}
config := *b.Config
@@ -644,7 +590,7 @@ func (b *builder) create() (*daemon.Container, error) {
return c, nil
}
func (b *builder) run(c *daemon.Container) error {
func (b *Builder) run(c *daemon.Container) error {
var errCh chan error
if b.Verbose {
errCh = c.Attach(nil, b.OutStream, b.ErrStream)
@@ -684,8 +630,8 @@ func (b *builder) run(c *daemon.Container) error {
return nil
}
func (b *builder) checkPathForAddition(orig string) error {
origPath := filepath.Join(b.contextPath, orig)
func (b *Builder) checkPathForAddition(orig string) error {
origPath := path.Join(b.contextPath, orig)
origPath, err := filepath.EvalSymlinks(origPath)
if err != nil {
if os.IsNotExist(err) {
@@ -693,11 +639,7 @@ func (b *builder) checkPathForAddition(orig string) error {
}
return err
}
contextPath, err := filepath.EvalSymlinks(b.contextPath)
if err != nil {
return err
}
if !strings.HasPrefix(origPath, contextPath) {
if !strings.HasPrefix(origPath, b.contextPath) {
return fmt.Errorf("Forbidden path outside the build context: %s (%s)", orig, origPath)
}
if _, err := os.Stat(origPath); err != nil {
@@ -709,31 +651,27 @@ func (b *builder) checkPathForAddition(orig string) error {
return nil
}
func (b *builder) addContext(container *daemon.Container, orig, dest string, decompress bool) error {
func (b *Builder) addContext(container *daemon.Container, orig, dest string, decompress bool) error {
var (
err error
destExists = true
origPath = filepath.Join(b.contextPath, orig)
origPath = path.Join(b.contextPath, orig)
destPath string
)
// Work in daemon-local OS specific file paths
dest = filepath.FromSlash(dest)
destPath, err = container.GetResourcePath(dest)
if err != nil {
return err
}
// Preserve the trailing slash
if strings.HasSuffix(dest, string(os.PathSeparator)) || dest == "." {
destPath = destPath + string(os.PathSeparator)
// Preserve the trailing '/'
if strings.HasSuffix(dest, "/") || dest == "." {
destPath = destPath + "/"
}
destStat, err := os.Stat(destPath)
if err != nil {
if !os.IsNotExist(err) {
logrus.Errorf("Error performing os.Stat on %s. %s", destPath, err)
return err
}
destExists = false
@@ -756,9 +694,9 @@ func (b *builder) addContext(container *daemon.Container, orig, dest string, dec
// First try to unpack the source as an archive
// to support the untar feature we need to clean up the path a little bit
// because tar is very forgiving. First we need to strip off the archive's
// filename from the path but this is only added if it does not end in slash
// filename from the path but this is only added if it does not end in / .
tarDest := destPath
if strings.HasSuffix(tarDest, string(os.PathSeparator)) {
if strings.HasSuffix(tarDest, "/") {
tarDest = filepath.Dir(destPath)
}
@@ -770,7 +708,7 @@ func (b *builder) addContext(container *daemon.Container, orig, dest string, dec
}
}
if err := system.MkdirAll(filepath.Dir(destPath), 0755); err != nil {
if err := os.MkdirAll(path.Dir(destPath), 0755); err != nil {
return err
}
if err := chrootarchive.CopyWithTar(origPath, destPath); err != nil {
@@ -779,7 +717,7 @@ func (b *builder) addContext(container *daemon.Container, orig, dest string, dec
resPath := destPath
if destExists && destStat.IsDir() {
resPath = filepath.Join(destPath, filepath.Base(origPath))
resPath = path.Join(destPath, path.Base(origPath))
}
return fixPermissions(origPath, resPath, 0, 0, destExists)
@@ -792,7 +730,39 @@ func copyAsDirectory(source, destination string, destExisted bool) error {
return fixPermissions(source, destination, 0, 0, destExisted)
}
func (b *builder) clearTmp() {
func fixPermissions(source, destination string, uid, gid int, destExisted bool) error {
// If the destination didn't already exist, or the destination isn't a
// directory, then we should Lchown the destination. Otherwise, we shouldn't
// Lchown the destination.
destStat, err := os.Stat(destination)
if err != nil {
// This should *never* be reached, because the destination must've already
// been created while untar-ing the context.
return err
}
doChownDestination := !destExisted || !destStat.IsDir()
// We Walk on the source rather than on the destination because we don't
// want to change permissions on things we haven't created or modified.
return filepath.Walk(source, func(fullpath string, info os.FileInfo, err error) error {
// Do not alter the walk root iff. it existed before, as it doesn't fall under
// the domain of "things we should chown".
if !doChownDestination && (source == fullpath) {
return nil
}
// Path is prefixed by source: substitute with destination instead.
cleaned, err := filepath.Rel(source, fullpath)
if err != nil {
return err
}
fullpath = path.Join(destination, cleaned)
return os.Lchown(fullpath, uid, gid)
})
}
func (b *Builder) clearTmp() {
for c := range b.TmpContainers {
rmConfig := &daemon.ContainerRmConfig{
ForceRemove: true,


@@ -1,40 +0,0 @@
// +build linux
package builder
import (
"os"
"path/filepath"
)
func fixPermissions(source, destination string, uid, gid int, destExisted bool) error {
// If the destination didn't already exist, or the destination isn't a
// directory, then we should Lchown the destination. Otherwise, we shouldn't
// Lchown the destination.
destStat, err := os.Stat(destination)
if err != nil {
// This should *never* be reached, because the destination must've already
// been created while untar-ing the context.
return err
}
doChownDestination := !destExisted || !destStat.IsDir()
// We Walk on the source rather than on the destination because we don't
// want to change permissions on things we haven't created or modified.
return filepath.Walk(source, func(fullpath string, info os.FileInfo, err error) error {
// Do not alter the walk root iff. it existed before, as it doesn't fall under
// the domain of "things we should chown".
if !doChownDestination && (source == fullpath) {
return nil
}
// Path is prefixed by source: substitute with destination instead.
cleaned, err := filepath.Rel(source, fullpath)
if err != nil {
return err
}
fullpath = filepath.Join(destination, cleaned)
return os.Lchown(fullpath, uid, gid)
})
}


@@ -1,8 +0,0 @@
// +build windows
package builder
func fixPermissions(source, destination string, uid, gid int, destExisted bool) error {
// chown is not supported on Windows
return nil
}


@@ -2,7 +2,6 @@ package builder
import (
"bytes"
"errors"
"fmt"
"io"
"io/ioutil"
@@ -18,34 +17,25 @@ import (
"github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/httputils"
"github.com/docker/docker/pkg/parsers"
"github.com/docker/docker/pkg/progressreader"
"github.com/docker/docker/pkg/streamformatter"
"github.com/docker/docker/pkg/stringid"
"github.com/docker/docker/pkg/ulimit"
"github.com/docker/docker/pkg/urlutil"
"github.com/docker/docker/registry"
"github.com/docker/docker/runconfig"
"github.com/docker/docker/utils"
)
// When downloading remote contexts, limit the amount (in bytes)
// to be read from the response body in order to detect its Content-Type
const maxPreambleLength = 100
// whitelist of commands allowed for a commit/import
var validCommitCommands = map[string]bool{
"cmd": true,
"entrypoint": true,
"env": true,
"expose": true,
"label": true,
"onbuild": true,
"cmd": true,
"user": true,
"volume": true,
"workdir": true,
"env": true,
"volume": true,
"expose": true,
"onbuild": true,
}
// Config contains all configs for a build job
type Config struct {
DockerfileName string
RemoteURL string
@@ -57,14 +47,14 @@ type Config struct {
Pull bool
Memory int64
MemorySwap int64
CPUShares int64
CPUPeriod int64
CPUQuota int64
CPUSetCpus string
CPUSetMems string
CpuShares int64
CpuPeriod int64
CpuQuota int64
CpuSetCpus string
CpuSetMems string
CgroupParent string
Ulimits []*ulimit.Ulimit
AuthConfigs map[string]cliconfig.AuthConfig
AuthConfig *cliconfig.AuthConfig
ConfigFile *cliconfig.ConfigFile
Stdout io.Writer
Context io.ReadCloser
@@ -75,36 +65,32 @@ type Config struct {
cancelOnce sync.Once
}
// Cancel signals the build job to cancel
// When called, causes the Job.WaitCancelled channel to unblock.
func (b *Config) Cancel() {
b.cancelOnce.Do(func() {
close(b.cancelled)
})
}
// WaitCancelled returns a channel which is closed ("never blocks") when
// the job is cancelled.
// Returns a channel which is closed ("never blocks") when the job is cancelled.
func (b *Config) WaitCancelled() <-chan struct{} {
return b.cancelled
}
// NewBuildConfig returns a new Config struct
func NewBuildConfig() *Config {
return &Config{
AuthConfigs: map[string]cliconfig.AuthConfig{},
cancelled: make(chan struct{}),
AuthConfig: &cliconfig.AuthConfig{},
ConfigFile: &cliconfig.ConfigFile{},
cancelled: make(chan struct{}),
}
}
// Build is the main interface of the package, it gathers the Builder
// struct and calls builder.Run() to do all the real build job.
func Build(d *daemon.Daemon, buildConfig *Config) error {
var (
repoName string
tag string
context io.ReadCloser
)
sf := streamformatter.NewJSONStreamFormatter()
repoName, tag = parsers.ParseRepositoryTag(buildConfig.RepoName)
if repoName != "" {
@@ -135,52 +121,29 @@ func Build(d *daemon.Daemon, buildConfig *Config) error {
} else if urlutil.IsURL(buildConfig.RemoteURL) {
f, err := httputils.Download(buildConfig.RemoteURL)
if err != nil {
return fmt.Errorf("Error downloading remote context %s: %v", buildConfig.RemoteURL, err)
return err
}
defer f.Body.Close()
ct := f.Header.Get("Content-Type")
clen := int(f.ContentLength)
contentType, bodyReader, err := inspectResponse(ct, f.Body, clen)
defer bodyReader.Close()
dockerFile, err := ioutil.ReadAll(f.Body)
if err != nil {
return fmt.Errorf("Error detecting content type for remote %s: %v", buildConfig.RemoteURL, err)
return err
}
if contentType == httputils.MimeTypes.TextPlain {
dockerFile, err := ioutil.ReadAll(bodyReader)
if err != nil {
return err
}
// When we're downloading just a Dockerfile put it in
// the default name - don't allow the client to move/specify it
buildConfig.DockerfileName = api.DefaultDockerfileName
// When we're downloading just a Dockerfile put it in
// the default name - don't allow the client to move/specify it
buildConfig.DockerfileName = api.DefaultDockerfileName
c, err := archive.Generate(buildConfig.DockerfileName, string(dockerFile))
if err != nil {
return err
}
context = c
} else {
// Pass through - this is a pre-packaged context, presumably
// with a Dockerfile with the right name inside it.
prCfg := progressreader.Config{
In: bodyReader,
Out: buildConfig.Stdout,
Formatter: sf,
Size: clen,
NewLines: true,
ID: "Downloading context",
Action: buildConfig.RemoteURL,
}
context = progressreader.New(prCfg)
c, err := archive.Generate(buildConfig.DockerfileName, string(dockerFile))
if err != nil {
return err
}
context = c
}
defer context.Close()
builder := &builder{
sf := streamformatter.NewJSONStreamFormatter()
builder := &Builder{
Daemon: d,
OutStream: &streamformatter.StdoutFormater{
Writer: buildConfig.Stdout,
@@ -197,40 +160,31 @@ func Build(d *daemon.Daemon, buildConfig *Config) error {
Pull: buildConfig.Pull,
OutOld: buildConfig.Stdout,
StreamFormatter: sf,
AuthConfigs: buildConfig.AuthConfigs,
AuthConfig: buildConfig.AuthConfig,
ConfigFile: buildConfig.ConfigFile,
dockerfileName: buildConfig.DockerfileName,
cpuShares: buildConfig.CPUShares,
cpuPeriod: buildConfig.CPUPeriod,
cpuQuota: buildConfig.CPUQuota,
cpuSetCpus: buildConfig.CPUSetCpus,
cpuSetMems: buildConfig.CPUSetMems,
cpuShares: buildConfig.CpuShares,
cpuPeriod: buildConfig.CpuPeriod,
cpuQuota: buildConfig.CpuQuota,
cpuSetCpus: buildConfig.CpuSetCpus,
cpuSetMems: buildConfig.CpuSetMems,
cgroupParent: buildConfig.CgroupParent,
memory: buildConfig.Memory,
memorySwap: buildConfig.MemorySwap,
ulimits: buildConfig.Ulimits,
cancelled: buildConfig.WaitCancelled(),
id: stringid.GenerateRandomID(),
}
defer func() {
builder.Daemon.Graph().Release(builder.id, builder.activeImages...)
}()
id, err := builder.Run(context)
if err != nil {
return err
}
if repoName != "" {
return d.Repositories().Tag(repoName, tag, id, true)
}
return nil
}
// BuildFromConfig will do build directly from parameter 'changes', which comes
// from Dockerfile entries, it will:
//
// - call parse.Parse() to get AST root from Dockerfile entries
// - do build by calling builder.dispatch() to call all entries' handling routines
func BuildFromConfig(d *daemon.Daemon, c *runconfig.Config, changes []string) (*runconfig.Config, error) {
ast, err := parser.Parse(bytes.NewBufferString(strings.Join(changes, "\n")))
if err != nil {
@@ -244,7 +198,7 @@ func BuildFromConfig(d *daemon.Daemon, c *runconfig.Config, changes []string) (*
}
}
builder := &builder{
builder := &Builder{
Daemon: d,
Config: c,
OutStream: ioutil.Discard,
@@ -261,19 +215,7 @@ func BuildFromConfig(d *daemon.Daemon, c *runconfig.Config, changes []string) (*
return builder.Config, nil
}
// CommitConfig contains build configs for commit operation
type CommitConfig struct {
Pause bool
Repo string
Tag string
Author string
Comment string
Changes []string
Config *runconfig.Config
}
// Commit will create a new image from a container's changes
func Commit(name string, d *daemon.Daemon, c *CommitConfig) (string, error) {
func Commit(d *daemon.Daemon, name string, c *daemon.ContainerCommitConfig) (string, error) {
container, err := d.Get(name)
if err != nil {
return "", err
@@ -292,64 +234,10 @@ func Commit(name string, d *daemon.Daemon, c *CommitConfig) (string, error) {
return "", err
}
commitCfg := &daemon.ContainerCommitConfig{
Pause: c.Pause,
Repo: c.Repo,
Tag: c.Tag,
Author: c.Author,
Comment: c.Comment,
Config: newConfig,
}
img, err := d.Commit(container, commitCfg)
img, err := d.Commit(container, c.Repo, c.Tag, c.Comment, c.Author, c.Pause, newConfig)
if err != nil {
return "", err
}
return img.ID, nil
}
// inspectResponse looks into the http response data at r to determine whether its
// content-type is on the list of acceptable content types for remote build contexts.
// This function returns:
// - a string representation of the detected content-type
// - an io.Reader for the response body
// - an error value which will be non-nil either when something goes wrong while
// reading bytes from r or when the detected content-type is not acceptable.
func inspectResponse(ct string, r io.ReadCloser, clen int) (string, io.ReadCloser, error) {
plen := clen
if plen <= 0 || plen > maxPreambleLength {
plen = maxPreambleLength
}
preamble := make([]byte, plen, plen)
rlen, err := r.Read(preamble)
if rlen == 0 {
return ct, r, errors.New("Empty response")
}
if err != nil && err != io.EOF {
return ct, r, err
}
preambleR := bytes.NewReader(preamble)
bodyReader := ioutil.NopCloser(io.MultiReader(preambleR, r))
// Some web servers will use application/octet-stream as the default
// content type for files without an extension (e.g. 'Dockerfile')
// so if we receive this value we better check for text content
contentType := ct
if len(ct) == 0 || ct == httputils.MimeTypes.OctetStream {
contentType, _, err = httputils.DetectContentType(preamble)
if err != nil {
return contentType, bodyReader, err
}
}
contentType = selectAcceptableMIME(contentType)
var cterr error
if len(contentType) == 0 {
cterr = fmt.Errorf("unsupported Content-Type %q", ct)
contentType = ct
}
return contentType, bodyReader, cterr
}
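inspectResponse, present on only one side of this comparison, sniffs the first bytes of a remote build context to decide whether the server sent a plain-text Dockerfile or a pre-packaged archive. A hedged sketch of the same sniff-and-replay idea using only the standard library, with net/http's DetectContentType standing in for the pkg/httputils helper:

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// sniff reads up to 100 bytes, detects a content type from them, and
// returns a reader that still yields the complete body.
func sniff(body io.Reader) (string, io.Reader, error) {
	preamble := make([]byte, 100)
	n, err := body.Read(preamble)
	if err != nil && err != io.EOF {
		return "", body, err
	}
	preamble = preamble[:n]
	contentType := http.DetectContentType(preamble)
	// Stitch the consumed preamble back in front of the rest of the body.
	return contentType, io.MultiReader(bytes.NewReader(preamble), body), nil
}

func main() {
	ct, r, err := sniff(strings.NewReader("FROM busybox\nRUN echo hi\n"))
	if err != nil {
		fmt.Println(err)
		return
	}
	rest, _ := io.ReadAll(r)
	fmt.Printf("%s -> %q\n", ct, rest) // text/plain; charset=utf-8
}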


@@ -1,113 +0,0 @@
package builder
import (
"bytes"
"io/ioutil"
"testing"
)
var textPlainDockerfile = "FROM busybox"
var binaryContext = []byte{0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00} //xz magic
func TestInspectEmptyResponse(t *testing.T) {
ct := "application/octet-stream"
br := ioutil.NopCloser(bytes.NewReader([]byte("")))
contentType, bReader, err := inspectResponse(ct, br, 0)
if err == nil {
t.Fatalf("Should have generated an error for an empty response")
}
if contentType != "application/octet-stream" {
t.Fatalf("Content type should be 'application/octet-stream' but is %q", contentType)
}
body, err := ioutil.ReadAll(bReader)
if err != nil {
t.Fatal(err)
}
if len(body) != 0 {
t.Fatal("response body should remain empty")
}
}
func TestInspectResponseBinary(t *testing.T) {
ct := "application/octet-stream"
br := ioutil.NopCloser(bytes.NewReader(binaryContext))
contentType, bReader, err := inspectResponse(ct, br, len(binaryContext))
if err != nil {
t.Fatal(err)
}
if contentType != "application/octet-stream" {
t.Fatalf("Content type should be 'application/octet-stream' but is %q", contentType)
}
body, err := ioutil.ReadAll(bReader)
if err != nil {
t.Fatal(err)
}
if len(body) != len(binaryContext) {
t.Fatalf("Wrong response size %d, should be == len(binaryContext)", len(body))
}
for i := range body {
if body[i] != binaryContext[i] {
t.Fatalf("Corrupted response body at byte index %d", i)
}
}
}
func TestResponseUnsupportedContentType(t *testing.T) {
content := []byte(textPlainDockerfile)
ct := "application/json"
br := ioutil.NopCloser(bytes.NewReader(content))
contentType, bReader, err := inspectResponse(ct, br, len(textPlainDockerfile))
if err == nil {
t.Fatal("Should have returned an error on content-type 'application/json'")
}
if contentType != ct {
t.Fatalf("Should not have altered content-type: orig: %s, altered: %s", ct, contentType)
}
body, err := ioutil.ReadAll(bReader)
if err != nil {
t.Fatal(err)
}
if string(body) != textPlainDockerfile {
t.Fatalf("Corrupted response body %s", body)
}
}
func TestInspectResponseTextSimple(t *testing.T) {
content := []byte(textPlainDockerfile)
ct := "text/plain"
br := ioutil.NopCloser(bytes.NewReader(content))
contentType, bReader, err := inspectResponse(ct, br, len(content))
if err != nil {
t.Fatal(err)
}
if contentType != "text/plain" {
t.Fatalf("Content type should be 'text/plain' but is %q", contentType)
}
body, err := ioutil.ReadAll(bReader)
if err != nil {
t.Fatal(err)
}
if string(body) != textPlainDockerfile {
t.Fatalf("Corrupted response body %s", body)
}
}
func TestInspectResponseEmptyContentType(t *testing.T) {
content := []byte(textPlainDockerfile)
br := ioutil.NopCloser(bytes.NewReader(content))
contentType, bodyReader, err := inspectResponse("", br, len(content))
if err != nil {
t.Fatal(err)
}
if contentType != "text/plain" {
t.Fatalf("Content type should be 'text/plain' but is %q", contentType)
}
body, err := ioutil.ReadAll(bodyReader)
if err != nil {
t.Fatal(err)
}
if string(body) != textPlainDockerfile {
t.Fatalf("Corrupted response body %s", body)
}
}


@@ -151,7 +151,7 @@ func parseNameVal(rest string, key string) (*Node, map[string]bool, error) {
if !strings.Contains(words[0], "=") {
node := &Node{}
rootnode = node
strs := tokenWhitespace.Split(rest, 2)
strs := TOKEN_WHITESPACE.Split(rest, 2)
if len(strs) < 2 {
return nil, nil, fmt.Errorf(key + " must have two arguments")
@@ -205,7 +205,7 @@ func parseStringsWhitespaceDelimited(rest string) (*Node, map[string]bool, error
node := &Node{}
rootnode := node
prevnode := node
for _, str := range tokenWhitespace.Split(rest, -1) { // use regexp
for _, str := range TOKEN_WHITESPACE.Split(rest, -1) { // use regexp
prevnode = node
node.Value = str
node.Next = &Node{}
@@ -232,13 +232,13 @@ func parseString(rest string) (*Node, map[string]bool, error) {
// parseJSON converts JSON arrays to an AST.
func parseJSON(rest string) (*Node, map[string]bool, error) {
var myJSON []interface{}
if err := json.NewDecoder(strings.NewReader(rest)).Decode(&myJSON); err != nil {
var myJson []interface{}
if err := json.NewDecoder(strings.NewReader(rest)).Decode(&myJson); err != nil {
return nil, nil, err
}
var top, prev *Node
for _, str := range myJSON {
for _, str := range myJson {
s, ok := str.(string)
if !ok {
return nil, nil, errDockerfileNotStringArray
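parseJSON handles the JSON-array form of instructions (for example CMD ["echo","hi"]) and rejects arrays whose elements are not strings. A small, standalone sketch of that decode-and-validate step; parseJSONArgs is an illustrative helper, not the parser's real API:

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"strings"
)

var errNotStringArray = errors.New("not a JSON array of strings")

func parseJSONArgs(rest string) ([]string, error) {
	var raw []interface{}
	if err := json.NewDecoder(strings.NewReader(rest)).Decode(&raw); err != nil {
		return nil, err
	}
	out := make([]string, 0, len(raw))
	for _, v := range raw {
		s, ok := v.(string)
		if !ok {
			return nil, errNotStringArray
		}
		out = append(out, s)
	}
	return out, nil
}

func main() {
	fmt.Println(parseJSONArgs(`["echo", "hello world"]`))
	fmt.Println(parseJSONArgs(`["echo", 42]`)) // rejected: 42 is not a string
}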


@@ -1,4 +1,4 @@
// Package parser implements a parser and parse tree dumper for Dockerfiles.
// This package implements a parser and parse tree dumper for Dockerfiles.
package parser
import (
@@ -33,10 +33,10 @@ type Node struct {
}
var (
dispatch map[string]func(string) (*Node, map[string]bool, error)
tokenWhitespace = regexp.MustCompile(`[\t\v\f\r ]+`)
tokenLineContinuation = regexp.MustCompile(`\\[ \t]*$`)
tokenComment = regexp.MustCompile(`^#.*$`)
dispatch map[string]func(string) (*Node, map[string]bool, error)
TOKEN_WHITESPACE = regexp.MustCompile(`[\t\v\f\r ]+`)
TOKEN_LINE_CONTINUATION = regexp.MustCompile(`\\[ \t]*$`)
TOKEN_COMMENT = regexp.MustCompile(`^#.*$`)
)
func init() {
@@ -70,8 +70,8 @@ func parseLine(line string) (string, *Node, error) {
return "", nil, nil
}
if tokenLineContinuation.MatchString(line) {
line = tokenLineContinuation.ReplaceAllString(line, "")
if TOKEN_LINE_CONTINUATION.MatchString(line) {
line = TOKEN_LINE_CONTINUATION.ReplaceAllString(line, "")
return line, nil, nil
}
@@ -96,8 +96,8 @@ func parseLine(line string) (string, *Node, error) {
return "", node, nil
}
// Parse is the main parse routine.
// It handles an io.ReadWriteCloser and returns the root of the AST.
// The main parse routine. Handles an io.ReadWriteCloser and returns the root
// of the AST.
func Parse(rwc io.Reader) (*Node, error) {
root := &Node{}
scanner := bufio.NewScanner(rwc)


@@ -19,7 +19,7 @@
# -e GPG_PASSPHRASE=gloubiboulga \
# docker hack/release.sh
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# Note: Apparmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#


@@ -7,8 +7,8 @@ import (
"unicode"
)
// Dump dumps the AST defined by `node` as a list of sexps.
// Returns a string suitable for printing.
// dumps the AST defined by `node` as a list of sexps. Returns a string
// suitable for printing.
func (node *Node) Dump() string {
str := ""
str += node.Value
@@ -59,7 +59,7 @@ func splitCommand(line string) (string, []string, string, error) {
var flags []string
// Make sure we get the same results irrespective of leading/trailing spaces
cmdline := tokenWhitespace.Split(strings.TrimSpace(line), 2)
cmdline := TOKEN_WHITESPACE.Split(strings.TrimSpace(line), 2)
cmd := strings.ToLower(cmdline[0])
if len(cmdline) == 2 {
@@ -77,8 +77,8 @@ func splitCommand(line string) (string, []string, string, error) {
// this function.
func stripComments(line string) string {
// string is already trimmed at this point
if tokenComment.MatchString(line) {
return tokenComment.ReplaceAllString(line, "")
if TOKEN_COMMENT.MatchString(line) {
return TOKEN_COMMENT.ReplaceAllString(line, "")
}
return line
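splitCommand and stripComments rely on the whitespace and comment regexps whose names change case elsewhere in this comparison. A short sketch of how those two patterns are typically applied to a trimmed Dockerfile line:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	tokenWhitespace = regexp.MustCompile(`[\t\v\f\r ]+`)
	tokenComment    = regexp.MustCompile(`^#.*$`)
)

func main() {
	for _, line := range []string{
		"# this whole line is a comment",
		"RUN   apt-get update",
	} {
		line = strings.TrimSpace(line)
		if tokenComment.MatchString(line) {
			// Comment lines are dropped before the command is split out.
			continue
		}
		parts := tokenWhitespace.Split(line, 2)
		fmt.Printf("cmd=%q rest=%q\n", strings.ToLower(parts[0]), parts[1])
	}
}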


@@ -18,8 +18,6 @@ type shellWord struct {
pos int
}
// ProcessWord will use the 'env' list of environment variables,
// and replace any env var references in 'word'.
func ProcessWord(word string, env []string) (string, error) {
sw := &shellWord{
word: word,


@@ -1,19 +1,10 @@
package builder
import (
"regexp"
"strings"
)
const acceptableRemoteMIME = `(?:application/(?:(?:x\-)?tar|octet\-stream|((?:x\-)?(?:gzip|bzip2?|xz)))|(?:text/plain))`
var mimeRe = regexp.MustCompile(acceptableRemoteMIME)
func selectAcceptableMIME(ct string) string {
return mimeRe.FindString(ct)
}
func handleJSONArgs(args []string, attributes map[string]bool) []string {
func handleJsonArgs(args []string, attributes map[string]bool) []string {
if len(args) == 0 {
return []string{}
}


@@ -1,41 +0,0 @@
package builder
import (
"fmt"
"testing"
)
func TestSelectAcceptableMIME(t *testing.T) {
validMimeStrings := []string{
"application/x-bzip2",
"application/bzip2",
"application/gzip",
"application/x-gzip",
"application/x-xz",
"application/xz",
"application/tar",
"application/x-tar",
"application/octet-stream",
"text/plain",
}
invalidMimeStrings := []string{
"",
"application/octet",
"application/json",
}
for _, m := range invalidMimeStrings {
if len(selectAcceptableMIME(m)) > 0 {
err := fmt.Errorf("Should not have accepted %q", m)
t.Fatal(err)
}
}
for _, m := range validMimeStrings {
if str := selectAcceptableMIME(m); str == "" {
err := fmt.Errorf("Should have accepted %q", m)
t.Fatal(err)
}
}
}


@@ -1,200 +0,0 @@
package cli
import (
"errors"
"fmt"
"io"
"os"
"reflect"
"strings"
flag "github.com/docker/docker/pkg/mflag"
)
// Cli represents a command line interface.
type Cli struct {
Stderr io.Writer
handlers []Handler
Usage func()
}
// Handler holds the different commands Cli will call
// It should have methods with names starting with `Cmd` like:
// func (h myHandler) CmdFoo(args ...string) error
type Handler interface{}
// Initializer can be optionally implemented by a Handler to
// initialize before each call to one of its commands.
type Initializer interface {
Initialize() error
}
// New instantiates a ready-to-use Cli.
func New(handlers ...Handler) *Cli {
// make the generic Cli object the first cli handler
// in order to handle `docker help` appropriately
cli := new(Cli)
cli.handlers = append([]Handler{cli}, handlers...)
return cli
}
// initErr is an error returned upon initialization of a handler implementing Initializer.
type initErr struct{ error }
func (err initErr) Error() string {
return err.Error()
}
func (cli *Cli) command(args ...string) (func(...string) error, error) {
for _, c := range cli.handlers {
if c == nil {
continue
}
camelArgs := make([]string, len(args))
for i, s := range args {
if len(s) == 0 {
return nil, errors.New("empty command")
}
camelArgs[i] = strings.ToUpper(s[:1]) + strings.ToLower(s[1:])
}
methodName := "Cmd" + strings.Join(camelArgs, "")
method := reflect.ValueOf(c).MethodByName(methodName)
if method.IsValid() {
if c, ok := c.(Initializer); ok {
if err := c.Initialize(); err != nil {
return nil, initErr{err}
}
}
return method.Interface().(func(...string) error), nil
}
}
return nil, errors.New("command not found")
}
// Run executes the specified command.
func (cli *Cli) Run(args ...string) error {
if len(args) > 1 {
command, err := cli.command(args[:2]...)
switch err := err.(type) {
case nil:
return command(args[2:]...)
case initErr:
return err.error
}
}
if len(args) > 0 {
command, err := cli.command(args[0])
switch err := err.(type) {
case nil:
return command(args[1:]...)
case initErr:
return err.error
}
cli.noSuchCommand(args[0])
}
return cli.CmdHelp()
}
func (cli *Cli) noSuchCommand(command string) {
if cli.Stderr == nil {
cli.Stderr = os.Stderr
}
fmt.Fprintf(cli.Stderr, "docker: '%s' is not a docker command.\nSee 'docker --help'.\n", command)
os.Exit(1)
}
// CmdHelp displays information on a Docker command.
//
// If more than one command is specified, information is only shown for the first command.
//
// Usage: docker help COMMAND or docker COMMAND --help
func (cli *Cli) CmdHelp(args ...string) error {
if len(args) > 1 {
command, err := cli.command(args[:2]...)
switch err := err.(type) {
case nil:
command("--help")
return nil
case initErr:
return err.error
}
}
if len(args) > 0 {
command, err := cli.command(args[0])
switch err := err.(type) {
case nil:
command("--help")
return nil
case initErr:
return err.error
}
cli.noSuchCommand(args[0])
}
if cli.Usage == nil {
flag.Usage()
} else {
cli.Usage()
}
return nil
}
// Subcmd is a subcommand of the main "docker" command.
// A subcommand represents an action that can be performed
// from the Docker command line client.
//
// To see all available subcommands, run "docker --help".
func Subcmd(name string, synopses []string, description string, exitOnError bool) *flag.FlagSet {
var errorHandling flag.ErrorHandling
if exitOnError {
errorHandling = flag.ExitOnError
} else {
errorHandling = flag.ContinueOnError
}
flags := flag.NewFlagSet(name, errorHandling)
flags.Usage = func() {
flags.ShortUsage()
flags.PrintDefaults()
}
flags.ShortUsage = func() {
options := ""
if flags.FlagCountUndeprecated() > 0 {
options = " [OPTIONS]"
}
if len(synopses) == 0 {
synopses = []string{""}
}
// Allow for multiple command usage synopses.
for i, synopsis := range synopses {
lead := "\t"
if i == 0 {
// First line needs the word 'Usage'.
lead = "Usage:\t"
}
if synopsis != "" {
synopsis = " " + synopsis
}
fmt.Fprintf(flags.Out(), "\n%sdocker %s%s%s", lead, name, options, synopsis)
}
fmt.Fprintf(flags.Out(), "\n\n%s\n", description)
}
return flags
}
// An StatusError reports an unsuccessful exit by a command.
type StatusError struct {
Status string
StatusCode int
}
func (e StatusError) Error() string {
return fmt.Sprintf("Status: %s, Code: %d", e.Status, e.StatusCode)
}
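The cli package shown above resolves subcommands by reflection: any method on a registered Handler named Cmd<Name> becomes runnable as <name>. A usage sketch under the assumption that the package's import path is github.com/docker/docker/cli; VersionHandler and CmdVersion are hypothetical, not part of the real client:

package main

import (
	"fmt"

	"github.com/docker/docker/cli" // the package shown in this diff
)

// VersionHandler is a hypothetical Handler: the method name CmdVersion
// is what makes "version" resolvable by Cli.command via reflection.
type VersionHandler struct{}

func (VersionHandler) CmdVersion(args ...string) error {
	fmt.Println("toy client version 0.0.1")
	return nil
}

func main() {
	c := cli.New(VersionHandler{})
	if err := c.Run("version"); err != nil {
		fmt.Println(err)
	}
}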


@@ -1,12 +0,0 @@
package cli
import flag "github.com/docker/docker/pkg/mflag"
// ClientFlags represents flags for the docker client.
type ClientFlags struct {
FlagSet *flag.FlagSet
Common *CommonFlags
PostParse func()
ConfigDir string
}


@@ -1,20 +0,0 @@
package cli
import (
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/tlsconfig"
)
// CommonFlags represents flags that are common to both the client and the daemon.
type CommonFlags struct {
FlagSet *flag.FlagSet
PostParse func()
Debug bool
Hosts []string
LogLevel string
TLS bool
TLSVerify bool
TLSOptions *tlsconfig.Options
TrustKey string
}


@@ -3,6 +3,7 @@ package cliconfig
import (
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"os"
@@ -10,41 +11,24 @@ import (
"strings"
"github.com/docker/docker/pkg/homedir"
"github.com/docker/docker/pkg/system"
)
const (
// ConfigFile is the name of config file
ConfigFileName = "config.json"
oldConfigfile = ".dockercfg"
// Where we store the config file
CONFIGFILE = "config.json"
OLD_CONFIGFILE = ".dockercfg"
// This constant is only used for really old config files when the
// URL wasn't saved as part of the config file and it was just
// assumed to be this value.
defaultIndexserver = "https://index.docker.io/v1/"
DEFAULT_INDEXSERVER = "https://index.docker.io/v1/"
)
var (
configDir = os.Getenv("DOCKER_CONFIG")
ErrConfigFileMissing = errors.New("The Auth config file is missing")
)
func init() {
if configDir == "" {
configDir = filepath.Join(homedir.Get(), ".docker")
}
}
// ConfigDir returns the directory the configuration file is stored in
func ConfigDir() string {
return configDir
}
// SetConfigDir sets the directory the configuration file is stored in
func SetConfigDir(dir string) {
configDir = dir
}
// AuthConfig contains authorization information for connecting to a Registry
// Registry Auth Info
type AuthConfig struct {
Username string `json:"username,omitempty"`
Password string `json:"password,omitempty"`
@@ -53,34 +37,31 @@ type AuthConfig struct {
ServerAddress string `json:"serveraddress,omitempty"`
}
// ConfigFile ~/.docker/config.json file info
// ~/.docker/config.json file info
type ConfigFile struct {
AuthConfigs map[string]AuthConfig `json:"auths"`
HTTPHeaders map[string]string `json:"HttpHeaders,omitempty"`
PsFormat string `json:"psFormat,omitempty"`
HttpHeaders map[string]string `json:"HttpHeaders,omitempty"`
filename string // Note: not serialized - for internal use only
}
// NewConfigFile initializes an empty configuration file for the given filename 'fn'
func NewConfigFile(fn string) *ConfigFile {
return &ConfigFile{
AuthConfigs: make(map[string]AuthConfig),
HTTPHeaders: make(map[string]string),
HttpHeaders: make(map[string]string),
filename: fn,
}
}
// Load reads the configuration files in the given directory, and sets up
// the auth config information and return values.
// load up the auth config information and return values
// FIXME: use the internal golang config parser
func Load(configDir string) (*ConfigFile, error) {
if configDir == "" {
configDir = ConfigDir()
configDir = filepath.Join(homedir.Get(), ".docker")
}
configFile := ConfigFile{
AuthConfigs: make(map[string]AuthConfig),
filename: filepath.Join(configDir, ConfigFileName),
filename: filepath.Join(configDir, CONFIGFILE),
}
// Try happy path first - latest config file
@@ -113,7 +94,8 @@ func Load(configDir string) (*ConfigFile, error) {
}
// Can't find latest config file so check for the old one
confFile := filepath.Join(homedir.Get(), oldConfigfile)
confFile := filepath.Join(homedir.Get(), OLD_CONFIGFILE)
if _, err := os.Stat(confFile); err != nil {
return &configFile, nil //missing file is not an error
}
@@ -142,8 +124,8 @@ func Load(configDir string) (*ConfigFile, error) {
return &configFile, fmt.Errorf("Invalid Auth config file")
}
authConfig.Email = origEmail[1]
authConfig.ServerAddress = defaultIndexserver
configFile.AuthConfigs[defaultIndexserver] = authConfig
authConfig.ServerAddress = DEFAULT_INDEXSERVER
configFile.AuthConfigs[DEFAULT_INDEXSERVER] = authConfig
} else {
for k, authConfig := range configFile.AuthConfigs {
authConfig.Username, authConfig.Password, err = DecodeAuth(authConfig.Auth)
@@ -158,13 +140,12 @@ func Load(configDir string) (*ConfigFile, error) {
return &configFile, nil
}
// Save encodes and writes out all the authorization information
func (configFile *ConfigFile) Save() error {
// Encode sensitive data into a new/temp struct
tmpAuthConfigs := make(map[string]AuthConfig, len(configFile.AuthConfigs))
for k, authConfig := range configFile.AuthConfigs {
authCopy := authConfig
// encode and save the authstring, while blanking out the original fields
authCopy.Auth = EncodeAuth(&authCopy)
authCopy.Username = ""
authCopy.Password = ""
@@ -181,7 +162,7 @@ func (configFile *ConfigFile) Save() error {
return err
}
if err := system.MkdirAll(filepath.Dir(configFile.filename), 0700); err != nil {
if err := os.MkdirAll(filepath.Dir(configFile.filename), 0700); err != nil {
return err
}
@@ -192,12 +173,11 @@ func (configFile *ConfigFile) Save() error {
return nil
}
// Filename returns the name of the configuration file
func (configFile *ConfigFile) Filename() string {
return configFile.filename
func (config *ConfigFile) Filename() string {
return config.filename
}
// EncodeAuth creates a base64 encoded string containing authorization information
// create a base64 encoded auth string to store in config
func EncodeAuth(authConfig *AuthConfig) string {
authStr := authConfig.Username + ":" + authConfig.Password
msg := []byte(authStr)
@@ -206,7 +186,7 @@ func EncodeAuth(authConfig *AuthConfig) string {
return string(encoded)
}
// DecodeAuth decodes a base64 encoded string and returns username and password
// decode the auth string
func DecodeAuth(authStr string) (string, string, error) {
decLen := base64.StdEncoding.DecodedLen(len(authStr))
decoded := make([]byte, decLen)
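EncodeAuth and DecodeAuth round-trip the "username:password" pair as base64 in ~/.docker/config.json (Save additionally blanks the plain-text fields before writing). A standalone sketch of that round trip with encoding/base64; the helper names here are illustrative, not the package's exported API:

package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

func encodeAuth(username, password string) string {
	return base64.StdEncoding.EncodeToString([]byte(username + ":" + password))
}

func decodeAuth(authStr string) (string, string, error) {
	decoded, err := base64.StdEncoding.DecodeString(authStr)
	if err != nil {
		return "", "", err
	}
	parts := strings.SplitN(string(decoded), ":", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("invalid auth configuration")
	}
	return parts[0], parts[1], nil
}

func main() {
	auth := encodeAuth("joejoe", "hello")
	fmt.Println(auth) // am9lam9lOmhlbGxv, the same value used in the config tests below
	fmt.Println(decodeAuth(auth))
}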


@@ -25,7 +25,7 @@ func TestMissingFile(t *testing.T) {
t.Fatalf("Failed to save: %q", err)
}
buf, err := ioutil.ReadFile(filepath.Join(tmpHome, ConfigFileName))
buf, err := ioutil.ReadFile(filepath.Join(tmpHome, CONFIGFILE))
if !strings.Contains(string(buf), `"auths":`) {
t.Fatalf("Should have save in new form: %s", string(buf))
}
@@ -47,7 +47,7 @@ func TestSaveFileToDirs(t *testing.T) {
t.Fatalf("Failed to save: %q", err)
}
buf, err := ioutil.ReadFile(filepath.Join(tmpHome, ConfigFileName))
buf, err := ioutil.ReadFile(filepath.Join(tmpHome, CONFIGFILE))
if !strings.Contains(string(buf), `"auths":`) {
t.Fatalf("Should have save in new form: %s", string(buf))
}
@@ -55,7 +55,7 @@ func TestSaveFileToDirs(t *testing.T) {
func TestEmptyFile(t *testing.T) {
tmpHome, _ := ioutil.TempDir("", "config-test")
fn := filepath.Join(tmpHome, ConfigFileName)
fn := filepath.Join(tmpHome, CONFIGFILE)
ioutil.WriteFile(fn, []byte(""), 0600)
_, err := Load(tmpHome)
@@ -66,7 +66,7 @@ func TestEmptyFile(t *testing.T) {
func TestEmptyJson(t *testing.T) {
tmpHome, _ := ioutil.TempDir("", "config-test")
fn := filepath.Join(tmpHome, ConfigFileName)
fn := filepath.Join(tmpHome, CONFIGFILE)
ioutil.WriteFile(fn, []byte("{}"), 0600)
config, err := Load(tmpHome)
@@ -80,7 +80,7 @@ func TestEmptyJson(t *testing.T) {
t.Fatalf("Failed to save: %q", err)
}
buf, err := ioutil.ReadFile(filepath.Join(tmpHome, ConfigFileName))
buf, err := ioutil.ReadFile(filepath.Join(tmpHome, CONFIGFILE))
if !strings.Contains(string(buf), `"auths":`) {
t.Fatalf("Should have save in new form: %s", string(buf))
}
@@ -100,7 +100,7 @@ func TestOldJson(t *testing.T) {
defer func() { os.Setenv(homeKey, homeVal) }()
os.Setenv(homeKey, tmpHome)
fn := filepath.Join(tmpHome, oldConfigfile)
fn := filepath.Join(tmpHome, OLD_CONFIGFILE)
js := `{"https://index.docker.io/v1/":{"auth":"am9lam9lOmhlbGxv","email":"user@example.com"}}`
ioutil.WriteFile(fn, []byte(js), 0600)
@@ -120,7 +120,7 @@ func TestOldJson(t *testing.T) {
t.Fatalf("Failed to save: %q", err)
}
buf, err := ioutil.ReadFile(filepath.Join(tmpHome, ConfigFileName))
buf, err := ioutil.ReadFile(filepath.Join(tmpHome, CONFIGFILE))
if !strings.Contains(string(buf), `"auths":`) ||
!strings.Contains(string(buf), "user@example.com") {
t.Fatalf("Should have save in new form: %s", string(buf))
@@ -129,7 +129,7 @@ func TestOldJson(t *testing.T) {
func TestNewJson(t *testing.T) {
tmpHome, _ := ioutil.TempDir("", "config-test")
fn := filepath.Join(tmpHome, ConfigFileName)
fn := filepath.Join(tmpHome, CONFIGFILE)
js := ` { "auths": { "https://index.docker.io/v1/": { "auth": "am9lam9lOmhlbGxv", "email": "user@example.com" } } }`
ioutil.WriteFile(fn, []byte(js), 0600)
@@ -149,40 +149,9 @@ func TestNewJson(t *testing.T) {
t.Fatalf("Failed to save: %q", err)
}
buf, err := ioutil.ReadFile(filepath.Join(tmpHome, ConfigFileName))
buf, err := ioutil.ReadFile(filepath.Join(tmpHome, CONFIGFILE))
if !strings.Contains(string(buf), `"auths":`) ||
!strings.Contains(string(buf), "user@example.com") {
t.Fatalf("Should have save in new form: %s", string(buf))
}
}
func TestJsonWithPsFormat(t *testing.T) {
tmpHome, _ := ioutil.TempDir("", "config-test")
fn := filepath.Join(tmpHome, ConfigFileName)
js := `{
"auths": { "https://index.docker.io/v1/": { "auth": "am9lam9lOmhlbGxv", "email": "user@example.com" } },
"psFormat": "table {{.ID}}\\t{{.Label \"com.docker.label.cpu\"}}"
}`
ioutil.WriteFile(fn, []byte(js), 0600)
config, err := Load(tmpHome)
if err != nil {
t.Fatalf("Failed loading on empty json file: %q", err)
}
if config.PsFormat != `table {{.ID}}\t{{.Label "com.docker.label.cpu"}}` {
t.Fatalf("Unknown ps format: %s\n", config.PsFormat)
}
// Now save it and make sure it shows up in new form
err = config.Save()
if err != nil {
t.Fatalf("Failed to save: %q", err)
}
buf, err := ioutil.ReadFile(filepath.Join(tmpHome, ConfigFileName))
if !strings.Contains(string(buf), `"psFormat":`) ||
!strings.Contains(string(buf), "{{.ID}}") {
t.Fatalf("Should have save in new form: %s", string(buf))
}
}


@@ -1,151 +0,0 @@
@{DOCKER_GRAPH_PATH}=/var/lib/docker
profile /usr/bin/docker (attach_disconnected, complain) {
# Prevent following links to these files during container setup.
deny /etc/** mkl,
deny /dev/** kl,
deny /sys/** mkl,
deny /proc/** mkl,
mount -> @{DOCKER_GRAPH_PATH}/**,
mount -> /,
mount -> /proc/**,
mount -> /sys/**,
mount -> /run/docker/netns/**,
umount,
pivot_root,
signal (receive) peer=@{profile_name},
signal (receive) peer=unconfined,
signal (send),
ipc rw,
network,
capability,
owner /** rw,
/var/lib/docker/** rwl,
# For non-root client use:
/dev/urandom r,
/run/docker.sock rw,
/proc/** r,
/sys/kernel/mm/hugepages/ r,
/etc/localtime r,
ptrace peer=@{profile_name},
ptrace (read) peer=docker-default,
deny ptrace (trace) peer=docker-default,
deny ptrace peer=/usr/bin/docker///bin/ps,
/usr/bin/docker pix,
/sbin/xtables-multi rCx,
/sbin/iptables rCx,
/sbin/modprobe rCx,
/sbin/auplink rCx,
/bin/kmod rCx,
/usr/bin/xz rCx,
/bin/ps rCx,
/bin/cat rCx,
/sbin/zfs rCx,
# Transitions
change_profile -> docker-*,
change_profile -> unconfined,
profile /bin/cat (complain) {
/etc/ld.so.cache r,
/lib/** r,
/dev/null rw,
/proc r,
/bin/cat mr,
# For reading in 'docker stats':
/proc/[0-9]*/net/dev r,
}
profile /bin/ps (complain) {
/etc/ld.so.cache r,
/etc/localtime r,
/etc/passwd r,
/etc/nsswitch.conf r,
/lib/** r,
/proc/[0-9]*/** r,
/dev/null rw,
/bin/ps mr,
# We don't need ptrace so we'll deny and ignore the error.
deny ptrace (read, trace),
# Quiet dac_override denials
deny capability dac_override,
deny capability dac_read_search,
deny capability sys_ptrace,
/dev/tty r,
/proc/stat r,
/proc/cpuinfo r,
/proc/meminfo r,
/proc/uptime r,
/sys/devices/system/cpu/online r,
/proc/sys/kernel/pid_max r,
/proc/ r,
/proc/tty/drivers r,
}
profile /sbin/iptables (complain) {
signal (receive) peer=/usr/bin/docker,
capability net_admin,
}
profile /sbin/auplink flags=(attach_disconnected, complain) {
signal (receive) peer=/usr/bin/docker,
capability sys_admin,
capability dac_override,
@{DOCKER_GRAPH_PATH}/aufs/** rw,
@{DOCKER_GRAPH_PATH}/tmp/** rw,
# For user namespaces:
@{DOCKER_GRAPH_PATH}/[0-9]*.[0-9]*/** rw,
/sys/fs/aufs/** r,
/lib/** r,
/apparmor/.null r,
/dev/null rw,
/etc/ld.so.cache r,
/sbin/auplink rm,
/proc/fs/aufs/** rw,
/proc/[0-9]*/mounts rw,
}
profile /sbin/modprobe /bin/kmod (complain) {
signal (receive) peer=/usr/bin/docker,
capability sys_module,
/etc/ld.so.cache r,
/lib/** r,
/dev/null rw,
/apparmor/.null rw,
/sbin/modprobe rm,
/bin/kmod rm,
/proc/cmdline r,
/sys/module/** r,
/etc/modprobe.d{/,/**} r,
}
# xz works via pipes, so we do not need access to the filesystem.
profile /usr/bin/xz (complain) {
signal (receive) peer=/usr/bin/docker,
/etc/ld.so.cache r,
/lib/** r,
/usr/bin/xz rm,
deny /proc/** rw,
deny /sys/** rw,
}
profile /sbin/xtables-multi (attach_disconnected, complain) {
/etc/ld.so.cache r,
/lib/** r,
/sbin/xtables-multi rm,
/apparmor/.null w,
/dev/null rw,
capability net_raw,
capability net_admin,
network raw,
}
profile /sbin/zfs (attach_disconnected, complain) {
file,
capability,
}
}


@@ -41,8 +41,6 @@ for version in "${versions[@]}"; do
echo >> "$version/Dockerfile"
extraBuildTags=
# this list is sorted alphabetically; please keep it that way
packages=(
bash-completion # for bash-completion debhelper integration
@@ -56,23 +54,6 @@ for version in "${versions[@]}"; do
libdevmapper-dev # for "libdevmapper.h"
libsqlite3-dev # for "sqlite3.h"
)
if [ "$suite" = 'precise' ]; then
# precise has a few package issues
# - dh-systemd doesn't exist at all
packages=( "${packages[@]/dh-systemd}" )
# - libdevmapper-dev is missing critical structs (too old)
packages=( "${packages[@]/libdevmapper-dev}" )
extraBuildTags+=' exclude_graphdriver_devicemapper'
# - btrfs-tools is missing "ioctl.h" (too old), so it's useless
# (since kernels on precise are old too, just skip btrfs entirely)
packages=( "${packages[@]/btrfs-tools}" )
extraBuildTags+=' exclude_graphdriver_btrfs'
fi
echo "RUN apt-get update && apt-get install -y ${packages[*]} --no-install-recommends && rm -rf /var/lib/apt/lists/*" >> "$version/Dockerfile"
echo >> "$version/Dockerfile"
@@ -84,5 +65,5 @@ for version in "${versions[@]}"; do
echo >> "$version/Dockerfile"
echo 'ENV AUTO_GOPATH 1' >> "$version/Dockerfile"
awk '$1 == "ENV" && $2 == "DOCKER_BUILDTAGS" { print $0 "'"$extraBuildTags"'"; exit }' ../../../Dockerfile >> "$version/Dockerfile"
awk '$1 == "ENV" && $2 == "DOCKER_BUILDTAGS" { print; exit }' ../../../Dockerfile >> "$version/Dockerfile"
done


@@ -2,7 +2,7 @@
# THIS FILE IS AUTOGENERATED; SEE "contrib/builder/deb/generate.sh"!
#
FROM ubuntu:wily
FROM ubuntu-debootstrap:trusty
RUN apt-get update && apt-get install -y bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-systemd git libapparmor-dev libdevmapper-dev libsqlite3-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*


@@ -2,7 +2,7 @@
# THIS FILE IS AUTOGENERATED; SEE "contrib/builder/deb/generate.sh"!
#
FROM ubuntu:trusty
FROM ubuntu-debootstrap:utopic
RUN apt-get update && apt-get install -y bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-systemd git libapparmor-dev libdevmapper-dev libsqlite3-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*


@@ -2,7 +2,7 @@
# THIS FILE IS AUTOGENERATED; SEE "contrib/builder/deb/generate.sh"!
#
FROM ubuntu:vivid
FROM ubuntu-debootstrap:vivid
RUN apt-get update && apt-get install -y bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-systemd git libapparmor-dev libdevmapper-dev libsqlite3-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*


@@ -1,14 +0,0 @@
#
# THIS FILE IS AUTOGENERATED; SEE "contrib/builder/deb/generate.sh"!
#
FROM ubuntu:precise
RUN apt-get update && apt-get install -y bash-completion build-essential curl ca-certificates debhelper git libapparmor-dev libsqlite3-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.4.2
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
ENV AUTO_GOPATH 1
ENV DOCKER_BUILDTAGS apparmor selinux exclude_graphdriver_devicemapper exclude_graphdriver_btrfs


@@ -2,7 +2,7 @@
# THIS FILE IS AUTOGENERATED; SEE "contrib/builder/rpm/generate.sh"!
#
FROM oraclelinux:6
FROM centos:6
RUN yum groupinstall -y "Development Tools"
RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel sqlite-devel tar
@@ -12,4 +12,4 @@ RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64
ENV PATH $PATH:/usr/local/go/bin
ENV AUTO_GOPATH 1
ENV DOCKER_BUILDTAGS selinux
ENV DOCKER_BUILDTAGS selinux exclude_graphdriver_btrfs


@@ -5,7 +5,6 @@
FROM centos:7
RUN yum groupinstall -y "Development Tools"
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel sqlite-devel tar
ENV GO_VERSION 1.4.2

Some files were not shown because too many files have changed in this diff.