Compare commits

...

262 Commits

Author SHA1 Message Date
Jessica Frazelle
7ddecf74be Bump version to v1.7.0
Signed-off-by: Jessica Frazelle <princess@docker.com>
2015-06-04 12:14:27 -07:00
David Calavera
8e7ea1a8fd Migrate data from old vfs paths to new local volumes path.
Signed-off-by: David Calavera <david.calavera@gmail.com>
(cherry picked from commit 16a5590c5b)
2015-06-04 12:14:21 -07:00
Alessandro Boch
c11128520c Fix for #13720
Signed-off-by: Alessandro Boch <aboch@docker.com>
(cherry picked from commit ea180a73bc)
2015-06-04 12:12:50 -07:00
Tianon Gravi
25fc200dcf Swap build-* to use UTC instead of local time
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
(cherry picked from commit aa54a93f74)
2015-06-04 09:03:54 -07:00
Tianon Gravi
d17f27a13f Make "DEST" a make.sh construct instead of ad-hoc
Using "DEST" for our build artifacts inside individual bundle scripts was already a well-established convention, but this makes it official by having `make.sh` itself set the variable and create the directory, also handling Cygwin oddities in a single central place (instead of letting them spread outward from `hack/make/binary`, as was definitely on their roadmap whether they knew it or not; sneaky oddities).

Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
(cherry picked from commit ac3388367b)
2015-06-04 09:03:54 -07:00
Madhu Venugopal
f2f2c492e1 Using container NetworkDisabled to fix #13725
container.config.NetworkDisabled is set both for the daemon's
DisableNetwork and for the --networking=false case. Hence use
this flag instead to fix #13725.

There is an existing integration-test to catch this issue,
but it is working for the wrong reasons.

Signed-off-by: Madhu Venugopal <madhu@docker.com>
(cherry picked from commit 83208a531d)
2015-06-04 09:03:54 -07:00
Jessica Frazelle
18af7fdbba fix version struct on old versions
Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit 229b599259)
2015-06-03 17:33:10 -07:00
Antonio Murdaca
339f0a128a SizeRW & SizeRootFs omitted if empty in /container/json call
Signed-off-by: Antonio Murdaca <runcom@linux.com>
(cherry picked from commit 6945ac2d02)
2015-06-03 17:33:10 -07:00
Jessica Frazelle
5a0fa9545a Update urls from .com to .org.
I added 301 redirects from dockerproject.com to dockerproject.org, but may as
well make sure everything is updated anyway.

Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit 7943bce894)
2015-06-03 14:23:45 -07:00
Alexander Morozov
930f691919 Support CloseNotifier for events
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit 9e7fc245a7)
2015-06-03 13:47:20 -07:00
Antonio Murdaca
bf371ab2a5 Do not omit empty json field in /containers/json api response
Signed-off-by: Antonio Murdaca <runcom@linux.com>
(cherry picked from commit 725f34151c)
2015-06-03 12:43:08 -07:00
Michael Crosby
d283999b1c Fix nat integration tests
This removes complexity from the current implementation and makes the test
correct, asserting the right things.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 4f42097883)
2015-06-02 18:54:54 -07:00
Arnaud Porterie
2786c4d0e2 Update vendoring
Update libnetwork to 005bc475ee49a36ef2ad9c112d1b5ccdaba277d4
Synchronize changes to distribution (was missing _test.go file)

Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
(cherry picked from commit 1664d6cd66)
2015-06-02 17:42:41 -07:00
Antonio Murdaca
2939617c8b Expose old config field for api < 1.19
Signed-off-by: Antonio Murdaca <me@runcom.ninja>
(cherry picked from commit 6deaa58ba5)
2015-06-02 16:02:13 -07:00
David Calavera
f5e3c68c93 Update libnetwork to 4ded6fe3641b71863cc5985652930ce40efc3af4
Signed-off-by: David Calavera <david.calavera@gmail.com>
(cherry picked from commit ad244668c3)
2015-06-02 12:06:40 -07:00
David Calavera
c96b03797a Bump libnetwork for 1.7 release.
Signed-off-by: David Calavera <david.calavera@gmail.com>
(cherry picked from commit ad41efd9a2)
2015-06-02 12:06:40 -07:00
Stephen J Day
0f06c54f40 Break down loadManifest function into constituent parts
Signed-off-by: Stephen J Day <stephen.day@docker.com>
(cherry picked from commit 84413be3c9)
2015-06-02 11:56:42 -07:00
Stephen J Day
32fcacdedb Add tests for loadManifest digest verification
Signed-off-by: Stephen J Day <stephen.day@docker.com>
(cherry picked from commit 74528be903)
2015-06-02 11:56:42 -07:00
Stephen J Day
140e36a77e Attempt to retain tagging behavior
Signed-off-by: Stephen J Day <stephen.day@docker.com>
(cherry picked from commit 1e653ab645)
2015-06-02 11:56:42 -07:00
Stephen J Day
4945a51f73 Properly verify manifests and layer digests on pull
To ensure manifest integrity when pulling by digest, this changeset ensures
that not only the remote digest provided by the registry is verified but also
that the digest provided on the command line is checked. If this check
fails, the pull is cancelled with an error. Inspection also showed that
while layers were being verified against their digests, the error was being
treated as a tech-preview image signing verification error. This, in fact, is
not a tech preview and opens up the docker daemon to man-in-the-middle attacks
that can be avoided with the v2 registry protocol.

As a matter of cleanliness, the digest package from the distribution project
has been updated to the latest version, which includes some recent
improvements.

Signed-off-by: Stephen J Day <stephen.day@docker.com>
(cherry picked from commit 06612cc0fe)
2015-06-02 11:56:41 -07:00
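For reference, a minimal sketch of the kind of digest check described above, using only Go's standard library; the payload and the expected digest string here are made-up placeholders, not the real pull code path:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// verifySHA256 checks that payload matches an expected "sha256:<hex>" digest,
// the kind of check the pull path must apply both to the digest returned by
// the registry and to the digest the user asked for on the command line.
func verifySHA256(payload []byte, expected string) error {
	const prefix = "sha256:"
	if !strings.HasPrefix(expected, prefix) {
		return fmt.Errorf("unsupported digest algorithm in %q", expected)
	}
	sum := sha256.Sum256(payload)
	if got := hex.EncodeToString(sum[:]); got != strings.TrimPrefix(expected, prefix) {
		return fmt.Errorf("digest verification failed: got sha256:%s, want %s", got, expected)
	}
	return nil
}

func main() {
	manifest := []byte(`{"name":"library/busybox"}`) // placeholder payload
	if err := verifySHA256(manifest, "sha256:0000"); err != nil {
		fmt.Println(err) // the pull would be cancelled here
	}
}
```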
Jeffrey van Gogh
c881349e5e Upon HTTP 302 redirect do not include "Authorization" header on 'untrusted' registries.
Refactoring in Docker 1.7 changed the behavior to add this header, whereas Docker <= 1.6 wouldn't emit this header on an HTTP 302 redirect.

This closes #13649

Signed-off-by: Jeffrey van Gogh <jvg@google.com>
(cherry picked from commit 65c5105fcc)
2015-06-02 11:56:41 -07:00
Victor Vieux
ecdf1297a3 do not print empty keys in docker info
Signed-off-by: Victor Vieux <victorvieux@gmail.com>
(cherry picked from commit c790aa36ea)
2015-06-02 11:03:06 -07:00
Antonio Murdaca
db3daa7bdd Fix wrong kill signal parsing
Signed-off-by: Antonio Murdaca <antonio.murdaca@gmail.com>
(cherry picked from commit 39eec4c25b)
2015-06-02 09:43:43 -07:00
Richard
c0fc839e2b If no endpoint could be established with the given mirror configuration,
fall back to pulling from the hub as per v1 behavior.

Signed-off-by: Richard Scothern <richard.scothern@gmail.com>
(cherry picked from commit 6e4ff1bb13)
2015-06-01 18:11:10 -07:00
Alexander Morozov
2e29eadb5c Fix race condition in registry/session
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit 9d98c28855)
2015-06-01 14:45:22 -07:00
Tonis Tiigi
ecda5c0a6d Fix breakouts from git root during build
Signed-off-by: Tõnis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 7f7ebeffe8)
2015-06-01 13:06:12 -07:00
Arnaud Porterie
4872f86d00 Add note about overlay not being production ready
Add a paragraph in cli.md mentioning that overlay is not a production
ready graphdriver.

Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
(cherry picked from commit 67cb748e26)
2015-06-01 12:26:57 -07:00
Lei Jitang
6ca78fff63 Add docker stats --no-stream show cpu usage
Signed-off-by: Lei Jitang <leijitang@huawei.com>
(cherry picked from commit 96123a1fd5)

Conflicts:
	daemon/stats.go
2015-06-01 10:09:14 -07:00
David R. Jenni
e9c51c3edc Fix issue #10184.
Merge user-specified devices correctly with default devices.
Otherwise the user-specified devices end up without permissions.

Signed-off-by: David R. Jenni <david.r.jenni@gmail.com>
(cherry picked from commit c913c9921b)
2015-06-01 09:32:50 -07:00
Jessica Frazelle
b38c720f19 fix experimental version and release script
add api version experimental

Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit b372f9f224)

Conflicts:
	docs/sources/reference/api/docker_remote_api_v1.20.md
2015-06-01 09:31:32 -07:00
Jana Radhakrishnan
3bed793ba1 Do not attempt releasing network when not attached to any network
Sometimes container.cleanup() can be called from multiple paths
for the same container during error conditions, from both the monitor
and the regular startup path. So if the container network has already
been released, do not try to release it again.

Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
(cherry picked from commit 6cdf8623d5)
2015-06-01 08:53:49 -07:00
Doug Davis
ab7e7a7338 Carry #11858
Continues 11858 by:
- Making sure the exit code is always zero when we ask for help
- Making sure the exit code isn't zero when we print help on error cases
- Making sure both short and long usage go to the same stream (stdout vs stderr)
- Making sure all docker commands support --help
- Test that all cmds send --help to stdout, exit code 0, show full usage, no blank lines at end
- Test that all cmds (that support it) show short usage on bad arg to stderr, no blank line at end
- Test that all cmds complain about a bad option, no blank line at end
- Test that docker (w/o subcmd) does the same stuff mentioned above properly

Signed-off-by: Doug Davis <dug@us.ibm.com>
(cherry picked from commit 8324d7918b)
2015-06-01 08:53:49 -07:00
Lei Jitang
4fefcde5a6 Ensure all the running containers are killed on daemon shutdown
Signed-off-by: Lei Jitang <leijitang@huawei.com>
(cherry picked from commit bdb77078b5)
2015-05-29 16:50:23 -07:00
Arnaud Porterie
63e3b7433f Add note about overlay not being production ready
Add a paragraph in cli.md mentioning that overlay is not a production
ready graphdriver.

Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
(cherry picked from commit 67cb748e26)
2015-05-29 16:28:00 -07:00
Jessica Frazelle
736c216d58 fix bug with rmi multiple tag
Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit 185f392691)
2015-05-29 15:12:33 -07:00
Antonio Murdaca
85f0e01833 Fix regression in stats API endpoint where stream query param default is true
Signed-off-by: Antonio Murdaca <me@runcom.ninja>
(cherry picked from commit ec97f41465)
2015-05-29 15:06:01 -07:00
Zhang Wei
eb3ed436a4 bug fix: close http response body no longer in use
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
(cherry picked from commit 6c49576a86)
2015-05-29 14:34:25 -07:00
Burke Libbey
d77d7a0056 Use bufio.Reader instead of bufio.Scanner for logger.Copier
When using a scanner, log lines over 64K will crash the Copier with
bufio.ErrTooLong. Subsequently, the ioutils.bufReader will grow without
bound as the logs are no longer being flushed to disk.

Signed-off-by: Burke Libbey <burke.libbey@shopify.com>
(cherry picked from commit f779cfc5d8)
2015-05-29 13:39:21 -07:00
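A standalone illustration of the failure mode (not the Copier itself): `bufio.Scanner` gives up with `bufio.ErrTooLong` once a line exceeds its 64K token limit, while `bufio.Reader` keeps returning the data:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// A single "log line" larger than bufio.Scanner's default 64K token limit.
	line := strings.Repeat("x", 100*1024) + "\n"

	// Scanner: stops with bufio.ErrTooLong, which is what crashed the Copier.
	sc := bufio.NewScanner(strings.NewReader(line))
	for sc.Scan() {
	}
	fmt.Println("scanner error:", sc.Err()) // bufio.Scanner: token too long

	// Reader: ReadString keeps returning data regardless of line length.
	r := bufio.NewReader(strings.NewReader(line))
	s, err := r.ReadString('\n')
	fmt.Println("reader got bytes:", len(s), "err:", err) // 102401, <nil>
}
```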
Antonio Murdaca
a0aad0d4de Add syslog-address log-opt
Signed-off-by: Antonio Murdaca <me@runcom.ninja>
(cherry picked from commit e8c88d2533)
2015-05-29 10:28:28 -07:00
Mary Anthony
baf9ea5ce4 Fixes to the 1.19 version
updating with changes to this instant
Signed-off-by: Mary Anthony <mary@docker.com>

(cherry picked from commit 30901609a8)
2015-05-29 10:28:28 -07:00
David Calavera
a6fe70c696 Mount bind volumes coming from the old volumes configuration.
Signed-off-by: David Calavera <david.calavera@gmail.com>
(cherry picked from commit 53d9609de4)
2015-05-29 09:52:36 -07:00
Harald Albers
eee959a9b9 Update bash completion for 1.7.0
Signed-off-by: Harald Albers <github@albersweb.de>
(cherry picked from commit b2832dffe5)
2015-05-29 09:52:36 -07:00
Alexander Morozov
d000ba05fd Treat systemd listeners as all other
Fix #13549

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit 6f9fa64645)
2015-05-28 14:16:28 -07:00
Antonio Murdaca
d8eff999e0 Fix race in httpsRequestModifier.ModifyRequest when writing tlsConfig
Signed-off-by: Antonio Murdaca <me@runcom.ninja>
(cherry picked from commit a27395e6df)
2015-05-28 11:33:34 -07:00
Jason Shepherd
fb124bcad0 adding nicer help when missing arguments (#11858)
Signed-off-by: Jason Shepherd <jason@jasonshepherd.net>
(cherry picked from commit 48231d623f)
2015-05-28 11:33:34 -07:00
Lei Jitang
9827107dcf Fix automatically publish ports without --publish-all
Signed-off-by: Lei Jitang <leijitang@huawei.com>
(cherry picked from commit 9a09664b51)
2015-05-28 10:03:29 -07:00
Jessica Frazelle
e57d649057 fix release script
Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit 9f619282db)
2015-05-28 10:03:29 -07:00
Tianon Gravi
d6ff6e2c6f Add fedora:22 to our rpm targets
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
(cherry picked from commit 96903c837f)
2015-05-28 10:03:29 -07:00
Jessica Frazelle
cc2944c7af Merge remote-tracking branch 'origin/master' into bump_v1.7.0
* origin/master: (999 commits)
  Review feedback: Match verbiage with other output; Remove dead code and clearer flow
  Vendoring in libnetwork 2da2dc055de5a474c8540871ad88a48213b0994f
  Restore the stripped registry version number
  Use SELinux labels for volumes
  apply selinux labels volume patch on volumes refactor
  Modify volume mounts SELinux labels on the fly based on :Z or :z
  Remove unused code
  Remove redundant set header
  Return err if we got err on parseForm
  script cleaned up
  Fix unregister stats on when rm running container
  Fix container unmount networkMounts
  Windows: Set default exec driver to windows
  Fixes title, line wrap, and adds install area; Tibor's comment; updating with the new plugins; entering comments from Seb
  Add regression test to make sure we can load old containers with volumes.
  Do not force `syscall.Unmount` on container cleanup.
  Revert "Add docker exec run a command in privileged mode"
  Cleanup container rm funcs
  Allow mirroring only for the official index
  Registry v2 mirror support.
  ...

Conflicts:
	CHANGELOG.md
	VERSION
	api/client/commands.go
	api/client/utils.go
	api/server/server.go
	api/server/server_linux.go
	builder/shell_parser.go
	builder/words
	daemon/config.go
	daemon/container.go
	daemon/daemon.go
	daemon/delete.go
	daemon/execdriver/execdrivers/execdrivers_linux.go
	daemon/execdriver/lxc/driver.go
	daemon/execdriver/native/driver.go
	daemon/graphdriver/aufs/aufs.go
	daemon/graphdriver/driver.go
	daemon/logger/syslog/syslog.go
	daemon/networkdriver/bridge/driver.go
	daemon/networkdriver/portallocator/portallocator.go
	daemon/networkdriver/portmapper/mapper.go
	daemon/networkdriver/portmapper/mapper_test.go
	daemon/volumes.go
	docs/Dockerfile
	docs/man/docker-create.1.md
	docs/man/docker-login.1.md
	docs/man/docker-logout.1.md
	docs/man/docker-run.1.md
	docs/man/docker.1.md
	docs/mkdocs.yml
	docs/s3_website.json
	docs/sources/installation/windows.md
	docs/sources/reference/api/docker_remote_api_v1.18.md
	docs/sources/reference/api/registry_api_client_libraries.md
	docs/sources/reference/builder.md
	docs/sources/reference/run.md
	docs/sources/release-notes.md
	graph/graph.go
	graph/push.go
	hack/install.sh
	hack/vendor.sh
	integration-cli/docker_cli_build_test.go
	integration-cli/docker_cli_pull_test.go
	integration-cli/docker_cli_run_test.go
	pkg/archive/changes.go
	pkg/broadcastwriter/broadcastwriter.go
	pkg/ioutils/readers.go
	pkg/ioutils/readers_test.go
	pkg/progressreader/progressreader.go
	registry/auth.go
	vendor/src/github.com/docker/libcontainer/cgroups/fs/cpu.go
	vendor/src/github.com/docker/libcontainer/cgroups/fs/devices.go
	vendor/src/github.com/docker/libcontainer/cgroups/fs/memory.go
	vendor/src/github.com/docker/libcontainer/cgroups/systemd/apply_systemd.go
	vendor/src/github.com/docker/libcontainer/container_linux.go
	vendor/src/github.com/docker/libcontainer/init_linux.go
	vendor/src/github.com/docker/libcontainer/integration/exec_test.go
	vendor/src/github.com/docker/libcontainer/integration/utils_test.go
	vendor/src/github.com/docker/libcontainer/nsinit/README.md
	vendor/src/github.com/docker/libcontainer/process.go
	vendor/src/github.com/docker/libcontainer/rootfs_linux.go
	vendor/src/github.com/docker/libcontainer/update-vendor.sh
	vendor/src/github.com/docker/libnetwork/portallocator/portallocator_test.go
2015-05-27 19:13:22 -07:00
Michael Crosby
9cee8c4ed0 Merge pull request #13133 from crosbymichael/bump_v1.6.2
Bump to version v1.6.2
2015-05-13 13:46:56 -07:00
Michael Crosby
7c8fca2ddb Bump to version 1.6.2
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-05-11 15:05:16 -07:00
Michael Crosby
376188dcd3 Update libcontainer to 227771c8f611f03639f0ee
This fixes regressions for docker containers mounting into
/sys/fs/cgroup.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-05-11 15:05:07 -07:00
David Calavera
dc610864aa Merge pull request #13072 from jfrazelle/bump_v1.6.1
Bump v1.6.1
2015-05-07 14:59:23 -07:00
Jessica Frazelle
97cd073598 Bump version to 1.6.1
Signed-off-by: Jessica Frazelle <jess@docker.com>
2015-05-07 10:07:01 -07:00
Michael Crosby
d5ebb60bdd Allow libcontainer to eval symlink destination
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>

Add tests for mounting into /proc and /sys

Mounting volumes into these two destinations should be
prohibited.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-04-30 14:08:02 -07:00
Michael Crosby
83c5131acd Update libcontainer to 1b471834b45063b61e0aedefbb1
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-04-30 14:07:52 -07:00
Michael Crosby
b6a9dc399b Mask reads from timer_stats and latency_stats
These files in /proc should not be readable, in addition to
not being writable.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-04-30 10:25:26 -07:00
Michael Crosby
614a9690e7 Mount RO for timer_stats and latency_stats in proc
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-04-22 11:15:26 -07:00
Michael Crosby
545b440a80 Mount /proc/fs as readonly
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-04-22 11:15:17 -07:00
Michael Crosby
3162024e28 Prevent write access to /proc/asound
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>

Conflicts:
	integration-cli/docker_cli_run_test.go
2015-04-22 11:14:46 -07:00
Jessie Frazelle
769acfec29 Merge pull request #11635 from jfrazelle/bump_v1.6.0
Bump v1.6.0
2015-04-16 12:31:02 -07:00
Jessica Frazelle
47496519da Bump version to v1.6.0
Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-04-16 11:38:05 -07:00
Mary Anthony
fdd21bf032 for 1.6
Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit f44aa3b1fb)
2015-04-16 11:37:59 -07:00
Mary Anthony
d928dad8c8 In with the old menu layout
Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit fe8fb24b53)
2015-04-15 21:34:50 -07:00
Mary Anthony
82366ce059 Adding environment variables for sub projects
Fixes issue #12186
Fixing variables per Jess

Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit ef8b917fac)
2015-04-15 21:34:49 -07:00
Mary Anthony
6410c3c066 Updating with man pages for distribution
Went through the man pages to update for the
v2 instance. Checked against the commands.

Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit b6d55ebcbc)
2015-04-15 21:34:49 -07:00
Michael Crosby
9231dc9cc0 Ensure state is destroyed on daemon restart
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit a5f7c4aa31)
2015-04-15 21:34:49 -07:00
Deng Guangxing
6a3f37386b Inspect shows the right LogPath in json-file driver
Signed-off-by: Deng Guangxing <dengguangxing@huawei.com>
(cherry picked from commit acf025ad1b)
2015-04-15 17:37:43 -07:00
Darren Shepherd
d9a0c05208 Change syslog format and facility
This patch changes two things:

1. Set facility to LOG_DAEMON
2. Remove ": " from tag so that the tag + pid become a single column in
   the log

Signed-off-by: Darren Shepherd <darren@rancher.com>
(cherry picked from commit 05641ccffc)
2015-04-15 14:49:24 -07:00
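A minimal sketch of the two settings described, using Go's standard `log/syslog` package (Unix-only); the `"docker"` tag string is an assumption here, not the daemon's actual wiring:

```go
package main

import (
	"log"
	"log/syslog"
)

func main() {
	// LOG_DAEMON facility, and a bare tag with no trailing ": " so that
	// "tag[pid]" stays a single column in the log output.
	w, err := syslog.New(syslog.LOG_DAEMON|syslog.LOG_INFO, "docker")
	if err != nil {
		log.Fatal(err) // e.g. no local syslog daemon available
	}
	defer w.Close()
	w.Info("daemon started")
}
```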
Jessica Frazelle
24cb9df189 try to modprobe bridge
Signed-off-by: Jessica Frazelle <jess@docker.com>
(cherry picked from commit b3867b8899)
2015-04-15 09:10:56 -07:00
Lewis Marshall
c51cd3298c Prevent Upstart post-start stanza from hanging
Once the job has failed and is respawned, the status becomes `docker
respawn/post-start` after subsequent failures (as opposed to `docker
stop/post-start`), so the post-start script needs to take this into
account.

I could not find specific documentation on the job transitioning to the
`respawn/post-start` state, but this was observed on Ubuntu 14.04.2.

Signed-off-by: Lewis Marshall <lewis@lmars.net>
(cherry picked from commit 302e3834a0)
2015-04-14 16:23:59 -07:00
Alexander Morozov
10affa8018 Get process list after PID 1 dead
Fix #11087

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit ac8bd12b39)
2015-04-13 12:28:22 -07:00
Deng Guangxing
ce27fa2716 move syslog-tag to syslog.New function
Signed-off-by: Deng Guangxing <dengguangxing@huawei.com>
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 4f91a333d5)
2015-04-13 12:17:26 -07:00
Brendan Dixon
8d83409e85 Turned off Ctrl+C processing by Windows shell
Signed-off-by: Brendan Dixon <brendand@microsoft.com>
(cherry picked from commit c337bfd2e0)
2015-04-10 16:41:59 -07:00
Nathan LeClaire
3a73b6a2bf Allow SEO crawling from docs site
Signed-off-by: Nathan LeClaire <nathan.leclaire@gmail.com>

Docker-DCO-1.1-Signed-off-by: Nathan LeClaire <nathan.leclaire@gmail.com> (github: nathanleclaire)

(cherry picked from commit de03f4797b)
2015-04-10 15:59:35 -07:00
Tianon Gravi
f99269882f Commonalize more bits of install.sh (especially standardizing around "cat <<-EOF")
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
(cherry picked from commit 6842bba163)
2015-04-10 11:20:15 -07:00
Eric Windisch
568a9703ac Wrap installer in a function
This will ensure that the install script does not
begin executing until after it has been fully downloaded, should
it be used in a 'curl | bash' workflow.

Signed-off-by: Eric Windisch <eric@windisch.us>
(cherry picked from commit fa961ce046)
2015-04-10 11:20:15 -07:00
Aaron Welch
faaeb5162d add centos to supported distros
Signed-off-by: Aaron Welch <welch@packet.net>
(cherry picked from commit a6b8f2e3fe)
2015-04-10 11:20:15 -07:00
Brendan Dixon
b5613baac2 Corrected int16 overflow and buffer sizes
Signed-off-by: Brendan Dixon <brendand@microsoft.com>
(cherry picked from commit a264e1e83d)
2015-04-10 11:20:15 -07:00
Alexander Morozov
c956efcd52 Update libcontainer to bd8ec36106086f72b66e1be85a81202b93503e44
Fix #12130

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit 2f853b9493)
2015-04-07 16:22:37 -07:00
Alexander Morozov
5455864187 Test case for network mode chain container -> container -> host
Issue #12130

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit ce69dafe4d)
2015-04-07 16:22:37 -07:00
Ahmet Alp Balkan
ceb72fab34 Swap width/height in GetWinsize and monitorTtySize
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
(cherry picked from commit 6e44246fed)
2015-04-06 16:55:38 -07:00
Brendan Dixon
c6ea062a26 Windows console fixes
Corrected integer size passed to Windows
Corrected DisableEcho / SetRawTerminal to not modify state
Cleaned up and made routines more idiomatic
Corrected raw mode state bits
Removed duplicate IsTerminal
Corrected off-by-one error
Minor idiomatic change

Signed-off-by: Brendan Dixon <brendand@microsoft.com>
(cherry picked from commit 1a36a113d4)
2015-04-06 11:48:33 -07:00
Ahmet Alp Balkan
0e045ab50c docs: Add new windows installation tutorials
Updated Windows installation documentation with newest
screencasts and Chocolatey instructions to install windows
client CLI.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
(cherry picked from commit 2b320a2309)
2015-04-03 13:51:53 -07:00
Michael Crosby
eeb05fc081 Update libcontainer to d00b8369852285d6a830a8d3b9
Fixes #12015

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit d12fef1515)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-04-02 14:52:27 -07:00
Michael Crosby
e1381ae328 Return closed channel if oom notification fails
When working with Go channels you must not return a nil channel, or else
reads from it will block forever. Reading from a nil chan does not panic,
but it blocks. The correct way to handle this is to create the channel and
then close it, so the correct result is returned to the caller immediately.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 7061a993c5)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-04-01 16:14:14 -07:00
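A standalone illustration of the channel behavior described above, not the notifier code itself; `failedNotifications` is a hypothetical stand-in for the failure path:

```go
package main

import (
	"fmt"
	"time"
)

// failedNotifications models the fix: when setup fails, return a channel that
// is already closed so callers unblock immediately instead of hanging forever.
func failedNotifications() <-chan struct{} {
	c := make(chan struct{})
	close(c)
	return c
}

func main() {
	// Receiving from a closed channel returns the zero value right away.
	<-failedNotifications()
	fmt.Println("closed channel: returned immediately")

	// Receiving from a nil channel blocks forever; guard it with a timeout
	// here just to demonstrate the behavior without deadlocking the example.
	var nilCh chan struct{}
	select {
	case <-nilCh:
		fmt.Println("never happens")
	case <-time.After(100 * time.Millisecond):
		fmt.Println("nil channel: still blocked after 100ms")
	}
}
```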
Alexander Morozov
45ad064150 Fix panic in integration tests
Close activationLock only if it's not already closed. This is needed
only because the integration tests use docker code directly and don't
care about global state.

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit c717475714)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-31 16:43:16 -07:00
Darren Shepherd
72e14a1566 Avoid ServeApi race condition
If the "acceptconnections" job is called before "serveapi", the API Accept()
method will hang forever waiting for activation. This is due to the fact
that the activation channel was still nil when "acceptconnections" ran.

Signed-off-by: Darren Shepherd <darren@rancher.com>
(cherry picked from commit 8f6a14452d)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-31 16:43:16 -07:00
Vincent Batts
7d7bec86c9 graphdriver: promote overlay above vfs
It's about time to let folks not hit 'vfs' when 'overlay' is supported
on their kernel, especially now that v3.18.y is a long-term kernel.

Signed-off-by: Vincent Batts <vbatts@redhat.com>
(cherry picked from commit 2c72ff1dbf)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-31 14:46:09 -07:00
Derek McGowan
a39d49d676 Fix progress reader output on close
Currently the progress reader won't close properly because the close size is not set.

fixes #11849

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
(cherry picked from commit aa3083f577)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-31 14:18:42 -07:00
Brian Goff
5bf15a013b Use getResourcePath instead
Also cleans up tests to not shell out for file creation.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 63708dca8a)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-31 14:06:32 -07:00
Lei Jitang
9461967eec Fix create volume in a directory which is a symbolic link
Signed-off-by: Lei Jitang <leijitang@huawei.com>
(cherry picked from commit 7583b49125)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-31 14:06:32 -07:00
Alexander Morozov
3be7d11cee Initialize portMapper in RequestPort too
The API requests a port for the daemon before init_networkdriver is called.
The problem is that the initialization of the API now depends on the
initialization of the daemon, and their initializations run in parallel. The
proper fix would be to just do it sequentially. For now I don't want to
refactor it, because that can bring additional problems in 1.6.0.

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit 584180fce7)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-31 14:00:02 -07:00
Vivek Goyal
d9910b8fd8 container: Do not remove container if any of the resources failed cleanup
Do not remove a container if any of its resources could not be cleaned up. We
don't want to leak resources.

Two new states have been created: RemovalInProgress and Dead. Once a container
is Dead, it cannot be started/restarted. A Dead container is one that
we tried to remove but whose removal failed. The user now needs to
figure out what went wrong, correct the situation, and try the cleanup again.

RemovalInProgress signifies that the container is already being removed. Only
one removal can be in progress at a time.

Also, do not allow starting a container if it is already dead or its removal
is in progress.

Also extend the existing force option (-f) of docker rm to not return an error
and to remove the container from the user's view even if resource cleanup
failed. This allows a user to get back to the old behavior, where resources
might leak but at least the user will be able to make progress.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
(cherry picked from commit 40945fc186)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-31 14:00:02 -07:00
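Not the daemon's actual types, but a sketch of how the two states described above could gate start and concurrent removal; all names here are hypothetical:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

type containerState struct {
	mu                sync.Mutex
	Dead              bool
	RemovalInProgress bool
}

// SetRemovalInProgress allows only one removal at a time.
func (s *containerState) SetRemovalInProgress() error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.RemovalInProgress {
		return errors.New("removal already in progress")
	}
	s.RemovalInProgress = true
	return nil
}

// CanStart refuses to start a container that is dead or being removed.
func (s *containerState) CanStart() error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.Dead || s.RemovalInProgress {
		return errors.New("container is dead or being removed; it cannot be started")
	}
	return nil
}

func main() {
	st := &containerState{}
	fmt.Println(st.SetRemovalInProgress()) // <nil>
	fmt.Println(st.CanStart())             // container is dead or being removed...
}
```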
Michael Crosby
f115c32f6b Ensure that bridge driver does not use global mappers
This has a few hacks in it but it ensures that the bridge driver does
not use global state in the mappers, at least as much as possible at this
point without further refactoring. Some of the exported fields are
hacks to handle the daemon port mapping, but this results in a much
cleaner approach and completely removes the global state from the mapper
and allocator.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit d8c628cf08)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-31 10:53:09 -07:00
Michael Crosby
57939badc3 Refactor port allocator to not have ANY global state
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 43a50b0618)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-31 10:53:09 -07:00
Michael Crosby
51ee02d478 Refactor portmapper to remove ALL global state
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 62522c9853)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-31 10:53:09 -07:00
Paul Bellamy
c92860748c Refactor global portallocator and portmapper state
Continuation of: #11660, working on issue #11626.

Wrapped portmapper global state into a struct. Now portallocator and
portmapper have no global state (except configuration, and a default
instance).

Unfortunately, removing the global default instances will break
```api/server/server.go:1539```, and ```daemon/daemon.go:832```, which
both call the global portallocator directly. Fixing that would be a much
bigger change, so it has been postponed for now.

Signed-off-by: Paul Bellamy <paul.a.bellamy@gmail.com>
(cherry picked from commit 87df5ab41b)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-31 10:53:09 -07:00
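A generic sketch of the refactoring pattern described above, with hypothetical names rather than the real portmapper API: the state moves into a struct, and the old package-level entry points delegate to a single default instance:

```go
package main

import (
	"fmt"
	"sync"
)

type PortMapper struct {
	mu       sync.Mutex
	mappings map[string]bool
}

func NewPortMapper() *PortMapper {
	return &PortMapper{mappings: make(map[string]bool)}
}

// Map records a mapping, refusing duplicates.
func (pm *PortMapper) Map(key string) bool {
	pm.mu.Lock()
	defer pm.mu.Unlock()
	if pm.mappings[key] {
		return false // already mapped
	}
	pm.mappings[key] = true
	return true
}

// defaultMapper preserves the old global behavior for callers that still
// reach into the package directly (e.g. the API server and the daemon).
var defaultMapper = NewPortMapper()

func Map(key string) bool { return defaultMapper.Map(key) }

func main() {
	fmt.Println(Map("0.0.0.0:8080/tcp")) // true
	fmt.Println(Map("0.0.0.0:8080/tcp")) // false: already mapped
}
```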
Paul Bellamy
f582f9717f Refactor global portallocator state into a global struct
Signed-off-by: Paul Bellamy <paul.a.bellamy@gmail.com>
(cherry picked from commit 1257679876)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-31 10:53:09 -07:00
Ahmet Alp Balkan
ebcb36a8d2 windows: monitorTtySize correctly by polling
This change makes `monitorTtySize` work correctly on windows by polling
the Win32 API to get the terminal size (because there's no SIGWINCH on
windows) and sending it to the engine over the Remote API properly.

The average getttysize syscall takes around 30-40 ms on an average windows
machine as far as I can tell, therefore it runs in a `for` loop, checking
every 250ms whether the size has changed.

I'm not sure if there's a better way to do it on windows; if so,
somebody please send a link, 'cause I could not find one.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
(cherry picked from commit ebbceea8a7)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-31 10:39:59 -07:00
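A sketch of the polling approach described, not the real Windows implementation; `getTTYSize` and `resize` are hypothetical stand-ins for the Win32 console query and the Remote API resize call:

```go
package main

import (
	"fmt"
	"time"
)

func getTTYSize() (w, h int) { return 80, 24 } // placeholder for the Win32 call

func resize(w, h int) { fmt.Printf("resize to %dx%d\n", w, h) }

func monitorTTYSize(stop <-chan struct{}) {
	prevW, prevH := getTTYSize()
	ticker := time.NewTicker(250 * time.Millisecond) // no SIGWINCH on Windows, so poll
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			if w, h := getTTYSize(); w != prevW || h != prevH {
				prevW, prevH = w, h
				resize(w, h)
			}
		case <-stop:
			return
		}
	}
}

func main() {
	stop := make(chan struct{})
	go monitorTTYSize(stop)
	time.Sleep(time.Second)
	close(stop)
}
```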
Michael Crosby
e6e8f2d717 Update libcontainer to c8512754166539461fd860451ff
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 17ecbcf8ff)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-30 17:19:49 -07:00
Alexander Morozov
317a510261 Use proper wait function for --pid=host
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit 489ab77f4a)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-30 17:19:48 -07:00
Alexander Morozov
5d3a080178 Get child processes before main process die
Signed-off-by: Alexander Morozov <lk4d4@docker.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-30 17:19:48 -07:00
Alexander Morozov
542c84c2d2 Do not mask *exec.ExitError
Fix #11764

Signed-off-by: Alexander Morozov <lk4d4@docker.com>
(cherry picked from commit f468bbb7e8)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-30 17:19:48 -07:00
Harald Albers
f1df74d09d Add missing filters to bash completion for docker images and docker ps
Signed-off-by: Harald Albers <github@albersweb.de>
(cherry picked from commit cf438a542e)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-30 16:42:53 -07:00
Derek McGowan
4ddbc7a62f Compress layers on push to a v2 registry
When buffering to a file, add support for compressing the tar contents. Since the digest should be computed while writing the buffer, include digest creation during buffering.

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
(cherry picked from commit 851c64725d)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-30 15:42:14 -07:00
Harald Albers
f72b2c02b8 Do not complete --cgroup-parent as _filedir
This is a follow-up on PR 11708, as suggested by tianon.

Signed-off-by: Harald Albers <github@albersweb.de>
(cherry picked from commit a09cc935c3)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-30 12:02:48 -07:00
Daniel, Dao Quang Minh
af9dab70f8 aufs: apply dirperm1 by default if supported
Automatically detect support for aufs `dirperm1` option and apply it.
`dirperm1` tells aufs to check the permission bits of the directory on the
topmost branch and ignore the permission bits on all lower branches.
It can be used to fix aufs' permission bug (i.e., upper layer having
broader mask than the lower layer).

More information about the bug can be found at https://github.com/docker/docker/issues/783
`dirperm1` man page is at: http://aufs.sourceforge.net/aufs3/man.html

Signed-off-by: Daniel, Dao Quang Minh <dqminh89@gmail.com>
(cherry picked from commit 281abd2c8a)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-30 12:02:48 -07:00
Daniel, Dao Quang Minh
10425e83f2 document dirperm1 fix for #783 in known issues
Since `dirperm1` requires a more recent aufs patch than many current OS
releases ship, we can't remove #783 completely. This documents that docker
will apply `dirperm1` automatically on systems that support it.

Signed-off-by: Daniel, Dao Quang Minh <dqminh89@gmail.com>
(cherry picked from commit d7bbe2fcb5)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-30 12:02:48 -07:00
Daniel, Dao Quang Minh
9c528dca85 print dirperm1 supported status in docker info
It's easier for users to check whether their systems support dirperm1 just by
using docker info.

Signed-off-by: Daniel, Dao Quang Minh <dqminh89@gmail.com>
(cherry picked from commit d68d5f2e4b)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-30 12:02:48 -07:00
Ahmet Alp Balkan
cb2c25ad2d docs: remove unused windows images
These images were just sitting around, referenced from
nowhere, and didn't seem useful.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
(cherry picked from commit 986ae5d52a)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-30 10:46:16 -07:00
Ahmet Alp Balkan
962dec81ec Update boot2docker on Windows documentation
Boot2Docker experience is updated now that we have a Docker
client on Windows. Instead of running `boot2docker ssh`, users
can also use boot2docker on Windows Command Prompt (`cmd.exe`)
and PowerShell.

Updated documentation and screenshots, added a few details,
reorganized sections by importance, fixed a few errors.

Remaining: the video link in the Demonstration section needs
to be updated once I shoot a new video.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
(cherry picked from commit de09c55394)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-30 10:46:16 -07:00
unclejack
1eae925a3d pkg/broadcastwriter: avoid alloc w/ WriteString
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
(cherry picked from commit db877d8a42)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-30 10:46:16 -07:00
Lei Jitang
3ce2cc8ee7 Add some run option to bash completion
Signed-off-by: Lei Jitang <leijitang@huawei.com>
(cherry picked from commit 7d70736015)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-27 15:11:59 -07:00
Ahmet Alp Balkan
054acc4bee term/winconsole: Identify tty correctly, fix resize problem
This change fixes a bug where stdout/stderr handles are not identified
correctly.

Previously we used to set the window size to fixed size to fit the default
tty size on the host (80x24). Now the attach/exec commands can correctly
get the terminal size from windows.

We still do not `monitorTtySize()` correctly on windows and update the tty
size on the host-side, in order to fix that we'll provide a
platform-specific `monitorTtySize` implementation in the future.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
(cherry picked from commit 0532dcf3dc)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-26 16:24:40 -07:00
Don Kjer
63cb03a55b Fix for issue 9922: private registry search with auth returns 401
Signed-off-by: Don Kjer <don.kjer@gmail.com>
(cherry picked from commit 6b2eeaf896)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-26 13:03:50 -07:00
Arnaud Porterie
49b6f23696 Remove unused runconfig.Config.SecurityOpt field
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
(cherry picked from commit e39646d2e1)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-25 16:22:34 -07:00
Michael Crosby
299ae6a2e6 Update libcontainer to a6044b701c166fe538fc760f9e2
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-25 12:23:08 -07:00
Vincent Batts
97b521bf10 make.sh: leave around the generated version
For posterity (largely for packagers), let's leave around the generated
version files that are created during the build.
They're already ignored in git, and recreated on every build.

Signed-off-by: Vincent Batts <vbatts@redhat.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-25 11:34:55 -07:00
Vincent Batts
7f5937d46c btrfs: #ifdef for build version
We removed it, because upstream removed it. But now it will be coming
back, so work with it either way.

Signed-off-by: Vincent Batts <vbatts@redhat.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-25 11:34:55 -07:00
Jessica Frazelle
b6166b9496 btrfs_noversion: including what was in merge commit from 8fc9e40086 (diff-479b910834cf0e4daea2e02767fd5dc9R1) pr #11417
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-24 22:15:08 -07:00
Jessica Frazelle
b596d025f5 fix 2 integration tests on lxc
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-24 21:47:42 -07:00
Jessica Frazelle
ca32446950 Get rid of panic in stats for lxc
Fix containers dir

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-24 21:47:42 -07:00
Michal Fojtik
5328d6d620 Fix lxc-start in lxc>1.1.0 where containers start daemonized by default
Signed-off-by: Michal Fojtik <mfojtik@redhat.com>

Docker-DCO-1.1-Signed-off-by: Michal Fojtik <mfojtik@redhat.com> (github: jfrazelle)
2015-03-24 21:38:01 -07:00
Michael Crosby
d0023242ab Mkdir for lxc root dir before setup of symlink
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-24 16:30:38 -07:00
Alexander Morozov
3ff002aa1a Use /var/run/docker as root for execdriver
Signed-off-by: Alexander Morozov <lk4d4@docker.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-24 15:43:08 -07:00
Dan Walsh
ea9b357be2 Btrfs has eliminated the BTRFS_BUILD_VERSION in latest version
They say we should only use the BTRFS_LIB_VERSION

They will no longer support this since it had to be managed manually

Docker-DCO-1.1-Signed-off-by: Dan Walsh <dwalsh@redhat.com> (github: rhatdan)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-24 13:12:49 -07:00
Harald Albers
bf1829459f restrict bash completion for hostdir arg to directories
The previous state assumed that the HOSTPATH argument referred to a
file. As clarified by moxiegirl in PR #11305, it is a directory.
Adjusted completion to reflect this.

Signed-off-by: Harald Albers <github@albersweb.de>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-24 13:12:49 -07:00
Vincent Batts
4f744ca781 pkg/archive: ignore mtime changes on directories
on overlay fs, the mtime of directories changes in a container where new
files are added in an upper layer (e.g. '/etc'). This flags the
directory as a change where there was none.

Closes #9874

Signed-off-by: Vincent Batts <vbatts@redhat.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-23 15:29:10 -07:00
Michael Crosby
7dab04383b Update libcontainer to fd0087d3acdc4c5865de1829d4a
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-23 15:05:44 -07:00
Brian Goff
8a003c8134 Improve err message when parsing kernel port range
Signed-off-by: Brian Goff <cpuguy83@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-23 14:26:42 -07:00
Ahmet Alp Balkan
208178c799 Disable ANSI emulation in certain windows shells
This disables the recently added ANSI emulation feature in certain Windows
shells (like ConEmu) where ANSI output is already emulated by default with
builtin functionality in the shell.

MSYS (mingw) runs in a cmd.exe window and it doesn't support emulation.

Cygwin doesn't even pass terminal handles to docker.exe as far as I can
tell; the stdin/stdout/stderr handles behave like non-TTY. Therefore it is
not even included in the check.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-23 13:32:40 -07:00
sidharthamani
03b36f3451 add syslog driver
Signed-off-by: wlan0 <sid@rancher.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-23 11:41:35 -07:00
Doug Davis
7758553239 Fix some escaping around env var processing
Clarify in the docs that ENV is not recursive

Closes #10391

Signed-off-by: Doug Davis <dug@us.ibm.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-23 11:37:10 -07:00
Arnaud Porterie
10fb5ce6d0 Restore TestPullVerified test
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-23 11:33:08 -07:00
Brian Goff
0959aec1a9 Allow normal volume to overwrite in start Binds
Fixes #9981
Allows a volume which was created by docker (ie, in
/var/lib/docker/vfs/dir) to be used as a Bind argument via the container
start API and overwrite an existing volume.

For example:

```bash
docker create -v /foo --name one
docker create -v /foo --name two
```

This allows the volume from `one` to be passed into the container start
API as a bind to `two`, and it will overwrite it.

This was possible before 7107898d5c

Signed-off-by: Brian Goff <cpuguy83@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-23 11:04:27 -07:00
Mabin
773f74eb71 Fix hang when starting and attaching to multiple containers
Signed-off-by: Mabin <bin.ma@huawei.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
2015-03-23 11:04:27 -07:00
unclejack
7070d9255a pkg/ioutils: add tests for BufReader
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-23 11:04:27 -07:00
unclejack
2cb4b7f65c pkg/ioutils: avoid huge Buffer growth in bufreader
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-23 11:04:27 -07:00
Mitch Capper
2d80652d8a Change windows default permissions to 755, not 711; read access for all poses little security risk and prevents breaking existing Dockerfiles
Signed-off-by: Mitch Capper <mitch.capper@gmail.com>

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-03-23 11:04:27 -07:00
Jessica Frazelle
81b4691406 Merge origin/master into origin/release
Signed-off-by: Jessica Frazelle <jess@docker.com>

Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-03-22 23:45:58 -07:00
Arnaud Porterie
4bae33ef9f Merge pull request #10286 from icecrime/bump_v1.5.0
Bump to version v1.5.0
2015-02-10 10:50:09 -08:00
Arnaud Porterie
a8a31eff10 Bump to version v1.5.0
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
2015-02-10 08:14:37 -08:00
Sven Dowideit
68a8fd5c4e updates from review
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-10 08:14:37 -08:00
unclejack
8387c5ab65 update kernel reqs doc; recommend updates on RHEL
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
2015-02-10 08:14:37 -08:00
Sven Dowideit
69498943c3 remove the text-indent and increase the font size
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-10 08:14:37 -08:00
Sven Dowideit
1aeb78c2ae Simplify the sidebar html and css, and then allow the text to wrap
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-10 08:14:36 -08:00
Tibor Vass
331d37f35d Minor nits
Signed-off-by: Tibor Vass <tibor@docker.com>
2015-02-10 08:14:36 -08:00
Tibor Vass
edf3bf7f33 Clarify docs review role
Signed-off-by: Tibor Vass <teabee89@gmail.com>
2015-02-10 08:14:36 -08:00
Tibor Vass
9ee8dca246 A few fixes
Signed-off-by: Tibor Vass <teabee89@gmail.com>
2015-02-10 08:14:36 -08:00
Tibor Vass
aa98bb6c13 New pull request workflow
Signed-off-by: Tibor Vass <teabee89@gmail.com>
2015-02-10 08:14:36 -08:00
Zhang Wei
2aba3c69f9 docs: fix a typo in registry_mirror.md
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
2015-02-10 08:14:36 -08:00
Steve Koch
71a44c769e Add link to user guide to end of 14.04 section
Adding instructions to exit the test shell and a link to the user guide (as is done in the following sections for 12.04 and 13.04/10).

Signed-off-by: Steven Koch <sjkoch@unm.edu>
2015-02-10 08:14:36 -08:00
Wei-Ting Kuo
d8381fad2b Update certificates.md
`openssl req -new -x509 -text -key client.key -out client.cert` creates a self-signed certificate, not a certificate signing request.

Signed-off-by: Wei-Ting Kuo <waitingkuo0527@gmail.com>
2015-02-09 08:49:05 -08:00
Chen Hanxiao
be379580d0 docs: fix a typo in Dockerfile.5.md
s/Mutliple/Multiple

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
2015-02-09 08:49:05 -08:00
Alexander Morozov
7ea8513479 Fix example about ps and linked containers
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
2015-02-09 08:49:05 -08:00
Sven Dowideit
3b2fe01c78 Documentation on boolean flags is wrong #10517
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:05 -08:00
Sven Dowideit
e8afc22b1f Do some major rearranging of the fedora/centos/rhel installation docs
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:05 -08:00
Lokesh Mandvekar
4e407e6b77 update fedora docs to reflect latest rpm changes
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2015-02-09 08:49:05 -08:00
unclejack
23f1c2ea9e docs/articles/systemd: correct --storage-driver
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
2015-02-09 08:49:05 -08:00
Katie McLaughlin
788047cafb Format awsconfig sample config correctly
Reflow change in commit 195f3a3f removed newlines in the config format.

This change reverts the sample config to the original formatting, which
matches the actual config format of a `awsconfig` file.

Signed-off-by: Katie McLaughlin <katie@glasnt.com>
2015-02-09 08:49:05 -08:00
Sven Dowideit
0c0e7b1b60 Fix a small spelling error in the dm.blkdiscard docs
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:05 -08:00
Sven Dowideit
09d41529a0 Add an initial list of new features in Docker Engine 1.5.0
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:04 -08:00
Sven Dowideit
cb288fefee remove swarm, machine and compose from the 1.5.0 release docs
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:04 -08:00
Sven Dowideit
f7636796c5 The DHE documentation will not be published with 1.5.0
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:04 -08:00
Sven Dowideit
cb5af83444 For now, docker stats appears to be libcontainer only
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:04 -08:00
Mihai Borobocea
96feaf1920 docs: fix typo
There are 2 not 3 RUN instructions in the userguide's Dockerfile.

Signed-off-by: Mihai Borobocea <MihaiBorobocea@gmail.com>
2015-02-09 08:49:04 -08:00
Vincent Giersch
1f03944950 Documents build API "remote" parameter
Introduced in Docker v0.4.5 / Remote API v1.1 (#848), the remote
parameter of the API method POST /build allows specifying a buildable
remote URL (HTTPS, HTTP or Git).

Signed-off-by: Vincent Giersch <vincent.giersch@ovh.net>
2015-02-09 08:49:04 -08:00
Sven Dowideit
6060eedf9c The Hub build webhooks now list the images that have been built
And fix some spelling - repo isn't really a word :)

Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:04 -08:00
Sven Dowideit
d217da854a Spelling mistake in dockerlinks
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:04 -08:00
Victor Vieux
d74d6d981b add crosbymichael and Github -> GitHub
Signed-off-by: Victor Vieux <vieux@docker.com>
2015-02-09 08:49:04 -08:00
Victor Vieux
0205ac33d2 update MAINTAINERS file
Signed-off-by: Victor Vieux <vieux@docker.com>
2015-02-09 08:49:04 -08:00
Sven Dowideit
dbb9d47bdc The reference menu is too big to list more than the latest API docs, so the others can be hidden - they're still linked from the API summary
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:03 -08:00
Sven Dowideit
ddd1d081d7 use the same paths as in the swarm repo, so that their links magically work
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:03 -08:00
Sven Dowideit
d6ac36d929 Docker attach documentation didn't make sense to me
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-09 08:49:03 -08:00
Michael A. Smith
715b94f664 Distinguish ENV from setting environment inline
It's ambiguous to say that `ENV` is _functionally equivalent to prefixing the command with `<key>=<value>`_. `ENV` sets the environment for all future commands, but `RUN` can take chained commands like `RUN foo=bar bash -c 'echo $foo' && bash -c 'echo $foo $bar'`. Users with a solid understanding of `exec` may grok this without confusion, but less experienced users may need this distinction.

Signed-off-by: Michael A. Smith <msmith3@ebay.com>

Improve Environment Handling Descriptions

- Link `ENV` and `Environment Replacement`
- Improve side-effects of `ENV` text
- Rearrange avoiding side effects text

Signed-off-by: Michael A. Smith <msmith3@ebay.com>
2015-02-09 08:49:03 -08:00
Chen Hanxiao
16baca9277 docs: change events --since to fit RFC3339Nano
PR6931 changed the time format to RFC3339Nano.
But the example in cli.md was not changed.

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
2015-02-09 08:49:03 -08:00
J Bruni
627f8a6cd5 Remove File List
This list is outdated. It could be updated instead of removed... but why should it be maintained? I do not see a reason.

Signed-off-by: João Bruni <contato@jbruni.com.br>
2015-02-09 08:49:03 -08:00
Sebastiaan van Stijn
a8a7df203a Fix broken link to project/MAINTAINERS.md
The link to project/MAINTAINERS.md was broken. In
addition, /MAINTAINERS contains more relevant
information on the LGTM process and contains info
about maintainers of all subsystems.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2015-02-09 08:49:03 -08:00
Thell 'Bo' Fowler
580cbcefd3 Update dockerfile_best-practices.md
Signed-off-by: Thell Fowler <Thell@tbfowler.name>
2015-02-09 08:49:03 -08:00
Yihang Ho
d9c5ce6e97 Fix a tiny typo.
'saving', not 'saveing'

Signed-off-by: Yihang Ho <hoyihang5@gmail.com>
2015-02-09 08:49:03 -08:00
Bradley Cicenas
0fe9b95415 fix project url in readme to point to the correct location,
https://github.com/docker/docker/tree/master/project

Signed-off-by: Bradley Cicenas <bradley.cicenas@gmail.com>
2015-02-09 08:49:03 -08:00
Alexandr Morozov
41d0e4293e Update events format in man page
Signed-off-by: Alexandr Morozov <lk4d4@docker.com>
2015-02-09 08:49:03 -08:00
Doug Davis
26fe640da1 Add builder folks to the top-level maintainers file
Signed-off-by: Doug Davis <dug@us.ibm.com>
2015-02-09 08:49:02 -08:00
Jessica Frazelle
198ca26969 Added tianon's info and changed a typo.
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-02-09 08:49:02 -08:00
Phil Estes
d5365f6fc4 Fix incorrect IPv6 addresses/subnet notations in docs
Fixes a few typos in IPv6 addresses. This will make it easier for users
who try to copy/paste or use the example addresses directly.

Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
2015-02-09 08:49:02 -08:00
Brian Goff
5f7e814ee7 Update go-md2man
The update fixes some rendering issues, including improper escaping of '$' in
blocks, and actual parsing of blockcode.

`ID=$(sudo docker run -d fedora /usr/bin/top -b)` was being converted to
`ID=do docker run -d fedora/usr/bin/top -b)`

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2015-02-09 08:49:02 -08:00
Doug Davis
a84aca0985 Fix docs so WORKDIR mentions it works for COPY and ADD too
The docs around COPY/ADD already mentioned that it will do a relative
copy/add based on WORKDIR, so that part is already ok.  Just needed to
tweak the WORKDIR section since w/o mentioning COPY/ADD it can be misleading.

Noticed by @phemmer

Signed-off-by: Doug Davis <dug@us.ibm.com>
2015-02-09 08:49:02 -08:00
Solomon Hykes
68ec22876a Proposal for an improved project structure.
Note: this deprecates the fine-grained, high-overlap cascading MAINTAINERS files,
and replaces them with a single top-level file, using a new structure:

* More coarse grained subsystems with dedicated teams of maintainers
* Core maintainers with a better-defined role and a wider scope (if it's
not in a subsystem, it's up to the core maintainers to figure it out)
* Architects
* Operators

This is a work in progress; the goal is to start a conversation

Signed-off-by: Solomon Hykes <solomon@docker.com>
Signed-off-by: Erik Hollensbe <github@hollensbe.org>
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
Signed-off-by: Tibor Vass <teabee89@gmail.com>
Signed-off-by: Victor Vieux <vieux@docker.com>
Signed-off-by: Vincent Batts <vbatts@redhat.com>
2015-02-09 08:49:02 -08:00
Josh Hawn
0dcc3559e9 Updated image spec docs to clarify image JSON
The title `Image JSON Schema` was used as a header in the section
which describes the layout and fields of the image metadata JSON
file. It was pointed out that `JSON Schema` is its own term for
describing JSON in a machine-and-human-readable format, while the
word "Schema" in this context was used more generically to say that
the section is meant to be an example and outline of the Image JSON.

http://spacetelescope.github.io/understanding-json-schema/

This section now has the title `Image JSON Description` in order
to not cause this confusion.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-02-09 08:49:02 -08:00
Derek McGowan
d4c731ecd6 Limit push and pull to v2 official registry
No longer push to the official v2 registry when it is available. This allows pulling images from the v2 registry without defaulting to push there. Only pull official images from the official v2 registry.

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-04 10:05:16 -08:00
Arnaud Porterie
2dba4e1386 Fix client-side validation of Dockerfile path
Arguments to `filepath.Rel` were reversed, causing all builder tests to
fail.

Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
2015-02-03 09:07:17 -08:00
Tibor Vass
06a7f471e0 builder: prevent Dockerfile from leaving the build context
Signed-off-by: Tibor Vass <teabee89@gmail.com>
2015-02-03 09:07:17 -08:00
Doug Davis
4683d01691 Add an API test for docker build -f Dockerfile
I noticed that while we have tests to make sure that people don't
specify a Dockerfile (via -f) that's outside of the build context
when using the docker CLI, we don't perform the same check on the
server side for API users. This would be a security risk.

While in there, I had to add a new util func for the tests that allows us
to send content to the server that isn't JSON encoded - in this case a tarball.

Signed-off-by: Doug Davis <dug@us.ibm.com>
2015-02-03 09:07:17 -08:00
Michael Crosby
6020a06399 Print zeros for initial stats collection on stopped container
When calling stats on a stopped container, print zeros for all of the
values to populate the initial table. This signals to the user that the
operation completed and will not block.

Closes #10504

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-02-03 08:50:47 -08:00
Sebastiaan van Stijn
cc0bfccdf4 Replace "base" with "ubuntu" in documentation
The API documentation uses the "base" image in various
places. The "base" image is deprecated and it is no longer
possible to download this image.

This changes the API documentation to use "ubuntu" instead.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2015-02-02 13:34:12 -08:00
Chen Hanxiao
0c18ec62f3 docs: fix another typo in docker-build man page
s/arbtrary/arbitrary

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
2015-02-02 13:30:11 -08:00
John Tims
a9825c9bd8 Fix documentation typo
Signed-off-by: John Tims <john.k.tims@gmail.com>
2015-02-02 13:30:11 -08:00
Josh Hawn
908be50c44 Handle gorilla/mux route url bug
When getting the URL from a v2 registry url builder, it does not
honor the scheme from the endpoint object and will cause an https
endpoint to return urls starting with http.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-02-02 13:30:11 -08:00
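As a rough sketch of the workaround described above (not the actual registry client code; `fixScheme` and the URL are hypothetical), the endpoint's scheme is simply forced onto whatever URL the route builder produced:

```go
package main

import (
	"fmt"
	"net/url"
)

// fixScheme overwrites the scheme of a built route URL with the endpoint's
// scheme, so an HTTPS endpoint never hands back http:// URLs.
func fixScheme(built *url.URL, endpointScheme string) *url.URL {
	built.Scheme = endpointScheme
	return built
}

func main() {
	u, _ := url.Parse("http://registry.example.com/v2/library/ubuntu/manifests/latest")
	fmt.Println(fixScheme(u, "https"))
}
```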
Josh Hawn
2a82dba34d Fix token basic auth header issue
When requesting a token, the basic auth header was always set, even
if there was no username value. This patch corrects this by not setting
the basic auth header when the username is empty.

Also fixes an issue where pulling all tags from a v2 registry succeeds
when the image does not actually exist on the registry.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-02-02 13:30:11 -08:00
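A minimal sketch of that guard (the function name and token URL are hypothetical, not the registry client's real code): the basic auth header is only attached when a username is present.

```go
package main

import (
	"fmt"
	"net/http"
)

// buildTokenRequest only sets the Authorization header when a username is
// actually provided, mirroring the fix described in the commit message.
func buildTokenRequest(tokenURL, username, password string) (*http.Request, error) {
	req, err := http.NewRequest("GET", tokenURL, nil)
	if err != nil {
		return nil, err
	}
	if username != "" {
		req.SetBasicAuth(username, password)
	}
	return req, nil
}

func main() {
	req, _ := buildTokenRequest("https://auth.example.com/token", "", "secret")
	fmt.Println("Authorization header set:", req.Header.Get("Authorization") != "")
}
```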
Erik Hollensbe
13fd2a908c Remove "OMG IPV6" log message
Signed-off-by: Erik Hollensbe <erik+github@hollensbe.org>
2015-02-02 13:30:11 -08:00
Arnaud Porterie
464891aaf8 Fix race in test registry setup
Wait for the local registry-v2 test instance to become available to
avoid random test failures.

Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
2015-02-02 13:30:10 -08:00
Phil Estes
9974663ed7 Add missing $HOST in a couple places in HTTPS/TLS setup docs
Fix typos in setup docs where tcp://:2376 is used without the $HOST
parameter.

Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com>
2015-02-02 13:30:10 -08:00
Derek McGowan
76269e5c9d Add push fallback to v1 for the official registry
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-02 13:30:10 -08:00
Jessica Frazelle
1121d7c4fd Validate toml
Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-02-02 13:30:10 -08:00
Josh Hawn
7e197575a2 Remove Checksum field from image.Image struct
The checksum is now being stored in a separate file beside the image
JSON file.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-02-02 13:30:10 -08:00
Derek McGowan
3dc3059d94 Store tar checksum in separate file
Fixes #10432

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-02 13:30:10 -08:00
Derek McGowan
7b6de74c9a Revert client signature
Supports multiple tag push with daemon signature

Fixes #10444

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-02 13:30:10 -08:00
Phil Estes
cad8adacb8 Setup TCP keep-alive on hijacked HTTP(S) client <--> daemon sessions
Fixes #10387

Without TCP keep-alive set on socket connections to the daemon, any
long-running container with std{out,err,in} attached that doesn't
read/write for a minute or longer will end in ECONNTIMEDOUT (depending
on network settings/OS defaults, etc.), leaving the docker client side
believing it is still waiting on data with no actual underlying socket
connection.

This patch turns on TCP keep-alive for the underlying TCP connection
for both TLS and standard HTTP hijacked daemon connections from the
docker client, with a keep-alive timeout of 30 seconds.

Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com>
2015-02-02 13:30:10 -08:00
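A minimal Go sketch of that keep-alive setup (the address is illustrative and this is not the client's actual dial code): enable TCP keep-alive with a 30 second period on the raw connection before handing it to the HTTP or TLS layer.

```go
package main

import (
	"log"
	"net"
	"time"
)

// dialWithKeepAlive dials the daemon and turns on TCP keep-alive so idle
// attached sessions are not silently dropped by intermediate network gear.
func dialWithKeepAlive(addr string) (net.Conn, error) {
	conn, err := net.Dial("tcp", addr)
	if err != nil {
		return nil, err
	}
	if tcpConn, ok := conn.(*net.TCPConn); ok {
		tcpConn.SetKeepAlive(true)
		tcpConn.SetKeepAlivePeriod(30 * time.Second)
	}
	return conn, nil
}

func main() {
	conn, err := dialWithKeepAlive("127.0.0.1:2375")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Println("connected with TCP keep-alive enabled")
}
```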
Srini Brahmaroutu
6226deeaf4 Removing the check on Architecture to build and run Docker on IBM Power and Z platforms
Signed-off-by: Srini Brahmaroutu <srbrahma@us.ibm.com>
2015-02-02 13:30:10 -08:00
gdi2290
3ec19f56cf Update AUTHORS file and .mailmap
added `LC_ALL=C.UTF-8` due to OS X
http://www.inmotionhosting.com/support/website/ssh/speed-up-grep-searches-with-lc-all

Signed-off-by: Patrick Stapleton <github@gdi2290.com>
2015-02-02 13:30:10 -08:00
Josh Hawn
48c71787ed No longer compute checksum when installing images.
While checksums are verified when a layer is pulled from v2 registries,
there are known issues where the checksum may change when the layer diff
is computed again. To avoid these issues, the checksum should no longer
be computed and stored until after it has been extracted to the docker
storage driver. The checksums are instead computed lazily before they
are pushed to a v2 registry.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-02-02 13:30:10 -08:00
Jessica Frazelle
604731a930 Some small updates to the dev env docs.
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-02-02 13:30:09 -08:00
Derek McGowan
e8650e01f8 Defer creation of trust key file until needed
Fixes #10442

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-02 13:30:09 -08:00
Sven Dowideit
817d04d992 DHE documentation placeholder and Navbar changes
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-02 13:30:09 -08:00
Sven Dowideit
cdff91a01c comment out the docker and curl lines we'll run later
Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@docker.com> (github: SvenDowideit)
2015-02-02 13:30:09 -08:00
Mehul Kar
6f26bd0e16 Improve explanation of port mapping from containers
Signed-off-by: Mehul Kar <mehul.kar@gmail.com>
2015-02-02 13:30:09 -08:00
Tianon Gravi
3c090db4e9 Update .deb version numbers to be more sane
Example output:
```console
root@906b21a861fb:/go/src/github.com/docker/docker# ./hack/make.sh binary ubuntu
bundles/1.4.1-dev already exists. Removing.

---> Making bundle: binary (in bundles/1.4.1-dev/binary)
Created binary: /go/src/github.com/docker/docker/bundles/1.4.1-dev/binary/docker-1.4.1-dev

---> Making bundle: ubuntu (in bundles/1.4.1-dev/ubuntu)
Created package {:path=>"lxc-docker-1.4.1-dev_1.4.1~dev~git20150128.182847.0.17e840a_amd64.deb"}
Created package {:path=>"lxc-docker_1.4.1~dev~git20150128.182847.0.17e840a_amd64.deb"}

```

As noted in a comment in the code, this sums up the reasoning for this change (this is how APT and reprepro compare versions):
```console
$ dpkg --compare-versions 1.5.0 gt 1.5.0~rc1 && echo true || echo false
true
$ dpkg --compare-versions 1.5.0~rc1 gt 1.5.0~git20150128.112847.17e840a && echo true || echo false
true
$ dpkg --compare-versions 1.5.0~git20150128.112847.17e840a gt 1.5.0~dev~git20150128.112847.17e840a && echo true || echo false
true
```

ie, `1.5.0` > `1.5.0~rc1` > `1.5.0~git20150128.112847.17e840a` > `1.5.0~dev~git20150128.112847.17e840a`

Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
2015-01-28 13:51:12 -08:00
Arnaud Porterie
b7c3fdfd0d Update fish completion for 1.5.0
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
2015-01-28 10:29:29 -08:00
Phil Estes
aa682a845b Fix bridge initialization for IPv6 if IPv4-only docker0 exists
This fixes the daemon's failure to start when setting --ipv6=true for
the first time without deleting `docker0` bridge from a prior use with
only IPv4 addressing.

The addition of the IPv6 bridge address is factored out into a separate
initialization routine which is called even if the bridge exists but no
IPv6 addresses are found.

Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
2015-01-28 10:29:29 -08:00
Jonathan Rudenberg
218d0dcc9d Fix missing err assignment in bridge creation
Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>
2015-01-28 10:29:29 -08:00
Stephen J Day
510d8f8634 Open up v2 http status code checks for put and head checks
In certain cases, such as when putting a manifest or checking for the existence
of a layer, the status code checks in session_v2.go were too narrow for their
purpose. In the case of putting a manifest, the handler only cares that an
error is not returned. Whether it is a 304 or 202 does not matter, as long as
the server reports success. Having the client only accept specific http codes
inhibits future protocol evolution.

Signed-off-by: Stephen J Day <stephen.day@docker.com>
2015-01-28 08:48:03 -08:00
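A minimal sketch of the looser check (the helper name is hypothetical, not the session_v2.go code): treat any non-error status as success instead of matching a single exact code.

```go
package main

import (
	"fmt"
	"net/http"
)

// successfulResponse accepts any 2xx or 3xx status, so a 202 Accepted and a
// 304 Not Modified are both treated as success for a manifest PUT.
func successfulResponse(resp *http.Response) bool {
	return resp.StatusCode >= 200 && resp.StatusCode < 400
}

func main() {
	fmt.Println(successfulResponse(&http.Response{StatusCode: http.StatusAccepted}))    // true
	fmt.Println(successfulResponse(&http.Response{StatusCode: http.StatusNotModified})) // true
	fmt.Println(successfulResponse(&http.Response{StatusCode: http.StatusNotFound}))    // false
}
```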
Derek McGowan
b65600f6b6 Buffer tar file on v2 push
fixes #10312
fixes #10306

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-28 08:48:03 -08:00
Jessica Frazelle
79dcea718c Add completion for stats.
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-01-28 08:45:57 -08:00
Sven Dowideit
072b09c45d Add the registry mirror document to the menu
Signed-off-by: Sven Dowideit <SvenDowideit@docker.com>

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@docker.com> (github: SvenDowideit)
2015-01-27 19:35:26 -08:00
Derek McGowan
c2d9837745 Use layer checksum if calculated during manifest creation
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:26 -08:00
Josh Hawn
fa5dfbb18b Fix premature close of build output on pull
The build job will sometimes trigger a pull job when the base image
does not exist. Now that engine jobs properly close their output by default,
the pull job would also close the build job's stdout in a cascading close
upon completion of the pull.

This patch corrects this by wrapping the `pull` job's stdout with a
nopCloseWriter which will not close the stdout of the `build` job.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-01-27 19:35:26 -08:00
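The wrapper itself is tiny; here is a self-contained sketch of a `nopCloseWriter`, simplified from the idea described above rather than the exact engine code:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// nopCloseWriter forwards writes but swallows Close, so a child job closing
// its output does not also close the parent job's stdout.
type nopCloseWriter struct {
	io.Writer
}

func (nopCloseWriter) Close() error { return nil }

func main() {
	var pullOut io.WriteCloser = nopCloseWriter{os.Stdout}
	fmt.Fprintln(pullOut, "pull progress...")
	pullOut.Close() // no-op: os.Stdout stays open for the build job
	fmt.Println("build output still works")
}
```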
Sven Dowideit
6532a075f3 tell users what IP range Hub webhooks can come from so they can filter
Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@docker.com> (github: SvenDowideit)

Signed-off-by: Sven Dowideit <SvenDowideit@docker.com>
2015-01-27 19:35:26 -08:00
Chen Hanxiao
3b4a4bf809 docs: fix a typo in docker-build man page
s/Dockefile/Dockerfile

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
2015-01-27 19:35:26 -08:00
Tony Miller
4602909566 fix /etc/host typo in remote API docs
Signed-off-by: Tony Miller <mcfiredrill@gmail.com>
2015-01-27 19:35:25 -08:00
Sven Dowideit
588f350b61 as we're not using the search suggestion feature, only load the search_content when we have a search ?q= param
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au> (github: SvenDowideit)
2015-01-27 19:35:25 -08:00
Sven Dowideit
6e5ff509b2 set the content-type for the search_content.json.gz
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au> (github: SvenDowideit)
2015-01-27 19:35:25 -08:00
Sven Dowideit
61d341c2ca Change to load the json.gz file
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au> (github: SvenDowideit)
2015-01-27 19:35:25 -08:00
unclejack
b996d379a1 docs: compress search_content.json for release
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>

Docker-DCO-1.1-Signed-off-by: unclejack <unclejacksons@gmail.com> (github: SvenDowideit)
2015-01-27 19:35:25 -08:00
Derek McGowan
b0935ea730 Better error messaging and logging for v2 registry requests
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:25 -08:00
Derek McGowan
96fe13b49b Add file path to errors loading the key file
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:25 -08:00
Brian Goff
12ccde442a Do not return err on symlink eval
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2015-01-27 19:35:25 -08:00
Michael Crosby
4262cfe41f Remove omitempty json tags from structs
When unmarshaling the JSON response from the API into a dynamic object in
other languages, the omitempty field tag on types such as float64 causes
the key to be omitted for 0.0 values. Various languages will interpret
this as null when 0.0 is the actual value.

This patch removes the omitempty tags on fields that are not structs,
where they can be safely omitted.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-01-27 19:35:25 -08:00
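A small, self-contained Go sketch of the problem (the struct names are illustrative): with `omitempty`, a float64 of 0.0 vanishes from the marshaled JSON entirely.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type withOmit struct {
	CPUPercent float64 `json:"cpu_percent,omitempty"`
}

type withoutOmit struct {
	CPUPercent float64 `json:"cpu_percent"`
}

func main() {
	a, _ := json.Marshal(withOmit{CPUPercent: 0.0})
	b, _ := json.Marshal(withoutOmit{CPUPercent: 0.0})
	fmt.Println(string(a)) // {}
	fmt.Println(string(b)) // {"cpu_percent":0}
}
```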
Brian Goff
ddc2e25546 Fix bind-mounts only partially removed
When calling delete on a bind-mount volume, the config file was being
removed, but it was not actually being removed from the volume index.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2015-01-27 19:35:25 -08:00
unclejack
6646cff646 docs: shrink sprites-small_360.png
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
2015-01-27 19:35:25 -08:00
Euan
ac8fd856c0 Allow empty layer configs in manifests
Before the V2 registry changes, images with no config could be pushed.
This change fixes a regression that prevented those images from being
pushed to a registry.

Signed-off-by: Euan Kemp <euank@euank.com>
2015-01-27 19:35:25 -08:00
DiuDiugirl
48754d673c Fix a minor typo
Docker inspect can also be used on images; this patch fixes the
minor typo in docker/flags.go and docs/man/docker.1.md.

Signed-off-by:  DiuDiugirl <sophia.wang@pku.edu.cn>
2015-01-27 19:35:25 -08:00
unclejack
723684525a pkg/archive: remove tar autodetection log line
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
2015-01-27 19:35:24 -08:00
Derek McGowan
32aceadbe6 Revert progressreader to not defer close
When the progress reader closes, it overwrites the progress line with the full progress bar, replacing the completed message.

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:24 -08:00
Derek McGowan
c67d3e159c Use filepath instead of path
Currently loading the trust key uses path instead of filepath. This creates problems on some operating systems such as Windows.

Fixes #10319

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:24 -08:00
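A quick Go illustration of the difference (the key path is made up): `path.Join` always joins with forward slashes, while `path/filepath` uses the running OS's separator.

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// path.Join joins with "/" on every OS.
	fmt.Println(path.Join(`C:\ProgramData\docker`, "key.json"))
	// filepath.Join uses the OS separator: on Windows this yields
	// C:\ProgramData\docker\key.json, while on Linux both lines match.
	fmt.Println(filepath.Join(`C:\ProgramData\docker`, "key.json"))
}
```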
Jessica Frazelle
a080e2add7 Make debug logs suck less.
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-01-27 19:35:24 -08:00
Josh Hawn
24d81b0ddb Always store images with tarsum.v1 checksum added
Updates `image.StoreImage()` to always ensure that images
that are installed in Docker have a tarsum.v1 checksum.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-01-27 19:35:24 -08:00
Tony Miller
08f2fad40b document the ExtraHosts parameter for /containers/create for the remote API
I think this was added in version 1.15.

Signed-off-by: Tony Miller <mcfiredrill@gmail.com>
2015-01-27 19:35:24 -08:00
GennadySpb
f91fbe39ce Update using_supervisord.md
Fix factual error

change made by: GennadySpb <lipenkov@gmail.com>

Signed-off-by: Sven Dowideit <SvenDowideit@docker.com>
2015-01-27 19:35:24 -08:00
Tianon Gravi
018ab080bb Remove windows from the list of supported platforms
Since it can still be tested natively without this, this won't cause any harm while we fix the tests to actually work on Windows.

Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
2015-01-27 19:35:24 -08:00
Tibor Vass
fe94ecb2c1 integration-cli: wait for container before sending ^D
Signed-off-by: Tibor Vass <teabee89@gmail.com>
2015-01-27 19:35:24 -08:00
Lorenz Leutgeb
7b2e67036f Fix inconsistent formatting
Colon was bold, but regular at other occurrences.

Blame cf27b310c4

Signed-off-by: Lorenz Leutgeb <lorenz.leutgeb@gmail.com>
2015-01-27 19:35:24 -08:00
Lorenz Leutgeb
e130faea1b doc: Minor semantical/editorial fixes in HTTPS article
"read-only" vs. "only readable by you"

Refer to:
https://github.com/docker/docker/pull/9952#discussion_r22690266

Signed-off-by: Lorenz Leutgeb <lorenz.leutgeb@gmail.com>
2015-01-27 19:35:24 -08:00
Lorenz Leutgeb
38f09de334 doc: Editorial changes as suggested by @fredlf
Refer to:
 * https://github.com/docker/docker/pull/9952#discussion_r22686652
 * https://github.com/docker/docker/pull/9952#discussion_r22686804

Signed-off-by: Lorenz Leutgeb <lorenz.leutgeb@gmail.com>
2015-01-27 19:35:24 -08:00
Lorenz Leutgeb
f9ba68ddfb doc: Improve article on HTTPS
* Adjust header to match _page_title
 * Add instructions on deletion of CSRs and setting permissions
 * Simplify some path expressions and commands
 * Consequently use ~ instead of ${HOME}
 * Precise formulation ('key' vs. 'public key')
 * Fix wrong indentation of output of `openssl req`
 * Use dash ('--') instead of minus ('-')

Remark on permissions:

It's not a problem to `chmod 0400` the private keys, because the
Docker daemon runs as root (can read the file anyway) and the Docker
client runs as user.

Signed-off-by: Lorenz Leutgeb <lorenz.leutgeb@gmail.com>
2015-01-27 19:35:23 -08:00
Abin Shahab
16913455bd Fixes apparmor regression
Signed-off-by: Abin Shahab <ashahab@altiscale.com> (github: ashahab-altiscale)
Docker-DCO-1.1-Signed-off-by: Abin Shahab <ashahab@altiscale.com> (github: ashahab-altiscale)
2015-01-27 19:35:23 -08:00
Andrew C. Bodine
32f189cd08 Adds docs for /containers/(id)/attach/ws api endpoint
Signed-off-by: Andrew C. Bodine <acbodine@us.ibm.com>
2015-01-27 19:35:23 -08:00
Josh Hawn
526ca42282 Split API Version header when checking for v2
Since the Docker-Distribution-API-Version header value may contain multiple
space delimited versions as well as many instances of the header key, the
header value is now split on whitespace characters to iterate over all versions
that may be listed in one instance of the header.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-01-27 19:35:23 -08:00
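A minimal sketch of that header handling (the helper name is hypothetical, and it assumes the v2 registry advertises the value `registry/2.0`): collect every instance of the header and split each value on whitespace.

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// supportsV2 scans every instance of the Docker-Distribution-API-Version
// header and splits each value on whitespace, so a version is found even
// when several are packed into one header line.
func supportsV2(h http.Header) bool {
	for _, value := range h[http.CanonicalHeaderKey("Docker-Distribution-API-Version")] {
		for _, version := range strings.Fields(value) {
			if version == "registry/2.0" {
				return true
			}
		}
	}
	return false
}

func main() {
	h := http.Header{}
	h.Add("Docker-Distribution-API-Version", "registry/1.0 registry/2.0")
	fmt.Println(supportsV2(h)) // true
}
```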
Harald Albers
b98b42d843 Add bash completions for daemon flags, simplify with extglob
Implementing the daemon flags the traditional way introduced even more
redundancy than usual because the same list of options with flags
had to be added twice.

This can be avoided by using variables in the case statements when
using the extglob shell option.

Signed-off-by: Harald Albers <github@albersweb.de>
2015-01-27 19:35:23 -08:00
imre Fitos
7bf03dd132 fix typo 'setup/set up'
Signed-off-by: imre Fitos <imre.fitos+github@gmail.com>
2015-01-27 19:35:23 -08:00
imre Fitos
034aa3b2c4 start docker before checking for updated NAT rule
Signed-off-by: imre Fitos <imre.fitos+github@gmail.com>
2015-01-27 19:35:23 -08:00
imre Fitos
6da1e01e6c docs: remove NAT rule when removing bridge
Signed-off-by: imre Fitos <imre.fitos+github@gmail.com>
2015-01-27 19:35:23 -08:00
115 changed files with 3247 additions and 1374 deletions

View File

@@ -1,5 +1,39 @@
# Changelog
## 1.7.0 (2015-06-16)
#### Runtime
+ Experimental feature: support for out-of-process volume plugins
+ Experimental feature: support for out-of-process network plugins
* Logging: syslog logging driver is available
* The userland proxy can be disabled in favor of hairpin NAT using the daemon's `--userland-proxy=false` flag
* The `exec` command supports the `-u|--user` flag to specify the new process owner
+ Default gateway for containers can be specified daemon-wide using the `--default-gateway` and `--default-gateway-v6` flags
+ The CPU CFS (Completely Fair Scheduler) quota can be set in `docker run` using `--cpu-quota`
+ Container block IO can be controlled in `docker run` using `--blkio-weight`
+ ZFS support
+ The `docker logs` command supports a `--since` argument
+ UTS namespace can be shared with the host with `docker run --uts=host`
#### Quality
* Networking stack was entirely rewritten as part of the libnetwork effort
* Engine internals refactoring (shout out to the contributors?)
* Volumes code was entirely rewritten to support the plugins effort
+ Sending SIGUSR1 to a daemon will dump all goroutines stacks without exiting
#### Build
+ Support ${variable:-value} and ${variable:+value} syntax for environment variables
+ Support resource management flags `--cgroup-parent`, `--cpu-period`, `--cpu-quota`, `--cpuset-cpus`, `--cpuset-mems`
+ git context changes with branches and directories
* The .dockerignore file supports exclusion rules
#### Distribution
+ Client support for v2 mirroring support for the official registry
#### Bugfixes
* Firewalld is now supported and will automatically be used when available
* mounting --device recursively
## 1.6.2 (2015-05-13)
#### Runtime

View File

@@ -180,7 +180,7 @@ Contributing to Docker
======================
[![GoDoc](https://godoc.org/github.com/docker/docker?status.svg)](https://godoc.org/github.com/docker/docker)
[![Jenkins Build Status](https://jenkins.dockerproject.com/job/Docker%20Master/badge/icon)](https://jenkins.dockerproject.com/job/Docker%20Master/)
[![Jenkins Build Status](https://jenkins.dockerproject.org/job/Docker%20Master/badge/icon)](https://jenkins.dockerproject.org/job/Docker%20Master/)
Want to hack on Docker? Awesome! We have [instructions to help you get
started contributing code or documentation.](https://docs.docker.com/project/who-written-for/).
@@ -192,12 +192,12 @@ Getting the development builds
==============================
Want to run Docker from a master build? You can download
master builds at [master.dockerproject.com](https://master.dockerproject.com).
master builds at [master.dockerproject.org](https://master.dockerproject.org).
They are updated with each commit merged into the master branch.
Don't know how to use that super cool new feature in the master build? Check
out the master docs at
[docs.master.dockerproject.com](http://docs.master.dockerproject.com).
[docs.master.dockerproject.org](http://docs.master.dockerproject.org).
How the project is run
======================

View File

@@ -1 +1 @@
1.7.0-dev
1.7.0-rc2

View File

@@ -7,7 +7,6 @@ import (
"fmt"
"io"
"net/http"
"os"
"path/filepath"
"reflect"
"strings"
@@ -97,7 +96,7 @@ func (cli *DockerCli) Cmd(args ...string) error {
if len(args) > 0 {
method, exists := cli.getMethod(args[0])
if !exists {
return fmt.Errorf("docker: '%s' is not a docker command. See 'docker --help'.", args[0])
return fmt.Errorf("docker: '%s' is not a docker command.\nSee 'docker --help'.", args[0])
}
return method(args[1:]...)
}
@@ -117,18 +116,19 @@ func (cli *DockerCli) Subcmd(name, signature, description string, exitOnError bo
errorHandling = flag.ContinueOnError
}
flags := flag.NewFlagSet(name, errorHandling)
if signature != "" {
signature = " " + signature
}
flags.Usage = func() {
flags.ShortUsage()
flags.PrintDefaults()
}
flags.ShortUsage = func() {
options := ""
if signature != "" {
signature = " " + signature
}
if flags.FlagCountUndeprecated() > 0 {
options = " [OPTIONS]"
}
fmt.Fprintf(cli.out, "\nUsage: docker %s%s%s\n\n%s\n\n", name, options, signature, description)
flags.SetOutput(cli.out)
flags.PrintDefaults()
os.Exit(0)
fmt.Fprintf(flags.Out(), "\nUsage: docker %s%s%s\n\n%s\n", name, options, signature, description)
}
return flags
}

View File

@@ -145,6 +145,7 @@ func (cli *DockerCli) CmdCreate(args ...string) error {
config, hostConfig, cmd, err := runconfig.Parse(cmd, args)
if err != nil {
cmd.ReportError(err.Error(), true)
os.Exit(1)
}
if config.Image == "" {
cmd.Usage()

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"github.com/docker/docker/api/types"
"github.com/docker/docker/pkg/ioutils"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/units"
)
@@ -15,7 +16,7 @@ import (
func (cli *DockerCli) CmdInfo(args ...string) error {
cmd := cli.Subcmd("info", "", "Display system-wide information", true)
cmd.Require(flag.Exact, 0)
cmd.ParseFlags(args, false)
cmd.ParseFlags(args, true)
rdr, _, err := cli.call("GET", "/info", nil, nil)
if err != nil {
@@ -29,20 +30,20 @@ func (cli *DockerCli) CmdInfo(args ...string) error {
fmt.Fprintf(cli.out, "Containers: %d\n", info.Containers)
fmt.Fprintf(cli.out, "Images: %d\n", info.Images)
fmt.Fprintf(cli.out, "Storage Driver: %s\n", info.Driver)
ioutils.FprintfIfNotEmpty(cli.out, "Storage Driver: %s\n", info.Driver)
if info.DriverStatus != nil {
for _, pair := range info.DriverStatus {
fmt.Fprintf(cli.out, " %s: %s\n", pair[0], pair[1])
}
}
fmt.Fprintf(cli.out, "Execution Driver: %s\n", info.ExecutionDriver)
fmt.Fprintf(cli.out, "Logging Driver: %s\n", info.LoggingDriver)
fmt.Fprintf(cli.out, "Kernel Version: %s\n", info.KernelVersion)
fmt.Fprintf(cli.out, "Operating System: %s\n", info.OperatingSystem)
ioutils.FprintfIfNotEmpty(cli.out, "Execution Driver: %s\n", info.ExecutionDriver)
ioutils.FprintfIfNotEmpty(cli.out, "Logging Driver: %s\n", info.LoggingDriver)
ioutils.FprintfIfNotEmpty(cli.out, "Kernel Version: %s\n", info.KernelVersion)
ioutils.FprintfIfNotEmpty(cli.out, "Operating System: %s\n", info.OperatingSystem)
fmt.Fprintf(cli.out, "CPUs: %d\n", info.NCPU)
fmt.Fprintf(cli.out, "Total Memory: %s\n", units.BytesSize(float64(info.MemTotal)))
fmt.Fprintf(cli.out, "Name: %s\n", info.Name)
fmt.Fprintf(cli.out, "ID: %s\n", info.ID)
ioutils.FprintfIfNotEmpty(cli.out, "Name: %s\n", info.Name)
ioutils.FprintfIfNotEmpty(cli.out, "ID: %s\n", info.ID)
if info.Debug {
fmt.Fprintf(cli.out, "Debug mode (server): %v\n", info.Debug)
@@ -55,15 +56,9 @@ func (cli *DockerCli) CmdInfo(args ...string) error {
fmt.Fprintf(cli.out, "Docker Root Dir: %s\n", info.DockerRootDir)
}
if info.HttpProxy != "" {
fmt.Fprintf(cli.out, "Http Proxy: %s\n", info.HttpProxy)
}
if info.HttpsProxy != "" {
fmt.Fprintf(cli.out, "Https Proxy: %s\n", info.HttpsProxy)
}
if info.NoProxy != "" {
fmt.Fprintf(cli.out, "No Proxy: %s\n", info.NoProxy)
}
ioutils.FprintfIfNotEmpty(cli.out, "Http Proxy: %s\n", info.HttpProxy)
ioutils.FprintfIfNotEmpty(cli.out, "Https Proxy: %s\n", info.HttpsProxy)
ioutils.FprintfIfNotEmpty(cli.out, "No Proxy: %s\n", info.NoProxy)
if info.IndexServerAddress != "" {
u := cli.configFile.AuthConfigs[info.IndexServerAddress].Username

View File

@@ -16,7 +16,7 @@ func (cli *DockerCli) CmdLogout(args ...string) error {
cmd := cli.Subcmd("logout", "[SERVER]", "Log out from a Docker registry, if no server is\nspecified \""+registry.IndexServerAddress()+"\" is the default.", true)
cmd.Require(flag.Max, 1)
cmd.ParseFlags(args, false)
cmd.ParseFlags(args, true)
serverAddress := registry.IndexServerAddress()
if len(cmd.Args()) > 0 {
serverAddress = cmd.Arg(0)

View File

@@ -12,7 +12,7 @@ import (
func (cli *DockerCli) CmdPause(args ...string) error {
cmd := cli.Subcmd("pause", "CONTAINER [CONTAINER...]", "Pause all processes within a container", true)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, false)
cmd.ParseFlags(args, true)
var errNames []string
for _, name := range cmd.Args() {

View File

@@ -57,6 +57,7 @@ func (cli *DockerCli) CmdRun(args ...string) error {
// just in case the Parse does not exit
if err != nil {
cmd.ReportError(err.Error(), true)
os.Exit(1)
}
if len(hostConfig.Dns) > 0 {

View File

@@ -46,7 +46,6 @@ func (s *containerStats) Collect(cli *DockerCli, streamStats bool) {
var (
previousCPU uint64
previousSystem uint64
start = true
dec = json.NewDecoder(stream)
u = make(chan error, 1)
)
@@ -61,10 +60,9 @@ func (s *containerStats) Collect(cli *DockerCli, streamStats bool) {
memPercent = float64(v.MemoryStats.Usage) / float64(v.MemoryStats.Limit) * 100.0
cpuPercent = 0.0
)
if !start {
cpuPercent = calculateCPUPercent(previousCPU, previousSystem, v)
}
start = false
previousCPU = v.PreCpuStats.CpuUsage.TotalUsage
previousSystem = v.PreCpuStats.SystemUsage
cpuPercent = calculateCPUPercent(previousCPU, previousSystem, v)
s.mu.Lock()
s.CPUPercentage = cpuPercent
s.Memory = float64(v.MemoryStats.Usage)
@@ -73,8 +71,6 @@ func (s *containerStats) Collect(cli *DockerCli, streamStats bool) {
s.NetworkRx = float64(v.Network.RxBytes)
s.NetworkTx = float64(v.Network.TxBytes)
s.mu.Unlock()
previousCPU = v.CpuStats.CpuUsage.TotalUsage
previousSystem = v.CpuStats.SystemUsage
u <- nil
if !streamStats {
return
@@ -151,7 +147,7 @@ func (cli *DockerCli) CmdStats(args ...string) error {
}
// do a quick pause so that any failed connections for containers that do not exist are able to be
// evicted before we display the initial or default values.
time.Sleep(500 * time.Millisecond)
time.Sleep(1500 * time.Millisecond)
var errs []string
for _, c := range cStats {
c.mu.Lock()

View File

@@ -12,7 +12,7 @@ import (
func (cli *DockerCli) CmdUnpause(args ...string) error {
cmd := cli.Subcmd("unpause", "CONTAINER [CONTAINER...]", "Unpause all processes within a container", true)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, false)
cmd.ParseFlags(args, true)
var errNames []string
for _, name := range cmd.Args() {

View File

@@ -9,6 +9,7 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/autogen/dockerversion"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/utils"
)
// CmdVersion shows Docker version information.
@@ -20,7 +21,7 @@ func (cli *DockerCli) CmdVersion(args ...string) error {
cmd := cli.Subcmd("version", "", "Show the Docker version information.", true)
cmd.Require(flag.Exact, 0)
cmd.ParseFlags(args, false)
cmd.ParseFlags(args, true)
if dockerversion.VERSION != "" {
fmt.Fprintf(cli.out, "Client version: %s\n", dockerversion.VERSION)
@@ -31,6 +32,9 @@ func (cli *DockerCli) CmdVersion(args ...string) error {
fmt.Fprintf(cli.out, "Git commit (client): %s\n", dockerversion.GITCOMMIT)
}
fmt.Fprintf(cli.out, "OS/Arch (client): %s/%s\n", runtime.GOOS, runtime.GOARCH)
if utils.ExperimentalBuild() {
fmt.Fprintf(cli.out, "Experimental (client): true\n")
}
stream, _, err := cli.call("GET", "/version", nil, nil)
if err != nil {
@@ -50,6 +54,8 @@ func (cli *DockerCli) CmdVersion(args ...string) error {
fmt.Fprintf(cli.out, "Go version (server): %s\n", v.GoVersion)
fmt.Fprintf(cli.out, "Git commit (server): %s\n", v.GitCommit)
fmt.Fprintf(cli.out, "OS/Arch (server): %s/%s\n", v.Os, v.Arch)
if v.Experimental {
fmt.Fprintf(cli.out, "Experimental (server): true\n")
}
return nil
}

View File

@@ -11,6 +11,15 @@ func boolValue(r *http.Request, k string) bool {
return !(s == "" || s == "0" || s == "no" || s == "false" || s == "none")
}
// boolValueOrDefault returns the default bool passed if the query param is
// missing, otherwise it's just a proxy to boolValue above
func boolValueOrDefault(r *http.Request, k string, d bool) bool {
if _, ok := r.Form[k]; !ok {
return d
}
return boolValue(r, k)
}
func int64ValueOrZero(r *http.Request, k string) int64 {
val, err := strconv.ParseInt(r.FormValue(k), 10, 64)
if err != nil {

View File

@@ -33,6 +33,21 @@ func TestBoolValue(t *testing.T) {
}
}
func TestBoolValueOrDefault(t *testing.T) {
r, _ := http.NewRequest("GET", "", nil)
if !boolValueOrDefault(r, "queryparam", true) {
t.Fatal("Expected to get true default value, got false")
}
v := url.Values{}
v.Set("param", "")
r, _ = http.NewRequest("GET", "", nil)
r.Form = v
if boolValueOrDefault(r, "param", true) {
t.Fatal("Expected not to get true")
}
}
func TestInt64ValueOrZero(t *testing.T) {
cases := map[string]int64{
"": 0,

View File

@@ -97,15 +97,17 @@ func (s *Server) ServeApi(protoAddrs []string) error {
if err != nil {
return err
}
s.servers = append(s.servers, srv)
s.servers = append(s.servers, srv...)
go func(proto, addr string) {
logrus.Infof("Listening for HTTP on %s (%s)", proto, addr)
if err := srv.Serve(); err != nil && strings.Contains(err.Error(), "use of closed network connection") {
err = nil
}
chErrors <- err
}(protoAddrParts[0], protoAddrParts[1])
for _, s := range srv {
logrus.Infof("Listening for HTTP on %s (%s)", protoAddrParts[0], protoAddrParts[1])
go func(s serverCloser) {
if err := s.Serve(); err != nil && strings.Contains(err.Error(), "use of closed network connection") {
err = nil
}
chErrors <- err
}(s)
}
}
for i := 0; i < len(protoAddrs); i++ {
@@ -252,6 +254,11 @@ func (s *Server) getVersion(version version.Version, w http.ResponseWriter, r *h
Os: runtime.GOOS,
Arch: runtime.GOARCH,
}
if version.GreaterThanOrEqualTo("1.19") {
v.Experimental = utils.ExperimentalBuild()
}
if kernelVersion, err := kernel.GetKernelVersion(); err == nil {
v.KernelVersion = kernelVersion.String()
}
@@ -271,14 +278,20 @@ func (s *Server) postContainersKill(version version.Version, w http.ResponseWrit
name := vars["name"]
// If we have a signal, look at it. Otherwise, do nothing
if sigStr := vars["signal"]; sigStr != "" {
if sigStr := r.Form.Get("signal"); sigStr != "" {
// Check if we passed the signal as a number:
// The largest legal signal is 31, so let's parse on 5 bits
sig, err := strconv.ParseUint(sigStr, 10, 5)
sigN, err := strconv.ParseUint(sigStr, 10, 5)
if err != nil {
// The signal is not a number, treat it as a string (either like
// "KILL" or like "SIGKILL")
sig = uint64(signal.SignalMap[strings.TrimPrefix(sigStr, "SIG")])
syscallSig, ok := signal.SignalMap[strings.TrimPrefix(sigStr, "SIG")]
if !ok {
return fmt.Errorf("Invalid signal: %s", sigStr)
}
sig = uint64(syscallSig)
} else {
sig = sigN
}
if sig == 0 {
@@ -453,6 +466,12 @@ func (s *Server) getEvents(version version.Version, w http.ResponseWriter, r *ht
return err
}
}
var closeNotify <-chan bool
if closeNotifier, ok := w.(http.CloseNotifier); ok {
closeNotify = closeNotifier.CloseNotify()
}
for {
select {
case ev := <-l:
@@ -465,6 +484,9 @@ func (s *Server) getEvents(version version.Version, w http.ResponseWriter, r *ht
}
case <-timer.C:
return nil
case <-closeNotify:
logrus.Debug("Client disconnected, stop sending events")
return nil
}
}
}
@@ -550,7 +572,7 @@ func (s *Server) getContainersStats(version version.Version, w http.ResponseWrit
return fmt.Errorf("Missing parameter")
}
return s.daemon.ContainerStats(vars["name"], boolValue(r, "stream"), ioutils.NewWriteFlusher(w))
return s.daemon.ContainerStats(vars["name"], boolValueOrDefault(r, "stream", true), ioutils.NewWriteFlusher(w))
}
func (s *Server) getContainersLogs(version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -1133,6 +1155,14 @@ func (s *Server) getContainersByName(version version.Version, w http.ResponseWri
return fmt.Errorf("Missing parameter")
}
if version.LessThan("1.19") {
containerJSONRaw, err := s.daemon.ContainerInspectRaw(vars["name"])
if err != nil {
return err
}
return writeJSON(w, http.StatusOK, containerJSONRaw)
}
containerJSON, err := s.daemon.ContainerInspect(vars["name"])
if err != nil {
return err

View File

@@ -12,57 +12,48 @@ import (
"github.com/docker/docker/pkg/systemd"
)
// newServer sets up the required serverCloser and does protocol specific checking.
func (s *Server) newServer(proto, addr string) (serverCloser, error) {
// newServer sets up the required serverClosers and does protocol specific checking.
func (s *Server) newServer(proto, addr string) ([]serverCloser, error) {
var (
err error
l net.Listener
ls []net.Listener
)
switch proto {
case "fd":
ls, err := systemd.ListenFD(addr)
ls, err = systemd.ListenFD(addr)
if err != nil {
return nil, err
}
chErrors := make(chan error, len(ls))
// We don't want to start serving on these sockets until the
// daemon is initialized and installed. Otherwise required handlers
// won't be ready.
<-s.start
// Since ListenFD will return one or more sockets we have
// to create a go func to spawn off multiple serves
for i := range ls {
listener := ls[i]
go func() {
httpSrv := http.Server{Handler: s.router}
chErrors <- httpSrv.Serve(listener)
}()
}
for i := 0; i < len(ls); i++ {
if err := <-chErrors; err != nil {
return nil, err
}
}
return nil, nil
case "tcp":
l, err = s.initTcpSocket(addr)
l, err := s.initTcpSocket(addr)
if err != nil {
return nil, err
}
ls = append(ls, l)
case "unix":
if l, err = sockets.NewUnixSocket(addr, s.cfg.SocketGroup, s.start); err != nil {
l, err := sockets.NewUnixSocket(addr, s.cfg.SocketGroup, s.start)
if err != nil {
return nil, err
}
ls = append(ls, l)
default:
return nil, fmt.Errorf("Invalid protocol format: %q", proto)
}
return &HttpServer{
&http.Server{
Addr: addr,
Handler: s.router,
},
l,
}, nil
var res []serverCloser
for _, l := range ls {
res = append(res, &HttpServer{
&http.Server{
Addr: addr,
Handler: s.router,
},
l,
})
}
return res, nil
}
func (s *Server) AcceptConnections(d *daemon.Daemon) {

View File

@@ -81,6 +81,7 @@ type Network struct {
type Stats struct {
Read time.Time `json:"read"`
Network Network `json:"network,omitempty"`
PreCpuStats CpuStats `json:"precpu_stats,omitempty"`
CpuStats CpuStats `json:"cpu_stats,omitempty"`
MemoryStats MemoryStats `json:"memory_stats,omitempty"`
BlkioStats BlkioStats `json:"blkio_stats,omitempty"`

View File

@@ -101,16 +101,16 @@ type Port struct {
}
type Container struct {
ID string `json:"Id"`
Names []string `json:",omitempty"`
Image string `json:",omitempty"`
Command string `json:",omitempty"`
Created int `json:",omitempty"`
Ports []Port `json:",omitempty"`
SizeRw int `json:",omitempty"`
SizeRootFs int `json:",omitempty"`
Labels map[string]string `json:",omitempty"`
Status string `json:",omitempty"`
ID string `json:"Id"`
Names []string
Image string
Command string
Created int
Ports []Port
SizeRw int `json:",omitempty"`
SizeRootFs int `json:",omitempty"`
Labels map[string]string
Status string
}
// POST "/containers/"+containerID+"/copy"
@@ -132,6 +132,7 @@ type Version struct {
Os string
Arch string
KernelVersion string `json:",omitempty"`
Experimental bool `json:",omitempty"`
}
// GET "/info"
@@ -194,12 +195,11 @@ type ContainerState struct {
}
// GET "/containers/{name:.*}/json"
type ContainerJSON struct {
type ContainerJSONBase struct {
Id string
Created time.Time
Path string
Args []string
Config *runconfig.Config
State *ContainerState
Image string
NetworkSettings *network.Settings
@@ -219,3 +219,24 @@ type ContainerJSON struct {
ExecIDs []string
HostConfig *runconfig.HostConfig
}
type ContainerJSON struct {
*ContainerJSONBase
Config *runconfig.Config
}
// backcompatibility struct along with ContainerConfig
type ContainerJSONRaw struct {
*ContainerJSONBase
Config *ContainerConfig
}
type ContainerConfig struct {
*runconfig.Config
// backward compatibility, they now live in HostConfig
Memory int64
MemorySwap int64
CpuShares int64
Cpuset string
}

View File

@@ -0,0 +1,15 @@
#
# THIS FILE IS AUTOGENERATED; SEE "contrib/builder/rpm/generate.sh"!
#
FROM fedora:22
RUN yum install -y @development-tools fedora-packager
RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel sqlite-devel tar
ENV GO_VERSION 1.4.2
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
ENV AUTO_GOPATH 1
ENV DOCKER_BUILDTAGS selinux

View File

@@ -212,11 +212,12 @@ _docker_docker() {
--selinux-enabled
--tls
--tlsverify
--userland-proxy=false
--version -v
"
case "$prev" in
--graph|-g)
--exec-root|--graph|-g)
_filedir -d
return
;;
@@ -267,22 +268,25 @@ _docker_attach() {
_docker_build() {
case "$prev" in
--tag|-t)
__docker_image_repos_and_tags
--cgroup-parent|--cpuset-cpus|--cpuset-mems|--cpu-shares|-c|--cpu-period|--cpu-quota|--memory|-m|--memory-swap)
return
;;
--file|-f)
_filedir
return
;;
--tag|-t)
__docker_image_repos_and_tags
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--cpu-shares -c --cpuset-cpus --cpu-quota --file -f --force-rm --help --memory -m --memory-swap --no-cache --pull --quiet -q --rm --tag -t" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--cgroup-parent --cpuset-cpus --cpuset-mems --cpu-shares -c --cpu-period --cpu-quota --file -f --force-rm --help --memory -m --memory-swap --no-cache --pull --quiet -q --rm --tag -t" -- "$cur" ) )
;;
*)
local counter="$(__docker_pos_first_nonflag '--tag|-t')"
local counter="$(__docker_pos_first_nonflag '--cgroup-parent|--cpuset-cpus|--cpuset-mems|--cpu-shares|-c|--cpu-period|--cpu-quota|--file|-f|--memory|-m|--memory-swap|--tag|-t')"
if [ $cword -eq $counter ]; then
_filedir -d
fi
@@ -405,6 +409,12 @@ _docker_events() {
}
_docker_exec() {
case "$prev" in
--user|-u)
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--detach -d --help --interactive -i -t --tty -u --user" -- "$cur" ) )
@@ -586,7 +596,7 @@ _docker_logout() {
_docker_logs() {
case "$prev" in
--tail)
--since|--tail)
return
;;
esac
@@ -771,15 +781,16 @@ _docker_rmi() {
_docker_run() {
local options_with_args="
--add-host
--blkio-weight
--attach -a
--cap-add
--cap-drop
--cgroup-parent
--cidfile
--cpuset
--cpu-shares -c
--cpu-period
--cpu-quota
--cpu-shares -c
--device
--dns
--dns-search
@@ -805,6 +816,7 @@ _docker_run() {
--security-opt
--user -u
--ulimit
--uts
--volumes-from
--volume -v
--workdir -w
@@ -1156,6 +1168,8 @@ _docker() {
--api-cors-header
--bip
--bridge -b
--default-gateway
--default-gateway-v6
--default-ulimit
--dns
--dns-search
@@ -1203,6 +1217,9 @@ _docker() {
;;
-*)
;;
=)
(( counter++ ))
;;
*)
command="${words[$counter]}"
cpos=$counter

View File

@@ -1054,6 +1054,15 @@ func (container *Container) networkMounts() []execdriver.Mount {
return mounts
}
func (container *Container) addBindMountPoint(name, source, destination string, rw bool) {
container.MountPoints[destination] = &mountPoint{
Name: name,
Source: source,
Destination: destination,
RW: rw,
}
}
func (container *Container) addLocalMountPoint(name, destination string, rw bool) {
container.MountPoints[destination] = &mountPoint{
Name: name,

View File

@@ -183,7 +183,7 @@ func getDevicesFromPath(deviceMapping runconfig.DeviceMapping) (devs []*configs.
func populateCommand(c *Container, env []string) error {
var en *execdriver.Network
if !c.daemon.config.DisableNetwork {
if !c.Config.NetworkDisabled {
en = &execdriver.Network{
NamespacePath: c.NetworkSettings.SandboxKey,
}
@@ -227,9 +227,10 @@ func populateCommand(c *Container, env []string) error {
userSpecifiedDevices = append(userSpecifiedDevices, devs...)
}
allowedDevices := append(configs.DefaultAllowedDevices, userSpecifiedDevices...)
autoCreatedDevices := append(configs.DefaultAutoCreatedDevices, userSpecifiedDevices...)
allowedDevices := mergeDevices(configs.DefaultAllowedDevices, userSpecifiedDevices)
autoCreatedDevices := mergeDevices(configs.DefaultAutoCreatedDevices, userSpecifiedDevices)
// TODO: this can be removed after lxc-conf is fully deprecated
lxcConfig, err := mergeLxcConfIntoOptions(c.hostConfig)
@@ -309,6 +310,25 @@ func populateCommand(c *Container, env []string) error {
return nil
}
func mergeDevices(defaultDevices, userDevices []*configs.Device) []*configs.Device {
if len(userDevices) == 0 {
return defaultDevices
}
paths := map[string]*configs.Device{}
for _, d := range userDevices {
paths[d.Path] = d
}
var devs []*configs.Device
for _, d := range defaultDevices {
if _, defined := paths[d.Path]; !defined {
devs = append(devs, d)
}
}
return append(devs, userDevices...)
}
// GetSize, return real size, virtual size
func (container *Container) GetSize() (int64, int64) {
var (
@@ -493,13 +513,23 @@ func (container *Container) buildPortMapInfo(n libnetwork.Network, ep libnetwork
networkSettings.MacAddress = mac.(net.HardwareAddr).String()
}
networkSettings.Ports = nat.PortMap{}
if expData, ok := driverInfo[netlabel.ExposedPorts]; ok {
if exposedPorts, ok := expData.([]types.TransportPort); ok {
for _, tp := range exposedPorts {
natPort := nat.NewPort(tp.Proto.String(), strconv.Itoa(int(tp.Port)))
networkSettings.Ports[natPort] = nil
}
}
}
mapData, ok := driverInfo[netlabel.PortMap]
if !ok {
return networkSettings, nil
}
if portMapping, ok := mapData.([]types.PortBinding); ok {
networkSettings.Ports = nat.PortMap{}
for _, pp := range portMapping {
natPort := nat.NewPort(pp.Proto.String(), strconv.Itoa(int(pp.Port)))
natBndg := nat.PortBinding{HostIp: pp.HostIP.String(), HostPort: strconv.Itoa(int(pp.HostPort))}
@@ -913,6 +943,12 @@ func (container *Container) ReleaseNetwork() {
return
}
// If the container is not attached to any network do not try
// to release network and generate spurious error messages.
if container.NetworkSettings.NetworkID == "" {
return
}
n, err := container.daemon.netController.NetworkByID(container.NetworkSettings.NetworkID)
if err != nil {
logrus.Errorf("error locating network id %s: %v", container.NetworkSettings.NetworkID, err)

View File

@@ -213,7 +213,7 @@ func (daemon *Daemon) register(container *Container, updateSuffixarray bool) err
// we'll waste time if we update it for every container
daemon.idIndex.Add(container.ID)
if err := daemon.verifyOldVolumesInfo(container); err != nil {
if err := daemon.verifyVolumesInfo(container); err != nil {
return err
}
@@ -1003,8 +1003,9 @@ func (daemon *Daemon) Shutdown() error {
go func() {
defer group.Done()
if err := c.KillSig(15); err != nil {
logrus.Debugf("kill 15 error for %s - %s", c.ID, err)
// If container failed to exit in 10 seconds of SIGTERM, then using the force
if err := c.Stop(10); err != nil {
logrus.Errorf("Stop container %s with error: %v", c.ID, err)
}
c.WaitStop(-1 * time.Second)
logrus.Debugf("container stopped %s", c.ID)

View File

@@ -129,12 +129,25 @@ func TestLoadWithVolume(t *testing.T) {
containerId := "d59df5276e7b219d510fe70565e0404bc06350e0d4b43fe961f22f339980170e"
containerPath := filepath.Join(tmp, containerId)
if err = os.MkdirAll(containerPath, 0755); err != nil {
if err := os.MkdirAll(containerPath, 0755); err != nil {
t.Fatal(err)
}
hostVolumeId := stringid.GenerateRandomID()
volumePath := filepath.Join(tmp, "vfs", "dir", hostVolumeId)
vfsPath := filepath.Join(tmp, "vfs", "dir", hostVolumeId)
volumePath := filepath.Join(tmp, "volumes", hostVolumeId)
if err := os.MkdirAll(vfsPath, 0755); err != nil {
t.Fatal(err)
}
if err := os.MkdirAll(volumePath, 0755); err != nil {
t.Fatal(err)
}
content := filepath.Join(vfsPath, "helo")
if err := ioutil.WriteFile(content, []byte("HELO"), 0644); err != nil {
t.Fatal(err)
}
config := `{"State":{"Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":2464,"ExitCode":0,
"Error":"","StartedAt":"2015-05-26T16:48:53.869308965Z","FinishedAt":"0001-01-01T00:00:00Z"},
@@ -152,7 +165,7 @@ func TestLoadWithVolume(t *testing.T) {
"Name":"/ubuntu","Driver":"aufs","ExecDriver":"native-0.2","MountLabel":"","ProcessLabel":"","AppArmorProfile":"","RestartCount":0,
"UpdateDns":false,"Volumes":{"/vol1":"%s"},"VolumesRW":{"/vol1":true},"AppliedVolumesFrom":null}`
cfg := fmt.Sprintf(config, volumePath)
cfg := fmt.Sprintf(config, vfsPath)
if err = ioutil.WriteFile(filepath.Join(containerPath, "config.json"), []byte(cfg), 0644); err != nil {
t.Fatal(err)
}
@@ -165,10 +178,6 @@ func TestLoadWithVolume(t *testing.T) {
t.Fatal(err)
}
if err = os.MkdirAll(volumePath, 0755); err != nil {
t.Fatal(err)
}
daemon := &Daemon{
repository: tmp,
root: tmp,
@@ -179,7 +188,7 @@ func TestLoadWithVolume(t *testing.T) {
t.Fatal(err)
}
err = daemon.verifyOldVolumesInfo(c)
err = daemon.verifyVolumesInfo(c)
if err != nil {
t.Fatal(err)
}
@@ -204,4 +213,91 @@ func TestLoadWithVolume(t *testing.T) {
if m.Driver != volume.DefaultDriverName {
t.Fatalf("Expected mount driver local, was %s\n", m.Driver)
}
newVolumeContent := filepath.Join(volumePath, "helo")
b, err := ioutil.ReadFile(newVolumeContent)
if err != nil {
t.Fatal(err)
}
if string(b) != "HELO" {
t.Fatalf("Expected HELO, was %s\n", string(b))
}
}
func TestLoadWithBindMount(t *testing.T) {
tmp, err := ioutil.TempDir("", "docker-daemon-test-")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmp)
containerId := "d59df5276e7b219d510fe70565e0404bc06350e0d4b43fe961f22f339980170e"
containerPath := filepath.Join(tmp, containerId)
if err = os.MkdirAll(containerPath, 0755); err != nil {
t.Fatal(err)
}
config := `{"State":{"Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":2464,"ExitCode":0,
"Error":"","StartedAt":"2015-05-26T16:48:53.869308965Z","FinishedAt":"0001-01-01T00:00:00Z"},
"ID":"d59df5276e7b219d510fe70565e0404bc06350e0d4b43fe961f22f339980170e","Created":"2015-05-26T16:48:53.7987917Z","Path":"top",
"Args":[],"Config":{"Hostname":"d59df5276e7b","Domainname":"","User":"","Memory":0,"MemorySwap":0,"CpuShares":0,"Cpuset":"",
"AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"PortSpecs":null,"ExposedPorts":null,"Tty":true,"OpenStdin":true,
"StdinOnce":false,"Env":null,"Cmd":["top"],"Image":"ubuntu:latest","Volumes":null,"WorkingDir":"","Entrypoint":null,
"NetworkDisabled":false,"MacAddress":"","OnBuild":null,"Labels":{}},"Image":"07f8e8c5e66084bef8f848877857537ffe1c47edd01a93af27e7161672ad0e95",
"NetworkSettings":{"IPAddress":"172.17.0.1","IPPrefixLen":16,"MacAddress":"02:42:ac:11:00:01","LinkLocalIPv6Address":"fe80::42:acff:fe11:1",
"LinkLocalIPv6PrefixLen":64,"GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"Gateway":"172.17.42.1","IPv6Gateway":"","Bridge":"docker0","PortMapping":null,"Ports":{}},
"ResolvConfPath":"/var/lib/docker/containers/d59df5276e7b219d510fe70565e0404bc06350e0d4b43fe961f22f339980170e/resolv.conf",
"HostnamePath":"/var/lib/docker/containers/d59df5276e7b219d510fe70565e0404bc06350e0d4b43fe961f22f339980170e/hostname",
"HostsPath":"/var/lib/docker/containers/d59df5276e7b219d510fe70565e0404bc06350e0d4b43fe961f22f339980170e/hosts",
"LogPath":"/var/lib/docker/containers/d59df5276e7b219d510fe70565e0404bc06350e0d4b43fe961f22f339980170e/d59df5276e7b219d510fe70565e0404bc06350e0d4b43fe961f22f339980170e-json.log",
"Name":"/ubuntu","Driver":"aufs","ExecDriver":"native-0.2","MountLabel":"","ProcessLabel":"","AppArmorProfile":"","RestartCount":0,
"UpdateDns":false,"Volumes":{"/vol1": "/vol1"},"VolumesRW":{"/vol1":true},"AppliedVolumesFrom":null}`
if err = ioutil.WriteFile(filepath.Join(containerPath, "config.json"), []byte(config), 0644); err != nil {
t.Fatal(err)
}
hostConfig := `{"Binds":["/vol1:/vol1"],"ContainerIDFile":"","LxcConf":[],"Memory":0,"MemorySwap":0,"CpuShares":0,"CpusetCpus":"",
"Privileged":false,"PortBindings":{},"Links":null,"PublishAllPorts":false,"Dns":null,"DnsSearch":null,"ExtraHosts":null,"VolumesFrom":null,
"Devices":[],"NetworkMode":"bridge","IpcMode":"","PidMode":"","CapAdd":null,"CapDrop":null,"RestartPolicy":{"Name":"no","MaximumRetryCount":0},
"SecurityOpt":null,"ReadonlyRootfs":false,"Ulimits":null,"LogConfig":{"Type":"","Config":null},"CgroupParent":""}`
if err = ioutil.WriteFile(filepath.Join(containerPath, "hostconfig.json"), []byte(hostConfig), 0644); err != nil {
t.Fatal(err)
}
daemon := &Daemon{
repository: tmp,
root: tmp,
}
c, err := daemon.load(containerId)
if err != nil {
t.Fatal(err)
}
err = daemon.verifyVolumesInfo(c)
if err != nil {
t.Fatal(err)
}
if len(c.MountPoints) != 1 {
t.Fatalf("Expected 1 volume mounted, was 0\n")
}
m := c.MountPoints["/vol1"]
if m.Name != "" {
t.Fatalf("Expected empty mount name, was %s\n", m.Name)
}
if m.Source != "/vol1" {
t.Fatalf("Expected mount source /vol1, was %s\n", m.Source)
}
if m.Destination != "/vol1" {
t.Fatalf("Expected mount destination /vol1, was %s\n", m.Destination)
}
if !m.RW {
t.Fatalf("Expected mount point to be RW but it was not\n")
}
}

View File

@@ -63,7 +63,8 @@ func (daemon *Daemon) imgDeleteHelper(name string, list *[]types.ImageDelete, fi
repos := daemon.Repositories().ByID()[img.ID]
//If delete by id, see if the id belong only to one repository
if repoName == "" {
deleteByID := repoName == ""
if deleteByID {
for _, repoAndTag := range repos {
parsedRepo, parsedTag := parsers.ParseRepositoryTag(repoAndTag)
if repoName == "" || repoName == parsedRepo {
@@ -91,7 +92,7 @@ func (daemon *Daemon) imgDeleteHelper(name string, list *[]types.ImageDelete, fi
return nil
}
if len(repos) <= 1 {
if len(repos) <= 1 || (len(repoAndTags) <= 1 && deleteByID) {
if err := daemon.canDeleteImage(img.ID, force); err != nil {
return err
}

View File

@@ -25,6 +25,40 @@ func (daemon *Daemon) ContainerInspect(name string) (*types.ContainerJSON, error
container.Lock()
defer container.Unlock()
base, err := daemon.getInspectData(container)
if err != nil {
return nil, err
}
return &types.ContainerJSON{base, container.Config}, nil
}
func (daemon *Daemon) ContainerInspectRaw(name string) (*types.ContainerJSONRaw, error) {
container, err := daemon.Get(name)
if err != nil {
return nil, err
}
container.Lock()
defer container.Unlock()
base, err := daemon.getInspectData(container)
if err != nil {
return nil, err
}
config := &types.ContainerConfig{
container.Config,
container.hostConfig.Memory,
container.hostConfig.MemorySwap,
container.hostConfig.CpuShares,
container.hostConfig.CpusetCpus,
}
return &types.ContainerJSONRaw{base, config}, nil
}
func (daemon *Daemon) getInspectData(container *Container) (*types.ContainerJSONBase, error) {
// make a copy to play with
hostConfig := *container.hostConfig
@@ -60,12 +94,11 @@ func (daemon *Daemon) ContainerInspect(name string) (*types.ContainerJSON, error
volumesRW[m.Destination] = m.RW
}
contJSON := &types.ContainerJSON{
contJSONBase := &types.ContainerJSONBase{
Id: container.ID,
Created: container.Created,
Path: container.Path,
Args: container.Args,
Config: container.Config,
State: containerState,
Image: container.ImageID,
NetworkSettings: container.NetworkSettings,
@@ -86,7 +119,7 @@ func (daemon *Daemon) ContainerInspect(name string) (*types.ContainerJSON, error
HostConfig: &hostConfig,
}
return contJSON, nil
return contJSONBase, nil
}
func (daemon *Daemon) ContainerExecInspect(id string) (*execConfig, error) {

View File

@@ -2,6 +2,7 @@ package logger
import (
"bufio"
"bytes"
"io"
"sync"
"time"
@@ -40,14 +41,27 @@ func (c *Copier) Run() {
func (c *Copier) copySrc(name string, src io.Reader) {
defer c.copyJobs.Done()
scanner := bufio.NewScanner(src)
for scanner.Scan() {
if err := c.dst.Log(&Message{ContainerID: c.cid, Line: scanner.Bytes(), Source: name, Timestamp: time.Now().UTC()}); err != nil {
logrus.Errorf("Failed to log msg %q for logger %s: %s", scanner.Bytes(), c.dst.Name(), err)
reader := bufio.NewReader(src)
for {
line, err := reader.ReadBytes('\n')
line = bytes.TrimSuffix(line, []byte{'\n'})
// ReadBytes can return full or partial output even when it fails,
// e.g. it can return a full entry together with io.EOF.
if err == nil || len(line) > 0 {
if logErr := c.dst.Log(&Message{ContainerID: c.cid, Line: line, Source: name, Timestamp: time.Now().UTC()}); logErr != nil {
logrus.Errorf("Failed to log msg %q for logger %s: %s", line, c.dst.Name(), logErr)
}
}
}
if err := scanner.Err(); err != nil {
logrus.Errorf("Error scanning log stream: %s", err)
if err != nil {
if err != io.EOF {
logrus.Errorf("Error scanning log stream: %s", err)
}
return
}
}
}
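The loop above relies on a property of bufio.Reader.ReadBytes that the comment calls out: when the delimiter is never found, the call returns both the partial data and the error on the same invocation. A minimal, standalone sketch of that behavior (stdlib only, not part of the Docker code; the input string is illustrative):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        // A final log line with no trailing newline: ReadBytes hands back
        // the partial data and io.EOF in the same call, so the copier can
        // still log it before returning.
        r := bufio.NewReader(strings.NewReader("last line without newline"))
        line, err := r.ReadBytes('\n')
        fmt.Printf("line=%q err=%v\n", line, err) // line="last line without newline" err=EOF
    }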

View File

@@ -3,14 +3,17 @@
package syslog
import (
"fmt"
"io"
"log/syslog"
"net"
"net/url"
"os"
"path"
"strings"
"github.com/Sirupsen/logrus"
"github.com/docker/docker/daemon/logger"
"github.com/docker/docker/pkg/urlutil"
)
const name = "syslog"
@@ -27,7 +30,18 @@ func init() {
func New(ctx logger.Context) (logger.Logger, error) {
tag := ctx.ContainerID[:12]
log, err := syslog.New(syslog.LOG_DAEMON, fmt.Sprintf("%s/%s", path.Base(os.Args[0]), tag))
proto, address, err := parseAddress(ctx.Config["syslog-address"])
if err != nil {
return nil, err
}
log, err := syslog.Dial(
proto,
address,
syslog.LOG_DAEMON,
path.Base(os.Args[0])+"/"+tag,
)
if err != nil {
return nil, err
}
@@ -55,3 +69,33 @@ func (s *Syslog) Name() string {
func (s *Syslog) GetReader() (io.Reader, error) {
return nil, logger.ReadLogsNotSupported
}
func parseAddress(address string) (string, string, error) {
if urlutil.IsTransportURL(address) {
url, err := url.Parse(address)
if err != nil {
return "", "", err
}
// unix socket validation
if url.Scheme == "unix" {
if _, err := os.Stat(url.Path); err != nil {
return "", "", err
}
return url.Scheme, url.Path, nil
}
// here we process tcp|udp
host := url.Host
if _, _, err := net.SplitHostPort(host); err != nil {
if !strings.Contains(err.Error(), "missing port in address") {
return "", "", err
}
host = host + ":514"
}
return url.Scheme, host, nil
}
return "", "", nil
}
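The tcp/udp branch above falls back to port 514 only when net.SplitHostPort reports a missing port. A small stdlib-only sketch of that defaulting (the hostnames are illustrative, not taken from the code):

    package main

    import (
        "fmt"
        "net"
        "net/url"
        "strings"
    )

    func main() {
        for _, addr := range []string{"tcp://192.168.0.42:123", "udp://syslog.example.com"} {
            u, err := url.Parse(addr)
            if err != nil {
                panic(err)
            }
            host := u.Host
            // Mirror the driver's defaulting: append :514 only when no port was given.
            if _, _, err := net.SplitHostPort(host); err != nil {
                if strings.Contains(err.Error(), "missing port in address") {
                    host += ":514"
                }
            }
            fmt.Println(u.Scheme, host) // "tcp 192.168.0.42:123", then "udp syslog.example.com:514"
        }
    }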

View File

@@ -15,13 +15,23 @@ func (daemon *Daemon) ContainerStats(name string, stream bool, out io.Writer) er
if err != nil {
return err
}
var pre_cpu_stats types.CpuStats
for first_v := range updates {
first_update := first_v.(*execdriver.ResourceStats)
first_stats := convertToAPITypes(first_update.Stats)
pre_cpu_stats = first_stats.CpuStats
pre_cpu_stats.SystemUsage = first_update.SystemUsage
break
}
enc := json.NewEncoder(out)
for v := range updates {
update := v.(*execdriver.ResourceStats)
ss := convertToAPITypes(update.Stats)
ss.PreCpuStats = pre_cpu_stats
ss.MemoryStats.Limit = uint64(update.MemoryLimit)
ss.Read = update.Read
ss.CpuStats.SystemUsage = update.SystemUsage
pre_cpu_stats = ss.CpuStats
if err := enc.Encode(ss); err != nil {
// TODO: handle the specific broken pipe
daemon.UnsubscribeToContainerStats(name, updates)

View File

@@ -8,6 +8,7 @@ import (
"path/filepath"
"strings"
"github.com/Sirupsen/logrus"
"github.com/docker/docker/pkg/chrootarchive"
"github.com/docker/docker/runconfig"
"github.com/docker/docker/volume"
@@ -74,7 +75,7 @@ func parseBindMount(spec string, mountLabel string, config *runconfig.Config) (*
return nil, fmt.Errorf("Invalid volume specification: %s", spec)
}
name, source, err := parseVolumeSource(arr[0], config)
name, source, err := parseVolumeSource(arr[0])
if err != nil {
return nil, err
}
@@ -238,9 +239,9 @@ func (daemon *Daemon) registerMountPoints(container *Container, hostConfig *runc
return nil
}
// verifyOldVolumesInfo ports volumes configured for the containers pre docker 1.7.
// verifyVolumesInfo ports volumes configured for the containers pre docker 1.7.
// It reads the container configuration and creates valid mount points for the old volumes.
func (daemon *Daemon) verifyOldVolumesInfo(container *Container) error {
func (daemon *Daemon) verifyVolumesInfo(container *Container) error {
jsonPath, err := container.jsonPath()
if err != nil {
return err
@@ -268,18 +269,59 @@ func (daemon *Daemon) verifyOldVolumesInfo(container *Container) error {
for destination, hostPath := range vols.Volumes {
vfsPath := filepath.Join(daemon.root, "vfs", "dir")
rw := vols.VolumesRW != nil && vols.VolumesRW[destination]
if strings.HasPrefix(hostPath, vfsPath) {
id := filepath.Base(hostPath)
rw := vols.VolumesRW != nil && vols.VolumesRW[destination]
if err := daemon.migrateVolume(id, hostPath); err != nil {
return err
}
container.addLocalMountPoint(id, destination, rw)
} else { // Bind mount
id, source, err := parseVolumeSource(hostPath)
// We should not find an error here coming
// from the old configuration, but who knows.
if err != nil {
return err
}
container.addBindMountPoint(id, source, destination, rw)
}
}
return container.ToDisk()
}
// migrateVolume moves the contents of a volume created pre Docker 1.7
// to the location expected by the local driver. Steps:
// 1. Save old directory that includes old volume's config json file.
// 2. Move virtual directory with content to where the local driver expects it to be.
// 3. Remove the backup of the old volume config.
func (daemon *Daemon) migrateVolume(id, vfs string) error {
volumeInfo := filepath.Join(daemon.root, defaultVolumesPathName, id)
backup := filepath.Join(daemon.root, defaultVolumesPathName, id+".back")
var err error
if err = os.Rename(volumeInfo, backup); err != nil {
return err
}
defer func() {
// Put old configuration back in place in case one of the next steps fails.
if err != nil {
os.Rename(backup, volumeInfo)
}
}()
if err = os.Rename(vfs, volumeInfo); err != nil {
return err
}
if err = os.RemoveAll(backup); err != nil {
logrus.Errorf("Unable to remove volume info backup directory %s: %v", backup, err)
}
return nil
}
func createVolume(name, driverName string) (volume.Volume, error) {
vd, err := getVolumeDriver(driverName)
if err != nil {

View File

@@ -5,7 +5,6 @@ package daemon
import (
"path/filepath"
"github.com/docker/docker/runconfig"
"github.com/docker/docker/volume"
"github.com/docker/docker/volume/drivers"
)
@@ -17,7 +16,7 @@ func getVolumeDriver(name string) (volume.Driver, error) {
return volumedrivers.Lookup(name)
}
func parseVolumeSource(spec string, config *runconfig.Config) (string, string, error) {
func parseVolumeSource(spec string) (string, string, error) {
if !filepath.IsAbs(spec) {
return spec, "", nil
}

View File

@@ -6,7 +6,6 @@ import (
"fmt"
"path/filepath"
"github.com/docker/docker/runconfig"
"github.com/docker/docker/volume"
"github.com/docker/docker/volume/drivers"
)
@@ -15,7 +14,7 @@ func getVolumeDriver(_ string) (volume.Driver, error) {
return volumedrivers.Lookup(volume.DefaultDriverName)
}
func parseVolumeSource(spec string, _ *runconfig.Config) (string, string, error) {
func parseVolumeSource(spec string) (string, string, error) {
if !filepath.IsAbs(spec) {
return "", "", fmt.Errorf("cannot bind mount volume: %s volume paths must be absolute.", spec)
}

View File

@@ -16,6 +16,7 @@ import (
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/reexec"
"github.com/docker/docker/pkg/term"
"github.com/docker/docker/utils"
)
const (
@@ -161,5 +162,9 @@ func main() {
}
func showVersion() {
fmt.Printf("Docker version %s, build %s\n", dockerversion.VERSION, dockerversion.GITCOMMIT)
if utils.ExperimentalBuild() {
fmt.Printf("Docker version %s, build %s, experimental\n", dockerversion.VERSION, dockerversion.GITCOMMIT)
} else {
fmt.Printf("Docker version %s, build %s\n", dockerversion.VERSION, dockerversion.GITCOMMIT)
}
}

View File

@@ -12,7 +12,7 @@ Docker has two primary branches for documentation:
| Branch | Description | URL (published via commit-hook) |
|----------|--------------------------------|------------------------------------------------------------------------------|
| `docs` | Official release documentation | [https://docs.docker.com](https://docs.docker.com) |
| `master` | Merged but unreleased development work | [http://docs.master.dockerproject.com](http://docs.master.dockerproject.com) |
| `master` | Merged but unreleased development work | [http://docs.master.dockerproject.org](http://docs.master.dockerproject.org) |
Additions and updates to upcoming releases are made in a feature branch off of
the `master` branch. The Docker maintainers also support a `docs` branch that
@@ -22,7 +22,7 @@ After a release, documentation updates are continually merged into `master` as
they occur. This work includes new documentation for forthcoming features, bug
fixes, and other updates. Docker's CI system automatically builds and updates
the `master` documentation after each merge and posts it to
[http://docs.master.dockerproject.com](http://docs.master.dockerproject.com).
[http://docs.master.dockerproject.org](http://docs.master.dockerproject.org).
Periodically, the Docker maintainers update `docs.docker.com` between official
releases of Docker. They do this by cherry-picking commits from `master`,

View File

@@ -33,6 +33,7 @@ docker-create - Create a new container
[**--link**[=*[]*]]
[**--lxc-conf**[=*[]*]]
[**--log-driver**[=*[]*]]
[**--log-opt**[=*[]*]]
[**-m**|**--memory**[=*MEMORY*]]
[**--memory-swap**[=*MEMORY-SWAP*]]
[**--mac-address**[=*MAC-ADDRESS*]]
@@ -148,6 +149,9 @@ two memory nodes.
Logging driver for container. Default is defined by daemon `--log-driver` flag.
**Warning**: `docker logs` command works only for `json-file` logging driver.
**--log-opt**=[]
Logging driver specific options.
**-m**, **--memory**=""
Memory limit (format: <number><optional unit>, where unit = b, k, m or g)

View File

@@ -34,6 +34,7 @@ docker-run - Run a command in a new container
[**--link**[=*[]*]]
[**--lxc-conf**[=*[]*]]
[**--log-driver**[=*[]*]]
[**--log-opt**[=*[]*]]
[**-m**|**--memory**[=*MEMORY*]]
[**--memory-swap**[=*MEMORY-SWAP*]]
[**--mac-address**[=*MAC-ADDRESS*]]
@@ -255,6 +256,9 @@ which interface and port to use.
Logging driver for container. Default is defined by daemon `--log-driver` flag.
**Warning**: `docker logs` command works only for `json-file` logging driver.
**--log-opt**=[]
Logging driver specific options.
**-m**, **--memory**=""
Memory limit (format: <number><optional unit>, where unit = b, k, m or g)

View File

@@ -105,6 +105,9 @@ unix://[/path/to/socket] to use.
Default driver for container logs. Default is `json-file`.
**Warning**: `docker logs` command works only for `json-file` logging driver.
**--log-opt**=[]
Logging driver specific options.
**--mtu**=VALUE
Set the containers network mtu. Default is `0`.

View File

@@ -86,8 +86,8 @@ program code and documentation code.
## Merges after pull requests
* After a merge, [a master build](https://master.dockerproject.com/) is
* After a merge, [a master build](https://master.dockerproject.org/) is
available almost immediately.
* If you made a documentation change, you can see it at
[docs.master.dockerproject.com](http://docs.master.dockerproject.com/).
[docs.master.dockerproject.org](http://docs.master.dockerproject.org/).

View File

@@ -108,7 +108,7 @@ It can take time to see a merged pull request in Docker's official release.
A master build is available almost immediately though. Docker builds and
updates its development binaries after each merge to `master`.
1. Browse to <a href="https://master.dockerproject.com/" target="_blank">https://master.dockerproject.com/</a>.
1. Browse to <a href="https://master.dockerproject.org/" target="_blank">https://master.dockerproject.org/</a>.
2. Look for the binary appropriate to your system.
@@ -117,7 +117,7 @@ updates its development binaries after each merge to `master`.
You might want to run the binary in a container though. This
will keep your local host environment clean.
4. View any documentation changes at <a href="http://docs.master.dockerproject.com/" target="_blank">docs.master.dockerproject.com</a>.
4. View any documentation changes at <a href="http://docs.master.dockerproject.org/" target="_blank">docs.master.dockerproject.org</a>.
Once you've verified everything merged, feel free to delete your feature branch
from your fork. For information on how to do this,

File diff suppressed because it is too large

View File

@@ -169,6 +169,7 @@ expect an integer, and they can only be specified once.
-l, --log-level="info" Set the logging level
--label=[] Set key=value labels to the daemon
--log-driver="json-file" Default driver for container logs
--log-opt=[] Log driver specific options
--mtu=0 Set the containers network MTU
-p, --pidfile="/var/run/docker.pid" Path to use for daemon PID file
--registry-mirror=[] Preferred Docker registry mirror
@@ -282,6 +283,18 @@ set `zfs.fsname` option as described in [Storage driver options](#storage-driver
The `overlay` is a very fast union filesystem. It is now merged in the main
Linux kernel as of [3.18.0](https://lkml.org/lkml/2014/10/26/137).
Call `docker -d -s overlay` to use it.
> **Note:**
> As promising as `overlay` is, the feature is still quite young and should not
> be used in production. Most notably, using `overlay` can cause excessive
> inode consumption (especially as the number of images grows), as well as
> being incompatible with the use of RPMs.
> **Note:**
> It is currently unsupported on `btrfs` or any Copy on Write filesystem
> and should only be used over `ext4` partitions.
@@ -984,6 +997,7 @@ Creates a new container.
--label-file=[] Read in a line delimited file of labels
--link=[] Add link to another container
--log-driver="" Logging driver for container
--log-opt=[] Log driver specific options
--lxc-conf=[] Add custom lxc options
-m, --memory="" Memory limit
--mac-address="" Container MAC address (e.g. 92:d0:c6:0a:29:33)
@@ -1948,6 +1962,7 @@ To remove an image using its digest:
--ipc="" IPC namespace to use
--link=[] Add link to another container
--log-driver="" Logging driver for container
--log-opt=[] Log driver specific options
--lxc-conf=[] Add custom lxc options
-m, --memory="" Memory limit
-l, --label=[] Set metadata on the container (e.g., --label=com.example.key=value)

View File

@@ -873,18 +873,31 @@ this driver.
Default logging driver for Docker. Writes JSON messages to file. `docker logs`
command is available only for this logging driver.
The following logging options are supported for this logging driver: [none]
#### Logging driver: syslog
Syslog logging driver for Docker. Writes log messages to syslog. `docker logs`
command is not available for this logging driver.
The following logging options are supported for this logging driver:
--log-opt address=[tcp|udp]://host:port
--log-opt address=unix://path
`address` specifies the remote syslog server address that the driver connects to.
If not specified, it defaults to the local unix socket of the running system.
If the transport is either `tcp` or `udp` and `port` is not specified, it defaults to `514`.
The following example shows how to have the `syslog` driver connect to a remote
`syslog` server at `192.168.0.42` on port `123`:
$ docker run --log-driver=syslog --log-opt address=tcp://192.168.0.42:123
#### Logging driver: journald
Journald logging driver for Docker. Writes log messages to journald; the container id will be stored in the journal's `CONTAINER_ID` field. `docker logs` command is not available for this logging driver. For detailed information on working with this logging driver, see [the journald logging driver](reference/logging/journald) reference documentation.
#### Log Opts:
Logging options for configuring a log driver. The following log options are supported: [none]
The following logging options are supported for this logging driver: [none]
## Overriding Dockerfile image defaults

View File

@@ -38,7 +38,7 @@ notation. Use the following guidelines to name your keys:
reverse DNS notation of a domain controlled by the author. For
example, `com.example.some-label`.
- The `com.docker.*`, `io.docker.*` and `com.dockerproject.*` namespaces are
- The `com.docker.*`, `io.docker.*` and `org.dockerproject.*` namespaces are
reserved for Docker's internal use.
- Keys should only consist of lower-cased alphanumeric characters,

View File

@@ -8,92 +8,164 @@ import (
"github.com/docker/distribution/digest"
"github.com/docker/docker/registry"
"github.com/docker/docker/trust"
"github.com/docker/docker/utils"
"github.com/docker/libtrust"
)
// loadManifest loads a manifest from a byte array and verifies its content.
// The signature must be verified or an error is returned. If the manifest
// contains no signatures by a trusted key for the name in the manifest, the
// image is not considered verified. The parsed manifest object and a boolean
// for whether the manifest is verified is returned.
func (s *TagStore) loadManifest(manifestBytes []byte, dgst, ref string) (*registry.ManifestData, bool, error) {
sig, err := libtrust.ParsePrettySignature(manifestBytes, "signatures")
// loadManifest loads a manifest from a byte array and verifies its content,
// returning the local digest, the manifest itself, whether or not it was
// verified. If ref is a digest, rather than a tag, this will be treated as
// the local digest. An error will be returned if the signature verification
// fails, local digest verification fails and, if provided, the remote digest
// verification fails. The boolean return will only be false without error on
// the failure of signatures trust check.
func (s *TagStore) loadManifest(manifestBytes []byte, ref string, remoteDigest digest.Digest) (digest.Digest, *registry.ManifestData, bool, error) {
payload, keys, err := unpackSignedManifest(manifestBytes)
if err != nil {
return nil, false, fmt.Errorf("error parsing payload: %s", err)
return "", nil, false, fmt.Errorf("error unpacking manifest: %v", err)
}
keys, err := sig.Verify()
if err != nil {
return nil, false, fmt.Errorf("error verifying payload: %s", err)
}
// TODO(stevvooe): It would be a lot better here to build up a stack of
// verifiers, then push the bytes one time for signatures and digests, but
// the manifests are typically small, so this optimization is not worth
// hacking this code without further refactoring.
payload, err := sig.Payload()
if err != nil {
return nil, false, fmt.Errorf("error retrieving payload: %s", err)
}
var localDigest digest.Digest
var manifestDigest digest.Digest
// Verify the local digest, if present in ref. ParseDigest will validate
that the ref is a digest and verify against that if present. Otherwise
// (on error), we simply compute the localDigest and proceed.
if dgst, err := digest.ParseDigest(ref); err == nil {
// verify the manifest against local ref
if err := verifyDigest(dgst, payload); err != nil {
return "", nil, false, fmt.Errorf("verifying local digest: %v", err)
}
if dgst != "" {
manifestDigest, err = digest.ParseDigest(dgst)
localDigest = dgst
} else {
// We don't have a local digest, since we are working from a tag.
// Compute the digest of the payload and return that.
logrus.Debugf("provided manifest reference %q is not a digest: %v", ref, err)
localDigest, err = digest.FromBytes(payload)
if err != nil {
return nil, false, fmt.Errorf("invalid manifest digest from registry: %s", err)
}
dgstVerifier, err := digest.NewDigestVerifier(manifestDigest)
if err != nil {
return nil, false, fmt.Errorf("unable to verify manifest digest from registry: %s", err)
}
dgstVerifier.Write(payload)
if !dgstVerifier.Verified() {
computedDigest, _ := digest.FromBytes(payload)
return nil, false, fmt.Errorf("unable to verify manifest digest: registry has %q, computed %q", manifestDigest, computedDigest)
// near impossible
logrus.Errorf("error calculating local digest during tag pull: %v", err)
return "", nil, false, err
}
}
if utils.DigestReference(ref) && ref != manifestDigest.String() {
return nil, false, fmt.Errorf("mismatching image manifest digest: got %q, expected %q", manifestDigest, ref)
// verify against the remote digest, if available
if remoteDigest != "" {
if err := verifyDigest(remoteDigest, payload); err != nil {
return "", nil, false, fmt.Errorf("verifying remote digest: %v", err)
}
}
var manifest registry.ManifestData
if err := json.Unmarshal(payload, &manifest); err != nil {
return nil, false, fmt.Errorf("error unmarshalling manifest: %s", err)
return "", nil, false, fmt.Errorf("error unmarshalling manifest: %s", err)
}
if manifest.SchemaVersion != 1 {
return nil, false, fmt.Errorf("unsupported schema version: %d", manifest.SchemaVersion)
// validate the contents of the manifest
if err := validateManifest(&manifest); err != nil {
return "", nil, false, err
}
var verified bool
verified, err = s.verifyTrustedKeys(manifest.Name, keys)
if err != nil {
return "", nil, false, fmt.Errorf("error verifying trusted keys: %v", err)
}
return localDigest, &manifest, verified, nil
}
// unpackSignedManifest takes the raw, signed manifest bytes, unpacks the JWS
// and returns the payload and public keys used to sign the manifest.
// Signatures are verified for authenticity but not against the trust store.
func unpackSignedManifest(p []byte) ([]byte, []libtrust.PublicKey, error) {
sig, err := libtrust.ParsePrettySignature(p, "signatures")
if err != nil {
return nil, nil, fmt.Errorf("error parsing payload: %s", err)
}
keys, err := sig.Verify()
if err != nil {
return nil, nil, fmt.Errorf("error verifying payload: %s", err)
}
payload, err := sig.Payload()
if err != nil {
return nil, nil, fmt.Errorf("error retrieving payload: %s", err)
}
return payload, keys, nil
}
// verifyTrustedKeys checks the keys provided against the trust store,
// ensuring that the provided keys are trusted for the namespace. The keys
passed to this method must come from the signatures provided as part of
// the manifest JWS package, obtained from unpackSignedManifest or libtrust.
func (s *TagStore) verifyTrustedKeys(namespace string, keys []libtrust.PublicKey) (verified bool, err error) {
if namespace[0] != '/' {
namespace = "/" + namespace
}
for _, key := range keys {
namespace := manifest.Name
if namespace[0] != '/' {
namespace = "/" + namespace
}
b, err := key.MarshalJSON()
if err != nil {
return nil, false, fmt.Errorf("error marshalling public key: %s", err)
return false, fmt.Errorf("error marshalling public key: %s", err)
}
// Check key has read/write permission (0x03)
v, err := s.trustService.CheckKey(namespace, b, 0x03)
if err != nil {
vErr, ok := err.(trust.NotVerifiedError)
if !ok {
return nil, false, fmt.Errorf("error running key check: %s", err)
return false, fmt.Errorf("error running key check: %s", err)
}
logrus.Debugf("Key check result: %v", vErr)
}
verified = v
if verified {
logrus.Debug("Key check result: verified")
}
}
return &manifest, verified, nil
if verified {
logrus.Debug("Key check result: verified")
}
return
}
func checkValidManifest(manifest *registry.ManifestData) error {
// verifyDigest checks the contents of p against the provided digest. Note
// that for manifests, this is the signed payload and not the raw bytes with
// signatures.
func verifyDigest(dgst digest.Digest, p []byte) error {
if err := dgst.Validate(); err != nil {
return fmt.Errorf("error validating digest %q: %v", dgst, err)
}
verifier, err := digest.NewDigestVerifier(dgst)
if err != nil {
// There are not many ways this can go wrong: if it does, it's
// fatal. Likely, the cause would be poor validation of the
// incoming reference.
return fmt.Errorf("error creating verifier for digest %q: %v", dgst, err)
}
if _, err := verifier.Write(p); err != nil {
return fmt.Errorf("error writing payload to digest verifier (verifier target %q): %v", dgst, err)
}
if !verifier.Verified() {
return fmt.Errorf("verification against digest %q failed", dgst)
}
return nil
}
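verifyDigest above is the generic building block behind both the local and remote digest checks. A hedged sketch of the same flow using only the distribution/digest calls that appear in this diff (the payload literal is made up for illustration):

    package main

    import (
        "fmt"

        "github.com/docker/distribution/digest"
    )

    func main() {
        payload := []byte(`{"schemaVersion":1}`) // stand-in for a signed manifest payload

        // Compute the digest of the payload, then verify the payload against it.
        dgst, err := digest.FromBytes(payload)
        if err != nil {
            panic(err)
        }
        verifier, err := digest.NewDigestVerifier(dgst)
        if err != nil {
            panic(err)
        }
        if _, err := verifier.Write(payload); err != nil {
            panic(err)
        }
        fmt.Println("verified:", verifier.Verified()) // true for an untouched payload
    }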
func validateManifest(manifest *registry.ManifestData) error {
if manifest.SchemaVersion != 1 {
return fmt.Errorf("unsupported schema version: %d", manifest.SchemaVersion)
}
if len(manifest.FSLayers) != len(manifest.History) {
return fmt.Errorf("length of history not equal to number of layers")
}

View File

@@ -8,11 +8,13 @@ import (
"os"
"testing"
"github.com/docker/distribution/digest"
"github.com/docker/docker/image"
"github.com/docker/docker/pkg/tarsum"
"github.com/docker/docker/registry"
"github.com/docker/docker/runconfig"
"github.com/docker/docker/utils"
"github.com/docker/libtrust"
)
const (
@@ -181,3 +183,121 @@ func TestManifestTarsumCache(t *testing.T) {
t.Fatalf("Unexpected json value\nExpected:\n%s\nActual:\n%s", v1compat, manifest.History[0].V1Compatibility)
}
}
// TestManifestDigestCheck ensures that loadManifest properly verifies the
// remote and local digest.
func TestManifestDigestCheck(t *testing.T) {
tmp, err := utils.TestDirectory("")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmp)
store := mkTestTagStore(tmp, t)
defer store.graph.driver.Cleanup()
archive, err := fakeTar()
if err != nil {
t.Fatal(err)
}
img := &image.Image{ID: testManifestImageID}
if err := store.graph.Register(img, archive); err != nil {
t.Fatal(err)
}
if err := store.Tag(testManifestImageName, testManifestTag, testManifestImageID, false); err != nil {
t.Fatal(err)
}
if cs, err := img.GetCheckSum(store.graph.ImageRoot(testManifestImageID)); err != nil {
t.Fatal(err)
} else if cs != "" {
t.Fatalf("Non-empty checksum file after register")
}
// Generate manifest
payload, err := store.newManifest(testManifestImageName, testManifestImageName, testManifestTag)
if err != nil {
t.Fatalf("unexpected error generating test manifest: %v", err)
}
pk, err := libtrust.GenerateECP256PrivateKey()
if err != nil {
t.Fatalf("unexpected error generating private key: %v", err)
}
sig, err := libtrust.NewJSONSignature(payload)
if err != nil {
t.Fatalf("error creating signature: %v", err)
}
if err := sig.Sign(pk); err != nil {
t.Fatalf("error signing manifest bytes: %v", err)
}
signedBytes, err := sig.PrettySignature("signatures")
if err != nil {
t.Fatalf("error getting signed bytes: %v", err)
}
dgst, err := digest.FromBytes(payload)
if err != nil {
t.Fatalf("error getting digest of manifest: %v", err)
}
// use this as the "bad" digest
zeroDigest, err := digest.FromBytes([]byte{})
if err != nil {
t.Fatalf("error making zero digest: %v", err)
}
// Remote and local match, everything should look good
local, _, _, err := store.loadManifest(signedBytes, dgst.String(), dgst)
if err != nil {
t.Fatalf("unexpected error verifying local and remote digest: %v", err)
}
if local != dgst {
t.Fatalf("local digest not correctly calculated: %v", err)
}
// remote and no local, since pulling by tag
local, _, _, err = store.loadManifest(signedBytes, "tag", dgst)
if err != nil {
t.Fatalf("unexpected error verifying tag pull and remote digest: %v", err)
}
if local != dgst {
t.Fatalf("local digest not correctly calculated: %v", err)
}
// remote and differing local, this is the most important to fail
local, _, _, err = store.loadManifest(signedBytes, zeroDigest.String(), dgst)
if err == nil {
t.Fatalf("error expected when verifying with differing local digest")
}
// no remote, no local (by tag)
local, _, _, err = store.loadManifest(signedBytes, "tag", "")
if err != nil {
t.Fatalf("unexpected error verifying manifest without remote digest: %v", err)
}
if local != dgst {
t.Fatalf("local digest not correctly calculated: %v", err)
}
// no remote, with local
local, _, _, err = store.loadManifest(signedBytes, dgst.String(), "")
if err != nil {
t.Fatalf("unexpected error verifying manifest without remote digest: %v", err)
}
if local != dgst {
t.Fatalf("local digest not correctly calculated: %v", err)
}
// bad remote, we fail the check.
local, _, _, err = store.loadManifest(signedBytes, dgst.String(), zeroDigest)
if err == nil {
t.Fatalf("error expected when verifying with differing remote digest")
}
}

View File

@@ -142,7 +142,6 @@ func makeMirrorRepoInfo(repoInfo *registry.RepositoryInfo, mirror string) *regis
func configureV2Mirror(repoInfo *registry.RepositoryInfo, s *registry.Service) (*registry.Endpoint, *registry.RepositoryInfo, error) {
mirrors := repoInfo.Index.Mirrors
if len(mirrors) == 0 {
// no mirrors configured
return nil, nil, nil
@@ -151,13 +150,11 @@ func configureV2Mirror(repoInfo *registry.RepositoryInfo, s *registry.Service) (
v1MirrorCount := 0
var v2MirrorEndpoint *registry.Endpoint
var v2MirrorRepoInfo *registry.RepositoryInfo
var lastErr error
for _, mirror := range mirrors {
mirrorRepoInfo := makeMirrorRepoInfo(repoInfo, mirror)
endpoint, err := registry.NewEndpoint(mirrorRepoInfo.Index, nil)
if err != nil {
logrus.Errorf("Unable to create endpoint for %s: %s", mirror, err)
lastErr = err
continue
}
if endpoint.Version == 2 {
@@ -182,9 +179,11 @@ func configureV2Mirror(repoInfo *registry.RepositoryInfo, s *registry.Service) (
return v2MirrorEndpoint, v2MirrorRepoInfo, nil
}
if v2MirrorEndpoint != nil && v1MirrorCount > 0 {
lastErr = fmt.Errorf("v1 and v2 mirrors configured")
return nil, nil, fmt.Errorf("v1 and v2 mirrors configured")
}
return nil, nil, lastErr
// No endpoint could be established with the given mirror configurations
// Fallback to pulling from the hub as per v1 behavior.
return nil, nil, nil
}
func (s *TagStore) pullFromV2Mirror(mirrorEndpoint *registry.Endpoint, repoInfo *registry.RepositoryInfo,
@@ -458,17 +457,6 @@ func WriteStatus(requestedTag string, out io.Writer, sf *streamformatter.StreamF
}
}
// downloadInfo is used to pass information from download to extractor
type downloadInfo struct {
imgJSON []byte
img *image.Image
digest digest.Digest
tmpFile *os.File
length int64
downloaded bool
err chan error
}
func (s *TagStore) pullV2Repository(r *registry.Session, out io.Writer, repoInfo *registry.RepositoryInfo, tag string, sf *streamformatter.StreamFormatter) error {
endpoint, err := r.V2RegistryEndpoint(repoInfo.Index)
if err != nil {
@@ -518,27 +506,34 @@ func (s *TagStore) pullV2Repository(r *registry.Session, out io.Writer, repoInfo
func (s *TagStore) pullV2Tag(r *registry.Session, out io.Writer, endpoint *registry.Endpoint, repoInfo *registry.RepositoryInfo, tag string, sf *streamformatter.StreamFormatter, auth *registry.RequestAuthorization) (bool, error) {
logrus.Debugf("Pulling tag from V2 registry: %q", tag)
manifestBytes, manifestDigest, err := r.GetV2ImageManifest(endpoint, repoInfo.RemoteName, tag, auth)
remoteDigest, manifestBytes, err := r.GetV2ImageManifest(endpoint, repoInfo.RemoteName, tag, auth)
if err != nil {
return false, err
}
// loadManifest ensures that the manifest payload has the expected digest
// if the tag is a digest reference.
manifest, verified, err := s.loadManifest(manifestBytes, manifestDigest, tag)
localDigest, manifest, verified, err := s.loadManifest(manifestBytes, tag, remoteDigest)
if err != nil {
return false, fmt.Errorf("error verifying manifest: %s", err)
}
if err := checkValidManifest(manifest); err != nil {
return false, err
}
if verified {
logrus.Printf("Image manifest for %s has been verified", utils.ImageReference(repoInfo.CanonicalName, tag))
}
out.Write(sf.FormatStatus(tag, "Pulling from %s", repoInfo.CanonicalName))
// downloadInfo is used to pass information from download to extractor
type downloadInfo struct {
imgJSON []byte
img *image.Image
digest digest.Digest
tmpFile *os.File
length int64
downloaded bool
err chan error
}
downloads := make([]downloadInfo, len(manifest.FSLayers))
for i := len(manifest.FSLayers) - 1; i >= 0; i-- {
@@ -611,8 +606,7 @@ func (s *TagStore) pullV2Tag(r *registry.Session, out io.Writer, endpoint *regis
out.Write(sf.FormatProgress(stringid.TruncateID(img.ID), "Verifying Checksum", nil))
if !verifier.Verified() {
logrus.Infof("Image verification failed: checksum mismatch for %q", di.digest.String())
verified = false
return fmt.Errorf("image layer digest verification failed for %q", di.digest)
}
out.Write(sf.FormatProgress(stringid.TruncateID(img.ID), "Download complete", nil))
@@ -689,15 +683,33 @@ func (s *TagStore) pullV2Tag(r *registry.Session, out io.Writer, endpoint *regis
out.Write(sf.FormatStatus(utils.ImageReference(repoInfo.CanonicalName, tag), "The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security."))
}
if manifestDigest != "" {
out.Write(sf.FormatStatus("", "Digest: %s", manifestDigest))
if localDigest != remoteDigest { // this is not a verification check.
// NOTE(stevvooe): This is a very defensive branch and should never
// happen, since all manifest digest implementations use the same
// algorithm.
logrus.WithFields(
logrus.Fields{
"local": localDigest,
"remote": remoteDigest,
}).Debugf("local digest does not match remote")
out.Write(sf.FormatStatus("", "Remote Digest: %s", remoteDigest))
}
if utils.DigestReference(tag) {
if err = s.SetDigest(repoInfo.LocalName, tag, downloads[0].img.ID); err != nil {
out.Write(sf.FormatStatus("", "Digest: %s", localDigest))
if tag == localDigest.String() {
// TODO(stevvooe): Ideally, we should always set the digest so we can
// use the digest whether we pull by it or not. Unfortunately, the tag
// store treats the digest as a separate tag, meaning there may be an
// untagged digest image that would seem to be dangling by a user.
if err = s.SetDigest(repoInfo.LocalName, localDigest.String(), downloads[0].img.ID); err != nil {
return false, err
}
} else {
}
if !utils.DigestReference(tag) {
// only set the repository/tag -> image ID mapping when pulling by tag (i.e. not by digest)
if err = s.Tag(repoInfo.LocalName, tag, downloads[0].img.ID, true); err != nil {
return false, err

View File

@@ -413,7 +413,7 @@ func (s *TagStore) pushV2Repository(r *registry.Session, localRepo Repository, o
m.History[i] = &registry.ManifestHistory{V1Compatibility: string(jsonData)}
}
if err := checkValidManifest(m); err != nil {
if err := validateManifest(m); err != nil {
return fmt.Errorf("invalid manifest: %s", err)
}

View File

@@ -12,6 +12,7 @@ import (
"github.com/docker/docker/daemon/graphdriver"
_ "github.com/docker/docker/daemon/graphdriver/vfs" // import the vfs driver so it is used in the tests
"github.com/docker/docker/image"
"github.com/docker/docker/trust"
"github.com/docker/docker/utils"
)
@@ -60,9 +61,16 @@ func mkTestTagStore(root string, t *testing.T) *TagStore {
if err != nil {
t.Fatal(err)
}
trust, err := trust.NewTrustStore(root + "/trust")
if err != nil {
t.Fatal(err)
}
tagCfg := &TagStoreConfig{
Graph: graph,
Events: events.New(),
Trust: trust,
}
store, err := NewTagStore(path.Join(root, "tags"), tagCfg)
if err != nil {

View File

@@ -96,7 +96,6 @@ fi
if [ "$DOCKER_EXPERIMENTAL" ]; then
echo >&2 '# WARNING! DOCKER_EXPERIMENTAL is set: building experimental features'
echo >&2
VERSION+="-experimental"
DOCKER_BUILDTAGS+=" experimental"
fi
@@ -198,13 +197,13 @@ go_test_dir() {
# if our current go install has -cover, we want to use it :)
mkdir -p "$DEST/coverprofiles"
coverprofile="docker${dir#.}"
coverprofile="$DEST/coverprofiles/${coverprofile//\//-}"
coverprofile="$ABS_DEST/coverprofiles/${coverprofile//\//-}"
testcover=( -cover -coverprofile "$coverprofile" $coverpkg )
fi
(
export DEST
echo '+ go test' $TESTFLAGS "${DOCKER_PKG}${dir#.}"
cd "$dir"
export DEST="$ABS_DEST" # we're in a subshell, so this is safe -- our integration-cli tests need DEST, and "cd" screws it up
test_env go test ${testcover[@]} -ldflags "$LDFLAGS" "${BUILDFLAGS[@]}" $TESTFLAGS
)
}
@@ -217,7 +216,7 @@ test_env() {
DOCKER_USERLANDPROXY="$DOCKER_USERLANDPROXY" \
DOCKER_HOST="$DOCKER_HOST" \
GOPATH="$GOPATH" \
HOME="$DEST/fake-HOME" \
HOME="$ABS_DEST/fake-HOME" \
PATH="$PATH" \
TEST_DOCKERINIT_PATH="$TEST_DOCKERINIT_PATH" \
"$@"
@@ -271,11 +270,9 @@ hash_files() {
}
bundle() {
bundlescript=$1
bundle=$(basename $bundlescript)
echo "---> Making bundle: $bundle (in bundles/$VERSION/$bundle)"
mkdir -p "bundles/$VERSION/$bundle"
source "$bundlescript" "$(pwd)/bundles/$VERSION/$bundle"
local bundle="$1"; shift
echo "---> Making bundle: $(basename "$bundle") (in $DEST)"
source "$SCRIPTDIR/make/$bundle" "$@"
}
main() {
@@ -301,7 +298,14 @@ main() {
bundles=($@)
fi
for bundle in ${bundles[@]}; do
bundle "$SCRIPTDIR/make/$bundle"
export DEST="bundles/$VERSION/$(basename "$bundle")"
# Cygdrive paths don't play well with go build -o.
if [[ "$(uname -s)" == CYGWIN* ]]; then
export DEST="$(cygpath -mw "$DEST")"
fi
mkdir -p "$DEST"
ABS_DEST="$(cd "$DEST" && pwd -P)"
bundle "$bundle"
echo
done
}

View File

@@ -1,6 +1,6 @@
Source: docker-engine
Maintainer: Docker <support@docker.com>
Homepage: https://dockerproject.com
Homepage: https://dockerproject.org
Vcs-Browser: https://github.com/docker/docker
Vcs-Git: git://github.com/docker/docker.git

View File

@@ -6,7 +6,7 @@ Summary: The open-source application container engine
License: ASL 2.0
Source: %{name}.tar.gz
URL: https://dockerproject.com
URL: https://dockerproject.org
Vendor: Docker
Packager: Docker <support@docker.com>

View File

@@ -2,7 +2,7 @@
# see test-integration-cli for example usage of this script
export PATH="$DEST/../binary:$DEST/../dynbinary:$DEST/../gccgo:$DEST/../dyngccgo:$PATH"
export PATH="$ABS_DEST/../binary:$ABS_DEST/../dynbinary:$ABS_DEST/../gccgo:$ABS_DEST/../dyngccgo:$PATH"
if ! command -v docker &> /dev/null; then
echo >&2 'error: binary or dynbinary must be run before .integration-daemon-start'

View File

@@ -1,16 +1,10 @@
#!/bin/bash
set -e
DEST=$1
BINARY_NAME="docker-$VERSION"
BINARY_EXTENSION="$(binary_extension)"
BINARY_FULLNAME="$BINARY_NAME$BINARY_EXTENSION"
# Cygdrive paths don't play well with go build -o.
if [[ "$(uname -s)" == CYGWIN* ]]; then
DEST=$(cygpath -mw $DEST)
fi
source "${MAKEDIR}/.go-autogen"
echo "Building: $DEST/$BINARY_FULLNAME"

View File

@@ -1,10 +1,10 @@
#!/bin/bash
set -e
DEST=$1
# subshell so that we can export PATH without breaking other things
# subshell so that we can export PATH and TZ without breaking other things
(
export TZ=UTC # make sure our "date" variables are UTC-based
source "${MAKEDIR}/.integration-daemon-start"
# TODO consider using frozen images for the dockercore/builder-deb tags

View File

@@ -1,10 +1,10 @@
#!/bin/bash
set -e
DEST=$1
# subshell so that we can export PATH without breaking other things
# subshell so that we can export PATH and TZ without breaking other things
(
export TZ=UTC # make sure our "date" variables are UTC-based
source "$(dirname "$BASH_SOURCE")/.integration-daemon-start"
# TODO consider using frozen images for the dockercore/builder-rpm tags

View File

@@ -1,8 +1,6 @@
#!/bin/bash
set -e
DEST="$1"
bundle_cover() {
coverprofiles=( "$DEST/../"*"/coverprofiles/"* )
for p in "${coverprofiles[@]}"; do

View File

@@ -1,8 +1,6 @@
#!/bin/bash
set -e
DEST=$1
# explicit list of os/arch combos that support being a daemon
declare -A daemonSupporting
daemonSupporting=(
@@ -21,13 +19,15 @@ fi
for platform in $DOCKER_CROSSPLATFORMS; do
(
mkdir -p "$DEST/$platform" # bundles/VERSION/cross/GOOS/GOARCH/docker-VERSION
export DEST="$DEST/$platform" # bundles/VERSION/cross/GOOS/GOARCH/docker-VERSION
mkdir -p "$DEST"
ABS_DEST="$(cd "$DEST" && pwd -P)"
export GOOS=${platform%/*}
export GOARCH=${platform##*/}
if [ -z "${daemonSupporting[$platform]}" ]; then
export LDFLAGS_STATIC_DOCKER="" # we just need a simple client for these platforms
export BUILDFLAGS=( "${ORIG_BUILDFLAGS[@]/ daemon/}" ) # remove the "daemon" build tag from platforms that aren't supported
fi
source "${MAKEDIR}/binary" "$DEST/$platform"
source "${MAKEDIR}/binary"
)
done

View File

@@ -1,8 +1,6 @@
#!/bin/bash
set -e
DEST=$1
if [ -z "$DOCKER_CLIENTONLY" ]; then
source "${MAKEDIR}/.dockerinit"

View File

@@ -1,8 +1,6 @@
#!/bin/bash
set -e
DEST=$1
if [ -z "$DOCKER_CLIENTONLY" ]; then
source "${MAKEDIR}/.dockerinit-gccgo"

View File

@@ -1,7 +1,6 @@
#!/bin/bash
set -e
DEST=$1
BINARY_NAME="docker-$VERSION"
BINARY_EXTENSION="$(binary_extension)"
BINARY_FULLNAME="$BINARY_NAME$BINARY_EXTENSION"

View File

@@ -1,8 +1,6 @@
#!/bin/bash
set -e
DEST=$1
# subshell so that we can export PATH without breaking other things
(
source "${MAKEDIR}/.integration-daemon-start"

View File

@@ -1,8 +1,6 @@
#!/bin/bash
set -e
DEST=$1
bundle_test_integration_cli() {
go_test_dir ./integration-cli
}

View File

@@ -1,7 +1,6 @@
#!/bin/bash
set -e
DEST=$1
: ${PARALLEL_JOBS:=$(nproc 2>/dev/null || echo 1)} # if nproc fails (usually because we don't have it), let's not parallelize by default
RED=$'\033[31m'
@@ -26,10 +25,9 @@ bundle_test_unit() {
export LDFLAGS
export TESTFLAGS
export HAVE_GO_TEST_COVER
export DEST
# some hack to export array variables
export BUILDFLAGS_FILE="buildflags_file"
export BUILDFLAGS_FILE="$DEST/buildflags-file"
( IFS=$'\n'; echo "${BUILDFLAGS[*]}" ) > "$BUILDFLAGS_FILE"
if command -v parallel &> /dev/null; then
@@ -59,7 +57,7 @@ go_run_test_dir() {
while read dir; do
echo
echo '+ go test' $TESTFLAGS "${DOCKER_PKG}${dir#.}"
precompiled="$DEST/precompiled/$dir.test$(binary_extension)"
precompiled="$ABS_DEST/precompiled/$dir.test$(binary_extension)"
if ! ( cd "$dir" && test_env "$precompiled" $TESTFLAGS ); then
TESTS_FAILED+=("$dir")
echo

View File

@@ -1,6 +1,5 @@
#!/bin/bash
DEST="$1"
CROSS="$DEST/../cross"
set -e

View File

@@ -1,7 +1,5 @@
#!/bin/bash
DEST=$1
PKGVERSION="${VERSION//-/'~'}"
# if we have a "-dev" suffix or have change in Git, let's make this package version more complex so it works better
if [[ "$VERSION" == *-dev ]] || [ -n "$(git status --porcelain)" ]; then
@@ -37,7 +35,7 @@ PACKAGE_LICENSE="Apache-2.0"
# Build docker as an ubuntu package using FPM and REPREPRO (sue me).
# bundle_binary must be called first.
bundle_ubuntu() {
DIR=$DEST/build
DIR="$ABS_DEST/build"
# Include our udev rules
mkdir -p "$DIR/etc/udev/rules.d"
@@ -140,9 +138,9 @@ EOF
# create lxc-docker-VERSION package
fpm -s dir -C "$DIR" \
--name "lxc-docker-$VERSION" --version "$PKGVERSION" \
--after-install "$DEST/postinst" \
--before-remove "$DEST/prerm" \
--after-remove "$DEST/postrm" \
--after-install "$ABS_DEST/postinst" \
--before-remove "$ABS_DEST/prerm" \
--after-remove "$ABS_DEST/postrm" \
--architecture "$PACKAGE_ARCHITECTURE" \
--prefix / \
--depends iptables \

View File

@@ -266,7 +266,7 @@ Architectures: amd64 i386
EOF
# Add the DEB package to the APT repo
DEBFILE=bundles/$VERSION/ubuntu/lxc-docker*.deb
DEBFILE=( bundles/$VERSION/ubuntu/lxc-docker*.deb )
reprepro -b "$APTDIR" includedeb docker "$DEBFILE"
# Sign

View File

@@ -55,12 +55,12 @@ clone hg code.google.com/p/go.net 84a4013f96e0
clone hg code.google.com/p/gosqlite 74691fb6f837
#get libnetwork packages
clone git github.com/docker/libnetwork 2da2dc055de5a474c8540871ad88a48213b0994f
clone git github.com/docker/libnetwork f72ad20491e8c46d9664da3f32a0eddb301e7c8d
clone git github.com/vishvananda/netns 008d17ae001344769b031375bdb38a86219154c6
clone git github.com/vishvananda/netlink 8eb64238879fed52fd51c5b30ad20b928fb4c36c
# get distribution packages
clone git github.com/docker/distribution d957768537c5af40e4f4cd96871f7b2bde9e2923
clone git github.com/docker/distribution b9eeb328080d367dbde850ec6e94f1e4ac2b5efe
mv src/github.com/docker/distribution/digest tmp-digest
mv src/github.com/docker/distribution/registry/api tmp-api
rm -rf src/github.com/docker/distribution

View File

@@ -51,6 +51,39 @@ func (s *DockerSuite) TestContainerApiGetAll(c *check.C) {
}
}
// regression test for empty json field being omitted #13691
func (s *DockerSuite) TestContainerApiGetJSONNoFieldsOmitted(c *check.C) {
runCmd := exec.Command(dockerBinary, "run", "busybox", "true")
_, err := runCommand(runCmd)
c.Assert(err, check.IsNil)
status, body, err := sockRequest("GET", "/containers/json?all=1", nil)
c.Assert(status, check.Equals, http.StatusOK)
c.Assert(err, check.IsNil)
// An empty Labels field triggered this bug, so it makes sense to check everything,
// since even Ports, for instance, can trigger it.
// Better safe than sorry.
fields := []string{
"Id",
"Names",
"Image",
"Command",
"Created",
"Ports",
"Labels",
"Status",
}
// decoding into types.Container does not work since it eventually unmarshals
// an empty field into an empty Go map, so we just check for the string
for _, f := range fields {
if !strings.Contains(string(body), f) {
c.Fatalf("Field %s is missing and it shouldn't", f)
}
}
}
func (s *DockerSuite) TestContainerApiGetExport(c *check.C) {
name := "exportcontainer"
runCmd := exec.Command(dockerBinary, "run", "--name", name, "busybox", "touch", "/test")
@@ -254,7 +287,7 @@ func (s *DockerSuite) TestGetContainerStats(c *check.C) {
}
}
func (s *DockerSuite) TestContainerStatsRmRunning(c *check.C) {
func (s *DockerSuite) TestGetContainerStatsRmRunning(c *check.C) {
out, _ := dockerCmd(c, "run", "-d", "busybox", "top")
id := strings.TrimSpace(out)
@@ -290,6 +323,89 @@ func (s *DockerSuite) TestContainerStatsRmRunning(c *check.C) {
c.Assert(err, check.Not(check.IsNil))
}
// regression test for gh13421
// the previous test was only checking one stat entry, so it didn't fail (stats with
// stream=false always return a single stat)
func (s *DockerSuite) TestGetContainerStatsStream(c *check.C) {
name := "statscontainer"
runCmd := exec.Command(dockerBinary, "run", "-d", "--name", name, "busybox", "top")
_, err := runCommand(runCmd)
c.Assert(err, check.IsNil)
type b struct {
status int
body []byte
err error
}
bc := make(chan b, 1)
go func() {
status, body, err := sockRequest("GET", "/containers/"+name+"/stats", nil)
bc <- b{status, body, err}
}()
// allow some time to stream the stats from the container
time.Sleep(4 * time.Second)
if _, err := runCommand(exec.Command(dockerBinary, "rm", "-f", name)); err != nil {
c.Fatal(err)
}
// collect the results from the stats stream or timeout and fail
// if the stream was not disconnected.
select {
case <-time.After(2 * time.Second):
c.Fatal("stream was not closed after container was removed")
case sr := <-bc:
c.Assert(sr.err, check.IsNil)
c.Assert(sr.status, check.Equals, http.StatusOK)
s := string(sr.body)
// count occurrences of "read" of types.Stats
if l := strings.Count(s, "read"); l < 2 {
c.Fatalf("Expected more than one stat streamed, got %d", l)
}
}
}
func (s *DockerSuite) TestGetContainerStatsNoStream(c *check.C) {
name := "statscontainer"
runCmd := exec.Command(dockerBinary, "run", "-d", "--name", name, "busybox", "top")
_, err := runCommand(runCmd)
c.Assert(err, check.IsNil)
type b struct {
status int
body []byte
err error
}
bc := make(chan b, 1)
go func() {
status, body, err := sockRequest("GET", "/containers/"+name+"/stats?stream=0", nil)
bc <- b{status, body, err}
}()
// allow some time to stream the stats from the container
time.Sleep(4 * time.Second)
if _, err := runCommand(exec.Command(dockerBinary, "rm", "-f", name)); err != nil {
c.Fatal(err)
}
// collect the results from the stats stream or timeout and fail
// if the stream was not disconnected.
select {
case <-time.After(2 * time.Second):
c.Fatal("stream was not closed after container was removed")
case sr := <-bc:
c.Assert(sr.err, check.IsNil)
c.Assert(sr.status, check.Equals, http.StatusOK)
s := string(sr.body)
// count occurrences of "read" of types.Stats
if l := strings.Count(s, "read"); l != 1 {
c.Fatalf("Expected only one stat streamed, got %d", l)
}
}
}
func (s *DockerSuite) TestGetStoppedContainerStats(c *check.C) {
// TODO: this test does nothing because we are c.Assert'ing in a goroutine
var (

View File

@@ -0,0 +1,48 @@
package main
import (
"encoding/json"
"fmt"
"github.com/docker/docker/api/types"
"github.com/go-check/check"
"strings"
"time"
)
func (s *DockerSuite) TestCliStatsNoStreamGetCpu(c *check.C) {
out, _ := dockerCmd(c, "run", "-d", "--cpu-quota=2000", "busybox", "/bin/sh", "-c", "while true;do echo 'Hello';done")
id := strings.TrimSpace(out)
if err := waitRun(id); err != nil {
c.Fatal(err)
}
ch := make(chan error)
var v *types.Stats
go func() {
_, body, err := sockRequestRaw("GET", fmt.Sprintf("/containers/%s/stats?stream=1", id), nil, "")
if err != nil {
ch <- err
}
dec := json.NewDecoder(body)
if err := dec.Decode(&v); err != nil {
ch <- err
}
ch <- nil
}()
select {
case e := <-ch:
if e == nil {
var cpuPercent = 0.0
cpuDelta := float64(v.CpuStats.CpuUsage.TotalUsage - v.PreCpuStats.CpuUsage.TotalUsage)
systemDelta := float64(v.CpuStats.SystemUsage - v.PreCpuStats.SystemUsage)
cpuPercent = (cpuDelta / systemDelta) * float64(len(v.CpuStats.CpuUsage.PercpuUsage)) * 100.0
if cpuPercent < 1.8 || cpuPercent > 2.2 {
c.Fatal("docker stats with no-stream get cpu usage failed")
}
}
case <-time.After(4 * time.Second):
c.Fatal("docker stats with no-stream timeout")
}
}
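The percentage computed in the test follows the usual cgroup formula: the container's CPU delta over the system CPU delta, scaled by the number of CPUs and by 100. A hedged, numbers-only sketch (the deltas are invented to show why a 2000 quota lands near 2%):

    package main

    import "fmt"

    func main() {
        var (
            cpuDelta    = 2000000.0   // container CPU ns consumed between two samples (assumed)
            systemDelta = 100000000.0 // total system CPU ns over the same window (assumed)
            onlineCPUs  = 1.0         // len(PercpuUsage) in the real stats payload
        )
        cpuPercent := (cpuDelta / systemDelta) * onlineCPUs * 100.0
        fmt.Printf("%.1f%%\n", cpuPercent) // 2.0%
    }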

View File

@@ -1165,3 +1165,19 @@ func (s *DockerDaemonSuite) TestRunContainerWithBridgeNone(c *check.C) {
c.Assert(strings.Contains(out, "eth0"), check.Equals, false,
check.Commentf("There shouldn't be eth0 in container when network is disabled: %s", out))
}
func (s *DockerDaemonSuite) TestDaemonRestartWithContainerRunning(t *check.C) {
if err := s.d.StartWithBusybox(); err != nil {
t.Fatal(err)
}
if out, err := s.d.Cmd("run", "-ti", "-d", "--name", "test", "busybox"); err != nil {
t.Fatal(out, err)
}
if err := s.d.Restart(); err != nil {
t.Fatal(err)
}
// Container 'test' should be removed without error
if out, err := s.d.Cmd("rm", "test"); err != nil {
t.Fatal(out, err)
}
}

View File

@@ -17,8 +17,13 @@ func (s *DockerSuite) TestExperimentalVersion(c *check.C) {
}
for _, line := range strings.Split(out, "\n") {
if strings.HasPrefix(line, "Client version:") || strings.HasPrefix(line, "Server version:") {
c.Assert(line, check.Matches, "*-experimental")
if strings.HasPrefix(line, "Experimental (client):") || strings.HasPrefix(line, "Experimental (server):") {
c.Assert(line, check.Matches, "*true")
}
}
versionCmd = exec.Command(dockerBinary, "-v")
if out, _, err = runCommandWithOutput(versionCmd); err != nil || !strings.Contains(out, ", experimental") {
c.Fatalf("docker version did not contain experimental: %s, %v", out, err)
}
}

View File

@@ -93,6 +93,8 @@ func (s *DockerSuite) TestHelpTextVerify(c *check.C) {
// Skip first line, its "Commands:"
cmds := []string{}
for _, cmd := range strings.Split(out[i:], "\n")[1:] {
var stderr string
// Stop on blank line or non-indented line
if cmd == "" || !unicode.IsSpace(rune(cmd[0])) {
break
@@ -102,12 +104,24 @@ func (s *DockerSuite) TestHelpTextVerify(c *check.C) {
cmd = strings.Split(strings.TrimSpace(cmd), " ")[0]
cmds = append(cmds, cmd)
// Check the full usage text
helpCmd := exec.Command(dockerBinary, cmd, "--help")
helpCmd.Env = newEnvs
out, ec, err := runCommandWithOutput(helpCmd)
out, stderr, ec, err = runCommandWithStdoutStderr(helpCmd)
if len(stderr) != 0 {
c.Fatalf("Error on %q help. non-empty stderr:%q", cmd, stderr)
}
if strings.HasSuffix(out, "\n\n") {
c.Fatalf("Should not have blank line on %q\nout:%q", cmd, out)
}
if !strings.Contains(out, "--help=false") {
c.Fatalf("Should show full usage on %q\nout:%q", cmd, out)
}
if err != nil || ec != 0 {
c.Fatalf("Error on %q help: %s\nexit code:%d", cmd, out, ec)
}
// Check each line for lots of stuff
lines := strings.Split(out, "\n")
for _, line := range lines {
if len(line) > 80 {
@@ -142,6 +156,77 @@ func (s *DockerSuite) TestHelpTextVerify(c *check.C) {
}
}
// For each command make sure we generate an error
// if we give a bad arg
dCmd := exec.Command(dockerBinary, cmd, "--badArg")
out, stderr, ec, err = runCommandWithStdoutStderr(dCmd)
if len(out) != 0 || len(stderr) == 0 || ec == 0 || err == nil {
c.Fatalf("Bad results from 'docker %s --badArg'\nec:%d\nstdout:%s\nstderr:%s\nerr:%q", cmd, ec, out, stderr, err)
}
// Be really picky
if strings.HasSuffix(stderr, "\n\n") {
c.Fatalf("Should not have a blank line at the end of 'docker rm'\n%s", stderr)
}
// Now make sure that each command will print a short-usage
// (not a full usage - meaning no opts section) if we
// are missing a required arg or pass in a bad arg
// These commands will never print a short-usage so don't test
noShortUsage := map[string]string{
"images": "",
"login": "",
"logout": "",
}
if _, ok := noShortUsage[cmd]; !ok {
// For each command run it w/o any args. It will either return
// valid output or print a short-usage
var dCmd *exec.Cmd
var stdout, stderr string
var args []string
// skipNoArgs are ones that we don't want to try w/o
// any args. Either because it'll hang the test or
// lead to an incorrect test result (like a false negative).
// Whatever the reason, skip trying to run w/o args and
// jump to trying with a bogus arg.
skipNoArgs := map[string]string{
"events": "",
"load": "",
}
ec = 0
if _, ok := skipNoArgs[cmd]; !ok {
args = []string{cmd}
dCmd = exec.Command(dockerBinary, args...)
stdout, stderr, ec, err = runCommandWithStdoutStderr(dCmd)
}
// If it's ok w/o any args then try again with an arg
if ec == 0 {
args = []string{cmd, "badArg"}
dCmd = exec.Command(dockerBinary, args...)
stdout, stderr, ec, err = runCommandWithStdoutStderr(dCmd)
}
if len(stdout) != 0 || len(stderr) == 0 || ec == 0 || err == nil {
c.Fatalf("Bad output from %q\nstdout:%q\nstderr:%q\nec:%d\nerr:%q", args, stdout, stderr, ec, err)
}
// Should have just short usage
if !strings.Contains(stderr, "\nUsage: ") {
c.Fatalf("Missing short usage on %q\nstderr:%q", args, stderr)
}
// But shouldn't have full usage
if strings.Contains(stderr, "--help=false") {
c.Fatalf("Should not have full usage on %q\nstderr:%q", args, stderr)
}
if strings.HasSuffix(stderr, "\n\n") {
c.Fatalf("Should not have a blank line on %q\nstderr:%q", args, stderr)
}
}
}
expected := 39
@@ -153,31 +238,92 @@ func (s *DockerSuite) TestHelpTextVerify(c *check.C) {
}
func (s *DockerSuite) TestHelpErrorStderr(c *check.C) {
// If we had a generic CLI test file this one should go in there
func (s *DockerSuite) TestHelpExitCodesHelpOutput(c *check.C) {
// Test to make sure the exit code and output (stdout vs stderr) of
// various good and bad cases are what we expect
cmd := exec.Command(dockerBinary, "boogie")
out, ec, err := runCommandWithOutput(cmd)
if err == nil || ec == 0 {
c.Fatalf("Boogie command should have failed")
// docker : stdout=all, stderr=empty, rc=0
cmd := exec.Command(dockerBinary)
stdout, stderr, ec, err := runCommandWithStdoutStderr(cmd)
if len(stdout) == 0 || len(stderr) != 0 || ec != 0 || err != nil {
c.Fatalf("Bad results from 'docker'\nec:%d\nstdout:%s\nstderr:%s\nerr:%q", ec, stdout, stderr, err)
}
// Be really picky
if strings.HasSuffix(stdout, "\n\n") {
c.Fatalf("Should not have a blank line at the end of 'docker'\n%s", stdout)
}
expected := "docker: 'boogie' is not a docker command. See 'docker --help'.\n"
if out != expected {
c.Fatalf("Bad output from boogie\nGot:%s\nExpected:%s", out, expected)
// docker help: stdout=all, stderr=empty, rc=0
cmd = exec.Command(dockerBinary, "help")
stdout, stderr, ec, err = runCommandWithStdoutStderr(cmd)
if len(stdout) == 0 || len(stderr) != 0 || ec != 0 || err != nil {
c.Fatalf("Bad results from 'docker help'\nec:%d\nstdout:%s\nstderr:%s\nerr:%q", ec, stdout, stderr, err)
}
// Be really picky
if strings.HasSuffix(stdout, "\n\n") {
c.Fatalf("Should not have a blank line at the end of 'docker help'\n%s", stdout)
}
cmd = exec.Command(dockerBinary, "rename", "foo", "bar")
out, ec, err = runCommandWithOutput(cmd)
if err == nil || ec == 0 {
c.Fatalf("Rename should have failed")
// docker --help: stdout=all, stderr=empty, rc=0
cmd = exec.Command(dockerBinary, "--help")
stdout, stderr, ec, err = runCommandWithStdoutStderr(cmd)
if len(stdout) == 0 || len(stderr) != 0 || ec != 0 || err != nil {
c.Fatalf("Bad results from 'docker --help'\nec:%d\nstdout:%s\nstderr:%s\nerr:%q", ec, stdout, stderr, err)
}
// Be really picky
if strings.HasSuffix(stdout, "\n\n") {
c.Fatalf("Should not have a blank line at the end of 'docker --help'\n%s", stdout)
}
expected = `Error response from daemon: no such id: foo
Error: failed to rename container named foo
`
if out != expected {
c.Fatalf("Bad output from rename\nGot:%s\nExpected:%s", out, expected)
// docker inspect busybox: stdout=all, stderr=empty, rc=0
// Just making sure stderr is empty on valid cmd
cmd = exec.Command(dockerBinary, "inspect", "busybox")
stdout, stderr, ec, err = runCommandWithStdoutStderr(cmd)
if len(stdout) == 0 || len(stderr) != 0 || ec != 0 || err != nil {
c.Fatalf("Bad results from 'docker inspect busybox'\nec:%d\nstdout:%s\nstderr:%s\nerr:%q", ec, stdout, stderr, err)
}
// Be really picky
if strings.HasSuffix(stdout, "\n\n") {
c.Fatalf("Should not have a blank line at the end of 'docker inspect busybox'\n%s", stdout)
}
// docker rm: stdout=empty, stderr=all, rc!=0
// testing the min arg error msg
cmd = exec.Command(dockerBinary, "rm")
stdout, stderr, ec, err = runCommandWithStdoutStderr(cmd)
if len(stdout) != 0 || len(stderr) == 0 || ec == 0 || err == nil {
c.Fatalf("Bad results from 'docker rm'\nec:%d\nstdout:%s\nstderr:%s\nerr:%q", ec, stdout, stderr, err)
}
// Should not contain full help text but should contain info about
// # of args and Usage line
if !strings.Contains(stderr, "requires a minimum") {
c.Fatalf("Missing # of args text from 'docker rm'\nstderr:%s", stderr)
}
// docker rm NoSuchContainer: stdout=empty, stderr=all, rc!=0
// testing to make sure no blank line on error
cmd = exec.Command(dockerBinary, "rm", "NoSuchContainer")
stdout, stderr, ec, err = runCommandWithStdoutStderr(cmd)
if len(stdout) != 0 || len(stderr) == 0 || ec == 0 || err == nil {
c.Fatalf("Bad results from 'docker rm NoSuchContainer'\nec:%d\nstdout:%s\nstderr:%s\nerr:%q", ec, stdout, stderr, err)
}
// Be really picky
if strings.HasSuffix(stderr, "\n\n") {
c.Fatalf("Should not have a blank line at the end of 'docker rm'\n%s", stderr)
}
// docker BadCmd: stdout=empty, stderr=all, rc!=0
cmd = exec.Command(dockerBinary, "BadCmd")
stdout, stderr, ec, err = runCommandWithStdoutStderr(cmd)
if len(stdout) != 0 || len(stderr) == 0 || ec == 0 || err == nil {
c.Fatalf("Bad results from 'docker BadCmd'\nec:%d\nstdout:%s\nstderr:%s\nerr:%q", ec, stdout, stderr, err)
}
if stderr != "docker: 'BadCmd' is not a docker command.\nSee 'docker --help'.\n" {
c.Fatalf("Unexpected output for 'docker BadCmd'\nstderr:%s", stderr)
}
// Be really picky
if strings.HasSuffix(stderr, "\n\n") {
c.Fatalf("Should not have a blank line at the end of 'docker BadCmd'\n%s", stderr)
}
}


@@ -58,3 +58,62 @@ func (s *DockerSuite) TestKillDifferentUserContainer(c *check.C) {
c.Fatal("killed container is still running")
}
}
// regression test for correct signal parsing; see #13665
func (s *DockerSuite) TestKillWithSignal(c *check.C) {
runCmd := exec.Command(dockerBinary, "run", "-d", "busybox", "top")
out, _, err := runCommandWithOutput(runCmd)
c.Assert(err, check.IsNil)
cid := strings.TrimSpace(out)
c.Assert(waitRun(cid), check.IsNil)
killCmd := exec.Command(dockerBinary, "kill", "-s", "SIGWINCH", cid)
_, err = runCommand(killCmd)
c.Assert(err, check.IsNil)
running, err := inspectField(cid, "State.Running")
if running != "true" {
c.Fatal("Container should be in running state after SIGWINCH")
}
}
func (s *DockerSuite) TestKillWithInvalidSignal(c *check.C) {
runCmd := exec.Command(dockerBinary, "run", "-d", "busybox", "top")
out, _, err := runCommandWithOutput(runCmd)
c.Assert(err, check.IsNil)
cid := strings.TrimSpace(out)
c.Assert(waitRun(cid), check.IsNil)
killCmd := exec.Command(dockerBinary, "kill", "-s", "0", cid)
out, _, err = runCommandWithOutput(killCmd)
c.Assert(err, check.NotNil)
if !strings.Contains(out, "Invalid signal: 0") {
c.Fatal("Kill with an invalid signal didn't error out correctly")
}
running, err := inspectField(cid, "State.Running")
if running != "true" {
c.Fatal("Container should be in running state after an invalid signal")
}
runCmd = exec.Command(dockerBinary, "run", "-d", "busybox", "top")
out, _, err = runCommandWithOutput(runCmd)
c.Assert(err, check.IsNil)
cid = strings.TrimSpace(out)
c.Assert(waitRun(cid), check.IsNil)
killCmd = exec.Command(dockerBinary, "kill", "-s", "SIG42", cid)
out, _, err = runCommandWithOutput(killCmd)
c.Assert(err, check.NotNil)
if !strings.Contains(out, "Invalid signal: SIG42") {
c.Fatal("Kill with an invalid signal didn't error out correctly")
}
running, err = inspectField(cid, "State.Running")
if running != "true" {
c.Fatal("Container should be in running state after an invalid signal")
}
}


@@ -2,6 +2,7 @@ package main
import (
"fmt"
"io/ioutil"
"net"
"os/exec"
"strings"
@@ -9,15 +10,14 @@ import (
"github.com/go-check/check"
)
func startServerContainer(c *check.C, proto string, port int) string {
pStr := fmt.Sprintf("%d:%d", port, port)
bCmd := fmt.Sprintf("nc -lp %d && echo bye", port)
cmd := []string{"-d", "-p", pStr, "busybox", "sh", "-c", bCmd}
if proto == "udp" {
cmd = append(cmd, "-u")
}
func startServerContainer(c *check.C, msg string, port int) string {
name := "server"
cmd := []string{
"-d",
"-p", fmt.Sprintf("%d:%d", port, port),
"busybox",
"sh", "-c", fmt.Sprintf("echo %q | nc -lp %d", msg, port),
}
if err := waitForContainer(name, cmd...); err != nil {
c.Fatalf("Failed to launch server container: %v", err)
}
@@ -60,52 +60,41 @@ func getContainerStatus(c *check.C, containerID string) string {
func (s *DockerSuite) TestNetworkNat(c *check.C) {
testRequires(c, SameHostDaemon, NativeExecDriver)
srv := startServerContainer(c, "tcp", 8080)
// Spawn a new container which connects to the server through the
// interface address.
msg := "it works"
startServerContainer(c, msg, 8080)
endpoint := getExternalAddress(c)
runCmd := exec.Command(dockerBinary, "run", "busybox", "sh", "-c", fmt.Sprintf("echo hello world | nc -w 30 %s 8080", endpoint))
if out, _, err := runCommandWithOutput(runCmd); err != nil {
c.Fatalf("Failed to connect to server: %v (output: %q)", err, string(out))
conn, err := net.Dial("tcp", fmt.Sprintf("%s:%d", endpoint.String(), 8080))
if err != nil {
c.Fatalf("Failed to connect to container (%v)", err)
}
result := getContainerLogs(c, srv)
// Ideally we'd like to check for "hello world" but sometimes
// nc doesn't show the data it received so instead let's look for
// the output of the 'echo bye' that should be printed once
// the nc command gets a connection
expected := "bye"
if !strings.Contains(result, expected) {
c.Fatalf("Unexpected output. Expected: %q, received: %q", expected, result)
data, err := ioutil.ReadAll(conn)
conn.Close()
if err != nil {
c.Fatal(err)
}
final := strings.TrimRight(string(data), "\n")
if final != msg {
c.Fatalf("Expected message %q but received %q", msg, final)
}
}
func (s *DockerSuite) TestNetworkLocalhostTCPNat(c *check.C) {
testRequires(c, SameHostDaemon, NativeExecDriver)
srv := startServerContainer(c, "tcp", 8081)
// Attempt to connect from the host to the listening container.
var (
msg = "hi yall"
)
startServerContainer(c, msg, 8081)
conn, err := net.Dial("tcp", "localhost:8081")
if err != nil {
c.Fatalf("Failed to connect to container (%v)", err)
}
if _, err := conn.Write([]byte("hello world\n")); err != nil {
data, err := ioutil.ReadAll(conn)
conn.Close()
if err != nil {
c.Fatal(err)
}
conn.Close()
result := getContainerLogs(c, srv)
// Ideally we'd like to check for "hello world" but sometimes
// nc doesn't show the data it received so instead let's look for
// the output of the 'echo bye' that should be printed once
// the nc command gets a connection
expected := "bye"
if !strings.Contains(result, expected) {
c.Fatalf("Unexpected output. Expected: %q, received: %q", expected, result)
final := strings.TrimRight(string(data), "\n")
if final != msg {
c.Fatalf("Expected message %q but received %q", msg, final)
}
}


@@ -74,6 +74,55 @@ func (s *DockerSuite) TestRmiTag(c *check.C) {
}
}
func (s *DockerSuite) TestRmiImgIDMultipleTag(c *check.C) {
runCmd := exec.Command(dockerBinary, "run", "-d", "busybox", "/bin/sh", "-c", "mkdir '/busybox-one'")
out, _, err := runCommandWithOutput(runCmd)
if err != nil {
c.Fatalf("failed to create a container:%s, %v", out, err)
}
containerID := strings.TrimSpace(out)
runCmd = exec.Command(dockerBinary, "commit", containerID, "busybox-one")
out, _, err = runCommandWithOutput(runCmd)
if err != nil {
c.Fatalf("failed to commit a new busybox-one:%s, %v", out, err)
}
imagesBefore, _ := dockerCmd(c, "images", "-a")
dockerCmd(c, "tag", "busybox-one", "busybox-one:tag1")
dockerCmd(c, "tag", "busybox-one", "busybox-one:tag2")
imagesAfter, _ := dockerCmd(c, "images", "-a")
if strings.Count(imagesAfter, "\n") != strings.Count(imagesBefore, "\n")+2 {
c.Fatalf("tagging busybox should have created 2 more images with the same imageID; docker images shows: %q\n", imagesAfter)
}
imgID, err := inspectField("busybox-one:tag1", "Id")
c.Assert(err, check.IsNil)
// run a container with the image
out, _, err = runCommandWithOutput(exec.Command(dockerBinary, "run", "-d", "busybox-one", "top"))
if err != nil {
c.Fatalf("failed to create a container:%s, %v", out, err)
}
containerID = strings.TrimSpace(out)
// first, check that removing without force fails
out, _, err = runCommandWithOutput(exec.Command(dockerBinary, "rmi", imgID))
expected := fmt.Sprintf("Conflict, cannot delete %s because the running container %s is using it, stop it and use -f to force", imgID[:12], containerID[:12])
if err == nil || !strings.Contains(out, expected) {
c.Fatalf("rmi tagged in multiple repos should have failed without force: %s, %v, expected: %s", out, err, expected)
}
dockerCmd(c, "stop", containerID)
dockerCmd(c, "rmi", "-f", imgID)
imagesAfter, _ = dockerCmd(c, "images", "-a")
if strings.Contains(imagesAfter, imgID[:12]) {
c.Fatalf("rmi -f %s failed, image still exists: %q\n\n", imgID, imagesAfter)
}
}
func (s *DockerSuite) TestRmiImgIDForce(c *check.C) {
runCmd := exec.Command(dockerBinary, "run", "-d", "busybox", "/bin/sh", "-c", "mkdir '/busybox-test'")
out, _, err := runCommandWithOutput(runCmd)
@@ -119,7 +168,6 @@ func (s *DockerSuite) TestRmiImgIDForce(c *check.C) {
}
func (s *DockerSuite) TestRmiTagWithExistingContainers(c *check.C) {
container := "test-delete-tag"
newtag := "busybox:newtag"
bb := "busybox:latest"


@@ -3186,3 +3186,26 @@ func (s *DockerSuite) TestRunUnshareProc(c *check.C) {
c.Fatalf("unshare should have failed with permission denied, got: %s, %v", out, err)
}
}
func (s *DockerSuite) TestRunPublishPort(c *check.C) {
out, _, err := runCommandWithOutput(exec.Command(dockerBinary, "run", "-d", "--name", "test", "--expose", "8080", "busybox", "top"))
c.Assert(err, check.IsNil)
out, _, err = runCommandWithOutput(exec.Command(dockerBinary, "port", "test"))
c.Assert(err, check.IsNil)
out = strings.Trim(out, "\r\n")
if out != "" {
c.Fatalf("run without --publish-all should not publish port, out should be empty, but got: %s", out)
}
}
// Issue #10184.
func (s *DockerSuite) TestDevicePermissions(c *check.C) {
const permissions = "crw-rw-rw-"
out, status := dockerCmd(c, "run", "--device", "/dev/fuse:/dev/fuse:mrw", "busybox:latest", "ls", "-l", "/dev/fuse")
if status != 0 {
c.Fatalf("expected status 0, got %d", status)
}
if !strings.HasPrefix(out, permissions) {
c.Fatalf("output should begin with %q, got %q", permissions, out)
}
}


@@ -29,7 +29,7 @@ func (s *DockerSuite) TestCliStatsNoStream(c *check.C) {
if err != nil {
c.Fatalf("Error running stats: %v", err)
}
case <-time.After(2 * time.Second):
case <-time.After(3 * time.Second):
statsCmd.Process.Kill()
c.Fatalf("stats did not return immediately when not streaming")
}


@@ -60,10 +60,10 @@ func SortPortMap(ports []Port, bindings PortMap) {
for _, b := range binding {
s = append(s, portMapEntry{port: p, binding: b})
}
bindings[p] = []PortBinding{}
} else {
s = append(s, portMapEntry{port: p})
}
bindings[p] = []PortBinding{}
}
sort.Sort(s)
@@ -79,7 +79,9 @@ func SortPortMap(ports []Port, bindings PortMap) {
i++
}
// reorder bindings for this port
bindings[entry.port] = append(bindings[entry.port], entry.binding)
if _, ok := bindings[entry.port]; ok {
bindings[entry.port] = append(bindings[entry.port], entry.binding)
}
}
}
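
A minimal usage sketch of SortPortMap under the types this nat package defines; the import path and the HostIp/HostPort field names are assumptions, not shown in this diff:

package main

import (
	"fmt"

	"github.com/docker/docker/nat" // import path assumed; the package may live under pkg/nat
)

func main() {
	// Two exposed ports; only 80/tcp has an explicit host binding.
	ports := []nat.Port{"443/tcp", "80/tcp"}
	bindings := nat.PortMap{
		"80/tcp": {{HostIp: "0.0.0.0", HostPort: "8080"}}, // field names assumed
	}

	// SortPortMap sorts ports in place and rebuilds the binding slices so
	// their order matches the sorted port list.
	nat.SortPortMap(ports, bindings)

	fmt.Println(ports)
	fmt.Println(bindings)
}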


@@ -33,7 +33,7 @@ func MapVar(values map[string]string, names []string, usage string) {
}
func LogOptsVar(values map[string]string, names []string, usage string) {
flag.Var(newMapOpt(values, ValidateLogOpts), names, usage)
flag.Var(newMapOpt(values, nil), names, usage)
}
func HostListVar(values *[]string, names []string, usage string) {
@@ -176,15 +176,6 @@ func newMapOpt(values map[string]string, validator ValidatorFctType) *MapOpts {
type ValidatorFctType func(val string) (string, error)
type ValidatorFctListType func(val string) ([]string, error)
func ValidateLogOpts(val string) (string, error) {
allowedKeys := map[string]string{}
vals := strings.Split(val, "=")
if allowedKeys[vals[0]] != "" {
return val, nil
}
return "", fmt.Errorf("%s is not a valid log opt", vals[0])
}
func ValidateAttach(val string) (string, error) {
s := strings.ToLower(val)
for _, str := range []string{"stdin", "stdout", "stderr"} {

pkg/ioutils/fmt.go (new file, +14)

@@ -0,0 +1,14 @@
package ioutils
import (
"fmt"
"io"
)
// FprintfIfNotEmpty prints the string value if it's not empty
func FprintfIfNotEmpty(w io.Writer, format, value string) (int, error) {
if value != "" {
return fmt.Fprintf(w, format, value)
}
return 0, nil
}

pkg/ioutils/fmt_test.go (new file, +17)

@@ -0,0 +1,17 @@
package ioutils
import "testing"
func TestFprintfIfNotEmpty(t *testing.T) {
wc := NewWriteCounter(&NopWriter{})
n, _ := FprintfIfNotEmpty(wc, "foo%s", "")
if wc.Count != 0 || n != 0 {
t.Errorf("Wrong count: %v vs. %v vs. 0", wc.Count, n)
}
n, _ = FprintfIfNotEmpty(wc, "foo%s", "bar")
if wc.Count != 6 || n != 6 {
t.Errorf("Wrong count: %v vs. %v vs. 6", wc.Count, n)
}
}


@@ -289,7 +289,8 @@ type FlagSet struct {
// Usage is the function called when an error occurs while parsing flags.
// The field is a function (not a method) that may be changed to point to
// a custom error handler.
Usage func()
Usage func()
ShortUsage func()
name string
parsed bool
@@ -511,6 +512,12 @@ func (f *FlagSet) PrintDefaults() {
if runtime.GOOS != "windows" && home == "/" {
home = ""
}
// Add a blank line between cmd description and list of options
if f.FlagCount() > 0 {
fmt.Fprintln(writer, "")
}
f.VisitAll(func(flag *Flag) {
format := " -%s=%s"
names := []string{}
@@ -564,6 +571,12 @@ var Usage = func() {
PrintDefaults()
}
// ShortUsage prints to standard error a short usage message documenting the standard command layout
// The function is a variable that may be changed to point to a custom function.
var ShortUsage = func() {
fmt.Fprintf(CommandLine.output, "Usage of %s:\n", os.Args[0])
}
// FlagCount returns the number of flags that have been defined.
func (f *FlagSet) FlagCount() int { return len(sortFlags(f.formal)) }
@@ -1067,12 +1080,15 @@ func (cmd *FlagSet) ParseFlags(args []string, withHelp bool) error {
return err
}
if help != nil && *help {
cmd.SetOutput(os.Stdout)
cmd.Usage()
// just in case Usage does not exit
os.Exit(0)
}
if str := cmd.CheckArgs(); str != "" {
cmd.SetOutput(os.Stderr)
cmd.ReportError(str, withHelp)
cmd.ShortUsage()
os.Exit(1)
}
return nil
}
@@ -1080,13 +1096,12 @@ func (cmd *FlagSet) ParseFlags(args []string, withHelp bool) error {
func (cmd *FlagSet) ReportError(str string, withHelp bool) {
if withHelp {
if os.Args[0] == cmd.Name() {
str += ". See '" + os.Args[0] + " --help'"
str += ".\nSee '" + os.Args[0] + " --help'"
} else {
str += ". See '" + os.Args[0] + " " + cmd.Name() + " --help'"
str += ".\nSee '" + os.Args[0] + " " + cmd.Name() + " --help'"
}
}
fmt.Fprintf(cmd.Out(), "docker: %s\n", str)
os.Exit(1)
fmt.Fprintf(cmd.Out(), "docker: %s.\n", str)
}
// Parsed reports whether f.Parse has been called.
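
The hunk above adds a per-FlagSet ShortUsage hook that ParseFlags invokes, after ReportError, when its argument check fails. A hedged sketch of wiring it up; the import path and the stdlib-style NewFlagSet/ContinueOnError/Bool helpers are assumptions about this mflag fork, while the ShortUsage field and the ParseFlags signature come from the diff above:

package main

import (
	"fmt"
	"os"

	flag "github.com/docker/docker/pkg/mflag" // import path assumed
)

func main() {
	cmd := flag.NewFlagSet("frobnicate", flag.ContinueOnError)
	force := cmd.Bool([]string{"f", "-force"}, false, "Force the operation")

	// ShortUsage is what ParseFlags prints (after the error report) when the
	// argument check fails: keep it to the bare Usage line, no options section.
	cmd.ShortUsage = func() {
		fmt.Fprintln(os.Stderr, "\nUsage: docker frobnicate [OPTIONS] NAME")
	}

	// Per the hunk above: --help prints the full usage on stdout and exits 0,
	// while a failed argument check prints the error plus ShortUsage on
	// stderr and exits 1.
	if err := cmd.ParseFlags(os.Args[1:], true); err != nil {
		os.Exit(1)
	}
	fmt.Println("force:", *force)
}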


@@ -68,6 +68,7 @@ func (c *Client) callWithRetry(serviceMethod string, args interface{}, ret inter
continue
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
remoteErr, err := ioutil.ReadAll(resp.Body)
if err != nil {


@@ -1,19 +0,0 @@
package urlutil
import "strings"
var validUrlPrefixes = []string{
"http://",
"https://",
}
// IsURL returns true if the provided str is a valid URL by doing
// a simple change for the transport of the url.
func IsURL(str string) bool {
for _, prefix := range validUrlPrefixes {
if strings.HasPrefix(str, prefix) {
return true
}
}
return false
}


@@ -6,26 +6,25 @@ import (
)
var (
validPrefixes = []string{
"git://",
"github.com/",
"git@",
validPrefixes = map[string][]string{
"url": {"http://", "https://"},
"git": {"git://", "github.com/", "git@"},
"transport": {"tcp://", "udp://", "unix://"},
}
urlPathWithFragmentSuffix = regexp.MustCompile(".git(?:#.+)?$")
)
// IsURL returns true if the provided str is an HTTP(S) URL.
func IsURL(str string) bool {
return checkURL(str, "url")
}
// IsGitURL returns true if the provided str is a git repository URL.
func IsGitURL(str string) bool {
if IsURL(str) && urlPathWithFragmentSuffix.MatchString(str) {
return true
}
for _, prefix := range validPrefixes {
if strings.HasPrefix(str, prefix) {
return true
}
}
return false
return checkURL(str, "git")
}
// IsGitTransport returns true if the provided str is a git transport by inspecting
@@ -33,3 +32,17 @@ func IsGitURL(str string) bool {
func IsGitTransport(str string) bool {
return IsURL(str) || strings.HasPrefix(str, "git://") || strings.HasPrefix(str, "git@")
}
// IsTransportURL returns true if the provided str is a transport (tcp, udp, unix) URL.
func IsTransportURL(str string) bool {
return checkURL(str, "transport")
}
func checkURL(str, kind string) bool {
for _, prefix := range validPrefixes[kind] {
if strings.HasPrefix(str, prefix) {
return true
}
}
return false
}
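
A short sketch of the behavior that falls out of the refactored prefix table and checkURL above; the expected results follow directly from the prefixes and the .git regexp shown, and only the pkg/urlutil import path is assumed:

package main

import (
	"fmt"

	"github.com/docker/docker/pkg/urlutil" // import path assumed
)

func main() {
	fmt.Println(urlutil.IsURL("https://example.com/ctx.tar.gz"))          // true: http(s) prefix
	fmt.Println(urlutil.IsGitURL("git@github.com:docker/docker.git"))     // true: git@ prefix
	fmt.Println(urlutil.IsGitURL("https://github.com/docker/docker.git")) // true: URL ending in .git
	fmt.Println(urlutil.IsTransportURL("tcp://127.0.0.1:2375"))           // true: tcp:// prefix
	fmt.Println(urlutil.IsTransportURL("ftp://example.com"))              // false: ftp is not a listed transport
}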


@@ -5,7 +5,7 @@ the Docker project.
### CI
The Docker project uses [Jenkins](https://jenkins.dockerproject.com/) as our
The Docker project uses [Jenkins](https://jenkins.dockerproject.org/) as our
continuous integration server. Each Pull Request to Docker is tested by running the
equivalent of `make all`. We chose Jenkins because we can host it ourselves and
we run Docker in Docker to test.


@@ -0,0 +1,30 @@
#!/bin/bash
source "$(dirname "$BASH_SOURCE")/.validate"
IFS=$'\n'
files=( $(validate_diff --diff-filter=ACMR --name-only -- 'MAINTAINERS' || true) )
unset IFS
badFiles=()
for f in "${files[@]}"; do
# we use "git show" here to validate that what's committed is formatted
if [ "$(git show "$VALIDATE_HEAD:$f" | tomlv)" ]; then
badFiles+=( "$f" )
fi
done
if [ ${#badFiles[@]} -eq 0 ]; then
echo 'Congratulations! All toml source files have valid syntax.'
else
{
echo "These files are not valid toml:"
for f in "${badFiles[@]}"; do
echo " - $f"
done
echo
echo 'Please reformat the above files as valid toml'
echo
} >&2
false
fi


@@ -14,6 +14,7 @@ import (
"path/filepath"
"runtime"
"strings"
"sync"
"time"
"github.com/Sirupsen/logrus"
@@ -56,7 +57,10 @@ func init() {
dockerUserAgent = useragent.AppendVersions("", httpVersion...)
}
type httpsRequestModifier struct{ tlsConfig *tls.Config }
type httpsRequestModifier struct {
mu sync.Mutex
tlsConfig *tls.Config
}
// DRAGONS(tiborvass): If someone wonders why we set tlsconfig in a roundtrip,
// it's to match the current behavior in master: we generate the
@@ -125,8 +129,10 @@ func (m *httpsRequestModifier) ModifyRequest(req *http.Request) error {
}
}
}
m.mu.Lock()
m.tlsConfig.RootCAs = roots
m.tlsConfig.Certificates = certs
m.mu.Unlock()
}
return nil
}
@@ -175,7 +181,7 @@ func NewTransport(timeout TimeoutType, secure bool) http.RoundTripper {
if secure {
// note: httpsTransport also handles http transport
// but for HTTPS, it sets up the certs
return transport.NewTransport(tr, &httpsRequestModifier{tlsConfig})
return transport.NewTransport(tr, &httpsRequestModifier{tlsConfig: tlsConfig})
}
return tr


@@ -84,7 +84,13 @@ func (tr *authTransport) RoundTrip(orig *http.Request) (*http.Response, error) {
if req.Header.Get("Authorization") == "" {
if req.Header.Get("X-Docker-Token") == "true" && len(tr.Username) > 0 {
req.SetBasicAuth(tr.Username, tr.Password)
} else if len(tr.token) > 0 {
} else if len(tr.token) > 0 &&
// Authorization should not be set on 302 redirect for untrusted locations.
// This logic mirrors the behavior in AddRequiredHeadersToRedirectedRequests.
// As the authorization logic is currently implemented in RoundTrip,
// a 302 redirect is detected by looking at the Referer header as go http package adds said header.
// This is safe as Docker doesn't set Referer in other scenarios.
(req.Header.Get("Referer") == "" || trustedLocation(orig)) {
req.Header.Set("Authorization", "Token "+strings.Join(tr.token, ","))
}
}
@@ -98,7 +104,11 @@ func (tr *authTransport) RoundTrip(orig *http.Request) (*http.Response, error) {
}
resp.Body = &transport.OnEOFReader{
Rc: resp.Body,
Fn: func() { delete(tr.modReq, orig) },
Fn: func() {
tr.mu.Lock()
delete(tr.modReq, orig)
tr.mu.Unlock()
},
}
return resp, nil
}


@@ -68,10 +68,15 @@ func (r *Session) GetV2Authorization(ep *Endpoint, imageName string, readOnly bo
// 1.c) if anything else, err
// 2) PUT the created/signed manifest
//
func (r *Session) GetV2ImageManifest(ep *Endpoint, imageName, tagName string, auth *RequestAuthorization) ([]byte, string, error) {
// GetV2ImageManifest simply fetches the bytes of a manifest and the remote
// digest, if available in the response. Note that the application shouldn't
// rely on the untrusted remoteDigest, and should also verify against a
// locally provided digest, if applicable.
func (r *Session) GetV2ImageManifest(ep *Endpoint, imageName, tagName string, auth *RequestAuthorization) (remoteDigest digest.Digest, p []byte, err error) {
routeURL, err := getV2Builder(ep).BuildManifestURL(imageName, tagName)
if err != nil {
return nil, "", err
return "", nil, err
}
method := "GET"
@@ -79,31 +84,45 @@ func (r *Session) GetV2ImageManifest(ep *Endpoint, imageName, tagName string, au
req, err := http.NewRequest(method, routeURL, nil)
if err != nil {
return nil, "", err
return "", nil, err
}
if err := auth.Authorize(req); err != nil {
return nil, "", err
return "", nil, err
}
res, err := r.client.Do(req)
if err != nil {
return nil, "", err
return "", nil, err
}
defer res.Body.Close()
if res.StatusCode != 200 {
if res.StatusCode == 401 {
return nil, "", errLoginRequired
return "", nil, errLoginRequired
} else if res.StatusCode == 404 {
return nil, "", ErrDoesNotExist
return "", nil, ErrDoesNotExist
}
return nil, "", httputils.NewHTTPRequestError(fmt.Sprintf("Server error: %d trying to fetch for %s:%s", res.StatusCode, imageName, tagName), res)
return "", nil, httputils.NewHTTPRequestError(fmt.Sprintf("Server error: %d trying to fetch for %s:%s", res.StatusCode, imageName, tagName), res)
}
manifestBytes, err := ioutil.ReadAll(res.Body)
p, err = ioutil.ReadAll(res.Body)
if err != nil {
return nil, "", fmt.Errorf("Error while reading the http response: %s", err)
return "", nil, fmt.Errorf("Error while reading the http response: %s", err)
}
return manifestBytes, res.Header.Get(DockerDigestHeader), nil
dgstHdr := res.Header.Get(DockerDigestHeader)
if dgstHdr != "" {
remoteDigest, err = digest.ParseDigest(dgstHdr)
if err != nil {
// NOTE(stevvooe): Including the remote digest is optional. We
// don't need to verify against it, but it is good practice.
remoteDigest = ""
logrus.Debugf("error parsing remote digest when fetching %v: %v", routeURL, err)
}
}
return
}
// - Succeeded to head image blob (already exists)
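
The new doc comment above says callers must not trust remoteDigest and should verify the manifest bytes against a digest they already hold. A minimal sketch of such a check using FromReader from the vendored digest package; the import path is an assumption, as is the caller holding a sha256 digest (FromReader hashes with the canonical sha256 algorithm):

package main

import (
	"bytes"
	"fmt"

	"github.com/docker/distribution/digest" // vendored import path assumed
)

// verifyManifest is a hypothetical helper: recompute the digest of the fetched
// manifest bytes and compare it to a digest the caller already trusts, rather
// than relying on the untrusted remoteDigest returned alongside the payload.
func verifyManifest(trusted digest.Digest, manifest []byte) error {
	// FromReader hashes with the package's canonical algorithm (sha256), so
	// this sketch assumes the trusted digest is a sha256 digest too.
	computed, err := digest.FromReader(bytes.NewReader(manifest))
	if err != nil {
		return err
	}
	if computed != trusted {
		return fmt.Errorf("manifest digest mismatch: got %s, want %s", computed, trusted)
	}
	return nil
}

func main() {
	// The sha256 of empty input, matching DigestSha256EmptyTar below.
	trusted := digest.Digest("sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
	fmt.Println(verifyManifest(trusted, nil)) // <nil>
}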


@@ -10,6 +10,7 @@ import (
"path/filepath"
"strings"
"github.com/docker/docker/pkg/symlink"
"github.com/docker/docker/pkg/urlutil"
)
@@ -69,7 +70,11 @@ func checkoutGit(fragment, root string) (string, error) {
}
if len(refAndDir) > 1 && len(refAndDir[1]) != 0 {
newCtx := filepath.Join(root, refAndDir[1])
newCtx, err := symlink.FollowSymlinkInScope(filepath.Join(root, refAndDir[1]), root)
if err != nil {
return "", fmt.Errorf("Error setting git context, %q not within git root: %s", refAndDir[1], err)
}
fi, err := os.Stat(newCtx)
if err != nil {
return "", err


@@ -103,6 +103,14 @@ func TestCheckoutGit(t *testing.T) {
t.Fatal(err)
}
if err = os.Symlink("../subdir", filepath.Join(gitDir, "parentlink")); err != nil {
t.Fatal(err)
}
if err = os.Symlink("/subdir", filepath.Join(gitDir, "absolutelink")); err != nil {
t.Fatal(err)
}
if _, err = gitWithinDir(gitDir, "add", "-A"); err != nil {
t.Fatal(err)
}
@@ -147,6 +155,9 @@ func TestCheckoutGit(t *testing.T) {
{":Dockerfile", "", true}, // not a directory error
{"master:nosubdir", "", true},
{"master:subdir", "FROM scratch\nEXPOSE 5000", false},
{"master:parentlink", "FROM scratch\nEXPOSE 5000", false},
{"master:absolutelink", "FROM scratch\nEXPOSE 5000", false},
{"master:../subdir", "", true},
{"test", "FROM scratch\nEXPOSE 3000", false},
{"test:", "FROM scratch\nEXPOSE 3000", false},
{"test:subdir", "FROM busybox\nEXPOSE 5000", false},


@@ -2,7 +2,6 @@ package digest
import (
"bytes"
"crypto/sha256"
"fmt"
"hash"
"io"
@@ -16,6 +15,7 @@ import (
const (
// DigestTarSumV1EmptyTar is the digest for the empty tar file.
DigestTarSumV1EmptyTar = "tarsum.v1+sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
// DigestSha256EmptyTar is the canonical sha256 digest of empty data
DigestSha256EmptyTar = "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
)
@@ -39,7 +39,7 @@ const (
type Digest string
// NewDigest returns a Digest from alg and a hash.Hash object.
func NewDigest(alg string, h hash.Hash) Digest {
func NewDigest(alg Algorithm, h hash.Hash) Digest {
return Digest(fmt.Sprintf("%s:%x", alg, h.Sum(nil)))
}
@@ -72,13 +72,13 @@ func ParseDigest(s string) (Digest, error) {
// FromReader returns the most valid digest for the underlying content.
func FromReader(rd io.Reader) (Digest, error) {
h := sha256.New()
digester := Canonical.New()
if _, err := io.Copy(h, rd); err != nil {
if _, err := io.Copy(digester.Hash(), rd); err != nil {
return "", err
}
return NewDigest("sha256", h), nil
return digester.Digest(), nil
}
// FromTarArchive produces a tarsum digest from reader rd.
@@ -131,8 +131,8 @@ func (d Digest) Validate() error {
return ErrDigestInvalidFormat
}
switch s[:i] {
case "sha256", "sha384", "sha512":
switch Algorithm(s[:i]) {
case SHA256, SHA384, SHA512:
break
default:
return ErrDigestUnsupported
@@ -143,8 +143,8 @@ func (d Digest) Validate() error {
// Algorithm returns the algorithm portion of the digest. This will panic if
// the underlying digest is not in a valid format.
func (d Digest) Algorithm() string {
return string(d[:d.sepIndex()])
func (d Digest) Algorithm() Algorithm {
return Algorithm(d[:d.sepIndex()])
}
// Hex returns the hex digest portion of the digest. This will panic if the


@@ -10,7 +10,7 @@ func TestParseDigest(t *testing.T) {
for _, testcase := range []struct {
input string
err error
algorithm string
algorithm Algorithm
hex string
}{
{


@@ -1,44 +1,95 @@
package digest
import (
"crypto/sha256"
"crypto"
"hash"
)
// Digester calculates the digest of written data. It is functionally
// equivalent to hash.Hash but provides methods for returning the Digest type
// rather than raw bytes.
type Digester struct {
alg string
hash hash.Hash
// Algorithm identifies an implementation of a digester by an identifier.
// Note that this defines both the hash algorithm used and the string
// encoding.
type Algorithm string
// supported digest types
const (
SHA256 Algorithm = "sha256" // sha256 with hex encoding
SHA384 Algorithm = "sha384" // sha384 with hex encoding
SHA512 Algorithm = "sha512" // sha512 with hex encoding
TarsumV1SHA256 Algorithm = "tarsum+v1+sha256" // supported tarsum version, verification only
// Canonical is the primary digest algorithm used with the distribution
// project. Other digests may be used but this one is the primary storage
// digest.
Canonical = SHA256
)
var (
// TODO(stevvooe): Follow the pattern of the standard crypto package for
// registration of digests. Effectively, we are a registerable set and
// common symbol access.
// algorithms maps values to hash.Hash implementations. Other algorithms
// may be available but they cannot be calculated by the digest package.
algorithms = map[Algorithm]crypto.Hash{
SHA256: crypto.SHA256,
SHA384: crypto.SHA384,
SHA512: crypto.SHA512,
}
)
// Available returns true if the digest type is available for use. If this
// returns false, New and Hash will return nil.
func (a Algorithm) Available() bool {
h, ok := algorithms[a]
if !ok {
return false
}
// check availability of the hash, as well
return h.Available()
}
// NewDigester create a new Digester with the given hashing algorithm and instance
// of that algo's hasher.
func NewDigester(alg string, h hash.Hash) Digester {
return Digester{
alg: alg,
hash: h,
// New returns a new digester for the specified algorithm. If the algorithm
// does not have a digester implementation, nil will be returned. This can be
// checked by calling Available before calling New.
func (a Algorithm) New() Digester {
return &digester{
alg: a,
hash: a.Hash(),
}
}
// NewCanonicalDigester is a convenience function to create a new Digester with
// out default settings.
func NewCanonicalDigester() Digester {
return NewDigester("sha256", sha256.New())
// Hash returns a new hash as used by the algorithm. If not available, nil is
// returned. Make sure to check Available before calling.
func (a Algorithm) Hash() hash.Hash {
if !a.Available() {
return nil
}
return algorithms[a].New()
}
// Write data to the digester. These writes cannot fail.
func (d *Digester) Write(p []byte) (n int, err error) {
return d.hash.Write(p)
// TODO(stevvooe): Allow resolution of verifiers using the digest type and
// this registration system.
// Digester calculates the digest of written data. Writes should go directly
// to the return value of Hash, while calling Digest will return the current
// value of the digest.
type Digester interface {
Hash() hash.Hash // provides direct access to underlying hash instance.
Digest() Digest
}
// Digest returns the current digest for this digester.
func (d *Digester) Digest() Digest {
// digester provides a simple digester definition that embeds a hasher.
type digester struct {
alg Algorithm
hash hash.Hash
}
func (d *digester) Hash() hash.Hash {
return d.hash
}
func (d *digester) Digest() Digest {
return NewDigest(d.alg, d.hash)
}
// Reset the state of the digester.
func (d *Digester) Reset() {
d.hash.Reset()
}
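
A short usage sketch of the reworked Algorithm/Digester API above; only the vendored import path is assumed, everything else follows the interface shown:

package main

import (
	"fmt"
	"io"
	"strings"

	"github.com/docker/distribution/digest" // vendored import path assumed
)

func main() {
	// Canonical is sha256. New returns a Digester: write through Hash(),
	// then Digest() renders the current state as "<alg>:<hex>".
	d := digest.Canonical.New()
	if _, err := io.Copy(d.Hash(), strings.NewReader("hello world")); err != nil {
		panic(err)
	}
	dgst := d.Digest()
	fmt.Println(dgst.Algorithm(), dgst.Hex())

	// Unregistered algorithms report themselves unavailable; check
	// Available() before calling New or Hash on them.
	fmt.Println(digest.Algorithm("md5").Available()) // false
}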


@@ -0,0 +1,21 @@
// +build !noresumabledigest
package digest
import (
"testing"
"github.com/stevvooe/resumable"
_ "github.com/stevvooe/resumable/sha256"
)
// TestResumableDetection just ensures that the resumable capability of a hash
// is exposed through the digester type, which is just a hash plus a Digest
// method.
func TestResumableDetection(t *testing.T) {
d := Canonical.New()
if _, ok := d.Hash().(resumable.Hash); !ok {
t.Fatalf("expected digester to implement resumable.Hash: %#v, %v", d, d.Hash())
}
}


@@ -0,0 +1,195 @@
package digest
import (
"errors"
"sort"
"strings"
)
var (
// ErrDigestNotFound is used when a matching digest
// could not be found in a set.
ErrDigestNotFound = errors.New("digest not found")
// ErrDigestAmbiguous is used when multiple digests
// are found in a set. None of the matching digests
// should be considered valid matches.
ErrDigestAmbiguous = errors.New("ambiguous digest string")
)
// Set is used to hold a unique set of digests which
// may be easily referenced by a string
// representation of the digest as well as a short representation.
// The uniqueness of the short representation is based on other
// digests in the set. If digests are omitted from this set,
// collisions in a larger set may not be detected, therefore it
// is important to always do short representation lookups on
// the complete set of digests. To mitigate collisions, an
// appropriately long short code should be used.
type Set struct {
entries digestEntries
}
// NewSet creates an empty set of digests
// which may have digests added.
func NewSet() *Set {
return &Set{
entries: digestEntries{},
}
}
// checkShortMatch checks whether two digests match as either whole
// values or short values. This function does not test equality,
// rather whether the second value could match against the first
// value.
func checkShortMatch(alg Algorithm, hex, shortAlg, shortHex string) bool {
if len(hex) == len(shortHex) {
if hex != shortHex {
return false
}
if len(shortAlg) > 0 && string(alg) != shortAlg {
return false
}
} else if !strings.HasPrefix(hex, shortHex) {
return false
} else if len(shortAlg) > 0 && string(alg) != shortAlg {
return false
}
return true
}
// Lookup looks for a digest matching the given string representation.
// If no digests could be found ErrDigestNotFound will be returned
// with an empty digest value. If multiple matches are found
// ErrDigestAmbiguous will be returned with an empty digest value.
func (dst *Set) Lookup(d string) (Digest, error) {
if len(dst.entries) == 0 {
return "", ErrDigestNotFound
}
var (
searchFunc func(int) bool
alg Algorithm
hex string
)
dgst, err := ParseDigest(d)
if err == ErrDigestInvalidFormat {
hex = d
searchFunc = func(i int) bool {
return dst.entries[i].val >= d
}
} else {
hex = dgst.Hex()
alg = dgst.Algorithm()
searchFunc = func(i int) bool {
if dst.entries[i].val == hex {
return dst.entries[i].alg >= alg
}
return dst.entries[i].val >= hex
}
}
idx := sort.Search(len(dst.entries), searchFunc)
if idx == len(dst.entries) || !checkShortMatch(dst.entries[idx].alg, dst.entries[idx].val, string(alg), hex) {
return "", ErrDigestNotFound
}
if dst.entries[idx].alg == alg && dst.entries[idx].val == hex {
return dst.entries[idx].digest, nil
}
if idx+1 < len(dst.entries) && checkShortMatch(dst.entries[idx+1].alg, dst.entries[idx+1].val, string(alg), hex) {
return "", ErrDigestAmbiguous
}
return dst.entries[idx].digest, nil
}
// Add adds the given digests to the set. An error will be returned
// if the given digest is invalid. If the digest already exists in the
// table, this operation will be a no-op.
func (dst *Set) Add(d Digest) error {
if err := d.Validate(); err != nil {
return err
}
entry := &digestEntry{alg: d.Algorithm(), val: d.Hex(), digest: d}
searchFunc := func(i int) bool {
if dst.entries[i].val == entry.val {
return dst.entries[i].alg >= entry.alg
}
return dst.entries[i].val >= entry.val
}
idx := sort.Search(len(dst.entries), searchFunc)
if idx == len(dst.entries) {
dst.entries = append(dst.entries, entry)
return nil
} else if dst.entries[idx].digest == d {
return nil
}
entries := append(dst.entries, nil)
copy(entries[idx+1:], entries[idx:len(entries)-1])
entries[idx] = entry
dst.entries = entries
return nil
}
// ShortCodeTable returns a map of Digest to unique short codes. The
// length parameter is the minimum code length; the maximum length may be the
// entire value of the digest if uniqueness cannot be achieved without the
// full value. This function will attempt to make short codes as short
// as possible to be unique.
func ShortCodeTable(dst *Set, length int) map[Digest]string {
m := make(map[Digest]string, len(dst.entries))
l := length
resetIdx := 0
for i := 0; i < len(dst.entries); i++ {
var short string
extended := true
for extended {
extended = false
if len(dst.entries[i].val) <= l {
short = dst.entries[i].digest.String()
} else {
short = dst.entries[i].val[:l]
for j := i + 1; j < len(dst.entries); j++ {
if checkShortMatch(dst.entries[j].alg, dst.entries[j].val, "", short) {
if j > resetIdx {
resetIdx = j
}
extended = true
} else {
break
}
}
if extended {
l++
}
}
}
m[dst.entries[i].digest] = short
if i >= resetIdx {
l = length
}
}
return m
}
type digestEntry struct {
alg Algorithm
val string
digest Digest
}
type digestEntries []*digestEntry
func (d digestEntries) Len() int {
return len(d)
}
func (d digestEntries) Less(i, j int) bool {
if d[i].val != d[j].val {
return d[i].val < d[j].val
}
return d[i].alg < d[j].alg
}
func (d digestEntries) Swap(i, j int) {
d[i], d[j] = d[j], d[i]
}
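
A brief sketch of the lookup semantics of Set, mirroring the cases exercised in the tests that follow; only the vendored import path is assumed:

package main

import (
	"fmt"

	"github.com/docker/distribution/digest" // vendored import path assumed
)

func main() {
	dset := digest.NewSet()
	for _, d := range []digest.Digest{"sha256:1234", "sha256:12345", "sha256:54321"} {
		if err := dset.Add(d); err != nil {
			panic(err)
		}
	}

	// A prefix that matches exactly one entry resolves to the full digest.
	if d, err := dset.Lookup("54"); err == nil {
		fmt.Println(d) // sha256:54321
	}

	// "1234" is a prefix of both sha256:1234 and sha256:12345, so the
	// lookup is rejected as ambiguous rather than silently guessing.
	if _, err := dset.Lookup("1234"); err == digest.ErrDigestAmbiguous {
		fmt.Println("ambiguous prefix")
	}

	// ShortCodeTable maps every digest in the set to an abbreviated form of
	// at least the given length (longer where needed to disambiguate).
	for dgst, code := range digest.ShortCodeTable(dset, 2) {
		fmt.Println(code, "->", dgst)
	}
}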


@@ -0,0 +1,272 @@
package digest
import (
"crypto/sha256"
"encoding/binary"
"math/rand"
"testing"
)
func assertEqualDigests(t *testing.T, d1, d2 Digest) {
if d1 != d2 {
t.Fatalf("Digests do not match:\n\tActual: %s\n\tExpected: %s", d1, d2)
}
}
func TestLookup(t *testing.T) {
digests := []Digest{
"sha256:12345",
"sha256:1234",
"sha256:12346",
"sha256:54321",
"sha256:65431",
"sha256:64321",
"sha256:65421",
"sha256:65321",
}
dset := NewSet()
for i := range digests {
if err := dset.Add(digests[i]); err != nil {
t.Fatal(err)
}
}
dgst, err := dset.Lookup("54")
if err != nil {
t.Fatal(err)
}
assertEqualDigests(t, dgst, digests[3])
dgst, err = dset.Lookup("1234")
if err == nil {
t.Fatal("Expected ambiguous error looking up: 1234")
}
if err != ErrDigestAmbiguous {
t.Fatal(err)
}
dgst, err = dset.Lookup("9876")
if err == nil {
t.Fatal("Expected ambiguous error looking up: 9876")
}
if err != ErrDigestNotFound {
t.Fatal(err)
}
dgst, err = dset.Lookup("sha256:1234")
if err != nil {
t.Fatal(err)
}
assertEqualDigests(t, dgst, digests[1])
dgst, err = dset.Lookup("sha256:12345")
if err != nil {
t.Fatal(err)
}
assertEqualDigests(t, dgst, digests[0])
dgst, err = dset.Lookup("sha256:12346")
if err != nil {
t.Fatal(err)
}
assertEqualDigests(t, dgst, digests[2])
dgst, err = dset.Lookup("12346")
if err != nil {
t.Fatal(err)
}
assertEqualDigests(t, dgst, digests[2])
dgst, err = dset.Lookup("12345")
if err != nil {
t.Fatal(err)
}
assertEqualDigests(t, dgst, digests[0])
}
func TestAddDuplication(t *testing.T) {
digests := []Digest{
"sha256:1234",
"sha256:12345",
"sha256:12346",
"sha256:54321",
"sha256:65431",
"sha512:65431",
"sha512:65421",
"sha512:65321",
}
dset := NewSet()
for i := range digests {
if err := dset.Add(digests[i]); err != nil {
t.Fatal(err)
}
}
if len(dset.entries) != 8 {
t.Fatal("Invalid dset size")
}
if err := dset.Add(Digest("sha256:12345")); err != nil {
t.Fatal(err)
}
if len(dset.entries) != 8 {
t.Fatal("Duplicate digest insert allowed")
}
if err := dset.Add(Digest("sha384:12345")); err != nil {
t.Fatal(err)
}
if len(dset.entries) != 9 {
t.Fatal("Insert with different algorithm not allowed")
}
}
func assertEqualShort(t *testing.T, actual, expected string) {
if actual != expected {
t.Fatalf("Unexpected short value:\n\tExpected: %s\n\tActual: %s", expected, actual)
}
}
func TestShortCodeTable(t *testing.T) {
digests := []Digest{
"sha256:1234",
"sha256:12345",
"sha256:12346",
"sha256:54321",
"sha256:65431",
"sha256:64321",
"sha256:65421",
"sha256:65321",
}
dset := NewSet()
for i := range digests {
if err := dset.Add(digests[i]); err != nil {
t.Fatal(err)
}
}
dump := ShortCodeTable(dset, 2)
if len(dump) < len(digests) {
t.Fatalf("Error unexpected size: %d, expecting %d", len(dump), len(digests))
}
assertEqualShort(t, dump[digests[0]], "sha256:1234")
assertEqualShort(t, dump[digests[1]], "sha256:12345")
assertEqualShort(t, dump[digests[2]], "sha256:12346")
assertEqualShort(t, dump[digests[3]], "54")
assertEqualShort(t, dump[digests[4]], "6543")
assertEqualShort(t, dump[digests[5]], "64")
assertEqualShort(t, dump[digests[6]], "6542")
assertEqualShort(t, dump[digests[7]], "653")
}
func createDigests(count int) ([]Digest, error) {
r := rand.New(rand.NewSource(25823))
digests := make([]Digest, count)
for i := range digests {
h := sha256.New()
if err := binary.Write(h, binary.BigEndian, r.Int63()); err != nil {
return nil, err
}
digests[i] = NewDigest("sha256", h)
}
return digests, nil
}
func benchAddNTable(b *testing.B, n int) {
digests, err := createDigests(n)
if err != nil {
b.Fatal(err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
dset := &Set{entries: digestEntries(make([]*digestEntry, 0, n))}
for j := range digests {
if err = dset.Add(digests[j]); err != nil {
b.Fatal(err)
}
}
}
}
func benchLookupNTable(b *testing.B, n int, shortLen int) {
digests, err := createDigests(n)
if err != nil {
b.Fatal(err)
}
dset := &Set{entries: digestEntries(make([]*digestEntry, 0, n))}
for i := range digests {
if err := dset.Add(digests[i]); err != nil {
b.Fatal(err)
}
}
shorts := make([]string, 0, n)
for _, short := range ShortCodeTable(dset, shortLen) {
shorts = append(shorts, short)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
if _, err = dset.Lookup(shorts[i%n]); err != nil {
b.Fatal(err)
}
}
}
func benchShortCodeNTable(b *testing.B, n int, shortLen int) {
digests, err := createDigests(n)
if err != nil {
b.Fatal(err)
}
dset := &Set{entries: digestEntries(make([]*digestEntry, 0, n))}
for i := range digests {
if err := dset.Add(digests[i]); err != nil {
b.Fatal(err)
}
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
ShortCodeTable(dset, shortLen)
}
}
func BenchmarkAdd10(b *testing.B) {
benchAddNTable(b, 10)
}
func BenchmarkAdd100(b *testing.B) {
benchAddNTable(b, 100)
}
func BenchmarkAdd1000(b *testing.B) {
benchAddNTable(b, 1000)
}
func BenchmarkLookup10(b *testing.B) {
benchLookupNTable(b, 10, 12)
}
func BenchmarkLookup100(b *testing.B) {
benchLookupNTable(b, 100, 12)
}
func BenchmarkLookup1000(b *testing.B) {
benchLookupNTable(b, 1000, 12)
}
func BenchmarkShortCode10(b *testing.B) {
benchShortCodeNTable(b, 10, 12)
}
func BenchmarkShortCode100(b *testing.B) {
benchShortCodeNTable(b, 100, 12)
}
func BenchmarkShortCode1000(b *testing.B) {
benchShortCodeNTable(b, 1000, 12)
}


@@ -6,10 +6,10 @@ import (
"regexp"
)
// TarSumRegexp defines a reguler expression to match tarsum identifiers.
// TarSumRegexp defines a regular expression to match tarsum identifiers.
var TarsumRegexp = regexp.MustCompile("tarsum(?:.[a-z0-9]+)?\\+[a-zA-Z0-9]+:[A-Fa-f0-9]+")
// TarsumRegexpCapturing defines a reguler expression to match tarsum identifiers with
// TarsumRegexpCapturing defines a regular expression to match tarsum identifiers with
// capture groups corresponding to each component.
var TarsumRegexpCapturing = regexp.MustCompile("(tarsum)(.([a-z0-9]+))?\\+([a-zA-Z0-9]+):([A-Fa-f0-9]+)")

Some files were not shown because too many files have changed in this diff.