If no Mtu value is provided to the docker daemon, get the mtu from the
default route's interface. If there is no default route, default to an mtu of 1500.
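For reference, a hedged Go sketch of that fallback logic; this is not the daemon's actual code, just an illustration that reads /proc/net/route on Linux to find the default route's interface and falls back to 1500:
```
// Linux-only illustration of "MTU from the default route's interface, else 1500".
package main

import (
	"bufio"
	"fmt"
	"net"
	"os"
	"strings"
)

const defaultMTU = 1500

// mtuFromDefaultRoute returns the MTU of the interface carrying the default
// route (destination 00000000 in /proc/net/route), or an error if none exists.
func mtuFromDefaultRoute() (int, error) {
	f, err := os.Open("/proc/net/route")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	s.Scan() // skip the header line
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) >= 2 && fields[1] == "00000000" { // default route entry
			iface, err := net.InterfaceByName(fields[0])
			if err != nil {
				return 0, err
			}
			return iface.MTU, nil
		}
	}
	return 0, fmt.Errorf("no default route found")
}

func main() {
	mtu, err := mtuFromDefaultRoute()
	if err != nil {
		mtu = defaultMTU // no default route: fall back to 1500
	}
	fmt.Println("daemon MTU:", mtu)
}
```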
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit ff4e58ff56)
Currently digests are not stored on pull, causing a simple re-tag or re-push to send up all layers. Storing the digests on pull will allow subsequent pushes to the same repository to not push up content.
This does not address pushing content to a new repository. When content is pushed to a new repository, the digest will be recalculated. Since only one digest is currently stored, it may cause a new content push to the original repository.
Fixes #13883
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
(cherry picked from commit a98ea87e46)
Tweaking for Hugo
Updating the Dockerfile with new sed; fix broken link on Kitematic
Fixing image pull for Dockerfile
Removing docs targets
Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit f93fee5f48)
The main Dockerfile was updated - this update brings the sub-directory-specific file in line with it.
Fixes #12866
Signed-off-by: Brian Exelbierd <bex@pobox.com>
(cherry picked from commit 5d51118c7c)
Vendoring in libnetwork 90638ec9cf7fa7b7f5d0e96b0854f136d66bff92
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
(cherry picked from commit 386ab25137)
Adding in other areas per comments
Updating with comments; equalizing generating man page info
Updating with duglin's comments
Doug is right here again; fixing.
Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit eacae64bd8)
This is breaking various setups where the host's rootfs is mounted shared
correctly and breaks live migration with bind mounts.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit c9d71317be)
vols.VolumesRW has been initialized so it can't be nil. Furthermore
it's ok to read a nil map.
Signed-off-by: Zefan Li <lizefan@huawei.com>
(cherry picked from commit 8b4c0decfc)
When the daemon is going down, trigger immediate garbage collection of deleted libnetwork resources, like namespace paths, since there will be no way to remove them when the daemon restarts.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
(cherry picked from commit c68e7f96f9)
This patch ensures no auth headers are set for v1 registries if there
was a 302 redirect.
This also ensures v2 does not use authTransport.
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 123a0582b2)
Show how to use `systemctl show` and recommend against modifying
system unit files in `/usr` and `/lib`.
Fixes #13796.
Signed-off-by: Eric-Olivier Lamey <eo@lamey.me>
(cherry picked from commit 68bfd9e3ae)
Minor tweak to the quoted/json form and made man page look like the Dockerfile
docs. W/o the `,` people may think there should be a space delimited list.
Signed-off-by: Doug Davis <dug@us.ibm.com>
(cherry picked from commit f4a3e8bef0)
Also add a comment to the ValidatePath func so devs/reviewers know exactly what it's looking for.
Signed-off-by: Doug Davis <dug@us.ibm.com>
(cherry picked from commit 3fcf53db92)
This was added before the libnetwork merge, and then lost. Fixes #13755.
Signed-off-by: Eric-Olivier Lamey <eo@lamey.me>
(cherry picked from commit 5fa60149e2)
Remove reference to experimental releases as it is really a nightly
channel rather than a scheduled release.
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
(cherry picked from commit d8680f7beb)
Using "DEST" for our build artifacts inside individual bundlescripts was already well-established convention, but this officializes it by having `make.sh` itself set the variable and create the directory, also handling CYGWIN oddities in a single central place (instead of letting them spread outward from `hack/make/binary` like was definitely on their roadmap, whether they knew it or not; sneaky oddities).
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
(cherry picked from commit ac3388367b)
container.config.NetworkDisabled is set for both the daemon's DisableNetwork and the --networking=false case. Hence use this flag instead to fix #13725.
There is an existing integration-test to catch this issue,
but it is working for the wrong reasons.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
(cherry picked from commit 83208a531d)
I added 301 redirects from dockerproject.com to dockerproject.org but may as
well make sure everything is updated anyways.
Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit 7943bce894)
This removes the complexity of the current implementation and makes the test correct so it asserts the right things.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 4f42097883)
To ensure manifest integrity when pulling by digest, this changeset ensures that not only the remote digest provided by the registry is verified but also that the digest provided on the command line is checked as well. If this check fails, the pull is cancelled with an error. Inspection also showed that while layers were being verified against their digests, the error was being treated as a tech preview image signing verification error. This, in fact, is not a tech preview and opens up the docker daemon to man-in-the-middle attacks that can be avoided with the v2 registry protocol.
As a matter of cleanliness, the digest package from the distribution project
has been updated to latest version. There were some recent improvements in the
digest package.
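For illustration, a hedged sketch of the command-line digest check described above, using only the standard library instead of the distribution digest package; the function name is hypothetical:
```
package main

import (
	"crypto/sha256"
	"fmt"
)

// verifyDigest compares the sha256 of the fetched payload against the digest
// the user asked for ("sha256:<hex>"). A mismatch must cancel the pull with an
// error rather than being treated as a soft signing warning.
func verifyDigest(payload []byte, requested string) error {
	got := fmt.Sprintf("sha256:%x", sha256.Sum256(payload))
	if got != requested {
		return fmt.Errorf("digest verification failed: expected %s, got %s", requested, got)
	}
	return nil
}

func main() {
	manifest := []byte(`{"example": "manifest"}`)
	if err := verifyDigest(manifest, "sha256:deadbeef"); err != nil {
		fmt.Println(err) // the pull would be cancelled here
	}
}
```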
Signed-off-by: Stephen J Day <stephen.day@docker.com>
(cherry picked from commit 06612cc0fe)
Refactoring in Docker 1.7 changed the behavior to add this header, whereas Docker <= 1.6 wouldn't emit this header on an HTTP 302 redirect.
This closes #13649
Signed-off-by: Jeffrey van Gogh <jvg@google.com>
(cherry picked from commit 65c5105fcc)
Fall back to pulling from the hub, as per v1 behavior.
Signed-off-by: Richard Scothern <richard.scothern@gmail.com>
(cherry picked from commit 6e4ff1bb13)
Add a paragraph in cli.md mentioning that overlay is not a production
ready graphdriver.
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
(cherry picked from commit 67cb748e26)
Merge user specified devices correctly with default devices.
Otherwise the user specified devices end up without permissions.
Signed-off-by: David R. Jenni <david.r.jenni@gmail.com>
(cherry picked from commit c913c9921b)
add api version experimental
Signed-off-by: Jessica Frazelle <princess@docker.com>
(cherry picked from commit b372f9f224)
Conflicts:
docs/sources/reference/api/docker_remote_api_v1.20.md
Sometimes container.cleanup() can be called from multiple paths for the same container during error conditions, from the monitor and the regular startup path. So if the container network has already been released, do not try to release it again.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
(cherry picked from commit 6cdf8623d5)
Continues 11858 by:
- Making sure the exit code is always zero when we ask for help
- Making sure the exit code isn't zero when we print help on error cases
- Making sure both short and long usage go to the same stream (stdout vs stderr)
- Making sure all docker commands support --help
- Test that all cmds send --help to stdout, exit code 0, show full usage, no blank lines at end
- Test that all cmds (that support it) show short usage on bad arg to stderr, no blank line at end
- Test that all cmds complain about a bad option, no blank line at end
- Test that docker (w/o subcmd) does the same stuff mentioned above properly
Signed-off-by: Doug Davis <dug@us.ibm.com>
(cherry picked from commit 8324d7918b)
Add a paragraph in cli.md mentioning that overlay is not a production
ready graphdriver.
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
(cherry picked from commit 67cb748e26)
When using a scanner, log lines over 64K will crash the Copier with
bufio.ErrTooLong. Subsequently, the ioutils.bufReader will grow without
bound as the logs are no longer being flushed to disk.
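For illustration, a hedged sketch of one way around the 64K limit, reading with bufio.Reader instead of bufio.Scanner; this is not the Copier's actual code:
```
package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// copyLines forwards src to dst line by line with no fixed line-length cap,
// unlike bufio.Scanner, which fails with ErrTooLong past its buffer size.
func copyLines(dst io.Writer, src io.Reader) error {
	r := bufio.NewReader(src)
	for {
		line, err := r.ReadString('\n')
		if len(line) > 0 {
			if _, werr := io.WriteString(dst, line); werr != nil {
				return werr
			}
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	huge := strings.Repeat("x", 128*1024) + "\n" // well over 64K
	var out strings.Builder
	if err := copyLines(&out, strings.NewReader(huge)); err != nil {
		fmt.Println("copy failed:", err)
		return
	}
	fmt.Println("copied", out.Len(), "bytes without ErrTooLong")
}
```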
Signed-off-by: Burke Libbey <burke.libbey@shopify.com>
(cherry picked from commit f779cfc5d8)
* origin/master: (999 commits)
Review feedback: - Match verbiage with other output - Remove dead code and clearer flow
Vendoring in libnetwork 2da2dc055de5a474c8540871ad88a48213b0994f
Restore the stripped registry version number
Use SELinux labels for volumes
apply selinux labels volume patch on volumes refactor
Modify volume mounts SELinux labels on the fly based on :Z or :z
Remove unused code
Remove redundant set header
Return err if we got err on parseForm
script cleaned up
Fix unregister stats on when rm running container
Fix container unmount networkMounts
Windows: Set default exec driver to windows
Fixes title, line wrap, and Adds install area Tibor's comment Updating with the new plugins Entering comments from Seb
Add regression test to make sure we can load old containers with volumes.
Do not force `syscall.Unmount` on container cleanup.
Revert "Add docker exec run a command in privileged mode"
Cleanup container rm funcs
Allow mirroring only for the official index
Registry v2 mirror support.
...
Conflicts:
CHANGELOG.md
VERSION
api/client/commands.go
api/client/utils.go
api/server/server.go
api/server/server_linux.go
builder/shell_parser.go
builder/words
daemon/config.go
daemon/container.go
daemon/daemon.go
daemon/delete.go
daemon/execdriver/execdrivers/execdrivers_linux.go
daemon/execdriver/lxc/driver.go
daemon/execdriver/native/driver.go
daemon/graphdriver/aufs/aufs.go
daemon/graphdriver/driver.go
daemon/logger/syslog/syslog.go
daemon/networkdriver/bridge/driver.go
daemon/networkdriver/portallocator/portallocator.go
daemon/networkdriver/portmapper/mapper.go
daemon/networkdriver/portmapper/mapper_test.go
daemon/volumes.go
docs/Dockerfile
docs/man/docker-create.1.md
docs/man/docker-login.1.md
docs/man/docker-logout.1.md
docs/man/docker-run.1.md
docs/man/docker.1.md
docs/mkdocs.yml
docs/s3_website.json
docs/sources/installation/windows.md
docs/sources/reference/api/docker_remote_api_v1.18.md
docs/sources/reference/api/registry_api_client_libraries.md
docs/sources/reference/builder.md
docs/sources/reference/run.md
docs/sources/release-notes.md
graph/graph.go
graph/push.go
hack/install.sh
hack/vendor.sh
integration-cli/docker_cli_build_test.go
integration-cli/docker_cli_pull_test.go
integration-cli/docker_cli_run_test.go
pkg/archive/changes.go
pkg/broadcastwriter/broadcastwriter.go
pkg/ioutils/readers.go
pkg/ioutils/readers_test.go
pkg/progressreader/progressreader.go
registry/auth.go
vendor/src/github.com/docker/libcontainer/cgroups/fs/cpu.go
vendor/src/github.com/docker/libcontainer/cgroups/fs/devices.go
vendor/src/github.com/docker/libcontainer/cgroups/fs/memory.go
vendor/src/github.com/docker/libcontainer/cgroups/systemd/apply_systemd.go
vendor/src/github.com/docker/libcontainer/container_linux.go
vendor/src/github.com/docker/libcontainer/init_linux.go
vendor/src/github.com/docker/libcontainer/integration/exec_test.go
vendor/src/github.com/docker/libcontainer/integration/utils_test.go
vendor/src/github.com/docker/libcontainer/nsinit/README.md
vendor/src/github.com/docker/libcontainer/process.go
vendor/src/github.com/docker/libcontainer/rootfs_linux.go
vendor/src/github.com/docker/libcontainer/update-vendor.sh
vendor/src/github.com/docker/libnetwork/portallocator/portallocator_test.go
Fixes a regression from the volumes refactor where the vfs graphdriver
was setting labels for volumes to `s0` so that they can both be written
to by the container and shared with other containers.
When moving away from vfs this was never re-introduced.
Since this needs to happen regardless of volume driver, this is
implemented outside of the driver.
Fixes issue where `z` and `Z` labels are not set for bind-mounts.
Don't lock while creating volumes
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This patch is extending the qualifiers on the -v command to allow
an admin to tell the system to relabel, content. There might be a
need for something similar for changing the DAC Permissions.
Signed-off-by: Jessica Frazelle <princess@docker.com>
UnmountVolumes used to also unmount 'specialMounts' but it no longer does after
a recent refactor of volumes. This patch corrects this behavior to include
unmounting of `networkMounts` which replaces `specialMounts` (now dead code).
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
The v2 registry will act as a pull-through cache, and needs to be
handled differently by the client to the v1 registry mirror.
See docker/distribution#459 for details
Configuration
Only one v2 registry can be configured as a mirror. Acceptable configurations in this change are: 0...n v1 mirrors or 1 v2 mirror. A mixture of v1 and v2 mirrors is considered an error.
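As a sketch of that configuration rule (illustrative field names, not the daemon's real config struct):
```
package main

import (
	"errors"
	"fmt"
)

// mirrorConfig is a hypothetical stand-in for the daemon's registry settings.
type mirrorConfig struct {
	V1Mirrors []string // zero or more v1 mirrors
	V2Mirror  string   // at most one v2 mirror
}

func (c mirrorConfig) validate() error {
	if c.V2Mirror != "" && len(c.V1Mirrors) > 0 {
		return errors.New("mixing v1 and v2 registry mirrors is not supported")
	}
	return nil
}

func main() {
	bad := mirrorConfig{
		V1Mirrors: []string{"https://m1.example.com"},
		V2Mirror:  "https://m2.example.com",
	}
	fmt.Println(bad.validate())
}
```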
Pull
If a v2 mirror is configured, all pulls are redirected to that mirror. The mirror will serve the content locally or attempt a pull from the upstream registry, cache it locally, and then serve it to the client.
Push
If an image is tagged to a mirror, it will be pushed to the mirror and be
stored locally there. Otherwise, images are pushed to the hub. This is
unchanged behavior.
Signed-off-by: Richard Scothern <richard.scothern@gmail.com>
Having the line break in the middle of a code span caused the line break to appear on the web site version, even though it doesn't appear in the preview.
The other two changes are just small grammar improvements.
Signed-off-by: don <donkirkby@gmail.com>
I'm fairly consistently seeing an error in
DockerSuite.TestContainerApiRestartNotimeoutParam:
docker_api_containers_test.go:969:
c.Assert(status, check.Equals, http.StatusNoContent)
... obtained int = 500
... expected int = 204
And in the daemon logs I see:
INFO[0003] Container 8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971 failed to exit within 0 seconds of SIGTERM - using the force
ERRO[0003] Handler for POST /containers/{name:.*}/restart returned error: Cannot restart container 8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971: [2] Container does not exist: container destroyed
ERRO[0003] HTTP Error err=Cannot restart container 8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971: [2] Container does not exist: container destroyed
statusCode=500
Note the "container destroyed" error message. This is being generatd by
the libcontainer code and bubbled up in container.Kill() as a result of the
call to `container.killPossiblyDeadProcess(9)` on line 439.
See the comment in the code, but what I think is going on is that because we don't have any timeout on the Stop() call, we immediately try to force things to stop. And by the time we get into the libcontainer code the process has just finished stopping due to the initial signal, so this secondary sig-9 fails due to the container no longer running (i.e. it's 'destroyed').
Since we can't look for "container destroyed" to just ignore the error, because some other driver might have different text, I opted to just ignore the error and keep going - with the assumption that if it couldn't send a sig-9 to the process then it MUST be because it's already dead and not something else.
To reproduce this I just run:
curl -v -X POST http://127.0.0.1:2375/v1.19/containers/8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971/restart
a few times and then it fails with the HTTP 500.
Would like to hear some other ideas on how to handle this since I'm not
thrilled with the proposed solution.
Signed-off-by: Doug Davis <dug@us.ibm.com>
* Don't AllocateNetwork when network is disabled
* Don't createNetwork in execdriver when network is disabled
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
We should let the user create a container even if the container they want to join is not running; that check should be done at start time. In this case, the running check is done by getIpcContainer() when we start the container.
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
We already vendor distribution under ./vendor, but
because the GOPATH is /go:/go/src/github.com/.../vendor
Go will always compile the source code at /go not in ./vendor.
Apart from the fact that it is very inconvenient during
development, it was also a time-bomb: someone vendors a fix
from upstream distribution, but forgets to update
REGISTRY_COMMIT in the Dockerfile, and the binary doesn't get
the fix.
Signed-off-by: Tibor Vass <tibor@docker.com>
Signed-off-by: Mary Anthony <mary@docker.com>
Adding experimental features into the mix
Signed-off-by: Mary Anthony <mary@docker.com>
renaming file
Signed-off-by: Mary Anthony <mary@docker.com>
Because I just used it somewhere else and it would be nice if I didn't have to copy and paste the code.
Signed-off-by: David Calavera <david.calavera@gmail.com>
The DOCKER_EXPERIMENTAL environment variable drives the activation of
the 'experimental' build tag.
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
Give Docker more time to kill containers before upstart kills Docker.
The default kill timeout is 5 seconds.
This will help decrease, but not eliminate, the chance of orphaned container processes.
Signed-off-by: David Xia <dxia@spotify.com>
If a container was started with a non-root user, the container may not be able to resolve DNS names because of too-restrictive permissions on the container's /etc/resolv.conf file. This problem is in how this file gets created in libnetwork, and this PR attempts to fix the issue by vendoring in the libnetwork code with the fix.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
As part of this some generic packages like iptables, etchosts and resolvconf
have also been moved to libnetwork. Even though they can still be
consumed in a generic fashion they will reside and be maintained
from within the libnetwork project.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
- Updated Dockerfile to satisfy libnetwork GOPATH requirements.
- Reworked daemon to allocate network resources using libnetwork.
- Reworked remove link code to also update network resources in libnetwork.
- Adjusted the exec driver command population to reflect libnetwork design.
- Adjusted the exec driver create command steps.
- Updated a few test cases to reflect the change in design.
- Removed the dns setup code from docker as resolv.conf is entirely managed
in libnetwork.
- Integrated with lxc exec driver.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
This stops us from erroneously adding "squeeze-lts" to "oldstable" which is now "wheezy", not "squeeze" (but "oldoldstable" _is_ squeeze, hence the new check on `/etc/debian_version` being `6.*` instead, and done as a `case` for the eventual addition of `wheezy-lts`, etc).
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
This patch removes the need for requestFactories and decorators
by implementing http.RoundTripper transports instead.
It refactors some challenging-to-read code.
NewSession now takes an *http.Client that can already have a custom Transport; it will add its own auth transport by wrapping it.
The idea is that callers of http.Client should not bother setting custom headers for every handler; instead it should be transparent to the callers of the same context.
This patch is needed for future refactorings of registry,
namely refactoring of the v1 client code.
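For illustration, a hedged sketch of the wrapping idea; the transport and header below are assumptions for the example, not registry's actual types:
```
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// authTransport wraps another RoundTripper and injects credentials into every
// request, so callers of the *http.Client never set auth headers themselves.
type authTransport struct {
	base  http.RoundTripper
	token string
}

func (t *authTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	clone := req.Clone(req.Context()) // RoundTrippers must not mutate the request
	clone.Header.Set("Authorization", "Bearer "+t.token)
	return t.base.RoundTrip(clone)
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, r.Header.Get("Authorization"))
	}))
	defer srv.Close()

	client := &http.Client{Transport: &authTransport{base: http.DefaultTransport, token: "secret"}}
	resp, err := client.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.StatusCode) // 200, with the header added transparently
}
```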
Signed-off-by: Tibor Vass <tibor@docker.com>
Not 100% sure why our Windows tests don't complain about some of these; I'm guessing it's because we have bash as part of some git package. But either way we really shouldn't require bash to run our tests unless we really need to - which in these cases we don't.
Signed-off-by: Doug Davis <dug@us.ibm.com>
The intention of the user is to download a verified image when explicitly pulling with a digest, and falling back to the v1 registry circumvents that protection.
Signed-off-by: Nuutti Kotivuori <naked@iki.fi>
In cases where this is failing it's ok to have the test take extra time,
but we don't want it to fail because of some race/performance
differences between systems.
So make the timeout go to 30s and double the sleep time in between
checks so it's not pounding the daemon quite so fast.
Originally I couldn't make this test fail (pre-change), but changed
graphdrivers and saw a failure pretty quickly.
This change seems to smooth that out.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Currently check-config.sh just says 'enabled' or 'missing'. When I used a fresh kernel and made check-config.sh happy, I still couldn't start a container. It took me days of debugging the kernel and Docker to finally find that it was because I had enabled some CONFIGs as modules and never loaded those modules.
So I think it's necessary to have check-config.sh tell users which configs are enabled as modules.
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
When the daemon can't download the image from a `docker import`, the error message was lost due to 'err' being redefined within a block by mistake. This removes the ":" from "... err := ", which fixes it.
Signed-off-by: Doug Davis <dug@us.ibm.com>
Fix for #13175.
This change allows user-input timestamps (e.g. to `docker events --since/--until` or `docker logs --since`) to be parsed using the standard RFC3339Nano layout in Go instead of the layout that parses all timestamps into fixed-length strings (currently buggy). User inputs need not comply with the internal format (`RFC3339NanoFixed`) anyway.
Added test case for `events --since/--until` with all possible
timestamp input formats.
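For illustration, a hedged sketch of accepting both RFC3339Nano timestamps and plain unix seconds; this is not the exact helper used by the CLI:
```
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseTimestamp accepts either an RFC3339Nano string or unix seconds.
func parseTimestamp(value string) (time.Time, error) {
	if t, err := time.Parse(time.RFC3339Nano, value); err == nil {
		return t, nil
	}
	if secs, err := strconv.ParseInt(value, 10, 64); err == nil {
		return time.Unix(secs, 0), nil
	}
	return time.Time{}, fmt.Errorf("invalid timestamp %q", value)
}

func main() {
	for _, in := range []string{"2015-05-28T10:00:00.000000000Z", "1432807200"} {
		t, err := parseTimestamp(in)
		fmt.Println(in, "->", t, err)
	}
}
```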
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Prior to this patch, the responses of
- GET /images/json
- GET /containers/json
- GET /images/(name)/history
display the Created time as a raw UNIX timestamp, which isn't very readable.
These should be more readable, as the CLI command `docker inspect` shows.
Because an older client may be used with a newer daemon, we need the version check for now.
Signed-off-by: Hu Keping <hukeping@huawei.com>
TestLogsSince used to rely on time synchronization and precision of
the timestamps. This change provides a less flaky alternative.
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
- noplog driver pkg for '--log-driver=none' (null object pattern)
- centralized factory for log drivers (instead of case/switch)
- logging drivers registers themselves to factory upon import
(easy plug/unplug of drivers in daemon/logdrivers.go)
- daemon now doesn't start with an invalid log driver
- Name() method of loggers is actually now their cli names (made it useful)
- generalized Read() logic, made it return unsupported for drivers other than json-file (preserves existing behavior)
Spotted some duplicated code around processing of the legacy json-file format; didn't touch that, so it's refactored in both places.
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Previously the cache was only updated once on startup, because the graph code only checks for filesystems on startup. However, this breaks the API as it was supposed to work, and thus the unit tests.
Fixes #13142
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
Add handler for SIGUSR1 based on feedback regarding when to dump
goroutine stacks. This will also dump goroutine stack traces on SIGQUIT
followed by a hard-exit from the daemon.
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
Added a --since argument to the `docker logs` command. It accepts unix timestamps and shows only logs created after the specified date. The default value is 0, and passing the default value or not specifying the value in the request causes the parameter to be ignored (the behavior prior to this change).
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Previous fix used %q which incorrectly go-escaped things. For example:
```
RUN echoo A \& B C
```
would result in the user seeing:
```
INFO[0000] The command '/bin/sh -c echoo A \\& B\tC' returned a non-zero code: 127
```
Note the double-\ and the \t instead of a tab character
The testcase had to double-escape things due to logrus getting in the way, but I'm going to fix that in another PR because it's a change to the UX.
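A small standalone illustration (not the builder's code) of why %q was dropped: it go-escapes backslashes and tabs, while plain quoting shows the command as typed:
```
package main

import "fmt"

func main() {
	cmd := "/bin/sh -c echoo A \\& B\tC" // one backslash and a real tab

	fmt.Printf("%q\n", cmd)   // "/bin/sh -c echoo A \\& B\tC"  <- go-escaped
	fmt.Printf("'%s'\n", cmd) // '/bin/sh -c echoo A \& B	C'    <- as typed
}
```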
Signed-off-by: Doug Davis <dug@us.ibm.com>
If firewalld is not installed (or I suppose not running), firewalld was
producing an error in the daemon init logs, even though firewalld is not
required for iptables stuff to function.
The firewalld library code was also logging directly to logrus instead
of returning errors.
Moved logging code higher up in the stack and changed firewalld code to
return errors where appropriate.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
I believe this was failing because 'nc' wouldn't show the data it received sometimes. So instead of looking for that data we now look for the output of the echo that comes after the nc command successfully runs.
Signed-off-by: Doug Davis <dug@us.ibm.com>
Generation is based on CAP_LAST_CAP; I hardcoded capability.CAP_BLOCK_SUSPEND as the last capability for systems which have no /proc/sys/kernel/cap_last_cap.
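For illustration, a hedged, Linux-only sketch of that fallback; the hardcoded value 36 corresponds to CAP_BLOCK_SUSPEND and is an assumption of this example:
```
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

const capBlockSuspend = 36 // fallback for kernels without cap_last_cap

// lastCap returns the highest capability number known to the kernel.
func lastCap() int {
	data, err := os.ReadFile("/proc/sys/kernel/cap_last_cap")
	if err != nil {
		return capBlockSuspend
	}
	if n, err := strconv.Atoi(strings.TrimSpace(string(data))); err == nil {
		return n
	}
	return capBlockSuspend
}

func main() {
	fmt.Println("highest capability number:", lastCap())
}
```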
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
The docker graph calls driver.Exists() on initialisation for each filesystem in the graph. This results in a lot of `zfs get all` commands. To reduce this, retrieve all descendant filesystems at startup and cache them for later checks.
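A hedged sketch of the caching idea (illustrative names, not the zfs driver's actual types):
```
package main

import (
	"fmt"
	"sync"
)

type zfsDriver struct {
	mu          sync.Mutex
	filesystems map[string]bool // populated once at startup from one `zfs list` call
}

func newZfsDriver(descendants []string) *zfsDriver {
	d := &zfsDriver{filesystems: make(map[string]bool, len(descendants))}
	for _, fs := range descendants {
		d.filesystems[fs] = true
	}
	return d
}

// Exists answers from the cache instead of shelling out per filesystem.
func (d *zfsDriver) Exists(id string) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	return d.filesystems[id]
}

func main() {
	d := newZfsDriver([]string{"zpool/docker/abc", "zpool/docker/def"})
	fmt.Println(d.Exists("zpool/docker/abc"), d.Exists("zpool/docker/xyz"))
}
```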
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
Instead of letting zfs automatically mount datasets, mount them on demand using mount(2).
This speeds up this graph driver in 2 ways:
- fewer zfs processes are needed to start a container
- /proc/mounts gets smaller, so zfs userspace tools have less to read (which can be a significant amount of data as the number of layers grows)
This way it can also be ensured that the correct mountpoint is always used.
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
Previously, we've taken advantage of the fact that libcontainer's `update-vendor.sh` is the same syntax as Docker's `vendor.sh` with some shell magic. This changes that to copy libcontainer's dependencies into this file explicitly so that we can scale to more projects with varying methods of vendoring (assuming they don't use import re-writing, which screws up everyone).
We'll need to stay diligent in making sure this list matches what's in libcontainer's `update-vendor.sh` (minus the not-required codegangsta/cli dep), but that's a fair trade-off for being able to scale our dependency model better (and track new discrete dependencies more directly).
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
Before this PR the list of commands that were printed for the help
text was statically put into the flags.Usage() function via main.init(). This
made it impossible to modify later on when we add new commands
via plugins.
This PR moves the list of commands (name & description) into a structure
that is sorted and printed dynamically by the Usage func. This will allow
the code to add to the list of commands after the init() func is done
but before the help text is printed for the user.
This just moves code around - it should have no UX impact at all.
Signed-off-by: Doug Davis <dug@us.ibm.com>
* daemon creation wasn't parallel to request buffering
* it was possible that an empty volume would be created at /var/run/docker.sock by some container
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Add tests for mounting into /proc and /sys
These two locations should be prohibited as volume mount destinations.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Also, `curl` is smart enough to see that when the consumer of the pipe is going slow it should slow down the transfer, so this gives a reasonable indication of extraction progress too.
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
This makes the "Buffering to disk" part of `docker push` 70% faster in
my use-case (having already applied #12833).
fsync'ing here serves no valuable purpose: if the drive's operation is
interrupted, so is the program's, and this archive has no value other
than the immediate and transient one.
Signed-off-by: Burke Libbey <burke.libbey@shopify.com>
The lxc code here is doing the exact same thing on calling
execdriver.Terminate, so let's just use that.
Also removes some dead comments, originally introduced in 50144aeb42 but no longer relevant since we have restart policies.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This fixes the part of #12996 that I forgot. 👼
This also fixes a minor path issue (there's no `libexec` in Debian), and fixes a minor bug with the `debVersion` parsing.
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
To account for "/" not working in filenames, we replace it with "_" for our temporary files (that exist only to emulate Bash 4's associative arrays in Bash 3).
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
`lxc-stop` does not support sending arbitrary signals.
By default, `lxc-stop -n <id>` would send `SIGPWR`.
The lxc driver was always sending `lxc-stop -n <id> -k`, which always
sends `SIGKILL`. In this case `lxc-start` returns an exit code of `0`,
regardless of what the container actually exited with.
Because of this we must send signals directly to the process when we
can.
Also need to set quiet mode on `lxc-start`, otherwise it reports an error on `stderr` when the container exits cleanly (i.e., we didn't SIGKILL it); this error is picked up in the container logs... and isn't really an error.
Also cleaned up some potential races for the waitblocked test.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
When RUN returns with a non-zero return code it prints the command
that was executed as a Go []string:
```
INFO[0000] The command &{[/bin/sh -c noop a1 a2]} returned a non-zero code: 127
```
instead it should look like this:
```
INFO[0000] The command "/bin/sh -c noop a1 a2" returned a non-zero code: 127
```
Signed-off-by: Doug Davis <dug@us.ibm.com>
This fixes an issue where the build output for the "Steps" would look like:
```
Step 1: RUN echo hi echo hi
```
instead of
```
Step 1: RUN echo hi
```
Also, I noticed that there were no checks to make sure invalid Dockerfile
cmd flags were caught on cmds that didn't use cmd flags at all. They would
have been caught on the cmds that had flags, but cmds that didn't bother
to add new code for flags would have just ignored them. So, I added
checks to each cmd to flag it.
Added testcases for issues.
Signed-off-by: Doug Davis <dug@us.ibm.com>
This change adds a new docker-in-docker dynamic binary make target which
builds a centos container for creating the dynamically linked binary.
To use it, you first must create the static binary and then call the
dind-dynbinary target. You can call it like:
$ hack/make.sh binary dind-dynbinary rpm
This would then package the dynamic binary into the rpm after having
created it in the centos build container. Unfortunately with this approach
you can't create the rpms and the debs with the same command. They have to
be created separately otherwise the wrong version (static vs. dynamic) gets
packaged.
Various RPM fixes including:
- Adding missing RPM dependencies.
- Add sysconfig configuration files to the RPM.
- Add an epoch to silence the fpm warning.
- Remove unnecessary empty package.
Signed-off-by: Patrick Devine <patrick.devine@docker.com>
Signed-off-by: Chad Metcalf <chad@docker.com>
The `--userland-proxy` daemon flag makes it possible to rely on hairpin
NAT and additional iptables routes instead of userland proxy for port
publishing and inter-container communication.
Usage of the userland proxy remains the default as hairpin NAT is
unsupported by older kernels.
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
To help avoid version mismatches between libcontainer and Docker, this updates libcontainer to be the source of truth for which version of logrus the project is using. This should help avoid potential incompatibilities in the future, too. 👍
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
Adds a `stream` query param to the stats API which allows API users to
only collect one stats entry and disconnect instead of keeping the
connection alive to stream more stats.
Also adds a `--no-stream` flag to `docker stats` which does the same
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: Daniel Zhang <jmzwcn@gmail.com>
Remove empty line after client.CmdInspect docstring
fix #12706
Signed-off-by: Daniel Zhang <jmzwcn@gmail.com>
Remove empty line after client.CmdInspect docstring
fix #12706
Signed-off-by: Daniel Zhang <jmzwcn@gmail.com>
See https://github.com/docker-library/docker/issues/2 for some of the context around this; essentially, we're looking to create an official `docker` image that includes the Docker CLI (and the dependencies necessary for easy Docker-in-Docker), but it makes sense to use the `docker` namespace for that image. The `docker-dev` official image already consists of builds of Docker's own `Dockerfile` (but specifically of releases), so this seemed like a good fit. 👍
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
Turns out that `-f` on a file that's in `.dockerignore` actually does work. No idea why it wasn't when I was doing this before, but oh well! 🤘
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
With this, `docker pull deb<tab>` will show all `debian:*` tags, as before, but `docker pull -a deb<tab>` will complete directly to just `debian`. 👍
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
Right now devicemapper mounts the thin device using online discards by default and passes the mount option "discard". Generally people discourage usage of online discards as they can be a drain on performance. Instead it is recommended to use fstrim once in a while to reclaim the space.
In the case of containers, we recommend keeping data volumes separate. So there might not be a lot of rm/unlink operations going on, and there might not be a lot of space being freed by containers. So it might not matter much if we don't reclaim that free space in the pool.
Users can still pass the mount option explicitly using dm.mountopt=discard to enable discards if they would like to.
So this is more about setting up containers by default for better performance instead of better space efficiency in the pool, and users can change the behavior if they don't like the default.
Reported-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Minor thing but docker cp --help was:
Copy files/folders from a PATH on the container to a HOSTDIR on the host
running the command. Use '-' to write the data
as a tar file to STDOUT.
This changes it to:
Copy files/folders from a PATH on the container to a HOSTDIR on the host
running the command. Use '-' to write the data as a tar file to STDOUT.
The \n made the output look funky.
Signed-off-by: Doug Davis <dug@us.ibm.com>
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Add tests for mounting into /proc and /sys
These two locations should be prohibited as volume mount destinations.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
This patch modifies the journald log driver to store the container ID in
a field named CONTAINER_ID, rather than (ab)using the MESSAGE_ID field.
Additionally, this adds the CONTAINER_ID_FULL field containing the
complete container ID and CONTAINER_NAME, containing the container name.
When using the journald log driver, this permits you to see log messages
from a particular container like this:
# journalctl CONTAINER_ID=a9238443e193
Example output from "journalctl -o verbose" includes the following:
CONTAINER_ID=27aae7361e67
CONTAINER_ID_FULL=27aae7361e67e2b4d3864280acd2b80e78daf8ec73786d8b68f3afeeaabbd4c4
CONTAINER_NAME=web
Closes: #12864
Signed-off-by: Lars Kellogg-Stedman <lars@redhat.com>
Prioritize the ports with static mappings before dynamic mappings. This removes the port conflicts when we allocate a static port in the reserved range together with dynamic ones.
When static ports are allocated first, Docker will skip those when determining free ports for dynamic ones.
Signed-off-by: Daniel, Dao Quang Minh <dqminh89@gmail.com>
We encountered a situation where concurrent invocations of the docker daemon on a machine with an older version of iptables led to nondeterministic errors related to simultaneous invocations of iptables.
While this is best resolved by upgrading iptables itself, the particular situation would have been avoided if the docker daemon simply took care not to concurrently invoke iptables. Of course, external processes could also cause iptables to fail in this way, but invoking docker in parallel seems like a pretty common case.
Signed-off-by: Aaron Davidson <aaron@databricks.com>
The `-N` option is not compatible with the `-O` option of wget (see the man page).
The example command now matches the example in the script on http://get.docker.com/.
Signed-off-by: Nik Nyby <nikolas@gnu.org>
This adds support for Dockerfile commands to have options - e.g:
COPY --user=john foo /tmp/
COPY --ignore-mtime foo /tmp/
Supports both booleans and strings.
Signed-off-by: Doug Davis <dug@us.ibm.com>
Before, inspect cont1 cont2 shows:
[{
xxx
}
,{
xxx
}
]
After, it shows:
[
{
xxx
}
,{
xxx
}
]
Because `func (*Encoder) Encode` is always followed by a newline character, it's difficult to put '}' and ']' on the same line. To keep symmetry, the above is our choice.
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
The existing page is focused on listing a set of requirements for
proposing a new repository. This information has become outdated and is
duplicated in the `docker-library/official-images` and
`docker-library/docs` GitHub repositories. This PR rewrites the
Official Repositories page to describe what they actually are, and
defers to GitHub/IRC for the subset of users that are interested in
contributing. I also removed the requirement to contact
partners@docker.com and made it optional to reduce the barrier to entry.
Signed-off-by: Peter Salvatore <peter@psftw.com>
Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
docs: crop and optimize images
Cropped and optimized images.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
Testing and updating the Windows material
Signed-off-by: Mary Anthony <mary@docker.com>
checkpoint
Signed-off-by: Mary Anthony <mary@docker.com>
Fitting the Windows specific material into the contributor guide
Signed-off-by: Mary Anthony <mary@docker.com>
Entering James comments
Signed-off-by: Mary Anthony <mary@docker.com>
When a container has errors on removal, it gets flagged as dead.
If you `docker rm -f` a dead container the container is dereffed from
the daemon and doesn't show up on `docker ps` anymore... except that the
container JSON file may still be lingering around and becomes undead
when you restart the daemon.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
In https://docs.docker.com/articles/ambassador_pattern_linking/ the svendowideit/ambassador image is built from docker-ut using this script and uses socat, but socat complains as follows:
socat: error while loading shared libraries: libreadline.so.5: cannot open shared object file: No such file or directory
socat: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory
socat: error while loading shared libraries: libcrypto.so.1.0.0: cannot open shared object file: No such file or directory
socat: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory
The /usr/lib/x86_64-linux-gnu/lib{crypto,ssl}.so* libs are symlinks, so remove the -P option from cp and add libreadline.so and libtinfo.so.
Signed-off-by: Jinsoo Park <cellpjs@gmail.com>
update libssl.so path
Signed-off-by: Jinsoo Park <cellpjs@gmail.com>
Remove mkimage-unittest.sh
Signed-off-by: Jinsoo Park <cellpjs@gmail.com>
ParseRestartPolicy is a useful function for third-party go programs to use so that they can parse the restart policy in the same way that Docker does.
Signed-off-by: Darren Shepherd <darren@rancher.com>
statsCollector.publishers must be protected to prevent
modifications during the iteration in run().
Being locked for a long time is bad, so pairs of containers &
publishers (pointers) are copied to release the lock fast.
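For illustration, a hedged sketch of that copy-under-lock pattern (simplified types, not the daemon's code):
```
package main

import (
	"fmt"
	"sync"
)

type container struct{ id string }
type publisher struct{ ch chan string }

type statsCollector struct {
	m          sync.Mutex
	publishers map[*container]*publisher
}

func (s *statsCollector) run() {
	type pair struct {
		ct  *container
		pub *publisher
	}
	// Snapshot the pairs quickly while locked so concurrent add/remove is safe.
	s.m.Lock()
	pairs := make([]pair, 0, len(s.publishers))
	for ct, pub := range s.publishers {
		pairs = append(pairs, pair{ct, pub})
	}
	s.m.Unlock()

	// The slow part (collecting and publishing stats) runs without the lock.
	for _, p := range pairs {
		p.pub.ch <- "stats for " + p.ct.id
	}
}

func main() {
	c := &container{id: "abc123"}
	pub := &publisher{ch: make(chan string, 1)}
	s := &statsCollector{publishers: map[*container]*publisher{c: pub}}
	s.run()
	fmt.Println(<-pub.ch)
}
```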
Signed-off-by: Anton Tiurin <noxiouz@yandex.ru>
This only happens with the old git http dumb protocol, but that's what we use in our integration tests.
We check the Content-Type header advertised in http requests to make sure the http transport is the git smart transport:
See this commit as a reference:
4656bf47fc
Signed-off-by: David Calavera <david.calavera@gmail.com>
Add tests on:
- changes.go
- archive.go
- wrap.go
Should fix #11603 as the coverage is now 81.2% on the ``pkg/archive``
package. There is still room for improvement though :).
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
Several parts of the codebase didn't use the correct path sanitisation
wrappers. Now that the wrappers have been exposed, use those.
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com> (github: cyphar)
Due to the importance of path safety, the internal sanitisation wrappers
for volumes and containers should be exposed so other parts of Docker
can benefit from proper path sanitisation.
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com> (github: cyphar)
If the memory cgroup is mounted, memory limit is always supported; no need to check whether these files exist.
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
This also moves `exec -i` test to _unix_test.go because it seems to need a
pty to reliably reproduce the behavior.
Signed-off-by: Daniel, Dao Quang Minh <dqminh89@gmail.com>
Since the docker test suite is now using gocheck, ``defer deleteContainer(…)`` is not needed anymore.
Fixes #12705
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
From the Bash manual's `set -e` description:
(https://www.gnu.org/software/bash/manual/bashref.html#index-set)
> Exit immediately if a pipeline (see Pipelines), which may consist of a
> single simple command (see Simple Commands), a list (see Lists), or a
> compound command (see Compound Commands) returns a non-zero status.
> The shell does not exit if the command that fails is part of the
> command list immediately following a while or until keyword, part of
> the test in an if statement, part of any command executed in a && or
> || list except the command following the final && or ||, any command
> in a pipeline but the last, or if the command’s return status is being
> inverted with !. If a compound command other than a subshell returns a
> non-zero status because a command failed while -e was being ignored,
> the shell does not exit.
Additionally, further down:
> If a compound command or shell function executes in a context where -e
> is being ignored, none of the commands executed within the compound
> command or function body will be affected by the -e setting, even if
> -e is set and a command returns a failure status. If a compound
> command or shell function sets -e while executing in a context where
> -e is ignored, that setting will not have any effect until the
> compound command or the command containing the function call
> completes.
Thus, the only way to have our `.integration-daemon-stop` script
actually run appropriately to clean up our daemon on test/script failure
is to use `trap ... EXIT`, which we traditionally avoid because it does
not have any stacking capabilities, but in this case is a reasonable
compromise because it's going to be the only script using it (for now,
at least; we can evaluate more complex solutions in the future if they
actually become necessary).
The alternatives were much less reasonable. One is to have the entire
complex chains in any script wanting to use `.integration-daemon-start`
/ `.integration-daemon-stop` be chained together with `&&` in an `if`
block, which is untenable. The other I could think of was taking the
body of these scripts out into separate scripts, essentially meaning
we'd need two files for each of these, which further complicates the
maintenance.
Add to that the fact that our `trap ... EXIT` is scoped to the enclosing
subshell (`( ... )`) and we're in even more reasonable territory with
this pattern.
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
We need this so the client can get the error from the stream and not from the status code, which is already 200 because a write to the ResponseWriter has already occurred.
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
When docker pulls a non-existent repo, the daemon will report "image xxx not found" with an error code 500, which should be 404.
This commit adds a 404 "not found" mapping and refactors the httpError function.
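A hedged sketch of such a mapping; the real httpError matches more cases than this:
```
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// httpStatusFor maps daemon error text to an HTTP status code.
func httpStatusFor(err error) int {
	if err == nil {
		return http.StatusOK
	}
	if strings.Contains(strings.ToLower(err.Error()), "not found") {
		return http.StatusNotFound
	}
	return http.StatusInternalServerError
}

func main() {
	fmt.Println(httpStatusFor(fmt.Errorf("image busybox:none not found"))) // 404
	fmt.Println(httpStatusFor(fmt.Errorf("something else went wrong")))    // 500
}
```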
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
Currently `docker inspect -f` uses json.Unmarshal() to unmarshal into an interface, which stores all JSON numbers as float64, so `docker inspect 4f0d73b75a0d | grep Memory` and `docker inspect -f {{.HostConfig.Memory}} 4f0d73b75a0d` will get different values.
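A small standalone demonstration of the float64 issue, plus one possible workaround with json.Decoder.UseNumber; this is not the CLI's code:
```
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	data := `{"HostConfig": {"Memory": 67108864}}`

	var v interface{}
	if err := json.Unmarshal([]byte(data), &v); err != nil {
		panic(err)
	}
	mem := v.(map[string]interface{})["HostConfig"].(map[string]interface{})["Memory"]
	fmt.Printf("%v (%T)\n", mem, mem) // 6.7108864e+07 (float64)

	dec := json.NewDecoder(strings.NewReader(data))
	dec.UseNumber() // keep numbers as json.Number instead of float64
	var v2 interface{}
	if err := dec.Decode(&v2); err != nil {
		panic(err)
	}
	mem2 := v2.(map[string]interface{})["HostConfig"].(map[string]interface{})["Memory"]
	fmt.Printf("%v (%T)\n", mem2, mem2) // 67108864 (json.Number)
}
```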
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
- Mount struct now called volumeMount
- Merged volume creation for each volume type (volumes-from, binds, normal
volumes) so this only happens in one place
- Simplified container copy of volumes (for when `docker cp` is a
volume)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Parse the history output to locate the size fields and check whether they are in the correct format or not. Use the column name SIZE to mark the start and end indices of the size fields.
Fixes #12578
Signed-off-by: Ankush Agarwal <ankushagarwal11@gmail.com>
https://www.kali.org/ is a Debian derivative. This script completes successfully using the Debian install path.
Signed-off-by: Andrew Martin <sublimino@gmail.com>
If device is being reactivated before it could go away and deferred
deactivation is scheduled on it, cancel it.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Deferred remove functionality was added to the library later, so old versions of the library did not report the deferred_remove field.
Create a new function which also gets the deferred_remove field; it will be called only on newer versions of the library.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
If a device has been scheduled for deferred deactivation and container
is started again and we need to activate device again, we need to cancel
the deferred deactivation which is already scheduled on the device.
Create a method for the same.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
This will help with debugging, as one could just do "docker info" and figure out whether deferred removal is enabled or not.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Provide a new command line knob dm.deferred_device_removal which will enable
deferred device deactivation if driver and library support it.
This patch also checks for library support and driver version.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
A lot of the time, device mapper devices leak across mount namespaces which docker does not know about, and when docker tries to deactivate/delete the device, the operation fails as the device is open in some mount namespace.
Create a mechanism where one can defer the device deactivation/deletion
so that docker operation does not fail and device automatically goes
away when last reference to it is dropped.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
libdm started offering deferred remove functionality from version
1.02.89. As docker still builds against older libdm, define a tag
libdm_no_deferred_remove to determine whether we are compiling
against new libdm or older one and enable/disable deferred remove
functionality accordingly.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Whenever something in vendor/ changes, the go dependencies have to be downloaded again, which requires internet access and therefore is potentially slow. COPY and go install are much faster, while the git urls do not change this often.
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
Thanks to @dmcgowan for noticing.
Added a testcase to make sure Save() can create the dir and then
read from it.
Signed-off-by: Doug Davis <dug@us.ibm.com>
Though I am not clear on the intent of the sentence, if it means that existing data in the base image is copied into the new volume, then the additions I propose make that more clear than the present language.
Signed-off-by: Kent Johnson <kentoj@gmail.com>
This PR does the following:
- migrated ~/.dockercfg to ~/.docker/config.json. The data is migrated but the old file remains in case it's needed
- moves the auth json in that file into an "auth" property so we can add new
top-level properties w/o messing with the auth stuff
- adds support for an HttpHeaders property in ~/.docker/config.json
which adds these http headers to all msgs from the cli
In a follow-on PR I'll move the config file processing out from under "registry" since it's not specific to that any more. I didn't do it here
because I wanted the diff to be smaller so people can make sure I didn't
break/miss any auth code during my edits.
Signed-off-by: Doug Davis <dug@us.ibm.com>
This simplifies the code and leads to the next refactoring step, where the daemon will be incorporated into some structure which represents the API.
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
When firewalld (or iptables service) restarts/reloads,
all previously added docker firewall rules are flushed.
With firewalld we can react to its Reloaded() [1]
D-Bus signal and recreate the firewall rules.
Also when firewalld gets restarted (stopped & started)
we can catch the NameOwnerChanged signal [2].
To specify which signals we want to react to we use AddMatch [3].
Libvirt has been doing this for quite a long time now.
Docker changes firewall rules in basically 3 places.
1) daemon/networkdriver/portmapper/mapper.go - port mappings
Portmapper fortunately keeps a list of mapped ports, so we can easily recreate firewall rules on firewalld restart/reload.
New ReMapAll() function does that
2) daemon/networkdriver/bridge/driver.go
When setting a bridge, basic firewall rules are created.
This is done all at once during start; it's parametrized and nowhere tracked, so how can one know what to set and how to set it again when there's been a firewalld restart/reload?
The only solution that came to my mind is using closures [4], i.e. I keep a list of references to closures (anonymous functions together with a referencing environment), and when there's a firewalld restart/reload I re-call them in the same order.
3) links/links.go - linking containers
Link is added in Enable() and removed in Disable().
In Enable() we add a callback function, which creates the link,
that's OK so far.
It'd be ideal if we could remove the same function from the list in Disable(). Unfortunately that's not possible AFAICT, because we don't know the reference to that function at that moment, so we can only add a reference to a function
which removes the link. That means that after creating and
removing a link there are 2 functions in the list,
one adding and one removing the link and after
firewalld restart/reload both are called.
It works, but it's far from ideal.
[1] https://jpopelka.fedorapeople.org/firewalld/doc/firewalld.dbus.html#FirewallD1.Signals.Reloaded
[2] http://dbus.freedesktop.org/doc/dbus-specification.html#bus-messages-name-owner-changed
[3] http://dbus.freedesktop.org/doc/dbus-specification.html#message-bus-routing-match-rules
[4] https://en.wikipedia.org/wiki/Closure_%28computer_programming%29
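For illustration, a hedged sketch of the closure-list idea from point 2 (illustrative names, not the bridge driver's actual code):
```
package main

import "fmt"

var onReload []func()

// registerReloadCallback applies a setup step now and remembers it so the same
// rule can be recreated after a firewalld restart/reload flushes everything.
func registerReloadCallback(f func()) {
	onReload = append(onReload, f)
	f()
}

// reloaded would be triggered by the D-Bus Reloaded()/NameOwnerChanged signals.
func reloaded() {
	for _, f := range onReload {
		f()
	}
}

func main() {
	bridge, addr := "docker0", "172.17.0.0/16"
	registerReloadCallback(func() {
		fmt.Printf("setting up NAT for %s on %s\n", addr, bridge)
	})
	reloaded() // firewalld came back: the rules are replayed in order
}
```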
Signed-off-by: Jiri Popelka <jpopelka@redhat.com>
Firewalld [1] is a firewall managing daemon with D-Bus interface.
What sort of problem are we trying to solve with this ?
Firewalld internally also executes iptables/ip6tables to change firewall settings.
It might happen on systems where both docker and firewalld are running
concurrently, that both of them try to call iptables at the same time.
The result is that the second one fails because the first one is holding a xtables lock.
One workaround is to use --wait/-w option in both
docker & firewalld when calling iptables.
It's already been done in both upstreams:
b315c380f4b3b451d6f8
But it'd still be better if docker used firewalld when it's running.
The other problem the firewalld support would solve is that an iptables/firewalld service restart flushes all firewall rules
previously added by docker.
See next patch for possible solution.
This patch utilizes firewalld's D-Bus interface.
If firewalld is running, we call direct.passthrough() [2] method instead
of executing iptables directly.
direct.passthrough() takes the same arguments as iptables tool itself
and passes them through to iptables tool.
It might be better to use other methods, like direct.addChain and
direct.addRule [3], so it'd be more integrated with firewalld, but
that'd make the patch much bigger.
If firewalld is not running, everything works as before.
[1] http://www.firewalld.org/
[2] https://jpopelka.fedorapeople.org/firewalld/doc/firewalld.dbus.html#FirewallD1.direct.Methods.passthrough
[3] https://jpopelka.fedorapeople.org/firewalld/doc/firewalld.dbus.html#FirewallD1.direct.Methods.addChain and
https://jpopelka.fedorapeople.org/firewalld/doc/firewalld.dbus.html#FirewallD1.direct.Methods.addRule
Signed-off-by: Jiri Popelka <jpopelka@redhat.com>
Two main things
- Create a real struct Info for all of the data with the proper types
- Add test for REST API get info
Signed-off-by: Hu Keping <hukeping@huawei.com>
The port tests in integration-cli test just the port-mapping as seen by the Docker daemon, but don't perform more in-depth testing by checking for the exposed port on the host.
This change helps to fill that gap.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Single value labels do not work in 1.6 and multi-label instructions only work when separated by non-EOL whitespace.
I also added an example snip from the inspect output with the labels that are included in this guide.
Signed-off-by: Jeff Nickoloff <jeff@allingeek.com>
- Compose team had forgotten some documentation
- Updated ENV for Distribution also
- Forgot one of the readability sections
Signed-off-by: Mary Anthony <mary@docker.com>
Changed method declaration. Fixed all calls to dockerCmd
method to reflect the change.
resolves #12355
Signed-off-by: bobby abbott <ttobbaybbob@gmail.com>
- sort inspect out
- update output fields
- format output
- add doc about go template
- other minor fix
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
Went through the man pages to update for the
v2 instance. Checked against the commands.
Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit b6d55ebcbc)
This patch changes two things
1. Set facility to LOG_DAEMON
2. Remove ": " from tag so that the tag + pid become a single column in
the log
Signed-off-by: Darren Shepherd <darren@rancher.com>
(cherry picked from commit 05641ccffc)
This patch changes two things
1. Set facility to LOG_DAEMON
2. Remove ": " from tag so that the tag + pid become a single column in
the log
Signed-off-by: Darren Shepherd <darren@rancher.com>
Without adding the user to the group you're going to hit nasty TLS errors. Figured I'd save the next guy the hassle.
Problem more accurately described here:
https://github.com/docker/docker/issues/5314
Signed-off-by: Jason Divock <jdivock@box.com>
Once the job has failed and is respawned, the status becomes `docker
respawn/post-start` after subsequent failures (as opposed to `docker
stop/post-start`), so the post-start script needs to take this into
account.
I could not find specific documentation on the job transitioning to the
`respawn/post-start` state, but this was observed on Ubuntu 14.04.2.
Signed-off-by: Lewis Marshall <lewis@lmars.net>
(cherry picked from commit 302e3834a0)
sockRequest now makes the status code available in the returned
values. This helps avoid string checking for non-HttpStatusOK(=200)
yet successful error messages.
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Check test correctness of untar by comparing the destination with the source. For part 2, it checks hashes of source and destination files or the target files of symbolic links.
This is a supplement to the #11601 fix.
Signed-off-by: Yestin Sun <sunyi0804@gmail.com>
Instead of seeding/polluting the global random instance, create a local `rand.Rand` instance which provides the same level of randomness.
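For illustration, a minimal sketch of the change:
```
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	// Before: rand.Seed(time.Now().UnixNano()) mutated the shared global source.
	// After: a private generator gives the same randomness with no global side effects.
	rng := rand.New(rand.NewSource(time.Now().UnixNano()))
	fmt.Println(rng.Intn(100), rng.Int63())
}
```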
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Before this, a storage driver would be defaulted to based on the
priority list, and only print a warning if there is state from other
drivers.
This meant a reordering of the priority list would "break" users in an upgrade of docker, such that their images in the prior driver's state were now invisible.
With this change, prior state is scanned, and if present that driver is
preferred.
As such, we can reorder the priority list, and after an upgrade,
existing installs with prior drivers can have a contiguous experience,
while fresh installs may default to a driver in the new priority list.
Ref: https://github.com/docker/docker/pull/11962#issuecomment-88274858
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Everything else was gone from this file except these utils which are
being used in other files and can't yet be removed.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This test is already being skipped, and is also fully tested by
`TestRunContainerWithRmFlagExitCodeNotEqualToZero`
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: Megan Kostick <mkostick@us.ibm.com>
Alphabetize the FSMagic list to make it more human-readable.
Signed-off-by: Megan Kostick <mkostick@us.ibm.com>
Check whether the swap limit capabilities are disabled only when memory swap is set to a value greater than 0.
Signed-off-by: David Calavera <david.calavera@gmail.com>
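In sketch form (the type and helper names below are hypothetical, not the daemon's), the check only fires when a swap limit was actually requested:

```go
package main

import "fmt"

type hostConfig struct {
	Memory     int64
	MemorySwap int64
}

// warnOnSwapLimit only emits the swap-limit warning when MemorySwap > 0,
// i.e. when the user actually asked for a swap limit.
func warnOnSwapLimit(cfg *hostConfig, kernelSupportsSwapLimit bool) []string {
	var warnings []string
	if cfg.MemorySwap > 0 && !kernelSupportsSwapLimit {
		warnings = append(warnings, "Your kernel does not support swap limit capabilities, limit discarded.")
		cfg.MemorySwap = -1
	}
	return warnings
}

func main() {
	cfg := &hostConfig{Memory: 64 << 20, MemorySwap: 0}
	fmt.Println(len(warnOnSwapLimit(cfg, false))) // 0: no swap limit requested, so no warning
}
```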
This is a symlink to the latest "bundle" that was assembled. For example, if `VERSION` is currently `1.5.0-dev`, then `bundles/latest` will be a symlink to `bundles/1.5.0-dev` after an attempted build.
One interesting property of this is that after a successful `binary` build, we can `./bundles/latest/binary/docker -v` and get back something like `Docker version 1.5.0-dev, build 3ff6723-dirty`.
Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
Testcase TestBuildResourceConstraintsAreUsed runs the build without
--no-cache, so if you run this test twice, it will fail the
second time.
TESTFLAGS='-v -run ^TestBuildResourceConstraintsAreUsed$' ./hack/make.sh binary test-integration-cli
[PASSED]
TESTFLAGS='-v -run ^TestBuildResourceConstraintsAreUsed$' ./hack/make.sh binary test-integration-cli
[FAIL]
Because we use cID to inspect the field and will get an empty cID
if the build hits the cache.
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
After finding our initial thinking on env. space versus arg list space
was wrong, we need to solve this by using a pipe between the caller and
child to marshal the (potentially very large) options array to the
archiver.
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
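The general shape of that approach, sketched with made-up type names rather than the real archive code: the parent re-executes itself and streams the JSON-encoded options over the child's stdin pipe, so neither the environment nor the argument list has to carry them.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// tarOptions stands in for the (potentially very large) options struct
// mentioned above; the real type and re-exec wiring differ.
type tarOptions struct {
	ExcludePatterns []string
	Compression     int
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "child" {
		// Child: read the options from the pipe (stdin) instead of the
		// environment or the argument list.
		var opts tarOptions
		if err := json.NewDecoder(os.Stdin).Decode(&opts); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("child got %d exclude patterns\n", len(opts.ExcludePatterns))
		return
	}

	// Parent: re-exec ourselves and stream the options over a pipe.
	cmd := exec.Command(os.Args[0], "child")
	stdin, err := cmd.StdinPipe()
	if err != nil {
		panic(err)
	}
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	opts := tarOptions{ExcludePatterns: []string{"a/*", "b/*"}}
	if err := json.NewEncoder(stdin).Encode(&opts); err != nil {
		panic(err)
	}
	stdin.Close()
	if err := cmd.Wait(); err != nil {
		panic(err)
	}
}
```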
Updates most of the instances of HTTP URLs in the engine's
comments. Does not account for any use in the code itself,
documentation, contrib, or project files.
Signed-off-by: Eric Windisch <eric@windisch.us>
Use standard unix suffixes and add a unit test for the display.
Also update the docs for the memory usage display change;
for example, GiB will become GB.
Signed-off-by: Sun Jianbo <wonderflow@zju.edu.cn>
If an image has been tagged to multiple repos and tags, 'docker
rmi -f IMAGE_ID' will just untag one random repo instead of
untagging all and deleting the image. This patch implements
that. This commit is composed of:
* untag all names and delete the image
* add a test for this feature
* modify commandline/cli.md to explain this
Signed-off-by: Deng Guangxing <dengguangxing@huawei.com>
This provides an override for forcing the daemon to still attempt
running the devicemapper driver even when udev sync is not supported.
Intended to be a very clear impairment for those choosing to use it. If
udev sync is false, there will still be an error in the daemon logs,
even when the override is in place. The docs have an explicit WARNING.
Including link to the docs for users that encounter this daemon error
during an upgrade.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
This ensures that the install script will not begin executing
until after it has been fully downloaded, should it be used in a
'curl | bash' workflow.
Signed-off-by: Eric Windisch <eric@windisch.us>
(cherry picked from commit fa961ce046)
As reported in #11294, the Docker daemon will execute containers it
shouldn't run in the event that a requested tag is present in an image's
ID. This leads to the wrong image being started up silently.
This change reduces the risk of such a collision by using the short ID
iff the actual revOrTag looks like a short ID (not that it necessarily
is).
Signed-off-by: Richard Burnison <rburnison@ebay.com>
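A hedged sketch of the idea (the helper below is illustrative, not the daemon's actual function): only fall back to short-ID matching when the argument plausibly is a truncated hex ID.

```go
package main

import (
	"fmt"
	"regexp"
)

// shortIDPattern and looksLikeShortID are hypothetical: only treat the
// argument as a truncated image ID if it is plausibly one (hex characters,
// shorter than a full 64-character ID).
var shortIDPattern = regexp.MustCompile(`^[a-f0-9]{4,64}$`)

func looksLikeShortID(refOrID string) bool {
	return len(refOrID) < 64 && shortIDPattern.MatchString(refOrID)
}

func main() {
	fmt.Println(looksLikeShortID("511136ea3c5a")) // true: plausibly a short ID
	fmt.Println(looksLikeShortID("ubuntu"))       // false: treat as a name/tag
}
```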
This ensures that the install script will not begin executing
until after it has been fully downloaded, should it be used in a
'curl | bash' workflow.
Signed-off-by: Eric Windisch <eric@windisch.us>
This patch includes the following fixes:
- fix image name error in docker ps
- fix docker events test failure: use the exact image name for the filter
- fix docker build CI test failure due to the "docker events" change
This is because of a change in the daemon's log behavior: we now record
the exact image name as you typed it, so docker run -d busybox sh
and docker run -d busybox:latest are not the same in the log.
This affects docker events, so the related CI is changed.
Signed-off-by: Liu Hua <sdu.liu@huawei.com>
This also seemed to be checking the ordering of the events, which
doesn't seem like something we should be interested in for this
particular test.
Added a check to make sure the filtered events have the expected IDs.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
`TestInfoEnsureSucceeds` is supposed to check existence of all
expected fields that are going to be shown in `docker info` command.
If this list was complete, it could have helped catch the missing
`"Logging Driver:"` regression.
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
This flag is passed to the daemon CLI. In my opinion, "Container's
logging driver" is not accurate and refers to 'one container'.
Also the `syslog` driver was missing from the list. Having the list
of all logging drivers won't scale here (should be <80 chars per line)
and we have `rotation` driver coming up in the pipeline as well (gh11485).
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
When we tag an image with several names and we run one of them,
the "create" job will log this event with
+job log(create, containerID, Imagename).
And the "Imagename" is always the first one (sorted). It is the
same for the "start/stop/rm" jobs. So use the correct name instead.
This PR refer to #10479
Signed-off-by: Liu Hua <sdu.liu@huawei.com>
Prevent the test execution from polluting the deterministic runtime
environment by seeding the global rand instance.
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Check test correctness of untar by comparing destination with
source. For part one, it only compares the directories.
This is a supplement to the #11601 fix.
Signed-off-by: Yestin Sun <yestin.sun@polyera.com>
Corrected integer size passed to Windows
Corrected DisableEcho / SetRawTerminal to not modify state
Cleaned up and made routines more idiomatic
Corrected raw mode state bits
Removed duplicate IsTerminal
Corrected off-by-one error
Minor idiomatic change
Signed-off-by: Brendan Dixon <brendand@microsoft.com>
Updated Windows installation documentation with newest
screencasts and Chocolatey instructions to install windows
client CLI.
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
registry/SearchResults was missing the "is_automated" field.
I added it back in.
Pulled this 'table' removal out from the others because it fixed
a bug too.
Signed-off-by: Doug Davis <dug@us.ibm.com>
* when no containers are returned, go test would abort with:
panic: runtime error: index out of range
Signed-off-by: Todd Whiteman <todd.whiteman@joyent.com>
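The fix pattern, in a self-contained sketch with a stand-in container type:

```go
package main

import "fmt"

type container struct{ ID string }

// firstID guards against an empty result instead of indexing blindly,
// which would panic with "index out of range".
func firstID(containers []container) (string, bool) {
	if len(containers) == 0 {
		return "", false
	}
	return containers[0].ID, true
}

func main() {
	if id, ok := firstID(nil); ok {
		fmt.Println(id)
	} else {
		fmt.Println("no containers returned")
	}
}
```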
Right now we try device removal at an interval of 10ms and keep trying
until either the device is removed or 10 seconds are over. That means if
the device is busy, we will try 1000 times in those 10 seconds.
That sounds like too high a frequency of device removal retries; the logs
fill up easily. I think it is a good idea to slow down a bit and retry at
an interval of 100ms instead of 10ms.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
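In outline (the remove callback is a stand-in for the real devicemapper call), the retry policy becomes:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// removeWithRetry sketches the policy described above: retry every 100ms
// for up to 10 seconds instead of every 10ms.
func removeWithRetry(remove func() error) error {
	deadline := time.Now().Add(10 * time.Second)
	for {
		err := remove()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up after 10s: %w", err)
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	attempts := 0
	err := removeWithRetry(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("device busy")
		}
		return nil
	})
	fmt.Println(attempts, err) // 3 <nil>
}
```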
During device removal, we first wait for the device to close() in a tight
loop for 10 seconds. I am not sure why we need it. First of all, we come
here once the umount() is successful, so the device should be free. If for
some reason the device is temporarily busy, then the removeDevice() logic
retries device removal in a loop for 10 seconds and that should cover it.
I can't see why one more 10 second loop is required before attempting
device removal.
One loop should be able to cover all the temporary device busy conditions,
and if the condition is not temporary then a 10 second loop is not going
to help anyway.
So instead of two loops of 10 seconds each, I am converting it to a single
loop of 20 seconds. Maybe a 10 second loop is good enough, but for now I am
keeping it at 20 seconds to avoid any regressions.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Currently in the device removal path (device deactivation), we wait
for 10 seconds in waitRemove() for the device to actually go away.
In the current code this is not required. If the dm removal task has
completed and one has waited on the udev cookie, then the device is gone
and there is no need for another loop to wait for device removal.
This patch removes the waitRemove() call, which waits for 10 seconds
after device removal. This seems unnecessary.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
UdevWait() is deferred and takes a uint cookie as an argument. As arguments
to deferred functions are evaluated when the defer statement executes, any
later update to the cookie by libdm is not taken into account when
UdevWait() is called. Hence use a pointer to uint as the argument to the UdevWait()
function.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
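The Go semantics this relies on, shown with stand-in functions rather than the libdm bindings: a deferred call's arguments are evaluated at the defer statement, so only the pointer version sees the later update.

```go
package main

import "fmt"

func udevWaitByValue(cookie uint) {
	fmt.Println("waiting on cookie (by value):", cookie)
}

func udevWaitByPointer(cookie *uint) {
	fmt.Println("waiting on cookie (by pointer):", *cookie)
}

func main() {
	var cookie uint
	// Arguments to a deferred call are evaluated when the defer statement
	// runs, so this captures cookie == 0 even if it is updated later.
	defer udevWaitByValue(cookie)
	// Passing a pointer delays the dereference until the deferred call
	// actually executes, so the updated value (42) is seen.
	defer udevWaitByPointer(&cookie)

	cookie = 42 // e.g. set by libdm after the task is issued
}
```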
The devmapper graph driver retries device removal 1000 times in case of
failure, and this fills up the console with 1000 messages (when the daemon
is running in debug mode). So remove these debug messages.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
There are issues with libdm logging. Right now if docker daemon is run
in debug mode, logging by libdm is too verbose. And if a device can't
be removed, thousands of messages fill the console and one can not see
what's going on.
This patch removes devicemapper.LogInitVerbose() call as that call will
only work if docker was not registering its own log handler with libdm.
For some reason docker registers one with libdm and libdm hands over
all the messages to docker (including debug ones). And now it is up to
devmapper backend to figure out which ones should go to console and
which ones should not.
So by default log only fatal messages from libdm. One can easily modify
the code to change it for debugging purposes.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Add a pair of backticks around Dockerfile commands
in docker-commit, the same as in docker-import.
Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
Some bullet lists didn't render as bullet-lists because
of a missing newline.
Also added missing "label" filter for `docker ps` and
slightly re-worded the header above the supported filters.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
I have seen a lot of people try to do this and reach out to me on how to mount
/dev/snd because it is returning "not a device node". The docs imply you can
_just_ mount /dev/snd and that is not the case. This fixes that. It also allows
for coolness if you want to mount say /dev/usb.
Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)
Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <jess@docker.com> (github: jfrazelle)
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
When working with Go channels you must not set one to nil, or else the
channel will block forever. Reading from a nil chan does not panic,
but it blocks. The correct way to do this is to create the channel and
then close it, so that the correct results are returned to the caller.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
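A small self-contained illustration of the difference being described:

```go
package main

import "fmt"

func main() {
	// Reading from a closed channel returns immediately with the zero
	// value, so callers get their (empty) results and move on.
	done := make(chan struct{})
	close(done)
	<-done
	fmt.Println("read from closed channel returned")

	// Reading from a nil channel blocks forever (no panic, just a hang),
	// which is why the code must not hand back a nil channel.
	var never chan struct{}
	select {
	case <-never:
		fmt.Println("unreachable")
	default:
		fmt.Println("a read from the nil channel would block forever")
	}
}
```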
Fixes: #11552
- scalable SVG
- uses colours from the rest of the page
- emphasizes that boot2docker is the ordinary Linux setup in a VM
- Add new graphic for architecture diagram
- Add windows boot2docker diagram
- Add windows diagram to documentation
- Remove old PNGs; replaced with SVGs
- Add redirects for new SVG versions of installation diagrams
Signed-off-by: Nick Irvine <nfirvine@nfirvine.com>
I can never get it to work for me when it's just 3 seconds.
With this change it generates the OOM message in around 17 seconds, but
I increased the timeout to 30 for people with slower machines.
Signed-off-by: Doug Davis <dug@us.ibm.com>
This makes `registry.Service` a first class type and does not use jobs
to interact with this type.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Syslog was a heading-2, but should be heading-3;
changed the headings to heading-4 to match the
"network settings" section.
Also changed "Log driver" to "logging driver" for JSON.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Close activationLock only if it's not already closed. This is needed
only because the integration tests use docker code directly and don't
care about global state.
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
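The usual pattern for that kind of guard (whether the real code looks exactly like this is not shown here) is a select with a default branch; note this sketch assumes a single closer, otherwise a mutex or sync.Once is needed:

```go
package main

import "fmt"

var activationLock = make(chan struct{})

// closeOnce closes the activation channel only if it has not been closed
// already; a second close() on the same channel would panic.
func closeOnce() {
	select {
	case <-activationLock:
		// already closed, nothing to do
	default:
		close(activationLock)
	}
}

func main() {
	closeOnce()
	closeOnce() // safe: the second call takes the "already closed" branch
	fmt.Println("activation channel closed exactly once")
}
```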
Update registry package to use the v2 registry api from distribution. Update interfaces to directly take in digests.
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
It's about time to let folks not hit 'vfs', when 'overlay' is supported
on their kernel. Especially now that v3.18.y is a long-term kernel.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Currently the progress reader doesn't close properly because the close size is not set.
Fixes #11849
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
The API requests the port from the daemon before init_networkdriver is
called. The problem is that initialization of the API now depends on
initialization of the daemon, and their initializations run in parallel.
The proper fix would be to do it sequentially. For now I don't want to
refactor it, because that could bring additional problems in 1.6.0.
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
Do not remove the container if any of its resources could not be cleaned
up. We don't want to leak resources.
Two new states have been created: RemovalInProgress and Dead. Once a
container is Dead, it can not be started/restarted. A Dead container is
one where we tried to remove it but removal failed. The user now needs to
figure out what went wrong, correct the situation and try the cleanup
again.
RemovalInProgress signifies that the container is already being removed.
Only one removal can be in progress.
Also, do not allow a container to start if it is already Dead or removal
is in progress.
Also extend the existing force option (-f) of docker rm to not return an
error and to remove the container from the user's view even if resource
cleanup failed. This allows a user to get back to the old behavior where
resources might leak but at least the user will be able to make progress.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Docker does not know about our named cpuacct,cpu,cpuset cgroup
hierarchy with multiple subsystems in it. So to use them with docker
in the integration-cli test TestRunWithCpuset inside a docker container,
we need to add symlinks to them in the hack/dind script.
Example:
old version of parser will do:
cat /proc/1/cgroup
11:cpu,cpuacct,name=my_cpu_cpuacct:/
...
and create and mount this hierarchy to directory
/cgroup/cpu,cpuacct,name=my_cpu_cpuacct/
so docker cannot find it because it has a strange name.
With the new parser the directory will be the same as on the host,
/cgroup/my_cpu_cpuacct
and will have symlinks for docker to find it:
/cgroup/cpu -> /cgroup/my_cpu_cpuacct
/cgroup/cpuacct -> /cgroup/my_cpu_cpuacct
In the other case, when there is no name:
cat /proc/1/cgroup
11:cpu,cpuacct:/
...
the mount will be the same for both parsers,
/cgroup/cpu,cpuacct
and the new one will also create symlinks:
/cgroup/cpu -> /cgroup/cpu,cpuacct
/cgroup/cpuacct -> /cgroup/cpu,cpuacct
Signed-off-by: Pavel Tikhomirov <ptikhomirov@parallels.com>
This has a few hacks in it but it ensures that the bridge driver does
not use global state in the mappers, at least as much as possible at this
point without further refactoring. Some of the exported fields are
hacks to handle the daemon port mapping, but this results in a much
cleaner approach and completely removes the global state from the mapper
and allocator.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Once the job has failed and is respawned, the status becomes `docker
respawn/post-start` after subsequent failures (as opposed to `docker
stop/post-start`), so the post-start script needs to take this into
account.
I could not find specific documentation on the job transitioning to the
`respawn/post-start` state, but this was observed on Ubuntu 14.04.2.
Signed-off-by: Lewis Marshall <lewis@lmars.net>
If job "acceptconnections" is called before "serveapi" the API Accept()
method will hang forever waiting for activation. This is due to the fact
that when "acceptconnections" ran the activation channel was nil.
Signed-off-by: Darren Shepherd <darren@rancher.com>
When buffering to a file, add support for compressing the tar contents. Since the digest should be computed while writing the buffer, include digest creation during buffering.
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
Installs and configures bash completion for Docker.
Note that bash completion still has to be initialized by a custom
.bashrc file.
Signed-off-by: Harald Albers <github@albersweb.de>
Generally, when using the Remote API to push images, there needs to be an
HTTP header X-Registry-Auth.
For compatibility, if there is no authConfig header, everything will be
okay as long as a proper JSON HTTP body is supplied.
But when both the X-Registry-Auth header and the body are missing, the
JSON decode function returns an EOF error, which is not very clear to the
user.
So I think we can make the response error nicer.
Signed-off-by: Hu Keping <hukeping@huawei.com>
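A sketch of turning the bare EOF into something actionable; the handler and type names are illustrative, not the API server's:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"strings"
)

type authConfig struct {
	Username string `json:"username"`
}

// decodeAuth converts the bare io.EOF from an empty request body into a
// message a user can act on.
func decodeAuth(body io.Reader) (*authConfig, error) {
	var cfg authConfig
	if err := json.NewDecoder(body).Decode(&cfg); err != nil {
		if errors.Is(err, io.EOF) {
			return nil, errors.New("authentication is required: provide an X-Registry-Auth header or a JSON body")
		}
		return nil, err
	}
	return &cfg, nil
}

func main() {
	_, err := decodeAuth(strings.NewReader("")) // no header and no body
	fmt.Println(err)
}
```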
Boot2Docker experience is updated now that we have a Docker
client on Windows. Instead of running `boot2docker ssh`, users
can also use boot2docker on Windows Command Prompt (`cmd.exe`)
and PowerShell.
Updated documentation and screenshots, added a few details,
reorganized sections by importance, fixed a few errors.
Remaining: the video link in the Demonstration section needs
to be updated once I shoot a new video.
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
The dockerize-image tool takes a virtual disk image file
and creates a Docker image based on it. You can
specify a base Docker image to make this tool create
an image that will contain only the filesystem diff
instead of the full filesystem.
See the tool's usage for details.
Signed-off-by: Maxim Kulkin <maxim.kulkin@gmail.com>
These images were just sitting around, referenced from
nowhere, and didn't seem useful.
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
The git model uses `upstream master` to refer to the branch on
the remote repository, and `upstream/master` to refer to the
local cache of the upstream branch.
I did not explain the difference in the docs (that seemed a bit
excessive), but I did clarify the instructions so that it refers
to the correct concept in each place.
Signed-off-by: Katrina Owen <katrina.owen@gmail.com>
This change makes `monitorTtySize` work correctly on Windows by polling
the win32 API to get the terminal size (because there's no SIGWINCH on
Windows) and sending it to the engine over the Remote API properly.
An average get-tty-size syscall takes around 30-40 ms on an average
Windows machine as far as I can tell, therefore we check in a `for` loop
every 250ms whether the size has changed.
I'm not sure if there's a better way to do it on Windows; if so,
somebody please send a link, because I could not find one.
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
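Roughly, the polling loop looks like this; getTTYSize and winsize are placeholders for the win32 console query, since the real calls aren't shown here:

```go
package main

import (
	"fmt"
	"time"
)

// winsize and getTTYSize are stand-ins: on Windows the real implementation
// queries the console via the win32 API, since there is no SIGWINCH.
type winsize struct{ width, height int }

func getTTYSize() winsize { return winsize{80, 24} }

func monitorTTYSize(resized chan<- winsize, stop <-chan struct{}) {
	prev := getTTYSize()
	ticker := time.NewTicker(250 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			if cur := getTTYSize(); cur != prev {
				prev = cur
				resized <- cur
			}
		case <-stop:
			return
		}
	}
}

func main() {
	resized := make(chan winsize, 1)
	stop := make(chan struct{})
	go monitorTTYSize(resized, stop)
	time.Sleep(600 * time.Millisecond) // size never changes in this sketch
	close(stop)
	fmt.Println("stopped polling")
}
```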
This change fixes a bug where stdout/stderr handles are not identified
correctly.
Previously we used to set the window size to fixed size to fit the default
tty size on the host (80x24). Now the attach/exec commands can correctly
get the terminal size from windows.
We still do not `monitorTtySize()` correctly on windows and update the tty
size on the host-side, in order to fix that we'll provide a
platform-specific `monitorTtySize` implementation in the future.
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
We now advise people to configure docker group and
add to sudo. Mac shouldn't use sudo. Removed sudo
from command examples. Left in installation to be removed
in installation doc sweep -- removing requires finer
grain control.
Signed-off-by: Mary Anthony <mary@docker.com>
We now have instructions in our Unix installs about setting up
docker group to avoid sudo. Also, Mac/Windows shouldn't use
sudo. So, I've removed sudo from our examples and added a
section at the top reminding them that if they have to use
sudo to run docker they can change that.
Signed-off-by: Mary Anthony <mary@docker.com>
Set cpuset to "0" so that it works on single core machines.
W/o this (and set to "1") we'll see something like this error
when running:
System error: write /cgroup/cpuset/docker/66689499bbd08cd8dccc9b7bfd1d6b34e85d73ce8c84d3c69b5e91944322da60/docker/79d7c548b58c85c4cfad6cd01eb7c3b30db254d1014c496137edd93ddc528a6f/cpuset.cpus: invalid argument"
Signed-off-by: Doug Davis <dug@us.ibm.com>
This patch causes `The image you are pulling has been verified` status
message to be produced also when the repository is pulled for the first
time.
Signed-off-by: Michal Minar <miminar@redhat.com>
Since `dirperm1` requires a more recent aufs patch than many current OS releases have,
we can't remove #783 completely. This documents that docker will apply `dirperm1`
automatically for systems that support it
Signed-off-by: Daniel, Dao Quang Minh <dqminh89@gmail.com>
Automatically detect support for aufs `dirperm1` option and apply it.
`dirperm1` tells aufs to check the permission bits of the directory on the
topmost branch and ignore the permission bits on all lower branches.
It can be used to fix aufs' permission bug (i.e., upper layer having
broader mask than the lower layer).
More information about the bug can be found at https://github.com/docker/docker/issues/783
`dirperm1` man page is at: http://aufs.sourceforge.net/aufs3/man.html
Signed-off-by: Daniel, Dao Quang Minh <dqminh89@gmail.com>
c79b9bab54 (Remove engine.Status and replace it with standard go error)
caused a regression where create container won't get any warnings; we still
need this to send useful information to the user.
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
We removed it, because upstream removed it. But now it will be coming
back, so work with it either way.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
The validation script from #10681 is too pedantic, and does not handle
well situations like:
```
cat <<EOF # or <<-EOF
Whether the leading whitespace is stripped out or not by bash
it should still be considered as valid.
EOF
```
This reverts commit 4e65c1c319.
Signed-off-by: Tibor Vass <tibor@docker.com>
Having the list in one spot makes it easier for people to see what's
available instead of having to scan all of the docs and extract the info.
Signed-off-by: Doug Davis <dug@us.ibm.com>
Fixes #10958 by moving utils.daemon to pkg.pidfile.
Test cases were also added.
Updated the daemon to use the new pidfile.
Signed-off-by: Rick Wieman <git@rickw.nl>
Continuation of: #11660, working on issue #11626.
Wrapped portmapper global state into a struct. Now portallocator and
portmapper have no global state (except configuration, and a default
instance).
Unfortunately, removing the global default instances will break
```api/server/server.go:1539``` and ```daemon/daemon.go:832```, which
both call the global portallocator directly. Fixing that would be a much
bigger change, so for now that has been postponed.
Signed-off-by: Paul Bellamy <paul.a.bellamy@gmail.com>
For posterity (largely for packagers), let's leave around the generated
version files that are created during the build.
They're already ignored in git, and recreated on every build.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
This disables recently added ANSI emulation feature in certain Windows
shells (like ConEmu) where ANSI output is emulated by default with builtin
functionality in the shell.
MSYS (mingw) runs in cmd.exe window and it doesn't support emulation.
Cygwin doesn't even pass terminal handles to docker.exe as far as I can
tell, stdin/stdout/stderr handles are behaving like non-TTY. Therefore not
even including that in the check.
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
The problem is that when I create a container through the REST API and set
a memory limit in hostConfig, the memory limit doesn't work. That's because
when we DecodeEnv, we get the hostConfig part of Env like this:
{"Binds":["/:/tmp"],"CpuShares":512,"CpusetCpus":"0,1","Devices":[],"Memory":1.6777216e+07,"MemorySwap":0}
And we cannot unmarshal the number 1.6777216e+07 into a Go value of type
int64, so we get 0.
We can fix this by calling UseNumber() on the Decoder.
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
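A self-contained demonstration of the fix: with UseNumber the decoder keeps the original numeric literal, so it can still be converted to int64 later.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	body := `{"Memory":16777216,"MemorySwap":0}`

	// Without UseNumber, decoding into interface{} turns 16777216 into a
	// float64; re-encoding it yields 1.6777216e+07, which then fails to
	// unmarshal into an int64 field and the limit silently stays 0.
	dec := json.NewDecoder(strings.NewReader(body))
	dec.UseNumber() // keep numbers as json.Number (their original text)

	var env map[string]interface{}
	if err := dec.Decode(&env); err != nil {
		panic(err)
	}

	mem, err := env["Memory"].(json.Number).Int64()
	if err != nil {
		panic(err)
	}
	fmt.Println(mem) // 16777216
}
```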
They say we should only use BTRFS_LIB_VERSION.
They will no longer support this since it had to be managed manually.
Docker-DCO-1.1-Signed-off-by: Dan Walsh <dwalsh@redhat.com> (github: rhatdan)
docker run -v /dev:/dev should stop mounting the other default mounts in
libcontainer for the container, otherwise directories and devices like
/dev/ptx get mishandled.
We want to be able to run libvirtd for launching VMs and it needs
access to the host's /dev. This is a key component of OpenStack.
Docker-DCO-1.1-Signed-off-by: Dan Walsh <dwalsh@redhat.com> (github: rhatdan)
The previous state assumed that the HOSTPATH argument referred to a
file. As clarified by moxiegirl in PR #11305, it is a directory.
Adjusted completion to reflect this.
Signed-off-by: Harald Albers <github@albersweb.de>
On overlayfs, the mtime of directories changes in a container when new
files are added in an upper layer (e.g. '/etc'). This flags the
directory as a change where there was none.
Closes #9874
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Created a validation that detects trailing whitespace in every
text file that isn't *.go, *.md, vendor/*, or
docs/theme/mkdocs/tipuesearch*.
Removed trailing whitespace from every text file except vendor/*,
builder/parser/testfiles*, docs/theme/mkdocs/tipuesearch*, and *.md.
Signed-off-by: André Martins <martins@noironetworks.com>
Fixes #9981
Allows a volume which was created by docker (i.e., in
/var/lib/docker/vfs/dir) to be used as a Bind argument via the container
start API and to overwrite an existing volume.
For example:
```bash
docker create -v /foo --name one
docker create -v /foo --name two
```
This allows the volume from `one` to be passed into the container start
API as a bind to `two`, and it will overwrite it.
This was possible before 7107898d5c
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This image's tags contain the dependencies for building Docker `.deb`s for each of the Debian-based platforms Docker targets.
To add new tags, see [`contrib/builder/deb` in https://github.com/docker/docker](https://github.com/docker/docker/tree/master/contrib/builder/deb), specifically the `generate.sh` script, whose usage is described in a comment at the top of the file.
This image's tags contain the dependencies for building Docker `.rpm`s for each of the RPM-based platforms Docker targets.
To add new tags, see [`contrib/builder/rpm` in https://github.com/docker/docker](https://github.com/docker/docker/tree/master/contrib/builder/rpm), specifically the `generate.sh` script, whose usage is described in a comment at the top of the file.