Compare commits

...

65 Commits

Author SHA1 Message Date
Sebastiaan van Stijn
52d6ad2a68 Merge pull request #66 from thaJeztah/18.09_backport_fix-dm-errmsg
[18.09] backport: gd/dm: fix error message
2018-10-04 21:28:22 +02:00
Sebastiaan van Stijn
6e5ed2ccce Merge pull request #67 from thaJeztah/18.09_backport_windows-network-plugin-miss-fix
[18.09] Fix long startup on windows, with non-hns governed Hyper-V networks
2018-10-03 23:27:28 +02:00
Simon Ferquel
54bd14a3fe Fix long startup on windows, with non-hns governed Hyper-V networks
Similar to a previous issue where each private Hyper-V network added
15 seconds to daemon startup, non-hns governed internal networks are
reported by hns as network type "internal", which is not mapped to any
network plugin (so we hit the same plugin load retry loop as before).

This issue hits Docker for Desktop because we setup such a network for
the Linux VM communication.

Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>
(cherry picked from commit 6a1a4f9721)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-03 15:24:34 +02:00
Kir Kolyshkin
c9ddc6effc gd/dm: fix error message
The parameter name was wrong, which may mislead a user.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit c378fb774e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-03 14:01:13 +02:00
Andrew Hsu
e44436c31f Merge pull request #62 from thaJeztah/18.09_backport_tweak_error_message
[18.09] backport: tweak bind mount errors
2018-09-28 14:13:41 -07:00
Andrew Hsu
34b3cf4b0c Merge pull request #56 from thaJeztah/18.09_backport_more_permissive_daeon_conf_dir
[18.09] backport loosen permissions on /etc/docker directory
2018-09-28 11:42:01 -07:00
Andrew Hsu
51618f7a83 Merge pull request #63 from tiborvass/18.09-vndr-buildkit
[18.09] vendor buildkit to 8f4dff0d16ea91cb43315d5f5aa4b27f4fe4e1f2
2018-09-28 10:57:56 -07:00
Sebastiaan van Stijn
b499acc0e8 Tweak bind mount errors
These messages were enhanced to include the path that was
missing (in df6af282b9), but
also changed the first part of the message.

This change complicates running e2e tests with mixed versions
of the engine.

Looking at the full error message, "mount" is a bit redundant
as well, because the error message already indicates this is
about a "mount":

    docker run --rm --mount type=bind,source=/no-such-thing,target=/foo busybox
    docker: Error response from daemon: invalid mount config for type "bind": bind mount source path does not exist: /no-such-thing.

Remove the "mount" part from the error message: it was redundant,
and removing it makes cross-version testing easier :)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 574db7a537)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-09-28 14:35:55 +02:00
Tibor Vass
67541d5841 vendor buildkit to 8f4dff0d16ea91cb43315d5f5aa4b27f4fe4e1f2
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit e161a8d1e9)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-27 22:46:57 +00:00
Eli Uriegas
989fab3c71 Merge pull request #61 from tiborvass/18.09-remove-docker-prefix-containerd
[18.09] Remove 'docker-' prefix for containerd and runc binaries
2018-09-26 11:45:50 -07:00
Tibor Vass
6bf8dfc4d8 fix daemon tests that were using wrong containerd socket
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 52b60f705c)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-25 23:09:25 +00:00
Tibor Vass
e090646d47 hack/make: remove 'docker-' prefix when copying binaries
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 361412c79e)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-25 23:09:25 +00:00
Tibor Vass
b3bb2aabb8 Remove 'docker-' prefix for containerd and runc binaries
This allows running the daemon in environments that have upstream containerd installed.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 34eede0296)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-24 22:35:36 +00:00
Andrew Hsu
e69efe2ef5 Merge pull request #51 from thaJeztah/18.09_backport_fix-libcontainerd-startup-error
[18.09] backport: Add fail fast path when containerd fails on startup
2018-09-22 00:11:43 -07:00
Andrew Hsu
ccab609365 Merge pull request #60 from tiborvass/18.09-remove-boltdb
[18.09] Remove boltdb dependency
2018-09-22 00:11:17 -07:00
Andrew Hsu
0a6866b839 Merge pull request #59 from tonistiigi/buildkit-1809
[18.09] Backport Buildkit fixes for 18.09
2018-09-21 21:59:27 -07:00
Tibor Vass
cce1763d57 vendor: remove boltdb dependency which is superseded by bbolt
This also brings in these PRs from swarmkit:
- https://github.com/docker/swarmkit/pull/2691
- https://github.com/docker/swarmkit/pull/2744
- https://github.com/docker/swarmkit/pull/2732
- https://github.com/docker/swarmkit/pull/2729
- https://github.com/docker/swarmkit/pull/2748

Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-22 01:24:11 +00:00
Tibor Vass
3d67dd0465 builder: vendor buildkit to 39404586a50d1b9d0fb1c578cf0f4de7bdb7afe5
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit d0f00bc1fb)
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-09-21 17:06:25 -07:00
Tibor Vass
73e2f72a7c builder: use buildkit's GC for build cache
This allows users to configure the buildkit GC.

The following enables the default GC:
```
{
  "builder": {
    "gc": {
      "enabled": true
    }
  }
}
```

The default GC policy has a simple config:
```
{
  "builder": {
    "gc": {
      "enabled": true,
      "defaultKeepStorage": "30GB"
    }
  }
}
```

A custom GC policy can be used instead by specifying a list of cache prune rules:
```
{
  "builder": {
    "gc": {
      "enabled": true,
      "policy": [
        {"keepStorage": "512MB", "filter": ["unused-for=1400h"]]},
        {"keepStorage": "30GB", "all": true}
      ]
    }
  }
}
```

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 4a776d0ca7)
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-09-21 17:06:25 -07:00
Anda Xu
2926a45be6 add support of registry-mirrors and insecure-registries to buildkit
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit 171d51c861)
(cherry picked from commit a72752b2f74467333b4ebe21c6c474eb0c2b99e0)
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-09-21 17:06:25 -07:00
Anda Xu
b73fd4d936 update vendor
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit 308701fac6)
(cherry picked from commit b48afc216f46c8e786560b807528699012e1627b)
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-09-21 17:06:25 -07:00
Tibor Vass
bb2adc4496 daemon/images: removed "found leaked image layer" warning, because it is expected now with buildkit
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 5aa222d0fe)
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-09-21 17:06:25 -07:00
Tonis Tiigi
b501aa82d5 vendor: update bolt to bbolt
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-09-21 17:06:25 -07:00
Tonis Tiigi
46a703bb3b vendor: add bbolt v1.3.1-etcd.8
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-09-21 17:06:25 -07:00
Andrew Hsu
ff9340ca2c Merge pull request #52 from thaJeztah/18.09_backport_fix-TestServiceWithDefaultAddressPoolInit
[18.09] backport TestServiceWithDefaultAddressPoolInit: avoid panic
2018-09-21 11:19:57 -07:00
Andrew Hsu
90a90ae2e1 Merge pull request #57 from AntaresS/cherry-37871
[18.09] backport fix for daemon not starting when "runtimes" option is defined in both config file and cli
2018-09-21 09:54:31 -07:00
Anda Xu
66ed41aec8 fix dockerd not starting when the 'runtimes' field is defined in both the daemon config file and cli flags
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit 8392d0930b)
2018-09-20 10:54:47 -07:00
Andrew Hsu
ea2e2c5427 Merge pull request #50 from AntaresS/cherry-pick-moby
[18.09] backport propagate the dockerd cgroup-parent config to buildkitd
2018-09-18 16:36:12 -07:00
Anda Xu
a5d731edec create newBuildKit function separately in daemon_unix.go and daemon_windows.go for cross platform build
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit 66ac92cdc6)
2018-09-18 11:19:51 -07:00
Sebastiaan van Stijn
fc576226b2 Loosen permissions on /etc/docker directory
The `/etc/docker` directory is used both by the dockerd daemon
and the docker cli (if installed on the same host as the daemon).

In situations where the `/etc/docker` directory does not exist,
and an initial `key.json` (legacy trust key) is generated (at the
default location), the `/etc/docker/` directory was created with
0700 permissions, making the directory only accessible by `root`.

Given that the `0600` permissions on the key itself already protect
it from being used by other users, the permissions of `/etc/docker`
can be less restrictive.

This patch changes the permissions for the directory to `0755`, so
that the CLI (if executed as non-root) can also access this directory.

> **NOTE**: "strictly", this patch is only needed for situations where no _custom_
> location for the trustkey is specified (not overridden with `--deprecated-key-path`),
> but setting the permissions only for the "default" case would make
> this more complicated.

```bash
make binary shell

make install

ls -la /etc/ | grep docker

dockerd
^C

ls -la /etc/ | grep docker
drwxr-xr-x 2 root root    4096 Sep 14 12:11 docker
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit cecd981717)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-09-18 12:34:56 +02:00
Tibor Vass
c24fd7a2c3 Merge pull request #55 from thaJeztah/18.09_backport_fix-progress-panic
[18.09] backport pkg/progress: work around closing closed channel panic
2018-09-17 11:43:41 -07:00
Sebastiaan van Stijn
5fb0a7ced7 Merge pull request #53 from thaJeztah/18.09_backport_buildkit-cli-control
[18.09] backport always honor client side to choose which builder to use with DOCKER_…
2018-09-17 12:34:28 +02:00
Tibor Vass
2c26eac566 pkg/progress: work around closing closed channel panic
I could not reproduce the panic in #37735, so here's a bandaid.
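
A minimal sketch of that kind of bandaid (illustrative, not the exact pkg/progress change): tolerate a second close instead of panicking.

```
package main

// safeClose is a sketch of a double-close "bandaid": recover() turns the
// "close of closed channel" panic into a no-op. A sync.Once around close()
// would be the more structured alternative.
func safeClose(ch chan struct{}) {
	defer func() { recover() }() // swallow a possible double-close panic
	close(ch)
}

func main() {
	ch := make(chan struct{})
	safeClose(ch)
	safeClose(ch) // would panic without the recover
}
```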

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 7dac70324d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-09-17 12:28:09 +02:00
Anda Xu
5badfb40eb always honor the client's choice of which builder to use with the DOCKER_BUILDKIT env var, regardless of the server setup
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit 5d931705e3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-09-14 17:29:47 +02:00
Kir Kolyshkin
f43fc6650c TestServiceWithDefaultAddressPoolInit: avoid panic
Saw this in moby ci:

> 00:22:07.582 === RUN   TestServiceWithDefaultAddressPoolInit
> 00:22:08.887 --- FAIL: TestServiceWithDefaultAddressPoolInit (1.30s)
> 00:22:08.887 	daemon.go:290: [d905878b35bb9] waiting for daemon to start
> 00:22:08.887 	daemon.go:322: [d905878b35bb9] daemon started
> 00:22:08.888 panic: runtime error: index out of range [recovered]
> 00:22:08.889 	panic: runtime error: index out of range
> 00:22:08.889
> 00:22:08.889 goroutine 360 [running]:
> 00:22:08.889 testing.tRunner.func1(0xc42069d770)
> 00:22:08.889 	/usr/local/go/src/testing/testing.go:742 +0x29d
> 00:22:08.890 panic(0x85d680, 0xb615f0)
> 00:22:08.890 	/usr/local/go/src/runtime/panic.go:502 +0x229
> 00:22:08.890 github.com/docker/docker/integration/network.TestServiceWithDefaultAddressPoolInit(0xc42069d770)
> 00:22:08.891 	/go/src/github.com/docker/docker/integration/network/service_test.go:348 +0xb53
> .....

Apparently `out.IPAM.Config[0]` is not there, so to avoid panic, let's
check the size of `out.IPAM.Config` first.
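
A sketch of the guarded check (assuming the gotest.tools `assert` helpers used by moby's integration tests; the network name and test name here are placeholders):

```
package network

import (
	"context"
	"testing"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
	"gotest.tools/assert"
)

func TestDefaultAddressPoolGuard(t *testing.T) {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	assert.NilError(t, err)
	out, err := cli.NetworkInspect(context.Background(), "ingress", types.NetworkInspectOptions{})
	assert.NilError(t, err)
	t.Logf("NetworkInspect: %+v", out) // v2: log the data returned by NetworkInspect()
	// v3: fail immediately instead of panicking on an empty IPAM config
	assert.Assert(t, len(out.IPAM.Config) > 0)
	t.Log(out.IPAM.Config[0].Subnet)
}
```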

Fixes: f7ad95cab9

[v2: add logging of data returned by NetworkInspect()]
[v3: use assert.Assert to fail immediately]

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 69d3a8936b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-09-14 15:22:43 +02:00
Derek McGowan
85361af1f7 Add fail fast path when containerd fails on startup
Prevents looping of startup errors such as containerd
not being found on the path.
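
A hedged sketch of the fail-fast idea (names and structure are assumptions, not the actual moby change): detect an unrecoverable condition before entering the restart loop.

```
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// startSupervised restarts containerd on transient failures, but fails
// fast when the binary cannot be found at all, since retrying would
// loop forever. (Illustrative sketch only.)
func startSupervised(binary string) error {
	if _, err := exec.LookPath(binary); err != nil {
		return fmt.Errorf("failing fast: %v", err)
	}
	for {
		if err := exec.Command(binary).Run(); err != nil {
			time.Sleep(time.Second) // transient failure: back off and retry
			continue
		}
		return nil
	}
}

func main() {
	if err := startSupervised("containerd"); err != nil {
		fmt.Println(err)
	}
}
```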

Signed-off-by: Derek McGowan <derek@mcgstyle.net>
(cherry picked from commit ce0b0b72bc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-09-14 15:20:07 +02:00
Anda Xu
ee40a9ebcd update vendor
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit 54b3af4c7d)
2018-09-13 16:42:13 -07:00
Anda Xu
e8620110fc propagate the dockerd cgroup-parent config to buildkitd
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit d52485c2f9)
2018-09-13 16:36:57 -07:00
Andrew Hsu
e988001872 Merge pull request #46 from kolyshkin/18.09-backport-pr37771
[18.09] backport #37771 "vendor: update tar-split"
2018-09-12 18:16:16 -07:00
Andrew Hsu
6531bac59b Merge pull request #48 from kolyshkin/18.09-backport-logs-follow
[18.09] backport "daemon.ContainerLogs(): fix resource leak on follow"
2018-09-12 18:13:56 -07:00
Kir Kolyshkin
2a82480df9 TestFollowLogsProducerGone: add
This should test that
 - all the messages produced are delivered (i.e. not lost)
 - followLogs() exits

Loosely based on the test of the same name by Brian Goff, see
https://gist.github.com/cpuguy83/e538793de18c762608358ee0eaddc197

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit f845d76d04)
2018-09-06 18:39:22 -07:00
Kir Kolyshkin
84a5b528ae daemon.ContainerLogs(): fix resource leak on follow
When daemon.ContainerLogs() is called with options.follow=true
(as in "docker logs --follow"), the "loggerutils.followLogs()"
function never returns (even when the logs consumer is gone).
As a result, all the resources associated with it (including
an opened file descriptor for the log file being read, two FDs
for a pipe, and two FDs for inotify watch) are never released.

If this is repeated (such as by running "docker logs --follow"
and pressing Ctrl-C a few times), this results in DoS caused by
either hitting the limit of inotify watches, or the limit of
opened files. The only cure is a daemon restart.

Apparently, what happens is:

1. logs producer (a container) is gone, calling (*LogWatcher).Close()
for all its readers (daemon/logger/jsonfilelog/jsonfilelog.go:175).

2. WatchClose() is properly handled by a dedicated goroutine in
followLogs(), cancelling the context.

3. Upon receiving the ctx.Done(), the code in followLogs()
(daemon/logger/loggerutils/logfile.go#L626-L638) keeps sending
messages _synchronously_ (which is OK for now).

4. Logs consumer is gone (Ctrl-C is pressed on a terminal running
"docker logs --follow"). Method (*LogWatcher).Close() is properly
called (see daemon/logs.go:114). Since it was called before, and
due to once.Do(), nothing happens (which is kinda good, as
otherwise it would panic on closing a closed channel).

5. A goroutine (see item 3 above) keeps sending log messages
synchronously to the logWatcher.Msg channel. Since the
channel reader is gone, the channel send operation blocks forever,
and resource cleanup set up in defer statements at the beginning
of followLogs() never happens.

Alas, the fix is somewhat complicated:

1. Distinguish between close from logs producer and logs consumer.
To that effect,
 - yet another channel is added to LogWatcher();
 - {Watch,}Close() are renamed to {Watch,}ProducerGone();
 - {Watch,}ConsumerGone() are added;

*NOTE* that the ProducerGone()/WatchProducerGone() pair is ONLY needed
in order to stop ContainerLogs(follow=true) when a container is stopped;
otherwise we're not interested in it. In other words, we're only
using it in followLogs().

2. Code that was doing (logWatcher*).Close() is modified to either call
ProducerGone() or ConsumerGone(), depending on the context.

3. Code that was waiting for WatchClose() is modified to wait for
either ConsumerGone() or ProducerGone(), or both, depending on the
context.

4. followLogs() are modified accordingly:
 - context cancellation is happening on WatchProducerGone(),
and once it's received the FileWatcher is closed and waitRead()
returns errDone on EOF (i.e. log rotation handling logic is disabled);
 - due to this, code that was writing synchronously to logWatcher.Msg
can be and is removed as the code above it handles this case;
 - function returns once ConsumerGone is received, freeing all the
resources -- this is the bugfix itself.

While at it,

1. Let's also remove the ctx usage to simplify the code a bit.
It was introduced by commit a69a59ffc7 ("Decouple removing the
fileWatcher from reading") in order to fix a bug. The bug was actually
a deadlock in fsnotify, and the fix was just a workaround. Since then
the fsnotify bug has been fixed, and a new fsnotify was vendored in.
For more details, please see
https://github.com/moby/moby/pull/27782#issuecomment-416794490

2. Since `(*filePoller).Close()` is fixed to remove all the files
being watched, there is no need to explicitly call
fileWatcher.Remove(name) anymore, so get rid of the extra code.

Should fix https://github.com/moby/moby/issues/37391
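
A compact sketch of the producer/consumer split described above (method and channel names follow the commit message; the real LogWatcher carries more state):

```
package logger

import "sync"

// Message is a placeholder for a log message.
type Message struct{ Line []byte }

// LogWatcher distinguishes "producer gone" (the container stopped) from
// "consumer gone" (the reader of `docker logs --follow` went away), so
// followLogs can release its resources in either case.
type LogWatcher struct {
	Msg          chan *Message
	producerOnce sync.Once
	producerGone chan struct{}
	consumerOnce sync.Once
	consumerGone chan struct{}
}

// NewLogWatcher returns a LogWatcher with both signal channels armed.
func NewLogWatcher() *LogWatcher {
	return &LogWatcher{
		Msg:          make(chan *Message, 1),
		producerGone: make(chan struct{}),
		consumerGone: make(chan struct{}),
	}
}

// ProducerGone is called by the log producer when the container stops.
func (w *LogWatcher) ProducerGone() {
	w.producerOnce.Do(func() { close(w.producerGone) })
}

// WatchProducerGone is closed when the producer is gone.
func (w *LogWatcher) WatchProducerGone() <-chan struct{} { return w.producerGone }

// ConsumerGone is called when the logs reader disconnects.
func (w *LogWatcher) ConsumerGone() {
	w.consumerOnce.Do(func() { close(w.consumerGone) })
}

// WatchConsumerGone is closed when the consumer is gone; followLogs
// returns (and its deferred cleanups run) once it receives this.
func (w *LogWatcher) WatchConsumerGone() <-chan struct{} { return w.consumerGone }
```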

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 916eabd459)
2018-09-06 18:39:22 -07:00
Brian Goff
511741735e daemon/logger/loggerutils: add TestFollowLogsClose
This test case checks that followLogs() exits once the reader is gone.
Currently it does not (i.e. this test is supposed to fail) due to #37391.

[kolyshkin@: test case Brian Goff, changelog and all bugs are by me]
Source: https://gist.github.com/cpuguy83/e538793de18c762608358ee0eaddc197

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit d37a11bfba)
2018-09-06 18:39:22 -07:00
Kir Kolyshkin
2b8bc86679 daemon.ContainerLogs: minor debug logging cleanup
This code has many return statements; for some of them, the
"end logs" or "end stream" message was not printed, giving
the impression that this "for" loop never ended.

Make sure that "begin logs" is always followed by "end logs".

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 2e4c2a6bf9)
2018-09-06 18:39:22 -07:00
Kir Kolyshkin
4e2dbfa1af pkg/filenotify/poller: fix Close()
The code in Close() that removes the watches was not working,
because it first sets `w.closed = true` and then calls w.close(),
which starts with
```
	if w.closed {
		return errPollerClosed
	}
```

Fix by setting w.closed only after calling w.remove() for all the
files being watched.

While at it, remove the duplicated `delete(w.watches, name)` code.
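
A minimal sketch of the corrected ordering, under assumed field names:

```
package filenotify

import "sync"

type filePoller struct {
	mu      sync.Mutex
	watches map[string]chan struct{}
	closed  bool
}

// remove stops watching a single file; it is a no-op once the poller is
// closed, which is why Close must not set the flag up front.
func (w *filePoller) remove(name string) {
	if w.closed {
		return
	}
	if ch, ok := w.watches[name]; ok {
		close(ch) // stop this file's poll goroutine
		delete(w.watches, name)
	}
}

// Close removes all watches first and only then marks the poller closed,
// so remove() is not short-circuited (the original bug set closed first).
func (w *filePoller) Close() error {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.closed {
		return nil
	}
	for name := range w.watches {
		w.remove(name)
	}
	w.closed = true
	return nil
}
```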

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit fffa8958d0)
2018-09-06 18:39:21 -07:00
Kir Kolyshkin
3a3bfcbf47 pkg/filenotify/poller: close file asap
There is no need to wait for up to 200ms in order to close
the file descriptor once the chClose is received.

This commit might reduce the chances for occasional "The process
cannot access the file because it is being used by another process"
error on Windows, where an opened file can't be removed.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit dfbb64ea7d)
2018-09-06 18:39:21 -07:00
Kir Kolyshkin
7be43586af pkg/filenotify: poller.Add: fix fd leaks on err
In case of errors, the file descriptor is never closed. Fix it.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 88bcf1573c)
2018-09-06 18:39:21 -07:00
Kir Kolyshkin
d7085abec2 vendor: update tar-split
To include https://github.com/vbatts/tar-split/pull/48, which
fixes an issue when creating an image with a >8GB file in it.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 92e7543903)
2018-09-06 17:24:39 -07:00
Kir Kolyshkin
fc1d808c44 integration/build: add TestBuildHugeFile
Add a test case for creating an 8GB file inside a container.
Due to a bug in tar-split this was failing in Docker 18.06.

The file being created is sparse, so there's not much I/O
happening or disk space being used -- meaning the test is
fast and does not require a lot of disk space.
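
For reference, a generic sketch of why a sparse file is cheap (not the test's code): Truncate extends the logical size without allocating data blocks.

```
package main

import (
	"log"
	"os"
)

func main() {
	// Create an "8GB" file that occupies almost no disk space: Truncate
	// only sets the logical size, leaving the data blocks unallocated.
	f, err := os.Create("bigfile")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := f.Truncate(8 << 30); err != nil { // 8 GiB
		log.Fatal(err)
	}
}
```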

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit b3165f5b2d)
2018-09-06 17:24:36 -07:00
Andrew Hsu
7485ef7e46 Merge pull request #44 from andrewhsu/sup
[18.09] Fix supervisor healthcheck throttling
2018-09-05 11:14:31 -07:00
Andrew Hsu
d2ecc7bad1 Merge pull request #43 from andrewhsu/tls
[18.09] client: dial tls on Dialer if tls config is set
2018-09-05 06:43:23 -07:00
Derek McGowan
f121eccf29 Fix supervisor healthcheck throttling
Fix the default case that caused the throttling to not be used.
Ensure that the nil client condition is handled.

Signed-off-by: Derek McGowan <derek@mcgstyle.net>
(cherry picked from commit c3e3293843)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2018-09-05 06:59:52 +00:00
Andrew Hsu
00a9cf39ed Merge pull request #42 from tiborvass/18.09-cp-buildkit
[18.09] Buildkit cherry-picks
2018-09-04 18:46:24 -07:00
Tonis Tiigi
c2d0053207 client: dial tls on Dialer if tls config is set
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 5974fc2540)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2018-09-05 01:17:31 +00:00
Tibor Vass
4c35d81147 vendor buildkit to fix a couple of bugs
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit effa24bf48)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-04 15:52:24 +00:00
Tonis Tiigi
28150fc70c builder: implement ref checker
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 354c241041)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-04 15:02:28 +00:00
Tibor Vass
d2c3163642 builder: fix pruning all cache
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit d47435a004)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-04 15:02:28 +00:00
Tibor Vass
3153708f13 builder: add prune options to the API
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 8ff7847d1c)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-04 15:02:28 +00:00
Anda Xu
2f94f10342 allow the features option to be live-reloaded
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit 58a75cebdd)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-04 15:01:46 +00:00
Andrew Hsu
b8a4fe5f8f Merge pull request #37 from thaJeztah/18.09_backport_fix_prefix_matching
[18.09] backport: fix regression when filtering container names using a leading slash
2018-08-31 21:15:00 -07:00
Madhu Venugopal
648704522b Merge pull request #41 from kolyshkin/18.09-backport-pr37739
[18.09] backport "fix relabeling local volume source dir"
2018-08-31 19:01:01 -07:00
Kir Kolyshkin
4032b6778d Fix relabeling local volume source dir
In case a volume is specified via Mounts API, and SELinux is enabled,
the following error happens on container start:

> $ docker volume create testvol
> $ docker run --rm --mount source=testvol,target=/tmp busybox true
> docker: Error response from daemon: error setting label on mount
> source '': no such file or directory.

The functionality to relabel the source of a local mount specified via
the Mounts API was introduced in commit 5bbf5cc and later broken by commit
e4b6adc, which removed setting the mp.Source field.

With the current data structures, the host dir is already available in
v.Mountpoint, so let's just use it.
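
A hedged sketch of the idea, using the SELinux label helper vendored by moby (the actual call site in the daemon differs):

```
package main

import (
	"errors"

	"github.com/opencontainers/selinux/go-selinux/label"
)

// relabelVolume applies the container's mount label to the volume's host
// directory. Passing the volume's Mountpoint avoids the empty mp.Source
// that produced "error setting label on mount source ''".
func relabelVolume(mountpoint, mountLabel string, shared bool) error {
	if mountpoint == "" {
		return errors.New("empty mountpoint")
	}
	return label.Relabel(mountpoint, mountLabel, shared)
}

func main() {
	_ = relabelVolume("/var/lib/docker/volumes/testvol/_data",
		"system_u:object_r:container_file_t:s0", false)
}
```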

Fixes: e4b6adc
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2018-08-30 17:34:59 -07:00
Sebastiaan van Stijn
5fa80da2d3 Fix regression when filtering container names using a leading slash
Commit 5c8da2e967 updated the filtering behavior
to match container names without having to specify the leading slash.

This change caused a regression in situations where a regex was provided as
filter, using an explicit leading slash (`--filter name=^/mycontainername`).

This fix changes the filters to match containers both with, and without the
leading slash, effectively making the leading slash optional when filtering.

With this fix, filters with and without a leading slash produce the same result:

    $ docker ps --filter name=^a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    21afd6362b0c        busybox             "sh"                2 minutes ago       Up 2 minutes                            a2
    56e53770e316        busybox             "sh"                2 minutes ago       Up 2 minutes                            a1

    $ docker ps --filter name=^/a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    21afd6362b0c        busybox             "sh"                2 minutes ago       Up 2 minutes                            a2
    56e53770e316        busybox             "sh"                3 minutes ago       Up 3 minutes                            a1

    $ docker ps --filter name=^b
    CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
    b69003b6a6fe        busybox             "sh"                About a minute ago   Up About a minute                       b1

    $ docker ps --filter name=^/b
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    b69003b6a6fe        busybox             "sh"                56 seconds ago      Up 54 seconds                           b1

    $ docker ps --filter name=/a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    21afd6362b0c        busybox             "sh"                3 minutes ago       Up 3 minutes                            a2
    56e53770e316        busybox             "sh"                4 minutes ago       Up 4 minutes                            a1

    $ docker ps --filter name=a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    21afd6362b0c        busybox             "sh"                3 minutes ago       Up 3 minutes                            a2
    56e53770e316        busybox             "sh"                4 minutes ago       Up 4 minutes                            a1
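
A sketch of the matching logic shown above (illustrative only; the real code lives in the daemon's container list filters):

```
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// matchName sketches the fix: try the pattern against the name both with
// and without its leading slash, making the slash effectively optional.
func matchName(pattern, name string) bool {
	re, err := regexp.Compile(pattern)
	if err != nil {
		return false
	}
	return re.MatchString(name) || re.MatchString(strings.TrimPrefix(name, "/"))
}

func main() {
	fmt.Println(matchName("^a", "/a1"))  // true: matches once the slash is stripped
	fmt.Println(matchName("^/a", "/a1")) // true: matches as-is
}
```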

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6f9b5ba810)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-08-29 12:54:06 +02:00
Tibor Vass
be371291bc Merge pull request #35 from tiborvass/18.09-fix-network-buildkit
[18.09] builder: fix bridge networking when using buildkit
2018-08-23 06:21:34 -07:00
Tibor Vass
1d531ff64f builder: fix bridge networking when using buildkit
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit dc7e472db9)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-08-23 05:32:51 +00:00
403 changed files with 18662 additions and 8822 deletions

View File

@@ -88,7 +88,7 @@ func (b *Backend) Build(ctx context.Context, config backend.BuildConfig) (string
}
// PruneCache removes all cached build sources
func (b *Backend) PruneCache(ctx context.Context) (*types.BuildCachePruneReport, error) {
func (b *Backend) PruneCache(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error) {
eg, ctx := errgroup.WithContext(ctx)
var fsCacheSize uint64
@@ -102,9 +102,10 @@ func (b *Backend) PruneCache(ctx context.Context) (*types.BuildCachePruneReport,
})
var buildCacheSize int64
var cacheIDs []string
eg.Go(func() error {
var err error
buildCacheSize, err = b.buildkit.Prune(ctx)
buildCacheSize, cacheIDs, err = b.buildkit.Prune(ctx, opts)
if err != nil {
return errors.Wrap(err, "failed to prune build cache")
}
@@ -115,7 +116,7 @@ func (b *Backend) PruneCache(ctx context.Context) (*types.BuildCachePruneReport,
return nil, err
}
return &types.BuildCachePruneReport{SpaceReclaimed: fsCacheSize + uint64(buildCacheSize)}, nil
return &types.BuildCachePruneReport{SpaceReclaimed: fsCacheSize + uint64(buildCacheSize), CachesDeleted: cacheIDs}, nil
}
// Cancel cancels the build by ID

View File

@@ -14,7 +14,7 @@ type Backend interface {
Build(context.Context, backend.BuildConfig) (string, error)
// Prune build cache
PruneCache(context.Context) (*types.BuildCachePruneReport, error)
PruneCache(context.Context, types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error)
Cancel(context.Context, string) error
}

View File

@@ -7,15 +7,19 @@ import (
// buildRouter is a router to talk with the build controller
type buildRouter struct {
backend Backend
daemon experimentalProvider
routes []router.Route
builderVersion types.BuilderVersion
backend Backend
daemon experimentalProvider
routes []router.Route
features *map[string]bool
}
// NewRouter initializes a new build router
func NewRouter(b Backend, d experimentalProvider, bv types.BuilderVersion) router.Router {
r := &buildRouter{backend: b, daemon: d, builderVersion: bv}
func NewRouter(b Backend, d experimentalProvider, features *map[string]bool) router.Router {
r := &buildRouter{
backend: b,
daemon: d,
features: features,
}
r.initRoutes()
return r
}
@@ -32,3 +36,18 @@ func (r *buildRouter) initRoutes() {
router.NewPostRoute("/build/cancel", r.postCancel),
}
}
// BuilderVersion derives the default docker builder version from the config
// Note: it is valid to have BuilderVersion unset which means it is up to the
// client to choose which builder to use.
func BuilderVersion(features map[string]bool) types.BuilderVersion {
var bv types.BuilderVersion
if v, ok := features["buildkit"]; ok {
if v {
bv = types.BuilderBuildKit
} else {
bv = types.BuilderV1
}
}
return bv
}

View File

@@ -18,6 +18,7 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/ioutils"
@@ -161,7 +162,26 @@ func parseVersion(s string) (types.BuilderVersion, error) {
}
func (br *buildRouter) postPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
report, err := br.backend.PruneCache(ctx)
if err := httputils.ParseForm(r); err != nil {
return err
}
filters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return errors.Wrap(err, "could not parse filters")
}
ksfv := r.FormValue("keep-storage")
ks, err := strconv.Atoi(ksfv)
if err != nil {
return errors.Wrapf(err, "keep-storage is in bytes and expects an integer, got %v", ksfv)
}
opts := types.BuildCachePruneOptions{
All: httputils.BoolValue(r, "all"),
Filters: filters,
KeepStorage: int64(ks),
}
report, err := br.backend.PruneCache(ctx, opts)
if err != nil {
return err
}
@@ -230,11 +250,6 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
return errdefs.InvalidParameter(errors.New("squash is only supported with experimental mode"))
}
// check if the builder feature has been enabled from daemon as well.
if buildOptions.Version == types.BuilderBuildKit && br.builderVersion != "" && br.builderVersion != types.BuilderBuildKit {
return errdefs.InvalidParameter(errors.New("buildkit is not enabled on daemon"))
}
out := io.Writer(output)
if buildOptions.SuppressOutput {
out = notVerboseBuffer

View File

@@ -2,30 +2,29 @@ package system // import "github.com/docker/docker/api/server/router/system"
import (
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/api/types"
buildkit "github.com/docker/docker/builder/builder-next"
"github.com/docker/docker/builder/builder-next"
"github.com/docker/docker/builder/fscache"
)
// systemRouter provides information about the Docker system overall.
// It gathers information about host, daemon and container events.
type systemRouter struct {
backend Backend
cluster ClusterBackend
routes []router.Route
fscache *fscache.FSCache // legacy
builder *buildkit.Builder
builderVersion types.BuilderVersion
backend Backend
cluster ClusterBackend
routes []router.Route
fscache *fscache.FSCache // legacy
builder *buildkit.Builder
features *map[string]bool
}
// NewRouter initializes a new system router
func NewRouter(b Backend, c ClusterBackend, fscache *fscache.FSCache, builder *buildkit.Builder, bv types.BuilderVersion) router.Router {
func NewRouter(b Backend, c ClusterBackend, fscache *fscache.FSCache, builder *buildkit.Builder, features *map[string]bool) router.Router {
r := &systemRouter{
backend: b,
cluster: c,
fscache: fscache,
builder: builder,
builderVersion: bv,
backend: b,
cluster: c,
fscache: fscache,
builder: builder,
features: features,
}
r.routes = []router.Route{

View File

@@ -8,6 +8,7 @@ import (
"time"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/router/build"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/api/types/filters"
@@ -26,7 +27,8 @@ func optionsHandler(ctx context.Context, w http.ResponseWriter, r *http.Request,
}
func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if bv := s.builderVersion; bv != "" {
builderVersion := build.BuilderVersion(*s.features)
if bv := builderVersion; bv != "" {
w.Header().Set("Builder-Version", string(bv))
}
_, err := w.Write([]byte{'O', 'K'})

View File

@@ -1513,6 +1513,31 @@ definitions:
aux:
$ref: "#/definitions/ImageID"
BuildCache:
type: "object"
properties:
ID:
type: "string"
Parent:
type: "string"
Type:
type: "string"
Description:
type: "string"
InUse:
type: "boolean"
Shared:
type: "boolean"
Size:
type: "integer"
CreatedAt:
type: "integer"
LastUsedAt:
type: "integer"
x-nullable: true
UsageCount:
type: "integer"
ImageID:
type: "object"
description: "Image ID or Digest"
@@ -3823,10 +3848,10 @@ definitions:
$ref: "#/definitions/Runtime"
default:
runc:
path: "docker-runc"
path: "runc"
example:
runc:
path: "docker-runc"
path: "runc"
runc-master:
path: "/go/bin/runc"
custom:
@@ -6358,6 +6383,29 @@ paths:
produces:
- "application/json"
operationId: "BuildPrune"
parameters:
- name: "keep-storage"
in: "query"
description: "Amount of disk space in bytes to keep for cache"
type: "integer"
format: "int64"
- name: "all"
in: "query"
type: "boolean"
description: "Remove all types of build cache"
- name: "filters"
in: "query"
type: "string"
description: |
A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters:
- `unused-for=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h')
- `id=<id>`
- `parent=<id>`
- `type=<string>`
- `description=<string>`
- `inuse`
- `shared`
- `private`
responses:
200:
description: "No error"
@@ -6365,6 +6413,11 @@ paths:
type: "object"
title: "BuildPruneResponse"
properties:
CachesDeleted:
type: "array"
items:
description: "ID of build cache object"
type: "string"
SpaceReclaimed:
description: "Disk space reclaimed in bytes"
type: "integer"
@@ -7199,6 +7252,10 @@ paths:
type: "array"
items:
$ref: "#/definitions/Volume"
BuildCache:
type: "array"
items:
$ref: "#/definitions/BuildCache"
example:
LayersSize: 1092588
Images:

View File

@@ -543,6 +543,7 @@ type ImagesPruneReport struct {
// BuildCachePruneReport contains the response for Engine API:
// POST "/build/prune"
type BuildCachePruneReport struct {
CachesDeleted []string
SpaceReclaimed uint64
}
@@ -592,14 +593,21 @@ type BuildResult struct {
// BuildCache contains information about a build cache record
type BuildCache struct {
ID string
Mutable bool
InUse bool
Size int64
ID string
Parent string
Type string
Description string
InUse bool
Shared bool
Size int64
CreatedAt time.Time
LastUsedAt *time.Time
UsageCount int
Parent string
Description string
}
// BuildCachePruneOptions hold parameters to prune the build cache
type BuildCachePruneOptions struct {
All bool
KeepStorage int64
Filters filters.Args
}

View File

@@ -34,6 +34,7 @@ import (
"github.com/moby/buildkit/util/flightcontrol"
"github.com/moby/buildkit/util/imageutil"
"github.com/moby/buildkit/util/progress"
"github.com/moby/buildkit/util/resolver"
"github.com/moby/buildkit/util/tracing"
digest "github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/identity"
@@ -51,6 +52,7 @@ type SourceOpt struct {
DownloadManager distribution.RootFSDownloadManager
MetadataStore metadata.V2MetadataService
ImageStore image.Store
ResolverOpt resolver.ResolveOptionsFunc
}
type imageSource struct {
@@ -71,11 +73,16 @@ func (is *imageSource) ID() string {
return source.DockerImageScheme
}
func (is *imageSource) getResolver(ctx context.Context) remotes.Resolver {
return docker.NewResolver(docker.ResolverOptions{
func (is *imageSource) getResolver(ctx context.Context, rfn resolver.ResolveOptionsFunc, ref string) remotes.Resolver {
opt := docker.ResolverOptions{
Client: tracing.DefaultClient,
Credentials: is.getCredentialsFromSession(ctx),
})
}
if rfn != nil {
opt = rfn(ref)
}
r := docker.NewResolver(opt)
return r
}
func (is *imageSource) getCredentialsFromSession(ctx context.Context) func(string) (string, string, error) {
@@ -118,7 +125,7 @@ func (is *imageSource) resolveRemote(ctx context.Context, ref string, platform *
dt []byte
}
res, err := is.g.Do(ctx, ref, func(ctx context.Context) (interface{}, error) {
dgst, dt, err := imageutil.Config(ctx, ref, is.getResolver(ctx), is.ContentStore, platform)
dgst, dt, err := imageutil.Config(ctx, ref, is.getResolver(ctx, is.ResolverOpt, ref), is.ContentStore, platform)
if err != nil {
return nil, err
}
@@ -181,7 +188,7 @@ func (is *imageSource) Resolve(ctx context.Context, id source.Identifier) (sourc
p := &puller{
src: imageIdentifier,
is: is,
resolver: is.getResolver(ctx),
resolver: is.getResolver(ctx, is.ResolverOpt, imageIdentifier.Reference.String()),
platform: platform,
}
return p, nil
@@ -516,6 +523,15 @@ func (p *puller) Snapshot(ctx context.Context) (cache.ImmutableRef, error) {
return nil, err
}
// TODO: handle windows layers for cross platform builds
if p.src.RecordType != "" && cache.GetRecordType(ref) == "" {
if err := cache.SetRecordType(ref, p.src.RecordType); err != nil {
ref.Release(context.TODO())
return nil, err
}
}
return ref, nil
}

View File

@@ -5,10 +5,10 @@ import (
"os"
"path/filepath"
"github.com/boltdb/bolt"
"github.com/docker/docker/layer"
"github.com/docker/docker/pkg/ioutils"
"github.com/pkg/errors"
bolt "go.etcd.io/bbolt"
"golang.org/x/sync/errgroup"
)

View File

@@ -7,7 +7,6 @@ import (
"strings"
"sync"
"github.com/boltdb/bolt"
"github.com/containerd/containerd/mount"
"github.com/containerd/containerd/snapshots"
"github.com/docker/docker/daemon/graphdriver"
@@ -16,6 +15,7 @@ import (
"github.com/moby/buildkit/snapshot"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
bolt "go.etcd.io/bbolt"
)
var keyParent = []byte("parent")
@@ -110,6 +110,10 @@ func (s *snapshotter) chainID(key string) (layer.ChainID, bool) {
return "", false
}
func (s *snapshotter) GetLayer(key string) (layer.Layer, error) {
return s.getLayer(key, true)
}
func (s *snapshotter) getLayer(key string, withCommitted bool) (layer.Layer, error) {
s.mu.Lock()
l, ok := s.refs[key]

View File

@@ -13,32 +13,53 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/builder"
"github.com/docker/docker/daemon/config"
"github.com/docker/docker/daemon/images"
"github.com/docker/docker/pkg/streamformatter"
"github.com/docker/docker/pkg/system"
"github.com/docker/libnetwork"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/control"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/solver/llbsolver"
"github.com/moby/buildkit/util/entitlements"
"github.com/moby/buildkit/util/resolver"
"github.com/moby/buildkit/util/tracing"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
grpcmetadata "google.golang.org/grpc/metadata"
)
var errMultipleFilterValues = errors.New("filters expect only one value")
var cacheFields = map[string]bool{
"id": true,
"parent": true,
"type": true,
"description": true,
"inuse": true,
"shared": true,
"private": true,
// fields from buildkit that are not exposed
"mutable": false,
"immutable": false,
}
func init() {
llbsolver.AllowNetworkHostUnstable = true
}
// Opt is option struct required for creating the builder
type Opt struct {
SessionManager *session.Manager
Root string
Dist images.DistributionServices
NetworkController libnetwork.NetworkController
SessionManager *session.Manager
Root string
Dist images.DistributionServices
NetworkController libnetwork.NetworkController
DefaultCgroupParent string
ResolverOpt resolver.ResolveOptionsFunc
BuilderConfig config.BuilderConfig
}
// Builder can build using BuildKit backend
@@ -86,48 +107,69 @@ func (b *Builder) DiskUsage(ctx context.Context) ([]*types.BuildCache, error) {
var items []*types.BuildCache
for _, r := range duResp.Record {
items = append(items, &types.BuildCache{
ID: r.ID,
Mutable: r.Mutable,
InUse: r.InUse,
Size: r.Size_,
ID: r.ID,
Parent: r.Parent,
Type: r.RecordType,
Description: r.Description,
InUse: r.InUse,
Shared: r.Shared,
Size: r.Size_,
CreatedAt: r.CreatedAt,
LastUsedAt: r.LastUsedAt,
UsageCount: int(r.UsageCount),
Parent: r.Parent,
Description: r.Description,
})
}
return items, nil
}
// Prune clears all reclaimable build cache
func (b *Builder) Prune(ctx context.Context) (int64, error) {
func (b *Builder) Prune(ctx context.Context, opts types.BuildCachePruneOptions) (int64, []string, error) {
ch := make(chan *controlapi.UsageRecord)
eg, ctx := errgroup.WithContext(ctx)
validFilters := make(map[string]bool, 1+len(cacheFields))
validFilters["unused-for"] = true
for k, v := range cacheFields {
validFilters[k] = v
}
if err := opts.Filters.Validate(validFilters); err != nil {
return 0, nil, err
}
pi, err := toBuildkitPruneInfo(opts)
if err != nil {
return 0, nil, err
}
eg.Go(func() error {
defer close(ch)
return b.controller.Prune(&controlapi.PruneRequest{}, &pruneProxy{
return b.controller.Prune(&controlapi.PruneRequest{
All: pi.All,
KeepDuration: int64(pi.KeepDuration),
KeepBytes: pi.KeepBytes,
Filter: pi.Filter,
}, &pruneProxy{
streamProxy: streamProxy{ctx: ctx},
ch: ch,
})
})
var size int64
var cacheIDs []string
eg.Go(func() error {
for r := range ch {
size += r.Size_
cacheIDs = append(cacheIDs, r.ID)
}
return nil
})
if err := eg.Wait(); err != nil {
return 0, err
return 0, nil, err
}
return size, nil
return size, cacheIDs, nil
}
// Build executes a build request
@@ -467,3 +509,41 @@ func toBuildkitExtraHosts(inp []string) (string, error) {
}
return strings.Join(hosts, ","), nil
}
func toBuildkitPruneInfo(opts types.BuildCachePruneOptions) (client.PruneInfo, error) {
var unusedFor time.Duration
unusedForValues := opts.Filters.Get("unused-for")
switch len(unusedForValues) {
case 0:
case 1:
var err error
unusedFor, err = time.ParseDuration(unusedForValues[0])
if err != nil {
return client.PruneInfo{}, errors.Wrap(err, "unused-for filter expects a duration (e.g., '24h')")
}
default:
return client.PruneInfo{}, errMultipleFilterValues
}
bkFilter := make([]string, 0, opts.Filters.Len())
for cacheField := range cacheFields {
values := opts.Filters.Get(cacheField)
switch len(values) {
case 0:
bkFilter = append(bkFilter, cacheField)
case 1:
bkFilter = append(bkFilter, cacheField+"=="+values[0])
default:
return client.PruneInfo{}, errMultipleFilterValues
}
}
return client.PruneInfo{
All: opts.All,
KeepDuration: unusedFor,
KeepBytes: opts.KeepStorage,
Filter: bkFilter,
}, nil
}

View File

@@ -6,14 +6,19 @@ import (
"path/filepath"
"github.com/containerd/containerd/content/local"
"github.com/docker/docker/api/types"
"github.com/docker/docker/builder/builder-next/adapters/containerimage"
"github.com/docker/docker/builder/builder-next/adapters/snapshot"
containerimageexp "github.com/docker/docker/builder/builder-next/exporter"
"github.com/docker/docker/builder/builder-next/imagerefchecker"
mobyworker "github.com/docker/docker/builder/builder-next/worker"
"github.com/docker/docker/daemon/config"
"github.com/docker/docker/daemon/graphdriver"
units "github.com/docker/go-units"
"github.com/moby/buildkit/cache"
"github.com/moby/buildkit/cache/metadata"
registryremotecache "github.com/moby/buildkit/cache/remotecache/registry"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/control"
"github.com/moby/buildkit/exporter"
"github.com/moby/buildkit/frontend"
@@ -21,7 +26,7 @@ import (
"github.com/moby/buildkit/frontend/gateway"
"github.com/moby/buildkit/frontend/gateway/forwarder"
"github.com/moby/buildkit/snapshot/blobmapping"
"github.com/moby/buildkit/solver/boltdbcachestorage"
"github.com/moby/buildkit/solver/bboltcachestorage"
"github.com/moby/buildkit/worker"
"github.com/pkg/errors"
)
@@ -69,9 +74,20 @@ func newController(rt http.RoundTripper, opt Opt) (*control.Controller, error) {
MetadataStore: md,
})
layerGetter, ok := sbase.(imagerefchecker.LayerGetter)
if !ok {
return nil, errors.Errorf("snapshotter does not implement layergetter")
}
refChecker := imagerefchecker.New(imagerefchecker.Opt{
ImageStore: dist.ImageStore,
LayerGetter: layerGetter,
})
cm, err := cache.NewManager(cache.ManagerOpt{
Snapshotter: snapshotter,
MetadataStore: md,
Snapshotter: snapshotter,
MetadataStore: md,
PruneRefChecker: refChecker,
})
if err != nil {
return nil, err
@@ -85,12 +101,13 @@ func newController(rt http.RoundTripper, opt Opt) (*control.Controller, error) {
MetadataStore: dist.V2MetadataService,
ImageStore: dist.ImageStore,
ReferenceStore: dist.ReferenceStore,
ResolverOpt: opt.ResolverOpt,
})
if err != nil {
return nil, err
}
exec, err := newExecutor(root, opt.NetworkController)
exec, err := newExecutor(root, opt.DefaultCgroupParent, opt.NetworkController)
if err != nil {
return nil, err
}
@@ -109,17 +126,23 @@ func newController(rt http.RoundTripper, opt Opt) (*control.Controller, error) {
return nil, err
}
cacheStorage, err := boltdbcachestorage.NewStore(filepath.Join(opt.Root, "cache.db"))
cacheStorage, err := bboltcachestorage.NewStore(filepath.Join(opt.Root, "cache.db"))
if err != nil {
return nil, err
}
gcPolicy, err := getGCPolicy(opt.BuilderConfig, root)
if err != nil {
return nil, errors.Wrap(err, "could not get builder GC policy")
}
wopt := mobyworker.Opt{
ID: "moby",
SessionManager: opt.SessionManager,
MetadataStore: md,
ContentStore: store,
CacheManager: cm,
GCPolicy: gcPolicy,
Snapshotter: snapshotter,
Executor: exec,
ImageSource: src,
@@ -148,7 +171,48 @@ func newController(rt http.RoundTripper, opt Opt) (*control.Controller, error) {
WorkerController: wc,
Frontends: frontends,
CacheKeyStorage: cacheStorage,
ResolveCacheImporterFunc: registryremotecache.ResolveCacheImporterFunc(opt.SessionManager),
ResolveCacheImporterFunc: registryremotecache.ResolveCacheImporterFunc(opt.SessionManager, opt.ResolverOpt),
// TODO: set ResolveCacheExporterFunc for exporting cache
})
}
func getGCPolicy(conf config.BuilderConfig, root string) ([]client.PruneInfo, error) {
var gcPolicy []client.PruneInfo
if conf.GC.Enabled {
var (
defaultKeepStorage int64
err error
)
if conf.GC.DefaultKeepStorage != "" {
defaultKeepStorage, err = units.RAMInBytes(conf.GC.DefaultKeepStorage)
if err != nil {
return nil, errors.Wrapf(err, "could not parse '%s' as Builder.GC.DefaultKeepStorage config", conf.GC.DefaultKeepStorage)
}
}
if conf.GC.Policy == nil {
gcPolicy = mobyworker.DefaultGCPolicy(root, defaultKeepStorage)
} else {
gcPolicy = make([]client.PruneInfo, len(conf.GC.Policy))
for i, p := range conf.GC.Policy {
b, err := units.RAMInBytes(p.KeepStorage)
if err != nil {
return nil, err
}
if b == 0 {
b = defaultKeepStorage
}
gcPolicy[i], err = toBuildkitPruneInfo(types.BuildCachePruneOptions{
All: p.All,
KeepStorage: b,
Filters: p.Filter,
})
if err != nil {
return nil, err
}
}
}
}
return gcPolicy, nil
}

View File

@@ -3,41 +3,46 @@
package buildkit
import (
"fmt"
"os"
"path/filepath"
"strconv"
"sync"
"github.com/docker/libnetwork"
"github.com/moby/buildkit/executor"
"github.com/moby/buildkit/executor/runcexecutor"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/network"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
specs "github.com/opencontainers/runtime-spec/specs-go"
)
const networkName = "bridge"
func newExecutor(root string, net libnetwork.NetworkController) (executor.Executor, error) {
// FIXME: fix bridge networking
_ = bridgeProvider{}
func newExecutor(root, cgroupParent string, net libnetwork.NetworkController) (executor.Executor, error) {
networkProviders := map[pb.NetMode]network.Provider{
pb.NetMode_UNSET: &bridgeProvider{NetworkController: net},
pb.NetMode_HOST: network.NewHostProvider(),
pb.NetMode_NONE: network.NewNoneProvider(),
}
return runcexecutor.New(runcexecutor.Opt{
Root: filepath.Join(root, "executor"),
CommandCandidates: []string{"docker-runc", "runc"},
}, nil)
Root: filepath.Join(root, "executor"),
CommandCandidates: []string{"runc"},
DefaultCgroupParent: cgroupParent,
}, networkProviders)
}
type bridgeProvider struct {
libnetwork.NetworkController
}
func (p *bridgeProvider) NewInterface() (network.Interface, error) {
func (p *bridgeProvider) New() (network.Namespace, error) {
n, err := p.NetworkByName(networkName)
if err != nil {
return nil, err
}
iface := &lnInterface{ready: make(chan struct{})}
iface := &lnInterface{ready: make(chan struct{}), provider: p}
iface.Once.Do(func() {
go iface.init(p.NetworkController, n)
})
@@ -45,33 +50,13 @@ func (p *bridgeProvider) NewInterface() (network.Interface, error) {
return iface, nil
}
func (p *bridgeProvider) Release(iface network.Interface) error {
go func() {
if err := p.release(iface); err != nil {
logrus.Errorf("%s", err)
}
}()
return nil
}
func (p *bridgeProvider) release(iface network.Interface) error {
li, ok := iface.(*lnInterface)
if !ok {
return errors.Errorf("invalid interface %T", iface)
}
err := li.sbx.Delete()
if err1 := li.ep.Delete(true); err1 != nil && err == nil {
err = err1
}
return err
}
type lnInterface struct {
ep libnetwork.Endpoint
sbx libnetwork.Sandbox
sync.Once
err error
ready chan struct{}
err error
ready chan struct{}
provider *bridgeProvider
}
func (iface *lnInterface) init(c libnetwork.NetworkController, n libnetwork.Network) {
@@ -99,14 +84,26 @@ func (iface *lnInterface) init(c libnetwork.NetworkController, n libnetwork.Netw
iface.ep = ep
}
func (iface *lnInterface) Set(pid int) error {
func (iface *lnInterface) Set(s *specs.Spec) {
<-iface.ready
if iface.err != nil {
return iface.err
return
}
// attach netns to bridge within the container namespace, using reexec in a prestart hook
s.Hooks = &specs.Hooks{
Prestart: []specs.Hook{{
Path: filepath.Join("/proc", strconv.Itoa(os.Getpid()), "exe"),
Args: []string{"libnetwork-setkey", iface.sbx.ContainerID(), iface.provider.NetworkController.ID()},
}},
}
return iface.sbx.SetKey(fmt.Sprintf("/proc/%d/ns/net", pid))
}
func (iface *lnInterface) Remove(pid int) error {
return nil
func (iface *lnInterface) Close() error {
<-iface.ready
err := iface.sbx.Delete()
if iface.err != nil {
// iface.err takes precedence over cleanup errors
return iface.err
}
return err
}

View File

@@ -10,7 +10,7 @@ import (
"github.com/moby/buildkit/executor"
)
func newExecutor(_ string, _ libnetwork.NetworkController) (executor.Executor, error) {
func newExecutor(_, _ string, _ libnetwork.NetworkController) (executor.Executor, error) {
return &winExecutor{}, nil
}

View File

@@ -0,0 +1,96 @@
package imagerefchecker
import (
"sync"
"github.com/docker/docker/image"
"github.com/docker/docker/layer"
"github.com/moby/buildkit/cache"
)
// LayerGetter abstracts away the snapshotter
type LayerGetter interface {
GetLayer(string) (layer.Layer, error)
}
// Opt represents the options needed to create a refchecker
type Opt struct {
LayerGetter LayerGetter
ImageStore image.Store
}
// New creates new image reference checker that can be used to see if a reference
// is being used by any of the images in the image store
func New(opt Opt) cache.ExternalRefCheckerFunc {
return func() (cache.ExternalRefChecker, error) {
return &checker{opt: opt, layers: lchain{}, cache: map[string]bool{}}, nil
}
}
type lchain map[layer.DiffID]lchain
func (c lchain) add(ids []layer.DiffID) {
if len(ids) == 0 {
return
}
id := ids[0]
ch, ok := c[id]
if !ok {
ch = lchain{}
c[id] = ch
}
ch.add(ids[1:])
}
func (c lchain) has(ids []layer.DiffID) bool {
if len(ids) == 0 {
return true
}
ch, ok := c[ids[0]]
return ok && ch.has(ids[1:])
}
type checker struct {
opt Opt
once sync.Once
layers lchain
cache map[string]bool
}
func (c *checker) Exists(key string) bool {
if c.opt.ImageStore == nil {
return false
}
c.once.Do(c.init)
if b, ok := c.cache[key]; ok {
return b
}
l, err := c.opt.LayerGetter.GetLayer(key)
if err != nil || l == nil {
c.cache[key] = false
return false
}
ok := c.layers.has(diffIDs(l))
c.cache[key] = ok
return ok
}
func (c *checker) init() {
imgs := c.opt.ImageStore.Map()
for _, img := range imgs {
c.layers.add(img.RootFS.DiffIDs)
}
}
func diffIDs(l layer.Layer) []layer.DiffID {
p := l.Parent()
if p == nil {
return []layer.DiffID{l.DiffID()}
}
return append(diffIDs(p), l.DiffID())
}

View File

@@ -0,0 +1,51 @@
package worker
import (
"math"
"github.com/moby/buildkit/client"
)
const defaultCap int64 = 2e9 // 2GB
// tempCachePercent represents the percentage ratio of the cache size in bytes to temporarily keep for a short period of time (couple of days)
// over the total cache size in bytes. Because there is no perfect value, a mathematically pleasing one was chosen.
// The value is approximately 13.8
const tempCachePercent = math.E * math.Pi * math.Phi
// DefaultGCPolicy returns a default builder GC policy
func DefaultGCPolicy(p string, defaultKeepBytes int64) []client.PruneInfo {
keep := defaultKeepBytes
if defaultKeepBytes == 0 {
keep = detectDefaultGCCap(p)
}
tempCacheKeepBytes := int64(math.Round(float64(keep) / 100. * float64(tempCachePercent)))
const minTempCacheKeepBytes = 512 * 1e6 // 512MB
if tempCacheKeepBytes < minTempCacheKeepBytes {
tempCacheKeepBytes = minTempCacheKeepBytes
}
return []client.PruneInfo{
// if build cache uses more than 512MB delete the most easily reproducible data after it has not been used for 2 days
{
Filter: []string{"type==source.local,type==exec.cachemount,type==source.git.checkout"},
KeepDuration: 48 * 3600, // 48h
KeepBytes: tempCacheKeepBytes,
},
// remove any data not used for 60 days
{
KeepDuration: 60 * 24 * 3600, // 60d
KeepBytes: keep,
},
// keep the unshared build cache under cap
{
KeepBytes: keep,
},
// if previous policies were insufficient start deleting internal data to keep build cache under cap
{
All: true,
KeepBytes: keep,
},
}
}

View File

@@ -0,0 +1,17 @@
// +build !windows
package worker
import (
"syscall"
)
func detectDefaultGCCap(root string) int64 {
var st syscall.Statfs_t
if err := syscall.Statfs(root, &st); err != nil {
return defaultCap
}
diskSize := int64(st.Bsize) * int64(st.Blocks) // nolint unconvert
avail := diskSize / 10
return (avail/(1<<30) + 1) * 1e9 // round up
}

View File

@@ -0,0 +1,7 @@
// +build windows
package worker
func detectDefaultGCCap(root string) int64 {
return defaultCap
}

View File

@@ -46,6 +46,7 @@ import (
type Opt struct {
ID string
Labels map[string]string
GCPolicy []client.PruneInfo
SessionManager *session.Manager
MetadataStore *metadata.Store
Executor executor.Executor
@@ -130,9 +131,18 @@ func (w *Worker) Platforms() []ocispec.Platform {
return []ocispec.Platform{platforms.DefaultSpec()}
}
// GCPolicy returns automatic GC Policy
func (w *Worker) GCPolicy() []client.PruneInfo {
return w.Opt.GCPolicy
}
// LoadRef loads a reference by ID
func (w *Worker) LoadRef(id string) (cache.ImmutableRef, error) {
return w.CacheManager.Get(context.TODO(), id)
func (w *Worker) LoadRef(id string, hidden bool) (cache.ImmutableRef, error) {
var opts []cache.RefOption
if hidden {
opts = append(opts, cache.NoUpdateLastUsed)
}
return w.CacheManager.Get(context.TODO(), id, opts...)
}
// ResolveOp converts a LLB vertex into a LLB operation
@@ -176,8 +186,8 @@ func (w *Worker) DiskUsage(ctx context.Context, opt client.DiskUsageInfo) ([]*cl
}
// Prune deletes reclaimable build cache
func (w *Worker) Prune(ctx context.Context, ch chan client.UsageInfo, info client.PruneInfo) error {
return w.CacheManager.Prune(ctx, ch, info)
func (w *Worker) Prune(ctx context.Context, ch chan client.UsageInfo, info ...client.PruneInfo) error {
return w.CacheManager.Prune(ctx, ch, info...)
}
// Exporter returns exporter by name

View File

@@ -12,7 +12,6 @@ import (
"sync"
"time"
"github.com/boltdb/bolt"
"github.com/docker/docker/builder"
"github.com/docker/docker/builder/remotecontext"
"github.com/docker/docker/pkg/archive"
@@ -23,6 +22,7 @@ import (
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/tonistiigi/fsutil"
bolt "go.etcd.io/bbolt"
"golang.org/x/sync/singleflight"
)

View File

@@ -4,19 +4,34 @@ import (
"context"
"encoding/json"
"fmt"
"net/url"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/pkg/errors"
)
// BuildCachePrune requests the daemon to delete unused cache data
func (cli *Client) BuildCachePrune(ctx context.Context) (*types.BuildCachePruneReport, error) {
func (cli *Client) BuildCachePrune(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error) {
if err := cli.NewVersionError("1.31", "build prune"); err != nil {
return nil, err
}
report := types.BuildCachePruneReport{}
serverResp, err := cli.post(ctx, "/build/prune", nil, nil, nil)
query := url.Values{}
if opts.All {
query.Set("all", "1")
}
query.Set("keep-storage", fmt.Sprintf("%d", opts.KeepStorage))
filters, err := filters.ToJSON(opts.Filters)
if err != nil {
return nil, errors.Wrap(err, "prune could not marshal filters option")
}
query.Set("filters", filters)
serverResp, err := cli.post(ctx, "/build/prune", query, nil, nil)
if err != nil {
return nil, err
}
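Note: a hedged sketch of calling the extended client API; the "until" filter value is an illustration, not taken from this diff:

	report, err := cli.BuildCachePrune(ctx, types.BuildCachePruneOptions{
		All:         true, // also prune internal and shared cache records
		KeepStorage: 10e9, // retain up to 10GB of cache
		Filters:     filters.NewArgs(filters.Arg("until", "48h")),
	})
	if err == nil {
		fmt.Printf("reclaimed %d bytes\n", report.SpaceReclaimed)
	}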

View File

@@ -413,7 +413,7 @@ func (cli *Client) SetCustomHTTPHeaders(headers map[string]string) {
func (cli *Client) Dialer() func(context.Context) (net.Conn, error) {
return func(ctx context.Context) (net.Conn, error) {
if transport, ok := cli.client.Transport.(*http.Transport); ok {
if transport.DialContext != nil {
if transport.DialContext != nil && transport.TLSClientConfig == nil {
return transport.DialContext(ctx, cli.proto, cli.addr)
}
}

View File

@@ -86,7 +86,7 @@ type DistributionAPIClient interface {
// ImageAPIClient defines API client methods for the images
type ImageAPIClient interface {
ImageBuild(ctx context.Context, context io.Reader, options types.ImageBuildOptions) (types.ImageBuildResponse, error)
BuildCachePrune(ctx context.Context) (*types.BuildCachePruneReport, error)
BuildCachePrune(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error)
BuildCancel(ctx context.Context, id string) error
ImageCreate(ctx context.Context, parentReference string, options types.ImageCreateOptions) (io.ReadCloser, error)
ImageHistory(ctx context.Context, image string) ([]image.HistoryResponseItem, error)

View File

@@ -10,6 +10,7 @@ import (
"strings"
"time"
containerddefaults "github.com/containerd/containerd/defaults"
"github.com/docker/distribution/uuid"
"github.com/docker/docker/api"
apiserver "github.com/docker/docker/api/server"
@@ -27,7 +28,6 @@ import (
swarmrouter "github.com/docker/docker/api/server/router/swarm"
systemrouter "github.com/docker/docker/api/server/router/system"
"github.com/docker/docker/api/server/router/volume"
"github.com/docker/docker/api/types"
buildkit "github.com/docker/docker/builder/builder-next"
"github.com/docker/docker/builder/dockerfile"
"github.com/docker/docker/builder/fscache"
@@ -141,22 +141,25 @@ func (cli *DaemonCli) start(opts *daemonOptions) (err error) {
ctx, cancel := context.WithCancel(context.Background())
if cli.Config.ContainerdAddr == "" && runtime.GOOS != "windows" {
opts, err := cli.getContainerdDaemonOpts()
if err != nil {
cancel()
return fmt.Errorf("Failed to generate containerd options: %v", err)
if !systemContainerdRunning() {
opts, err := cli.getContainerdDaemonOpts()
if err != nil {
cancel()
return fmt.Errorf("Failed to generate containerd options: %v", err)
}
r, err := supervisor.Start(ctx, filepath.Join(cli.Config.Root, "containerd"), filepath.Join(cli.Config.ExecRoot, "containerd"), opts...)
if err != nil {
cancel()
return fmt.Errorf("Failed to start containerd: %v", err)
}
cli.Config.ContainerdAddr = r.Address()
// Try to wait for containerd to shutdown
defer r.WaitTimeout(10 * time.Second)
} else {
cli.Config.ContainerdAddr = containerddefaults.DefaultAddress
}
r, err := supervisor.Start(ctx, filepath.Join(cli.Config.Root, "containerd"), filepath.Join(cli.Config.ExecRoot, "containerd"), opts...)
if err != nil {
cancel()
return fmt.Errorf("Failed to start containerd: %v", err)
}
cli.Config.ContainerdAddr = r.Address()
// Try to wait for containerd to shutdown
defer r.WaitTimeout(10 * time.Second)
}
defer cancel()
@@ -253,14 +256,14 @@ type routerOptions struct {
sessionManager *session.Manager
buildBackend *buildbackend.Backend
buildCache *fscache.FSCache // legacy
features *map[string]bool
buildkit *buildkit.Builder
builderVersion types.BuilderVersion
daemon *daemon.Daemon
api *apiserver.Server
cluster *cluster.Cluster
}
func newRouterOptions(config *config.Config, daemon *daemon.Daemon) (routerOptions, error) {
func newRouterOptions(config *config.Config, d *daemon.Daemon) (routerOptions, error) {
opts := routerOptions{}
sm, err := session.NewManager()
if err != nil {
@@ -281,39 +284,35 @@ func newRouterOptions(config *config.Config, daemon *daemon.Daemon) (routerOptio
return opts, errors.Wrap(err, "failed to create fscache")
}
manager, err := dockerfile.NewBuildManager(daemon.BuilderBackend(), sm, buildCache, daemon.IdentityMapping())
manager, err := dockerfile.NewBuildManager(d.BuilderBackend(), sm, buildCache, d.IdentityMapping())
if err != nil {
return opts, err
}
cgroupParent := newCgroupParent(config)
bk, err := buildkit.New(buildkit.Opt{
SessionManager: sm,
Root: filepath.Join(config.Root, "buildkit"),
Dist: daemon.DistributionServices(),
NetworkController: daemon.NetworkController(),
SessionManager: sm,
Root: filepath.Join(config.Root, "buildkit"),
Dist: d.DistributionServices(),
NetworkController: d.NetworkController(),
DefaultCgroupParent: cgroupParent,
ResolverOpt: d.NewResolveOptionsFunc(),
BuilderConfig: config.Builder,
})
if err != nil {
return opts, err
}
bb, err := buildbackend.NewBackend(daemon.ImageService(), manager, buildCache, bk)
bb, err := buildbackend.NewBackend(d.ImageService(), manager, buildCache, bk)
if err != nil {
return opts, errors.Wrap(err, "failed to create buildmanager")
}
var bv types.BuilderVersion
if v, ok := config.Features["buildkit"]; ok {
if v {
bv = types.BuilderBuildKit
} else {
bv = types.BuilderV1
}
}
return routerOptions{
sessionManager: sm,
buildBackend: bb,
buildCache: buildCache,
buildkit: bk,
builderVersion: bv,
daemon: daemon,
features: d.Features(),
daemon: d,
}, nil
}
@@ -486,9 +485,9 @@ func initRouter(opts routerOptions) {
checkpointrouter.NewRouter(opts.daemon, decoder),
container.NewRouter(opts.daemon, decoder),
image.NewRouter(opts.daemon.ImageService()),
systemrouter.NewRouter(opts.daemon, opts.cluster, opts.buildCache, opts.buildkit, opts.builderVersion),
systemrouter.NewRouter(opts.daemon, opts.cluster, opts.buildCache, opts.buildkit, opts.features),
volume.NewRouter(opts.daemon.VolumesService()),
build.NewRouter(opts.buildBackend, opts.daemon, opts.builderVersion),
build.NewRouter(opts.buildBackend, opts.daemon, opts.features),
sessionrouter.NewRouter(opts.sessionManager),
swarmrouter.NewRouter(opts.cluster),
pluginrouter.NewRouter(opts.daemon.PluginManager()),
@@ -666,3 +665,8 @@ func validateAuthzPlugins(requestedPlugins []string, pg plugingetter.PluginGette
}
return nil
}
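// systemContainerdRunning is a cheap liveness probe: if containerd's default
// socket path (/run/containerd/containerd.sock on Linux) exists, dockerd
// connects to the system containerd instead of supervising its own copy.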
func systemContainerdRunning() bool {
_, err := os.Lstat(containerddefaults.DefaultAddress)
return err == nil
}

View File

@@ -13,6 +13,7 @@ import (
"github.com/containerd/containerd/runtime/v1/linux"
"github.com/docker/docker/cmd/dockerd/hack"
"github.com/docker/docker/daemon"
"github.com/docker/docker/daemon/config"
"github.com/docker/docker/libcontainerd/supervisor"
"github.com/docker/libnetwork/portallocator"
"golang.org/x/sys/unix"
@@ -107,3 +108,18 @@ func wrapListeners(proto string, ls []net.Listener) []net.Listener {
}
return ls
}
func newCgroupParent(config *config.Config) string {
cgroupParent := "docker"
useSystemd := daemon.UsingSystemd(config)
if useSystemd {
cgroupParent = "system.slice"
}
if config.CgroupParent != "" {
cgroupParent = config.CgroupParent
}
if useSystemd {
cgroupParent = cgroupParent + ":" + "docker" + ":"
}
return cgroupParent
}
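Note: the parents this function produces, following the code above (examples only):

	// cgroupfs driver, no override:            "docker"
	// systemd driver, no override:             "system.slice:docker:"
	// systemd driver, CgroupParent="my.slice": "my.slice:docker:"
	// cgroupfs driver, CgroupParent="/custom": "/custom"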

View File

@@ -6,6 +6,7 @@ import (
"os"
"path/filepath"
"github.com/docker/docker/daemon/config"
"github.com/docker/docker/libcontainerd/supervisor"
"github.com/sirupsen/logrus"
"golang.org/x/sys/windows"
@@ -83,3 +84,7 @@ func allocateDaemonPort(addr string) error {
func wrapListeners(proto string, ls []net.Listener) []net.Listener {
return ls
}
func newCgroupParent(config *config.Config) string {
return ""
}

View File

@@ -31,7 +31,7 @@ bundle_files(){
echo $BUNDLE/binary-daemon/$f
fi
done
for f in docker-containerd docker-containerd-ctr docker-containerd-shim docker-init docker-runc; do
for f in containerd ctr containerd-shim docker-init runc; do
echo $BUNDLE/binary-daemon/$f
done
if [ -d $BUNDLE/dynbinary-client ]; then

View File

@@ -123,7 +123,7 @@ func (daemon *Daemon) containerAttach(c *container.Container, cfg *stream.Attach
return logger.ErrReadLogsNotSupported{}
}
logs := cLog.ReadLogs(logger.ReadConfig{Tail: -1})
defer logs.Close()
defer logs.ConsumerGone()
LogLoop:
for {

View File

@@ -3,7 +3,6 @@ package cluster // import "github.com/docker/docker/daemon/cluster"
import (
"context"
"fmt"
"net"
"path/filepath"
"runtime"
"strings"
@@ -14,6 +13,7 @@ import (
"github.com/docker/docker/daemon/cluster/executor/container"
lncluster "github.com/docker/libnetwork/cluster"
swarmapi "github.com/docker/swarmkit/api"
"github.com/docker/swarmkit/manager/allocator/cnmallocator"
swarmnode "github.com/docker/swarmkit/node"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@@ -115,12 +115,6 @@ func (n *nodeRunner) start(conf nodeStartConfig) error {
joinAddr = conf.RemoteAddr
}
var defaultAddrPool []*net.IPNet
for _, address := range conf.DefaultAddressPool {
if _, b, err := net.ParseCIDR(address); err == nil {
defaultAddrPool = append(defaultAddrPool, b)
}
}
// Hostname is not set here. Instead, it is obtained from
// the node description that is reported periodically
swarmnodeConfig := swarmnode.Config{
@@ -128,11 +122,13 @@ func (n *nodeRunner) start(conf nodeStartConfig) error {
ListenControlAPI: control,
ListenRemoteAPI: conf.ListenAddr,
AdvertiseRemoteAPI: conf.AdvertiseAddr,
DefaultAddrPool: defaultAddrPool,
SubnetSize: int(conf.SubnetSize),
JoinAddr: joinAddr,
StateDir: n.cluster.root,
JoinToken: conf.joinToken,
NetworkConfig: &cnmallocator.NetworkConfig{
DefaultAddrPool: conf.DefaultAddressPool,
SubnetSize: conf.SubnetSize,
},
JoinAddr: joinAddr,
StateDir: n.cluster.root,
JoinToken: conf.joinToken,
Executor: container.NewExecutor(
n.cluster.config.Backend,
n.cluster.config.PluginBackend,

daemon/config/builder.go (new file, +22 lines)
View File

@@ -0,0 +1,22 @@
package config
import "github.com/docker/docker/api/types/filters"
// BuilderGCRule represents a GC rule for buildkit cache
type BuilderGCRule struct {
All bool `json:",omitempty"`
Filter filters.Args `json:",omitempty"`
KeepStorage string `json:",omitempty"`
}
// BuilderGCConfig contains GC config for a buildkit builder
type BuilderGCConfig struct {
Enabled bool `json:",omitempty"`
Policy []BuilderGCRule `json:",omitempty"`
DefaultKeepStorage string `json:",omitempty"`
}
// BuilderConfig contains config for the builder
type BuilderConfig struct {
GC BuilderGCConfig `json:",omitempty"`
}
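Note: a hedged sketch of how a daemon.json "builder" section decodes into these types (values illustrative; assumes encoding/json is imported):

	var bc BuilderConfig
	_ = json.Unmarshal([]byte(`{"GC": {"Enabled": true, "DefaultKeepStorage": "20GB"}}`), &bc)
	// bc.GC.Enabled == true; bc.GC.DefaultKeepStorage == "20GB"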

View File

@@ -55,6 +55,7 @@ var flatOptions = map[string]bool{
"runtimes": true,
"default-ulimits": true,
"features": true,
"builder": true,
}
// skipValidateOptions contains configuration keys
@@ -62,6 +63,17 @@ var flatOptions = map[string]bool{
// for unknown flag validation.
var skipValidateOptions = map[string]bool{
"features": true,
"builder": true,
}
// skipDuplicates contains configuration keys that
// will be skipped when checking duplicated
// configuration field defined in both daemon
// config file and from dockerd cli flags.
// This allows some configurations to be merged
// during the parsing.
var skipDuplicates = map[string]bool{
"runtimes": true,
}
// LogConfig represents the default log configuration.
@@ -215,6 +227,8 @@ type CommonConfig struct {
// Features contains a list of feature key value pairs indicating what features are enabled or disabled.
// If a certain feature doesn't appear in this list then it's unset (i.e. neither true nor false).
Features map[string]bool `json:"features,omitempty"`
Builder BuilderConfig `json:"builder,omitempty"`
}
// IsValueSet returns true if a configuration value
@@ -491,13 +505,13 @@ func findConfigurationConflicts(config map[string]interface{}, flags *pflag.Flag
duplicatedConflicts := func(f *pflag.Flag) {
// search option name in the json configuration payload if the value is a named option
if namedOption, ok := f.Value.(opts.NamedOption); ok {
if optsValue, ok := config[namedOption.Name()]; ok {
if optsValue, ok := config[namedOption.Name()]; ok && !skipDuplicates[namedOption.Name()] {
conflicts = append(conflicts, printConflict(namedOption.Name(), f.Value.String(), optsValue))
}
} else {
// search flag name in the json configuration payload
for _, name := range []string{f.Name, f.Shorthand} {
if value, ok := config[name]; ok {
if value, ok := config[name]; ok && !skipDuplicates[name] {
conflicts = append(conflicts, printConflict(name, f.Value.String(), value))
break
}
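Note: the practical effect of skipDuplicates is that "runtimes" may now be defined both via dockerd flags (--add-runtime) and in the configuration file without raising a duplicate-configuration conflict; the two sets are merged during parsing.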

View File

@@ -9,6 +9,7 @@ import (
"context"
"fmt"
"io/ioutil"
"math/rand"
"net"
"os"
"path"
@@ -23,6 +24,8 @@ import (
"github.com/containerd/containerd"
"github.com/containerd/containerd/defaults"
"github.com/containerd/containerd/pkg/dialer"
"github.com/containerd/containerd/remotes/docker"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/types"
containertypes "github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/swarm"
@@ -36,6 +39,8 @@ import (
"github.com/docker/docker/daemon/logger"
"github.com/docker/docker/daemon/network"
"github.com/docker/docker/errdefs"
"github.com/moby/buildkit/util/resolver"
"github.com/moby/buildkit/util/tracing"
"github.com/sirupsen/logrus"
// register graph drivers
_ "github.com/docker/docker/daemon/graphdriver/register"
@@ -136,6 +141,62 @@ func (daemon *Daemon) HasExperimental() bool {
return daemon.configStore != nil && daemon.configStore.Experimental
}
// Features returns the features map from configStore
func (daemon *Daemon) Features() *map[string]bool {
return &daemon.configStore.Features
}
// NewResolveOptionsFunc returns a callback function to resolve "registry-mirrors" and
// "insecure-registries" for buildkit
func (daemon *Daemon) NewResolveOptionsFunc() resolver.ResolveOptionsFunc {
return func(ref string) docker.ResolverOptions {
var (
registryKey = "docker.io"
mirrors = make([]string, len(daemon.configStore.Mirrors))
m = map[string]resolver.RegistryConf{}
)
// must trim "https://" or "http://" prefix
for i, v := range daemon.configStore.Mirrors {
v = strings.TrimPrefix(v, "https://")
v = strings.TrimPrefix(v, "http://")
mirrors[i] = v
}
// set "registry-mirrors"
m[registryKey] = resolver.RegistryConf{Mirrors: mirrors}
// set "insecure-registries"
for _, v := range daemon.configStore.InsecureRegistries {
v = strings.TrimPrefix(v, "http://")
m[v] = resolver.RegistryConf{
PlainHTTP: true,
}
}
def := docker.ResolverOptions{
Client: tracing.DefaultClient,
}
parsed, err := reference.ParseNormalizedNamed(ref)
if err != nil {
return def
}
host := reference.Domain(parsed)
c, ok := m[host]
if !ok {
return def
}
if len(c.Mirrors) > 0 {
def.Host = func(string) (string, error) {
return c.Mirrors[rand.Intn(len(c.Mirrors))], nil
}
}
def.PlainHTTP = c.PlainHTTP
return def
}
}
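Note: a hedged usage sketch (the reference string is illustrative):

	resolve := daemon.NewResolveOptionsFunc()
	o := resolve("docker.io/library/alpine:latest")
	// o.Host, when set, picks one of the configured docker.io mirrors at random;
	// o.PlainHTTP is true only for hosts listed under insecure-registries.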
func (daemon *Daemon) restore() error {
containers := make(map[string]*container.Container)

View File

@@ -54,11 +54,11 @@ import (
const (
// DefaultShimBinary is the default shim to be used by containerd if none
// is specified
DefaultShimBinary = "docker-containerd-shim"
DefaultShimBinary = "containerd-shim"
// DefaultRuntimeBinary is the default runtime to be used by
// containerd if none is specified
DefaultRuntimeBinary = "docker-runc"
DefaultRuntimeBinary = "runc"
// See https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/tree/kernel/sched/sched.h?id=8cd9234c64c584432f6992fe944ca9e46ca8ea76#n269
linuxMinCPUShares = 2
@@ -76,7 +76,7 @@ const (
// DefaultRuntimeName is the default runtime to be used by
// containerd if none is specified
DefaultRuntimeName = "docker-runc"
DefaultRuntimeName = "runc"
)
type containerGetter interface {

View File

@@ -323,7 +323,8 @@ func (daemon *Daemon) initNetworkController(config *config.Config, activeSandbox
// discover and add HNS networks to windows
// network that exist are removed and added again
for _, v := range hnsresponse {
if strings.ToLower(v.Type) == "private" {
networkTypeNorm := strings.ToLower(v.Type)
if networkTypeNorm == "private" || networkTypeNorm == "internal" {
continue // workaround for HNS reporting unsupported networks
}
var n libnetwork.Network

View File

@@ -27,7 +27,7 @@ type directLVMConfig struct {
var (
errThinpPercentMissing = errors.New("must set both `dm.thinp_percent` and `dm.thinp_metapercent` if either is specified")
errThinpPercentTooBig = errors.New("combined `dm.thinp_percent` and `dm.thinp_metapercent` must not be greater than 100")
errMissingSetupDevice = errors.New("must provide device path in `dm.setup_device` in order to configure direct-lvm")
errMissingSetupDevice = errors.New("must provide device path in `dm.directlvm_device` in order to configure direct-lvm")
)
func validateLVMConfig(cfg directLVMConfig) error {

View File

@@ -205,8 +205,6 @@ func (i *ImageService) LayerDiskUsage(ctx context.Context) (int64, error) {
if err == nil {
if _, ok := layerRefs[l.ChainID()]; ok {
allLayersSize += size
} else {
logrus.Warnf("found leaked image layer %v", l.ChainID())
}
} else {
logrus.Warnf("failed to get diff size for layer %v", l.ChainID())

View File

@@ -146,7 +146,8 @@ func (daemon *Daemon) filterByNameIDMatches(view container.View, ctx *listContex
continue
}
for _, eachName := range idNames {
if ctx.filters.Match("name", strings.TrimPrefix(eachName, "/")) {
// match on the container name both with and without the slash prefix
if ctx.filters.Match("name", eachName) || ctx.filters.Match("name", strings.TrimPrefix(eachName, "/")) {
matches[id] = true
}
}
@@ -429,7 +430,7 @@ func includeContainerInList(container *container.Snapshot, ctx *listContext) ite
}
// Do not include container if the name doesn't match
if !ctx.filters.Match("name", strings.TrimPrefix(container.Name, "/")) {
if !ctx.filters.Match("name", container.Name) && !ctx.filters.Match("name", strings.TrimPrefix(container.Name, "/")) {
return excludeContainer
}
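Note: a hedged illustration of why both checks are needed; container names are stored with a leading slash:

	f := filters.NewArgs(filters.Arg("name", "/b1"))
	f.Match("name", "/b1") // true: matched via the untrimmed name
	f.Match("name", "b1")  // false: the trimmed form alone would miss this filter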

View File

@@ -4,7 +4,6 @@ import (
"io/ioutil"
"os"
"path/filepath"
"strings"
"testing"
"github.com/docker/docker/api/types"
@@ -35,6 +34,7 @@ func TestMain(m *testing.M) {
// work against it. It takes in a pointer to Daemon so that
// minor operations are not repeated by the caller
func setupContainerWithName(t *testing.T, name string, daemon *Daemon) *container.Container {
t.Helper()
var (
id = uuid.New()
computedImageID = digest.FromString(id)
@@ -46,6 +46,9 @@ func setupContainerWithName(t *testing.T, name string, daemon *Daemon) *containe
c := container.NewBaseContainer(id, cRoot)
// these are for passing includeContainerInList
if name[0] != '/' {
name = "/" + name
}
c.Name = name
c.Running = true
c.HostConfig = &containertypes.HostConfig{}
@@ -68,7 +71,7 @@ func setupContainerWithName(t *testing.T, name string, daemon *Daemon) *containe
func containerListContainsName(containers []*types.Container, name string) bool {
for _, container := range containers {
for _, containerName := range container.Names {
if strings.TrimPrefix(containerName, "/") == name {
if containerName == name {
return true
}
}
@@ -110,16 +113,33 @@ func TestNameFilter(t *testing.T) {
containerList, err := d.Containers(&types.ContainerListOptions{
Filters: filters.NewArgs(filters.Arg("name", "^a")),
})
assert.Assert(t, err == nil)
assert.NilError(t, err)
assert.Assert(t, is.Len(containerList, 2))
assert.Assert(t, containerListContainsName(containerList, one.Name))
assert.Assert(t, containerListContainsName(containerList, two.Name))
// Same as above, but with a slash prefix; this should produce the same result
containerListWithPrefix, err := d.Containers(&types.ContainerListOptions{
Filters: filters.NewArgs(filters.Arg("name", "^/a")),
})
assert.NilError(t, err)
assert.Assert(t, is.Len(containerListWithPrefix, 2))
assert.Assert(t, containerListContainsName(containerListWithPrefix, one.Name))
assert.Assert(t, containerListContainsName(containerListWithPrefix, two.Name))
// Same as above but make sure it works for exact names
containerList, err = d.Containers(&types.ContainerListOptions{
Filters: filters.NewArgs(filters.Arg("name", "b1")),
})
assert.Assert(t, err == nil)
assert.NilError(t, err)
assert.Assert(t, is.Len(containerList, 1))
assert.Assert(t, containerListContainsName(containerList, three.Name))
// Same as above, but with a slash prefix; this should produce the same result
containerListWithPrefix, err = d.Containers(&types.ContainerListOptions{
Filters: filters.NewArgs(filters.Arg("name", "/b1")),
})
assert.NilError(t, err)
assert.Assert(t, is.Len(containerListWithPrefix, 1))
assert.Assert(t, containerListContainsName(containerListWithPrefix, three.Name))
}

View File

@@ -93,21 +93,12 @@ func (a *pluginAdapterWithRead) ReadLogs(config ReadConfig) *LogWatcher {
dec := logdriver.NewLogEntryDecoder(stream)
for {
select {
case <-watcher.WatchClose():
return
default:
}
var buf logdriver.LogEntry
if err := dec.Decode(&buf); err != nil {
if err == io.EOF {
return
}
select {
case watcher.Err <- errors.Wrap(err, "error decoding log message"):
case <-watcher.WatchClose():
}
watcher.Err <- errors.Wrap(err, "error decoding log message")
return
}
@@ -125,11 +116,10 @@ func (a *pluginAdapterWithRead) ReadLogs(config ReadConfig) *LogWatcher {
return
}
// send the message unless the consumer is gone
select {
case watcher.Msg <- msg:
case <-watcher.WatchClose():
// make sure the message we consumed is sent
watcher.Msg <- msg
case <-watcher.WatchConsumerGone():
return
}
}

View File

@@ -174,7 +174,7 @@ func TestAdapterReadLogs(t *testing.T) {
t.Fatal("timeout waiting for message channel to close")
}
lw.Close()
lw.ProducerGone()
lw = lr.ReadLogs(ReadConfig{Follow: true})
for _, x := range testMsg {

View File

@@ -165,7 +165,7 @@ func (s *journald) Close() error {
s.mu.Lock()
s.closed = true
for reader := range s.readers.readers {
reader.Close()
reader.ProducerGone()
}
s.mu.Unlock()
return nil
@@ -299,7 +299,7 @@ func (s *journald) followJournal(logWatcher *logger.LogWatcher, j *C.sd_journal,
// Wait until we're told to stop.
select {
case cursor = <-newCursor:
case <-logWatcher.WatchClose():
case <-logWatcher.WatchConsumerGone():
// Notify the other goroutine that its work is done.
C.close(pfd[1])
cursor = <-newCursor

View File

@@ -166,13 +166,14 @@ func ValidateLogOpt(cfg map[string]string) error {
return nil
}
// Close closes underlying file and signals all readers to stop.
// Close closes underlying file and signals all the readers
// that the logs producer is gone.
func (l *JSONFileLogger) Close() error {
l.mu.Lock()
l.closed = true
err := l.writer.Close()
for r := range l.readers {
r.Close()
r.ProducerGone()
delete(l.readers, r)
}
l.mu.Unlock()

View File

@@ -50,11 +50,10 @@ func BenchmarkJSONFileLoggerReadLogs(b *testing.B) {
}()
lw := jsonlogger.(*JSONFileLogger).ReadLogs(logger.ReadConfig{Follow: true})
watchClose := lw.WatchClose()
for {
select {
case <-lw.Msg:
case <-watchClose:
case <-lw.WatchProducerGone():
return
case err := <-chError:
if err != nil {

View File

@@ -166,7 +166,7 @@ func (d *driver) Close() error {
d.closed = true
err := d.logfile.Close()
for r := range d.readers {
r.Close()
r.ProducerGone()
delete(d.readers, r)
}
d.mu.Unlock()

View File

@@ -104,33 +104,50 @@ type LogWatcher struct {
// For sending log messages to a reader.
Msg chan *Message
// For sending error messages that occur while reading logs.
Err chan error
closeOnce sync.Once
closeNotifier chan struct{}
Err chan error
producerOnce sync.Once
producerGone chan struct{}
consumerOnce sync.Once
consumerGone chan struct{}
}
// NewLogWatcher returns a new LogWatcher.
func NewLogWatcher() *LogWatcher {
return &LogWatcher{
Msg: make(chan *Message, logWatcherBufferSize),
Err: make(chan error, 1),
closeNotifier: make(chan struct{}),
Msg: make(chan *Message, logWatcherBufferSize),
Err: make(chan error, 1),
producerGone: make(chan struct{}),
consumerGone: make(chan struct{}),
}
}
// Close notifies the underlying log reader to stop.
func (w *LogWatcher) Close() {
// ProducerGone notifies the underlying log reader that
// the logs producer (a container) is gone.
func (w *LogWatcher) ProducerGone() {
// only close if not already closed
w.closeOnce.Do(func() {
close(w.closeNotifier)
w.producerOnce.Do(func() {
close(w.producerGone)
})
}
// WatchClose returns a channel receiver that receives notification
// when the watcher has been closed. This should only be called from
// one goroutine.
func (w *LogWatcher) WatchClose() <-chan struct{} {
return w.closeNotifier
// WatchProducerGone returns a channel receiver that receives notification
// once the logs producer (a container) is gone.
func (w *LogWatcher) WatchProducerGone() <-chan struct{} {
return w.producerGone
}
// ConsumerGone notifies that the logs consumer is gone.
func (w *LogWatcher) ConsumerGone() {
// only close if not already closed
w.consumerOnce.Do(func() {
close(w.consumerGone)
})
}
// WatchConsumerGone returns a channel receiver that receives notification
// when the log watcher consumer is gone.
func (w *LogWatcher) WatchConsumerGone() <-chan struct{} {
return w.consumerGone
}
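Note: a hedged end-to-end sketch of the split shutdown signals; a log driver owns the producer side and an API reader the consumer side (compare the jsonfile Close and ContainerLogs changes elsewhere in this diff):

	func demo() {
		w := NewLogWatcher()
		go func() { // producer (log driver)
			w.Msg <- &Message{Line: []byte("hello")}
			w.ProducerGone() // the container exited; nothing more will be sent
		}()
		for { // consumer (API reader)
			select {
			case msg := <-w.Msg:
				_ = msg // forward to the client
			case <-w.WatchProducerGone():
				for { // drain what is already buffered, then stop
					select {
					case msg := <-w.Msg:
						_ = msg
					default:
						return
					}
				}
			}
		}
	}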
// Capability defines the list of capabilities that a driver can implement

View File

@@ -488,7 +488,7 @@ func tailFiles(files []SizeReaderAt, watcher *logger.LogWatcher, createDecoder m
go func() {
select {
case <-ctx.Done():
case <-watcher.WatchClose():
case <-watcher.WatchConsumerGone():
cancel()
}
}()
@@ -546,22 +546,9 @@ func followLogs(f *os.File, logWatcher *logger.LogWatcher, notifyRotate chan int
}
defer func() {
f.Close()
fileWatcher.Remove(name)
fileWatcher.Close()
}()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
go func() {
select {
case <-logWatcher.WatchClose():
fileWatcher.Remove(name)
cancel()
case <-ctx.Done():
return
}
}()
var retries int
handleRotate := func() error {
f.Close()
@@ -596,7 +583,9 @@ func followLogs(f *os.File, logWatcher *logger.LogWatcher, notifyRotate chan int
case fsnotify.Rename, fsnotify.Remove:
select {
case <-notifyRotate:
case <-ctx.Done():
case <-logWatcher.WatchProducerGone():
return errDone
case <-logWatcher.WatchConsumerGone():
return errDone
}
if err := handleRotate(); err != nil {
@@ -618,7 +607,9 @@ func followLogs(f *os.File, logWatcher *logger.LogWatcher, notifyRotate chan int
return errRetry
}
return err
case <-ctx.Done():
case <-logWatcher.WatchProducerGone():
return errDone
case <-logWatcher.WatchConsumerGone():
return errDone
}
}
@@ -664,23 +655,11 @@ func followLogs(f *os.File, logWatcher *logger.LogWatcher, notifyRotate chan int
if !until.IsZero() && msg.Timestamp.After(until) {
return
}
// send the message, unless the consumer is gone
select {
case logWatcher.Msg <- msg:
case <-ctx.Done():
logWatcher.Msg <- msg
for {
msg, err := decodeLogLine()
if err != nil {
return
}
if !since.IsZero() && msg.Timestamp.Before(since) {
continue
}
if !until.IsZero() && msg.Timestamp.After(until) {
return
}
logWatcher.Msg <- msg
}
case <-logWatcher.WatchConsumerGone():
return
}
}
}

View File

@@ -4,6 +4,8 @@ import (
"bufio"
"context"
"io"
"io/ioutil"
"os"
"strings"
"testing"
"time"
@@ -74,3 +76,128 @@ func TestTailFiles(t *testing.T) {
assert.Assert(t, string(msg.Line) == "Where we're going we don't need roads.", string(msg.Line))
}
}
func TestFollowLogsConsumerGone(t *testing.T) {
lw := logger.NewLogWatcher()
f, err := ioutil.TempFile("", t.Name())
assert.NilError(t, err)
defer func() {
f.Close()
os.Remove(f.Name())
}()
makeDecoder := func(rdr io.Reader) func() (*logger.Message, error) {
return func() (*logger.Message, error) {
return &logger.Message{}, nil
}
}
followLogsDone := make(chan struct{})
var since, until time.Time
go func() {
followLogs(f, lw, make(chan interface{}), makeDecoder, since, until)
close(followLogsDone)
}()
select {
case <-lw.Msg:
case err := <-lw.Err:
assert.NilError(t, err)
case <-followLogsDone:
t.Fatal("follow logs finished unexpectedly")
case <-time.After(10 * time.Second):
t.Fatal("timeout waiting for log message")
}
lw.ConsumerGone()
select {
case <-followLogsDone:
case <-time.After(20 * time.Second):
t.Fatal("timeout waiting for followLogs() to finish")
}
}
func TestFollowLogsProducerGone(t *testing.T) {
lw := logger.NewLogWatcher()
f, err := ioutil.TempFile("", t.Name())
assert.NilError(t, err)
defer os.Remove(f.Name())
var sent, received, closed int
makeDecoder := func(rdr io.Reader) func() (*logger.Message, error) {
return func() (*logger.Message, error) {
if closed == 1 {
closed++
t.Logf("logDecode() closed after sending %d messages\n", sent)
return nil, io.EOF
} else if closed > 1 {
t.Fatal("logDecode() called after closing!")
return nil, io.EOF
}
sent++
return &logger.Message{}, nil
}
}
var since, until time.Time
followLogsDone := make(chan struct{})
go func() {
followLogs(f, lw, make(chan interface{}), makeDecoder, since, until)
close(followLogsDone)
}()
// read 1 message
select {
case <-lw.Msg:
received++
case err := <-lw.Err:
assert.NilError(t, err)
case <-followLogsDone:
t.Fatal("followLogs() finished unexpectedly")
case <-time.After(10 * time.Second):
t.Fatal("timeout waiting for log message")
}
// "stop" the "container"
closed = 1
lw.ProducerGone()
// should receive all the messages sent
readDone := make(chan struct{})
go func() {
defer close(readDone)
for {
select {
case <-lw.Msg:
received++
if received == sent {
return
}
case err := <-lw.Err:
assert.NilError(t, err)
}
}
}()
select {
case <-readDone:
case <-time.After(30 * time.Second):
t.Fatalf("timeout waiting for log messages to be read (sent: %d, received: %d", sent, received)
}
t.Logf("messages sent: %d, received: %d", sent, received)
// followLogs() should be done by now
select {
case <-followLogsDone:
case <-time.After(30 * time.Second):
t.Fatal("timeout waiting for followLogs() to finish")
}
select {
case <-lw.WatchConsumerGone():
t.Fatal("consumer should not have exited")
default:
}
}

View File

@@ -110,14 +110,16 @@ func (daemon *Daemon) ContainerLogs(ctx context.Context, containerName string, c
}
}()
}
// set up some defers
defer logs.Close()
// signal that the log reader is gone
defer logs.ConsumerGone()
// close the messages channel. closing is the only way to signal above
// that we're done with logs (other than context cancel, I guess).
defer close(messageChan)
lg.Debug("begin logs")
defer lg.Debugf("end logs (%v)", ctx.Err())
for {
select {
// i do not believe as the system is currently designed any error
@@ -132,14 +134,12 @@ func (daemon *Daemon) ContainerLogs(ctx context.Context, containerName string, c
}
return
case <-ctx.Done():
lg.Debugf("logs: end stream, ctx is done: %v", ctx.Err())
return
case msg, ok := <-logs.Msg:
// there is some kind of pool or ring buffer in the logger that
// produces these messages, and a possible future optimization
// might be to use that pool and reuse message objects
if !ok {
lg.Debug("end logs")
return
}
m := msg.AsLogMessage() // just a pointer conversion, does not copy data

View File

@@ -45,6 +45,7 @@ func (daemon *Daemon) Reload(conf *config.Config) (err error) {
daemon.reloadDebug(conf, attributes)
daemon.reloadMaxConcurrentDownloadsAndUploads(conf, attributes)
daemon.reloadShutdownTimeout(conf, attributes)
daemon.reloadFeatures(conf, attributes)
if err := daemon.reloadClusterDiscovery(conf, attributes); err != nil {
return err
@@ -322,3 +323,13 @@ func (daemon *Daemon) reloadNetworkDiagnosticPort(conf *config.Config, attribute
return nil
}
// reloadFeatures updates configuration with enabled/disabled features
func (daemon *Daemon) reloadFeatures(conf *config.Config, attributes map[string]string) {
// update corresponding configuration
// note that we allow features option to be entirely unset
daemon.configStore.Features = conf.Features
// prepare reload event attributes with updatable configurations
attributes["features"] = fmt.Sprintf("%v", daemon.configStore.Features)
}

View File

@@ -17,7 +17,7 @@ import (
// TODO: this should use more of libtrust.LoadOrCreateTrustKey which may need
// a refactor or this function to be moved into libtrust
func loadOrCreateTrustKey(trustKeyPath string) (libtrust.PrivateKey, error) {
err := system.MkdirAll(filepath.Dir(trustKeyPath), 0700, "")
err := system.MkdirAll(filepath.Dir(trustKeyPath), 0755, "")
if err != nil {
return nil, err
}

View File

@@ -210,6 +210,8 @@ func (daemon *Daemon) registerMountPoints(container *container.Container, hostCo
mp.Name = v.Name
mp.Driver = v.Driver
// need to selinux-relabel local mounts
mp.Source = v.Mountpoint
if mp.Driver == volume.DefaultDriverName {
setBindModeIfNull(mp)
}

View File

@@ -30,7 +30,7 @@ install_containerd() {
mkdir -p ${PREFIX}
cp bin/containerd ${PREFIX}/docker-containerd
cp bin/containerd-shim ${PREFIX}/docker-containerd-shim
cp bin/ctr ${PREFIX}/docker-containerd-ctr
cp bin/containerd ${PREFIX}/containerd
cp bin/containerd-shim ${PREFIX}/containerd-shim
cp bin/ctr ${PREFIX}/ctr
}

View File

@@ -18,5 +18,5 @@ install_runc() {
fi
make BUILDTAGS="$RUNC_BUILDTAGS" "$target"
mkdir -p ${PREFIX}
cp runc ${PREFIX}/docker-runc
cp runc ${PREFIX}/runc
}

View File

@@ -1,9 +1,9 @@
#!/usr/bin/env bash
DOCKER_DAEMON_BINARY_NAME='dockerd'
DOCKER_RUNC_BINARY_NAME='docker-runc'
DOCKER_CONTAINERD_BINARY_NAME='docker-containerd'
DOCKER_CONTAINERD_CTR_BINARY_NAME='docker-containerd-ctr'
DOCKER_CONTAINERD_SHIM_BINARY_NAME='docker-containerd-shim'
DOCKER_RUNC_BINARY_NAME='runc'
DOCKER_CONTAINERD_BINARY_NAME='containerd'
DOCKER_CONTAINERD_CTR_BINARY_NAME='ctr'
DOCKER_CONTAINERD_SHIM_BINARY_NAME='containerd-shim'
DOCKER_PROXY_BINARY_NAME='docker-proxy'
DOCKER_INIT_BINARY_NAME='docker-init'

View File

@@ -112,7 +112,7 @@ error_on_leaked_containerd_shims() {
fi
leftovers=$(ps -ax -o pid,cmd |
awk '$2 == "docker-containerd-shim" && $4 ~ /.*\/bundles\/.*\/test-integration/ { print $1 }')
awk '$2 == "containerd-shim" && $4 ~ /.*\/bundles\/.*\/test-integration/ { print $1 }')
if [ -n "$leftovers" ]; then
ps aux
kill -9 $leftovers 2> /dev/null

View File

@@ -10,14 +10,14 @@ copy_binaries() {
if [ "$(go env GOOS)/$(go env GOARCH)" != "$(go env GOHOSTOS)/$(go env GOHOSTARCH)" ]; then
return
fi
if [ ! -x /usr/local/bin/docker-runc ]; then
if [ ! -x /usr/local/bin/runc ]; then
return
fi
echo "Copying nested executables into $dir"
for file in containerd containerd-shim containerd-ctr runc init proxy; do
cp -f `which "docker-$file"` "$dir/"
for file in containerd containerd-shim ctr runc docker-init docker-proxy; do
cp -f `which "$file"` "$dir/"
if [ "$hash" == "hash" ]; then
hash_files "$dir/docker-$file"
hash_files "$dir/$file"
fi
done
}

View File

@@ -32,7 +32,7 @@ const (
privateRegistryURL = registry.DefaultURL
// path to containerd's ctr binary
ctrBinary = "docker-containerd-ctr"
ctrBinary = "ctr"
// the docker daemon binary to use
dockerdBinary = "dockerd"

View File

@@ -1759,7 +1759,7 @@ func (s *DockerSuite) TestContainersAPICreateMountsValidation(c *check.C) {
Target: destPath}}},
msg: "source path does not exist",
// FIXME(vdemeester) fails into e2e, migrate to integration/container anyway
// msg: "bind mount source path does not exist: " + notExistPath,
// msg: "source path does not exist: " + notExistPath,
},
{
config: containertypes.Config{

View File

@@ -44,6 +44,8 @@ import (
"gotest.tools/icmd"
)
const containerdSocket = "/var/run/docker/containerd/containerd.sock"
// TestLegacyDaemonCommand test starting docker daemon using "deprecated" docker daemon
// command. Remove this test when we remove this.
func (s *DockerDaemonSuite) TestLegacyDaemonCommand(c *check.C) {
@@ -1449,7 +1451,7 @@ func (s *DockerDaemonSuite) TestCleanupMountsAfterDaemonAndContainerKill(c *chec
c.Assert(d.Kill(), check.IsNil)
// kill the container
icmd.RunCommand(ctrBinary, "--address", "/var/run/docker/containerd/docker-containerd.sock",
icmd.RunCommand(ctrBinary, "--address", containerdSocket,
"--namespace", moby_daemon.ContainersNamespace, "tasks", "kill", id).Assert(c, icmd.Success)
// restart daemon.
@@ -1971,7 +1973,7 @@ func (s *DockerDaemonSuite) TestDaemonRestartWithKilledRunningContainer(t *check
}
// kill the container
icmd.RunCommand(ctrBinary, "--address", "/var/run/docker/containerd/docker-containerd.sock",
icmd.RunCommand(ctrBinary, "--address", containerdSocket,
"--namespace", moby_daemon.ContainersNamespace, "tasks", "kill", cid).Assert(t, icmd.Success)
// Give time to containerd to process the command if we don't
@@ -2074,7 +2076,7 @@ func (s *DockerDaemonSuite) TestDaemonRestartWithUnpausedRunningContainer(t *che
// resume the container
result := icmd.RunCommand(
ctrBinary,
"--address", "/var/run/docker/containerd/docker-containerd.sock",
"--address", containerdSocket,
"--namespace", moby_daemon.ContainersNamespace,
"tasks", "resume", cid)
result.Assert(t, icmd.Success)
@@ -2409,7 +2411,7 @@ func (s *DockerDaemonSuite) TestRunWithRuntimeFromConfigFile(c *check.C) {
{
"runtimes": {
"oci": {
"path": "docker-runc"
"path": "runc"
},
"vm": {
"path": "/usr/local/bin/vm-manager",
@@ -2491,7 +2493,7 @@ func (s *DockerDaemonSuite) TestRunWithRuntimeFromConfigFile(c *check.C) {
"default-runtime": "vm",
"runtimes": {
"oci": {
"path": "docker-runc"
"path": "runc"
},
"vm": {
"path": "/usr/local/bin/vm-manager",
@@ -2517,7 +2519,7 @@ func (s *DockerDaemonSuite) TestRunWithRuntimeFromConfigFile(c *check.C) {
}
func (s *DockerDaemonSuite) TestRunWithRuntimeFromCommandLine(c *check.C) {
s.d.StartWithBusybox(c, "--add-runtime", "oci=docker-runc", "--add-runtime", "vm=/usr/local/bin/vm-manager")
s.d.StartWithBusybox(c, "--add-runtime", "oci=runc", "--add-runtime", "vm=/usr/local/bin/vm-manager")
// Run with default runtime
out, err := s.d.Cmd("run", "--rm", "busybox", "ls")
@@ -2564,7 +2566,7 @@ func (s *DockerDaemonSuite) TestRunWithRuntimeFromCommandLine(c *check.C) {
// Check that we can select a default runtime
s.d.Stop(c)
s.d.StartWithBusybox(c, "--default-runtime=vm", "--add-runtime", "oci=docker-runc", "--add-runtime", "vm=/usr/local/bin/vm-manager")
s.d.StartWithBusybox(c, "--default-runtime=vm", "--add-runtime", "oci=runc", "--add-runtime", "vm=/usr/local/bin/vm-manager")
out, err = s.d.Cmd("run", "--rm", "busybox", "ls")
c.Assert(err, check.NotNil, check.Commentf("%s", out))

View File

@@ -7,6 +7,7 @@ import (
"strings"
"testing"
"github.com/docker/docker/api/types"
dclient "github.com/docker/docker/client"
"github.com/docker/docker/internal/test/fakecontext"
"github.com/docker/docker/internal/test/request"
@@ -76,7 +77,7 @@ func TestBuildWithSession(t *testing.T) {
assert.Check(t, is.Contains(string(outBytes), "Successfully built"))
assert.Check(t, is.Equal(strings.Count(string(outBytes), "Using cache"), 4))
_, err = client.BuildCachePrune(context.TODO())
_, err = client.BuildCachePrune(context.TODO(), types.BuildCachePruneOptions{All: true})
assert.Check(t, err)
du, err = client.DiskUsage(context.TODO())

View File

@@ -423,6 +423,38 @@ RUN [ ! -f foo ]
assert.Check(t, is.Contains(out.String(), "Successfully built"))
}
// #37581
func TestBuildWithHugeFile(t *testing.T) {
ctx := context.TODO()
defer setupTest(t)()
dockerfile := `FROM busybox
# create a sparse file with size over 8GB
RUN for g in $(seq 0 8); do dd if=/dev/urandom of=rnd bs=1K count=1 seek=$((1024*1024*g)) status=none; done && \
ls -la rnd && du -sk rnd`
buf := bytes.NewBuffer(nil)
w := tar.NewWriter(buf)
writeTarRecord(t, w, "Dockerfile", dockerfile)
err := w.Close()
assert.NilError(t, err)
apiclient := testEnv.APIClient()
resp, err := apiclient.ImageBuild(ctx,
buf,
types.ImageBuildOptions{
Remove: true,
ForceRemove: true,
})
out := bytes.NewBuffer(nil)
assert.NilError(t, err)
_, err = io.Copy(out, resp.Body)
resp.Body.Close()
assert.NilError(t, err)
assert.Check(t, is.Contains(out.String(), "Successfully built"))
}
func writeTarRecord(t *testing.T, w *tar.Writer, fn, contents string) {
err := w.WriteHeader(&tar.Header{
Name: fn,

View File

@@ -345,6 +345,8 @@ func TestServiceWithDefaultAddressPoolInit(t *testing.T) {
out, err := cli.NetworkInspect(context.Background(), name, types.NetworkInspectOptions{})
assert.NilError(t, err)
t.Logf("%s: NetworkInspect: %+v", t.Name(), out)
assert.Assert(t, len(out.IPAM.Config) > 0)
assert.Equal(t, out.IPAM.Config[0].Subnet, "20.20.0.0/24")
err = cli.ServiceRemove(context.Background(), serviceID)

View File

@@ -38,6 +38,7 @@ type logT interface {
}
const defaultDockerdBinary = "dockerd"
const containerdSocket = "/var/run/docker/containerd/containerd.sock"
var errDaemonNotStarted = errors.New("daemon not started")
@@ -224,7 +225,7 @@ func (d *Daemon) StartWithLogFile(out *os.File, providedArgs ...string) error {
return errors.Wrapf(err, "[%s] could not find docker binary in $PATH", d.id)
}
args := append(d.GlobalFlags,
"--containerd", "/var/run/docker/containerd/docker-containerd.sock",
"--containerd", containerdSocket,
"--data-root", d.Root,
"--exec-root", d.execRoot,
"--pidfile", fmt.Sprintf("%s/docker.pid", d.Folder),

View File

@@ -27,8 +27,8 @@ const (
shutdownTimeout = 15 * time.Second
startupTimeout = 15 * time.Second
configFile = "containerd.toml"
binaryName = "docker-containerd"
pidFile = "docker-containerd.pid"
binaryName = "containerd"
pidFile = "containerd.pid"
)
type pluginConfigs struct {
@@ -43,7 +43,7 @@ type remote struct {
logger *logrus.Entry
daemonWaitCh chan struct{}
daemonStartCh chan struct{}
daemonStartCh chan error
daemonStopCh chan struct{}
rootDir string
@@ -72,7 +72,7 @@ func Start(ctx context.Context, rootDir, stateDir string, opts ...DaemonOpt) (Da
pluginConfs: pluginConfigs{make(map[string]interface{})},
daemonPid: -1,
logger: logrus.WithField("module", "libcontainerd"),
daemonStartCh: make(chan struct{}),
daemonStartCh: make(chan error, 1),
daemonStopCh: make(chan struct{}),
}
@@ -92,7 +92,10 @@ func Start(ctx context.Context, rootDir, stateDir string, opts ...DaemonOpt) (Da
select {
case <-time.After(startupTimeout):
return nil, errors.New("timeout waiting for containerd to start")
case <-r.daemonStartCh:
case err := <-r.daemonStartCh:
if err != nil {
return nil, err
}
}
return r, nil
@@ -245,25 +248,35 @@ func (r *remote) monitorDaemon(ctx context.Context) {
}()
for {
select {
case <-ctx.Done():
r.logger.Info("stopping healthcheck following graceful shutdown")
if client != nil {
client.Close()
if delay != nil {
select {
case <-ctx.Done():
r.logger.Info("stopping healthcheck following graceful shutdown")
if client != nil {
client.Close()
}
return
case <-delay:
}
return
case <-delay:
default:
}
if r.daemonPid == -1 {
if r.daemonWaitCh != nil {
<-r.daemonWaitCh
select {
case <-ctx.Done():
r.logger.Info("stopping containerd startup following graceful shutdown")
return
case <-r.daemonWaitCh:
}
}
os.RemoveAll(r.GRPC.Address)
if err := r.startContainerd(); err != nil {
r.logger.WithError(err).Error("failed starting containerd")
if !started {
r.daemonStartCh <- err
return
}
r.logger.WithError(err).Error("failed restarting containerd")
delay = time.After(50 * time.Millisecond)
continue
}
@@ -276,26 +289,28 @@ func (r *remote) monitorDaemon(ctx context.Context) {
}
}
tctx, cancel := context.WithTimeout(ctx, healthCheckTimeout)
_, err := client.IsServing(tctx)
cancel()
if err == nil {
if !started {
close(r.daemonStartCh)
started = true
if client != nil {
tctx, cancel := context.WithTimeout(ctx, healthCheckTimeout)
_, err := client.IsServing(tctx)
cancel()
if err == nil {
if !started {
close(r.daemonStartCh)
started = true
}
transientFailureCount = 0
delay = time.After(500 * time.Millisecond)
continue
}
transientFailureCount = 0
delay = time.After(500 * time.Millisecond)
continue
}
r.logger.WithError(err).WithField("binary", binaryName).Debug("daemon is not responding")
r.logger.WithError(err).WithField("binary", binaryName).Debug("daemon is not responding")
transientFailureCount++
if transientFailureCount < maxConnectionRetryCount || system.IsProcessAlive(r.daemonPid) {
delay = time.After(time.Duration(transientFailureCount) * 200 * time.Millisecond)
continue
transientFailureCount++
if transientFailureCount < maxConnectionRetryCount || system.IsProcessAlive(r.daemonPid) {
delay = time.After(time.Duration(transientFailureCount) * 200 * time.Millisecond)
continue
}
}
if system.IsProcessAlive(r.daemonPid) {
@@ -304,6 +319,7 @@ func (r *remote) monitorDaemon(ctx context.Context) {
}
client.Close()
client = nil
r.daemonPid = -1
delay = nil
transientFailureCount = 0

View File

@@ -11,8 +11,8 @@ import (
)
const (
sockFile = "docker-containerd.sock"
debugSockFile = "docker-containerd-debug.sock"
sockFile = "containerd.sock"
debugSockFile = "containerd-debug.sock"
)
func (r *remote) setDefaults() {

View File

@@ -7,8 +7,8 @@ import (
)
const (
grpcPipeName = `\\.\pipe\docker-containerd-containerd`
debugPipeName = `\\.\pipe\docker-containerd-debug`
grpcPipeName = `\\.\pipe\containerd-containerd`
debugPipeName = `\\.\pipe\containerd-debug`
)
func (r *remote) setDefaults() {

View File

@@ -54,6 +54,7 @@ func (w *filePoller) Add(name string) error {
}
fi, err := os.Stat(name)
if err != nil {
f.Close()
return err
}
@@ -61,6 +62,7 @@ func (w *filePoller) Add(name string) error {
w.watches = make(map[string]chan struct{})
}
if _, exists := w.watches[name]; exists {
f.Close()
return fmt.Errorf("watch exists")
}
chClose := make(chan struct{})
@@ -113,11 +115,10 @@ func (w *filePoller) Close() error {
return nil
}
w.closed = true
for name := range w.watches {
w.remove(name)
delete(w.watches, name)
}
w.closed = true
return nil
}
@@ -146,12 +147,11 @@ func (w *filePoller) sendErr(e error, chClose <-chan struct{}) error {
func (w *filePoller) watch(f *os.File, lastFi os.FileInfo, chClose chan struct{}) {
defer f.Close()
for {
time.Sleep(watchWaitTime)
select {
case <-time.After(watchWaitTime):
case <-chClose:
logrus.Debugf("watch for %s closed", f.Name())
return
default:
}
fi, err := os.Stat(f.Name())

View File

@@ -39,6 +39,10 @@ type Output interface {
type chanOutput chan<- Progress
func (out chanOutput) WriteProgress(p Progress) error {
// FIXME: workaround for panic in #37735
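// (hedged reading: recover() absorbs a "send on closed channel" panic when
// the progress reader has already shut down the channel)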
defer func() {
recover()
}()
out <- p
return nil
}

View File

@@ -1,7 +1,7 @@
# the following lines are in sorted order, FYI
github.com/Azure/go-ansiterm d6e3b3328b783f23731bc4d058875b0371ff8109
github.com/Microsoft/hcsshim v0.6.12
github.com/Microsoft/go-winio v0.4.9
github.com/Microsoft/hcsshim 44c060121b68e8bdc40b411beba551f3b4ee9e55
github.com/Microsoft/go-winio v0.4.10
github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a
github.com/go-check/check 4ed411733c5785b40214c70bce814c3a3a689609 https://github.com/cpuguy83/check.git
github.com/golang/gddo 9b12a26f3fbd7397dee4e20939ddca719d840d2a
@@ -26,7 +26,7 @@ github.com/imdario/mergo v0.3.6
golang.org/x/sync 1d60e4601c6fd243af51cc01ddf169918a5407ca
# buildkit
github.com/moby/buildkit 49906c62925ed429ec9174a0b6869982967f1a39
github.com/moby/buildkit 8f4dff0d16ea91cb43315d5f5aa4b27f4fe4e1f2
github.com/tonistiigi/fsutil b19464cd1b6a00773b4f2eb7acf9c30426f9df42
github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746
github.com/opentracing/opentracing-go 1361b9cd60be79c4c3a7fa9841b3c132e40066a7
@@ -47,7 +47,7 @@ github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372
github.com/hashicorp/go-sockaddr 6d291a969b86c4b633730bfc6b8b9d64c3aafed9
github.com/hashicorp/go-multierror fcdddc395df1ddf4247c69bd436e84cfa0733f7e
github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870
github.com/docker/libkv 1d8431073ae03cdaedb198a89722f3aab6d418ef
github.com/docker/libkv 458977154600b9f23984d9f4b82e79570b5ae12b
github.com/vishvananda/netns 604eaf189ee867d8c147fafc28def2394e878d25
github.com/vishvananda/netlink b2de5d10e38ecce8607e6b438b6d174f389a004e
@@ -59,13 +59,13 @@ github.com/coreos/etcd v3.2.1
github.com/coreos/go-semver v0.2.0
github.com/ugorji/go f1f1a805ed361a0e078bb537e4ea78cd37dcf065
github.com/hashicorp/consul v0.5.2
github.com/boltdb/bolt fff57c100f4dea1905678da7e90d92429dff2904
github.com/miekg/dns v1.0.7
github.com/ishidawataru/sctp 07191f837fedd2f13d1ec7b5f885f0f3ec54b1cb
go.etcd.io/bbolt v1.3.1-etcd.8
# get graph and distribution packages
github.com/docker/distribution 83389a148052d74ac602f5f1d62f86ff2f3c4aa5
github.com/vbatts/tar-split v0.10.2
github.com/vbatts/tar-split v0.11.0
github.com/opencontainers/go-digest v1.0.0-rc1
# get go-zfs packages
@@ -75,7 +75,7 @@ github.com/pborman/uuid v1.0
google.golang.org/grpc v1.12.0
# This does not need to match RUNC_COMMIT as it is used for helper packages but should be newer or equal
github.com/opencontainers/runc ad0f5255060d36872be04de22f8731f38ef2d7b1
github.com/opencontainers/runc 20aff4f0488c6d4b8df4d85b4f63f1f704c11abd
github.com/opencontainers/runtime-spec d810dbc60d8c5aeeb3d054bd1132fab2121968ce # v1.0.1-43-gd810dbc
github.com/opencontainers/image-spec v1.0.1
github.com/seccomp/libseccomp-golang 32f571b70023028bd57d9288c20efbcb237f3ce0
@@ -114,18 +114,19 @@ github.com/googleapis/gax-go v2.0.0
google.golang.org/genproto 694d95ba50e67b2e363f3483057db5d4910c18f9
# containerd
github.com/containerd/containerd v1.2.0-beta.0
github.com/containerd/containerd v1.2.0-rc.0
github.com/containerd/fifo 3d5202aec260678c48179c56f40e6f38a095738c
github.com/containerd/continuity d3c23511c1bf5851696cba83143d9cbcd666869b
github.com/containerd/continuity f44b615e492bdfb371aae2f76ec694d9da1db537
github.com/containerd/cgroups 5e610833b72089b37d0e615de9a92dfc043757c2
github.com/containerd/console 4d8a41f4ce5b9bae77c41786ea2458330f43f081
github.com/containerd/go-runc edcf3de1f4971445c42d61f20d506b30612aa031
github.com/containerd/console c12b1e7919c14469339a5d38f2f8ed9b64a9de23
github.com/containerd/go-runc 5a6d9f37cfa36b15efba46dc7ea349fa9b7143c3
github.com/containerd/typeurl a93fcdb778cd272c6e9b3028b2f42d813e785d40
github.com/containerd/ttrpc 94dde388801693c54f88a6596f713b51a8b30b2d
github.com/gogo/googleapis 08a7655d27152912db7aaf4f983275eaf8d128ef
github.com/containerd/cri 9f39e3289533fc228c5e5fcac0a6dbdd60c6047b # release/1.2 branch
# cluster
github.com/docker/swarmkit cfa742c8abe6f8e922f6e4e920153c408e7d9c3b
github.com/docker/swarmkit 3044c576a8a970d3079492b585054f29e96e27f1
github.com/gogo/protobuf v1.0.0
github.com/cloudflare/cfssl 1.3.2
github.com/fernet/fernet-go 1b2437bc582b3cfbb341ee5a29f8ef5b42912ff2

View File

@@ -303,7 +303,7 @@ func FileInfoFromHeader(hdr *tar.Header) (name string, size int64, fileInfo *win
if err != nil {
return "", 0, nil, err
}
fileInfo.FileAttributes = uintptr(attr)
fileInfo.FileAttributes = uint32(attr)
} else {
if hdr.Typeflag == tar.TypeDir {
fileInfo.FileAttributes |= syscall.FILE_ATTRIBUTE_DIRECTORY

View File

@@ -20,7 +20,8 @@ const (
// FileBasicInfo contains file access time and file attributes information.
type FileBasicInfo struct {
CreationTime, LastAccessTime, LastWriteTime, ChangeTime syscall.Filetime
FileAttributes uintptr // includes padding
FileAttributes uint32
pad uint32 // padding
}
// GetFileBasicInfo retrieves times and attributes for a file.

View File

@@ -1,12 +1,13 @@
# hcsshim
This package supports launching Windows Server containers from Go. It is
primarily used in the [Docker Engine](https://github.com/docker/docker) project,
but it can be freely used by other projects as well.
[![Build status](https://ci.appveyor.com/api/projects/status/nbcw28mnkqml0loa/branch/master?svg=true)](https://ci.appveyor.com/project/WindowsVirtualization/hcsshim/branch/master)
This package contains the Golang interface for using the Windows [Host Compute Service](https://blogs.technet.microsoft.com/virtualization/2017/01/27/introducing-the-host-compute-service-hcs/) (HCS) to launch and manage [Windows Containers](https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/). It also contains other helpers and functions for managing Windows Containers such as the Golang interface for the Host Network Service (HNS).
It is primarily used in the [Moby Project](https://github.com/moby/moby), but it can be freely used by other projects as well.
## Contributing
---------------
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.microsoft.com.
@@ -19,6 +20,11 @@ This project has adopted the [Microsoft Open Source Code of Conduct](https://ope
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Dependencies
This project requires Golang 1.9 or newer to build.
For system requirements to run this project, see the Microsoft docs on [Windows Container requirements](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/system-requirements).
## Reporting Security Issues
@@ -29,5 +35,7 @@ email to ensure we received your original message. Further information, includin
[MSRC PGP](https://technet.microsoft.com/en-us/security/dn606155) key, can be found in
the [Security TechCenter](https://technet.microsoft.com/en-us/security/default).
-------------------------------------------
For additional details, see [Report a Computer Security Vulnerability](https://technet.microsoft.com/en-us/security/ff852094.aspx) on Technet
---------------
Copyright (c) 2018 Microsoft Corp. All rights reserved.

View File

@@ -1,28 +0,0 @@
package hcsshim
import "github.com/sirupsen/logrus"
// ActivateLayer will find the layer with the given id and mount it's filesystem.
// For a read/write layer, the mounted filesystem will appear as a volume on the
// host, while a read-only layer is generally expected to be a no-op.
// An activated layer must later be deactivated via DeactivateLayer.
func ActivateLayer(info DriverInfo, id string) error {
title := "hcsshim::ActivateLayer "
logrus.Debugf(title+"Flavour %d ID %s", info.Flavour, id)
infop, err := convertDriverInfo(info)
if err != nil {
logrus.Error(err)
return err
}
err = activateLayer(&infop, id)
if err != nil {
err = makeErrorf(err, title, "id=%s flavour=%d", id, info.Flavour)
logrus.Error(err)
return err
}
logrus.Debugf(title+" - succeeded id=%s flavour=%d", id, info.Flavour)
return nil
}

View File

@@ -1,800 +1,192 @@
package hcsshim
import (
"encoding/json"
"fmt"
"os"
"sync"
"syscall"
"time"
"github.com/sirupsen/logrus"
"github.com/Microsoft/hcsshim/internal/hcs"
"github.com/Microsoft/hcsshim/internal/mergemaps"
"github.com/Microsoft/hcsshim/internal/schema1"
)
var (
defaultTimeout = time.Minute * 4
)
const (
pendingUpdatesQuery = `{ "PropertyTypes" : ["PendingUpdates"]}`
statisticsQuery = `{ "PropertyTypes" : ["Statistics"]}`
processListQuery = `{ "PropertyTypes" : ["ProcessList"]}`
mappedVirtualDiskQuery = `{ "PropertyTypes" : ["MappedVirtualDisk"]}`
)
type container struct {
handleLock sync.RWMutex
handle hcsSystem
id string
callbackNumber uintptr
}
// ContainerProperties holds the properties for a container and the processes running in that container
type ContainerProperties struct {
ID string `json:"Id"`
Name string
SystemType string
Owner string
SiloGUID string `json:"SiloGuid,omitempty"`
RuntimeID string `json:"RuntimeId,omitempty"`
IsRuntimeTemplate bool `json:",omitempty"`
RuntimeImagePath string `json:",omitempty"`
Stopped bool `json:",omitempty"`
ExitType string `json:",omitempty"`
AreUpdatesPending bool `json:",omitempty"`
ObRoot string `json:",omitempty"`
Statistics Statistics `json:",omitempty"`
ProcessList []ProcessListItem `json:",omitempty"`
MappedVirtualDiskControllers map[int]MappedVirtualDiskController `json:",omitempty"`
}
type ContainerProperties = schema1.ContainerProperties
// MemoryStats holds the memory statistics for a container
type MemoryStats struct {
UsageCommitBytes uint64 `json:"MemoryUsageCommitBytes,omitempty"`
UsageCommitPeakBytes uint64 `json:"MemoryUsageCommitPeakBytes,omitempty"`
UsagePrivateWorkingSetBytes uint64 `json:"MemoryUsagePrivateWorkingSetBytes,omitempty"`
}
type MemoryStats = schema1.MemoryStats
// ProcessorStats holds the processor statistics for a container
type ProcessorStats struct {
TotalRuntime100ns uint64 `json:",omitempty"`
RuntimeUser100ns uint64 `json:",omitempty"`
RuntimeKernel100ns uint64 `json:",omitempty"`
}
type ProcessorStats = schema1.ProcessorStats
// StorageStats holds the storage statistics for a container
type StorageStats struct {
ReadCountNormalized uint64 `json:",omitempty"`
ReadSizeBytes uint64 `json:",omitempty"`
WriteCountNormalized uint64 `json:",omitempty"`
WriteSizeBytes uint64 `json:",omitempty"`
}
type StorageStats = schema1.StorageStats
// NetworkStats holds the network statistics for a container
type NetworkStats struct {
BytesReceived uint64 `json:",omitempty"`
BytesSent uint64 `json:",omitempty"`
PacketsReceived uint64 `json:",omitempty"`
PacketsSent uint64 `json:",omitempty"`
DroppedPacketsIncoming uint64 `json:",omitempty"`
DroppedPacketsOutgoing uint64 `json:",omitempty"`
EndpointId string `json:",omitempty"`
InstanceId string `json:",omitempty"`
}
type NetworkStats = schema1.NetworkStats
// Statistics is the structure returned by a statistics call on a container
type Statistics struct {
Timestamp time.Time `json:",omitempty"`
ContainerStartTime time.Time `json:",omitempty"`
Uptime100ns uint64 `json:",omitempty"`
Memory MemoryStats `json:",omitempty"`
Processor ProcessorStats `json:",omitempty"`
Storage StorageStats `json:",omitempty"`
Network []NetworkStats `json:",omitempty"`
}
type Statistics = schema1.Statistics
// ProcessList is the structure of an item returned by a ProcessList call on a container
type ProcessListItem struct {
CreateTimestamp time.Time `json:",omitempty"`
ImageName string `json:",omitempty"`
KernelTime100ns uint64 `json:",omitempty"`
MemoryCommitBytes uint64 `json:",omitempty"`
MemoryWorkingSetPrivateBytes uint64 `json:",omitempty"`
MemoryWorkingSetSharedBytes uint64 `json:",omitempty"`
ProcessId uint32 `json:",omitempty"`
UserTime100ns uint64 `json:",omitempty"`
}
type ProcessListItem = schema1.ProcessListItem
// MappedVirtualDiskController is the structure of an item returned by a MappedVirtualDiskList call on a container
type MappedVirtualDiskController struct {
MappedVirtualDisks map[int]MappedVirtualDisk `json:",omitempty"`
}
type MappedVirtualDiskController = schema1.MappedVirtualDiskController
// Type of Request Support in ModifySystem
type RequestType string
type RequestType = schema1.RequestType
// Type of Resource Support in ModifySystem
type ResourceType string
type ResourceType = schema1.ResourceType
// RequestType const
const (
Add RequestType = "Add"
Remove RequestType = "Remove"
Network ResourceType = "Network"
Add = schema1.Add
Remove = schema1.Remove
Network = schema1.Network
)
// ResourceModificationRequestResponse is the structure used to send request to the container to modify the system
// Supported resource types are Network and Request Types are Add/Remove
type ResourceModificationRequestResponse struct {
Resource ResourceType `json:"ResourceType"`
Data interface{} `json:"Settings"`
Request RequestType `json:"RequestType,omitempty"`
type ResourceModificationRequestResponse = schema1.ResourceModificationRequestResponse
type container struct {
system *hcs.System
}
// createContainerAdditionalJSON is read from the environment at initialisation
// createComputeSystemAdditionalJSON is read from the environment at initialisation
// time. It allows an environment variable to define additional JSON which
// is merged in the CreateContainer call to HCS.
var createContainerAdditionalJSON string
// is merged in the CreateComputeSystem call to HCS.
var createContainerAdditionalJSON []byte
func init() {
createContainerAdditionalJSON = os.Getenv("HCSSHIM_CREATECONTAINER_ADDITIONALJSON")
createContainerAdditionalJSON = ([]byte)(os.Getenv("HCSSHIM_CREATECONTAINER_ADDITIONALJSON"))
}
// CreateContainer creates a new container with the given configuration but does not start it.
func CreateContainer(id string, c *ContainerConfig) (Container, error) {
return createContainerWithJSON(id, c, "")
}
// CreateContainerWithJSON creates a new container with the given configuration but does not start it.
// It is identical to CreateContainer except that optional additional JSON can be merged before passing to HCS.
func CreateContainerWithJSON(id string, c *ContainerConfig, additionalJSON string) (Container, error) {
return createContainerWithJSON(id, c, additionalJSON)
}
func createContainerWithJSON(id string, c *ContainerConfig, additionalJSON string) (Container, error) {
operation := "CreateContainer"
title := "HCSShim::" + operation
container := &container{
id: id,
fullConfig, err := mergemaps.MergeJSON(c, createContainerAdditionalJSON)
if err != nil {
return nil, fmt.Errorf("failed to merge additional JSON '%s': %s", createContainerAdditionalJSON, err)
}
configurationb, err := json.Marshal(c)
system, err := hcs.CreateComputeSystem(id, fullConfig)
if err != nil {
return nil, err
}
configuration := string(configurationb)
logrus.Debugf(title+" id=%s config=%s", id, configuration)
// Merge any additional JSON. Priority is given to what is passed in explicitly,
// falling back to what's set in the environment.
if additionalJSON == "" && createContainerAdditionalJSON != "" {
additionalJSON = createContainerAdditionalJSON
}
if additionalJSON != "" {
configurationMap := map[string]interface{}{}
if err := json.Unmarshal([]byte(configuration), &configurationMap); err != nil {
return nil, fmt.Errorf("failed to unmarshal %s: %s", configuration, err)
}
additionalMap := map[string]interface{}{}
if err := json.Unmarshal([]byte(additionalJSON), &additionalMap); err != nil {
return nil, fmt.Errorf("failed to unmarshal %s: %s", additionalJSON, err)
}
mergedMap := mergeMaps(additionalMap, configurationMap)
mergedJSON, err := json.Marshal(mergedMap)
if err != nil {
return nil, fmt.Errorf("failed to marshal merged configuration map %+v: %s", mergedMap, err)
}
configuration = string(mergedJSON)
logrus.Debugf(title+" id=%s merged config=%s", id, configuration)
}
var (
resultp *uint16
identity syscall.Handle
)
createError := hcsCreateComputeSystem(id, configuration, identity, &container.handle, &resultp)
if createError == nil || IsPending(createError) {
if err := container.registerCallback(); err != nil {
// Terminate the container if it still exists. We're okay to ignore a failure here.
container.Terminate()
return nil, makeContainerError(container, operation, "", err)
}
}
err = processAsyncHcsResult(createError, resultp, container.callbackNumber, hcsNotificationSystemCreateCompleted, &defaultTimeout)
if err != nil {
if err == ErrTimeout {
// Terminate the container if it still exists. We're okay to ignore a failure here.
container.Terminate()
}
return nil, makeContainerError(container, operation, configuration, err)
}
logrus.Debugf(title+" succeeded id=%s handle=%d", id, container.handle)
return container, nil
}
// mergeMaps recursively merges map `fromMap` into map `ToMap`. Any pre-existing values
// in ToMap are overwritten. Values in fromMap are added to ToMap.
// From http://stackoverflow.com/questions/40491438/merging-two-json-strings-in-golang
func mergeMaps(fromMap, ToMap interface{}) interface{} {
switch fromMap := fromMap.(type) {
case map[string]interface{}:
ToMap, ok := ToMap.(map[string]interface{})
if !ok {
return fromMap
}
for keyToMap, valueToMap := range ToMap {
if valueFromMap, ok := fromMap[keyToMap]; ok {
fromMap[keyToMap] = mergeMaps(valueFromMap, valueToMap)
} else {
fromMap[keyToMap] = valueToMap
}
}
case nil:
// merge(nil, map[string]interface{...}) -> map[string]interface{...}
ToMap, ok := ToMap.(map[string]interface{})
if ok {
return ToMap
}
}
return fromMap
return &container{system}, err
}
// OpenContainer opens an existing container by ID.
func OpenContainer(id string) (Container, error) {
operation := "OpenContainer"
title := "HCSShim::" + operation
logrus.Debugf(title+" id=%s", id)
container := &container{
id: id,
}
var (
handle hcsSystem
resultp *uint16
)
err := hcsOpenComputeSystem(id, &handle, &resultp)
err = processHcsResult(err, resultp)
system, err := hcs.OpenComputeSystem(id)
if err != nil {
return nil, makeContainerError(container, operation, "", err)
return nil, err
}
container.handle = handle
if err := container.registerCallback(); err != nil {
return nil, makeContainerError(container, operation, "", err)
}
logrus.Debugf(title+" succeeded id=%s handle=%d", id, handle)
return container, nil
return &container{system}, err
}
// GetContainers gets a list of the containers on the system that match the query
func GetContainers(q ComputeSystemQuery) ([]ContainerProperties, error) {
operation := "GetContainers"
title := "HCSShim::" + operation
queryb, err := json.Marshal(q)
if err != nil {
return nil, err
}
query := string(queryb)
logrus.Debugf(title+" query=%s", query)
var (
resultp *uint16
computeSystemsp *uint16
)
err = hcsEnumerateComputeSystems(query, &computeSystemsp, &resultp)
err = processHcsResult(err, resultp)
if err != nil {
return nil, err
}
if computeSystemsp == nil {
return nil, ErrUnexpectedValue
}
computeSystemsRaw := convertAndFreeCoTaskMemBytes(computeSystemsp)
computeSystems := []ContainerProperties{}
if err := json.Unmarshal(computeSystemsRaw, &computeSystems); err != nil {
return nil, err
}
logrus.Debugf(title + " succeeded")
return computeSystems, nil
return hcs.GetComputeSystems(q)
}
// Start synchronously starts the container.
func (container *container) Start() error {
container.handleLock.RLock()
defer container.handleLock.RUnlock()
operation := "Start"
title := "HCSShim::Container::" + operation
logrus.Debugf(title+" id=%s", container.id)
if container.handle == 0 {
return makeContainerError(container, operation, "", ErrAlreadyClosed)
}
var resultp *uint16
err := hcsStartComputeSystem(container.handle, "", &resultp)
err = processAsyncHcsResult(err, resultp, container.callbackNumber, hcsNotificationSystemStartCompleted, &defaultTimeout)
if err != nil {
return makeContainerError(container, operation, "", err)
}
logrus.Debugf(title+" succeeded id=%s", container.id)
return nil
return convertSystemError(container.system.Start(), container)
}
// Shutdown requests a container shutdown, if IsPending() on the error returned is true,
// it may not actually be shut down until Wait() succeeds.
// Shutdown requests a container shutdown, but it may not actually be shut down until Wait() succeeds.
func (container *container) Shutdown() error {
container.handleLock.RLock()
defer container.handleLock.RUnlock()
operation := "Shutdown"
title := "HCSShim::Container::" + operation
logrus.Debugf(title+" id=%s", container.id)
if container.handle == 0 {
return makeContainerError(container, operation, "", ErrAlreadyClosed)
}
var resultp *uint16
err := hcsShutdownComputeSystem(container.handle, "", &resultp)
err = processHcsResult(err, resultp)
if err != nil {
return makeContainerError(container, operation, "", err)
}
logrus.Debugf(title+" succeeded id=%s", container.id)
return nil
return convertSystemError(container.system.Shutdown(), container)
}
// Terminate requests a container terminate, if IsPending() on the error returned is true,
// it may not actually be shut down until Wait() succeeds.
// Terminate requests a container terminate, but it may not actually be terminated until Wait() succeeds.
func (container *container) Terminate() error {
container.handleLock.RLock()
defer container.handleLock.RUnlock()
operation := "Terminate"
title := "HCSShim::Container::" + operation
logrus.Debugf(title+" id=%s", container.id)
if container.handle == 0 {
return makeContainerError(container, operation, "", ErrAlreadyClosed)
}
var resultp *uint16
err := hcsTerminateComputeSystem(container.handle, "", &resultp)
err = processHcsResult(err, resultp)
if err != nil {
return makeContainerError(container, operation, "", err)
}
logrus.Debugf(title+" succeeded id=%s", container.id)
return nil
return convertSystemError(container.system.Terminate(), container)
}
// Wait synchronously waits for the container to shut down or terminate.
func (container *container) Wait() error {
operation := "Wait"
title := "HCSShim::Container::" + operation
logrus.Debugf(title+" id=%s", container.id)
err := waitForNotification(container.callbackNumber, hcsNotificationSystemExited, nil)
if err != nil {
return makeContainerError(container, operation, "", err)
}
logrus.Debugf(title+" succeeded id=%s", container.id)
return nil
return convertSystemError(container.system.Wait(), container)
}
// WaitTimeout synchronously waits for the container to terminate or the duration to elapse.
// If the timeout expires, IsTimeout(err) == true
func (container *container) WaitTimeout(timeout time.Duration) error {
operation := "WaitTimeout"
title := "HCSShim::Container::" + operation
logrus.Debugf(title+" id=%s", container.id)
err := waitForNotification(container.callbackNumber, hcsNotificationSystemExited, &timeout)
if err != nil {
return makeContainerError(container, operation, "", err)
}
logrus.Debugf(title+" succeeded id=%s", container.id)
return nil
// WaitTimeout synchronously waits for the container to terminate or the duration to elapse. It
// returns an error satisfying IsTimeout if the deadline passes.
func (container *container) WaitTimeout(t time.Duration) error {
return convertSystemError(container.system.WaitTimeout(t), container)
}
func (container *container) properties(query string) (*ContainerProperties, error) {
var (
resultp *uint16
propertiesp *uint16
)
err := hcsGetComputeSystemProperties(container.handle, query, &propertiesp, &resultp)
err = processHcsResult(err, resultp)
if err != nil {
return nil, err
}
// Pause pauses the execution of a container.
func (container *container) Pause() error {
return convertSystemError(container.system.Pause(), container)
}
if propertiesp == nil {
return nil, ErrUnexpectedValue
}
propertiesRaw := convertAndFreeCoTaskMemBytes(propertiesp)
properties := &ContainerProperties{}
if err := json.Unmarshal(propertiesRaw, properties); err != nil {
return nil, err
}
return properties, nil
// Resume resumes the execution of a container.
func (container *container) Resume() error {
return convertSystemError(container.system.Resume(), container)
}
// HasPendingUpdates returns true if the container has updates pending to install
func (container *container) HasPendingUpdates() (bool, error) {
container.handleLock.RLock()
defer container.handleLock.RUnlock()
operation := "HasPendingUpdates"
title := "HCSShim::Container::" + operation
logrus.Debugf(title+" id=%s", container.id)
if container.handle == 0 {
return false, makeContainerError(container, operation, "", ErrAlreadyClosed)
}
properties, err := container.properties(pendingUpdatesQuery)
if err != nil {
return false, makeContainerError(container, operation, "", err)
}
logrus.Debugf(title+" succeeded id=%s", container.id)
return properties.AreUpdatesPending, nil
return false, nil
}
// Statistics returns statistics for the container
// Statistics returns statistics for the container. This is a legacy v1 call
func (container *container) Statistics() (Statistics, error) {
container.handleLock.RLock()
defer container.handleLock.RUnlock()
operation := "Statistics"
title := "HCSShim::Container::" + operation
logrus.Debugf(title+" id=%s", container.id)
if container.handle == 0 {
return Statistics{}, makeContainerError(container, operation, "", ErrAlreadyClosed)
}
properties, err := container.properties(statisticsQuery)
properties, err := container.system.Properties(schema1.PropertyTypeStatistics)
if err != nil {
return Statistics{}, makeContainerError(container, operation, "", err)
return Statistics{}, convertSystemError(err, container)
}
logrus.Debugf(title+" succeeded id=%s", container.id)
return properties.Statistics, nil
}
// ProcessList returns an array of ProcessListItems for the container
// ProcessList returns an array of ProcessListItems for the container. This is a legacy v1 call
func (container *container) ProcessList() ([]ProcessListItem, error) {
container.handleLock.RLock()
defer container.handleLock.RUnlock()
operation := "ProcessList"
title := "HCSShim::Container::" + operation
logrus.Debugf(title+" id=%s", container.id)
if container.handle == 0 {
return nil, makeContainerError(container, operation, "", ErrAlreadyClosed)
}
properties, err := container.properties(processListQuery)
properties, err := container.system.Properties(schema1.PropertyTypeProcessList)
if err != nil {
return nil, makeContainerError(container, operation, "", err)
return nil, convertSystemError(err, container)
}
logrus.Debugf(title+" succeeded id=%s", container.id)
return properties.ProcessList, nil
}
// MappedVirtualDisks returns a map of the controllers and the disks mapped
// to a container.
//
// Example of JSON returned by the query.
//{
// "Id":"1126e8d7d279c707a666972a15976371d365eaf622c02cea2c442b84f6f550a3_svm",
// "SystemType":"Container",
// "RuntimeOsType":"Linux",
// "RuntimeId":"00000000-0000-0000-0000-000000000000",
// "State":"Running",
// "MappedVirtualDiskControllers":{
// "0":{
// "MappedVirtualDisks":{
// "2":{
// "HostPath":"C:\\lcow\\lcow\\scratch\\1126e8d7d279c707a666972a15976371d365eaf622c02cea2c442b84f6f550a3.vhdx",
// "ContainerPath":"/mnt/gcs/LinuxServiceVM/scratch",
// "Lun":2,
// "CreateInUtilityVM":true
// },
// "3":{
// "HostPath":"C:\\lcow\\lcow\\1126e8d7d279c707a666972a15976371d365eaf622c02cea2c442b84f6f550a3\\sandbox.vhdx",
// "Lun":3,
// "CreateInUtilityVM":true,
// "AttachOnly":true
// }
// }
// }
// }
//}
// This is a legacy v1 call
func (container *container) MappedVirtualDisks() (map[int]MappedVirtualDiskController, error) {
container.handleLock.RLock()
defer container.handleLock.RUnlock()
operation := "MappedVirtualDiskList"
title := "HCSShim::Container::" + operation
logrus.Debugf(title+" id=%s", container.id)
if container.handle == 0 {
return nil, makeContainerError(container, operation, "", ErrAlreadyClosed)
}
properties, err := container.properties(mappedVirtualDiskQuery)
properties, err := container.system.Properties(schema1.PropertyTypeMappedVirtualDisk)
if err != nil {
return nil, makeContainerError(container, operation, "", err)
return nil, convertSystemError(err, container)
}
logrus.Debugf(title+" succeeded id=%s", container.id)
return properties.MappedVirtualDiskControllers, nil
}
// Pause pauses the execution of the container. This feature is not enabled in TP5.
func (container *container) Pause() error {
container.handleLock.RLock()
defer container.handleLock.RUnlock()
operation := "Pause"
title := "HCSShim::Container::" + operation
logrus.Debugf(title+" id=%s", container.id)
if container.handle == 0 {
return makeContainerError(container, operation, "", ErrAlreadyClosed)
}
var resultp *uint16
err := hcsPauseComputeSystem(container.handle, "", &resultp)
err = processAsyncHcsResult(err, resultp, container.callbackNumber, hcsNotificationSystemPauseCompleted, &defaultTimeout)
if err != nil {
return makeContainerError(container, operation, "", err)
}
logrus.Debugf(title+" succeeded id=%s", container.id)
return nil
}
// Resume resumes the execution of the container. This feature is not enabled in TP5.
func (container *container) Resume() error {
container.handleLock.RLock()
defer container.handleLock.RUnlock()
operation := "Resume"
title := "HCSShim::Container::" + operation
logrus.Debugf(title+" id=%s", container.id)
if container.handle == 0 {
return makeContainerError(container, operation, "", ErrAlreadyClosed)
}
var resultp *uint16
err := hcsResumeComputeSystem(container.handle, "", &resultp)
err = processAsyncHcsResult(err, resultp, container.callbackNumber, hcsNotificationSystemResumeCompleted, &defaultTimeout)
if err != nil {
return makeContainerError(container, operation, "", err)
}
logrus.Debugf(title+" succeeded id=%s", container.id)
return nil
}
// CreateProcess launches a new process within the container.
func (container *container) CreateProcess(c *ProcessConfig) (Process, error) {
container.handleLock.RLock()
defer container.handleLock.RUnlock()
operation := "CreateProcess"
title := "HCSShim::Container::" + operation
var (
processInfo hcsProcessInformation
processHandle hcsProcess
resultp *uint16
)
if container.handle == 0 {
return nil, makeContainerError(container, operation, "", ErrAlreadyClosed)
}
// If we are not emulating a console, ignore any console size passed to us
if !c.EmulateConsole {
c.ConsoleSize[0] = 0
c.ConsoleSize[1] = 0
}
configurationb, err := json.Marshal(c)
p, err := container.system.CreateProcess(c)
if err != nil {
return nil, makeContainerError(container, operation, "", err)
return nil, convertSystemError(err, container)
}
configuration := string(configurationb)
logrus.Debugf(title+" id=%s config=%s", container.id, configuration)
err = hcsCreateProcess(container.handle, configuration, &processInfo, &processHandle, &resultp)
err = processHcsResult(err, resultp)
if err != nil {
return nil, makeContainerError(container, operation, configuration, err)
}
process := &process{
handle: processHandle,
processID: int(processInfo.ProcessId),
container: container,
cachedPipes: &cachedPipes{
stdIn: processInfo.StdInput,
stdOut: processInfo.StdOutput,
stdErr: processInfo.StdError,
},
}
if err := process.registerCallback(); err != nil {
return nil, makeContainerError(container, operation, "", err)
}
logrus.Debugf(title+" succeeded id=%s processid=%d", container.id, process.processID)
return process, nil
return &process{p}, nil
}
// OpenProcess gets an interface to an existing process within the container.
func (container *container) OpenProcess(pid int) (Process, error) {
container.handleLock.RLock()
defer container.handleLock.RUnlock()
operation := "OpenProcess"
title := "HCSShim::Container::" + operation
logrus.Debugf(title+" id=%s, processid=%d", container.id, pid)
var (
processHandle hcsProcess
resultp *uint16
)
if container.handle == 0 {
return nil, makeContainerError(container, operation, "", ErrAlreadyClosed)
}
err := hcsOpenProcess(container.handle, uint32(pid), &processHandle, &resultp)
err = processHcsResult(err, resultp)
p, err := container.system.OpenProcess(pid)
if err != nil {
return nil, makeContainerError(container, operation, "", err)
return nil, convertSystemError(err, container)
}
process := &process{
handle: processHandle,
processID: pid,
container: container,
}
if err := process.registerCallback(); err != nil {
return nil, makeContainerError(container, operation, "", err)
}
logrus.Debugf(title+" succeeded id=%s processid=%s", container.id, process.processID)
return process, nil
return &process{p}, nil
}
// Close cleans up any state associated with the container but does not terminate or wait for it.
func (container *container) Close() error {
container.handleLock.Lock()
defer container.handleLock.Unlock()
operation := "Close"
title := "HCSShim::Container::" + operation
logrus.Debugf(title+" id=%s", container.id)
// Don't double free this
if container.handle == 0 {
return nil
}
if err := container.unregisterCallback(); err != nil {
return makeContainerError(container, operation, "", err)
}
if err := hcsCloseComputeSystem(container.handle); err != nil {
return makeContainerError(container, operation, "", err)
}
container.handle = 0
logrus.Debugf(title+" succeeded id=%s", container.id)
return nil
return convertSystemError(container.system.Close(), container)
}
func (container *container) registerCallback() error {
context := &notifcationWatcherContext{
channels: newChannels(),
}
callbackMapLock.Lock()
callbackNumber := nextCallback
nextCallback++
callbackMap[callbackNumber] = context
callbackMapLock.Unlock()
var callbackHandle hcsCallback
err := hcsRegisterComputeSystemCallback(container.handle, notificationWatcherCallback, callbackNumber, &callbackHandle)
if err != nil {
return err
}
context.handle = callbackHandle
container.callbackNumber = callbackNumber
return nil
}
func (container *container) unregisterCallback() error {
callbackNumber := container.callbackNumber
callbackMapLock.RLock()
context := callbackMap[callbackNumber]
callbackMapLock.RUnlock()
if context == nil {
return nil
}
handle := context.handle
if handle == 0 {
return nil
}
// hcsUnregisterComputeSystemCallback has its own synchronization
// to wait for all callbacks to complete. We must NOT hold the callbackMapLock.
err := hcsUnregisterComputeSystemCallback(handle)
if err != nil {
return err
}
closeChannels(context.channels)
callbackMapLock.Lock()
callbackMap[callbackNumber] = nil
callbackMapLock.Unlock()
handle = 0
return nil
}
// Modifies the System by sending a request to HCS
// Modify the System
func (container *container) Modify(config *ResourceModificationRequestResponse) error {
container.handleLock.RLock()
defer container.handleLock.RUnlock()
operation := "Modify"
title := "HCSShim::Container::" + operation
if container.handle == 0 {
return makeContainerError(container, operation, "", ErrAlreadyClosed)
}
requestJSON, err := json.Marshal(config)
if err != nil {
return err
}
requestString := string(requestJSON)
logrus.Debugf(title+" id=%s request=%s", container.id, requestString)
var resultp *uint16
err = hcsModifyComputeSystem(container.handle, requestString, &resultp)
err = processHcsResult(err, resultp)
if err != nil {
return makeContainerError(container, operation, "", err)
}
logrus.Debugf(title+" succeeded id=%s", container.id)
return nil
return convertSystemError(container.system.Modify(config), container)
}


@@ -1,27 +0,0 @@
package hcsshim
import "github.com/sirupsen/logrus"
// CreateLayer creates a new, empty, read-only layer on the filesystem based on
// the parent layer provided.
func CreateLayer(info DriverInfo, id, parent string) error {
title := "hcsshim::CreateLayer "
logrus.Debugf(title+"Flavour %d ID %s parent %s", info.Flavour, id, parent)
// Convert info to API calling convention
infop, err := convertDriverInfo(info)
if err != nil {
logrus.Error(err)
return err
}
err = createLayer(&infop, id, parent)
if err != nil {
err = makeErrorf(err, title, "id=%s parent=%s flavour=%d", id, parent, info.Flavour)
logrus.Error(err)
return err
}
logrus.Debugf(title+" - succeeded id=%s parent=%s flavour=%d", id, parent, info.Flavour)
return nil
}


@@ -1,35 +0,0 @@
package hcsshim
import "github.com/sirupsen/logrus"
// CreateSandboxLayer creates and populates new read-write layer for use by a container.
// This requires both the id of the direct parent layer, as well as the full list
// of paths to all parent layers up to the base (and including the direct parent
// whose id was provided).
func CreateSandboxLayer(info DriverInfo, layerId, parentId string, parentLayerPaths []string) error {
title := "hcsshim::CreateSandboxLayer "
logrus.Debugf(title+"layerId %s parentId %s", layerId, parentId)
// Generate layer descriptors
layers, err := layerPathsToDescriptors(parentLayerPaths)
if err != nil {
return err
}
// Convert info to API calling convention
infop, err := convertDriverInfo(info)
if err != nil {
logrus.Error(err)
return err
}
err = createSandboxLayer(&infop, layerId, parentId, layers)
if err != nil {
err = makeErrorf(err, title, "layerId=%s parentId=%s", layerId, parentId)
logrus.Error(err)
return err
}
logrus.Debugf(title+"- succeeded layerId=%s parentId=%s", layerId, parentId)
return nil
}


@@ -1,26 +0,0 @@
package hcsshim
import "github.com/sirupsen/logrus"
// DeactivateLayer will dismount a layer that was mounted via ActivateLayer.
func DeactivateLayer(info DriverInfo, id string) error {
title := "hcsshim::DeactivateLayer "
logrus.Debugf(title+"Flavour %d ID %s", info.Flavour, id)
// Convert info to API calling convention
infop, err := convertDriverInfo(info)
if err != nil {
logrus.Error(err)
return err
}
err = deactivateLayer(&infop, id)
if err != nil {
err = makeErrorf(err, title, "id=%s flavour=%d", id, info.Flavour)
logrus.Error(err)
return err
}
logrus.Debugf(title+"succeeded flavour=%d id=%s", info.Flavour, id)
return nil
}


@@ -1,27 +0,0 @@
package hcsshim
import "github.com/sirupsen/logrus"
// DestroyLayer will remove the on-disk files representing the layer with the given
// id, including that layer's containing folder, if any.
func DestroyLayer(info DriverInfo, id string) error {
title := "hcsshim::DestroyLayer "
logrus.Debugf(title+"Flavour %d ID %s", info.Flavour, id)
// Convert info to API calling convention
infop, err := convertDriverInfo(info)
if err != nil {
logrus.Error(err)
return err
}
err = destroyLayer(&infop, id)
if err != nil {
err = makeErrorf(err, title, "id=%s flavour=%d", id, info.Flavour)
logrus.Error(err)
return err
}
logrus.Debugf(title+"succeeded flavour=%d id=%s", info.Flavour, id)
return nil
}


@@ -1,92 +1,83 @@
package hcsshim
import (
"errors"
"fmt"
"syscall"
"github.com/Microsoft/hcsshim/internal/hns"
"github.com/Microsoft/hcsshim/internal/hcs"
"github.com/Microsoft/hcsshim/internal/hcserror"
)
var (
// ErrComputeSystemDoesNotExist is an error encountered when the container being operated on no longer exists
ErrComputeSystemDoesNotExist = syscall.Errno(0xc037010e)
ErrComputeSystemDoesNotExist = hcs.ErrComputeSystemDoesNotExist
// ErrElementNotFound is an error encountered when the object being referenced does not exist
ErrElementNotFound = syscall.Errno(0x490)
ErrElementNotFound = hcs.ErrElementNotFound
// ErrNotSupported is an error encountered when the requested operation is not supported
ErrNotSupported = syscall.Errno(0x32)
ErrNotSupported = hcs.ErrNotSupported
// ErrInvalidData is an error encountered when the request being sent to hcs is invalid/unsupported
// decimal -2147024883 / hex 0x8007000d
ErrInvalidData = syscall.Errno(0xd)
ErrInvalidData = hcs.ErrInvalidData
// ErrHandleClose is an error encountered when the handle generating the notification being waited on has been closed
ErrHandleClose = errors.New("hcsshim: the handle generating this notification has been closed")
ErrHandleClose = hcs.ErrHandleClose
// ErrAlreadyClosed is an error encountered when using a handle that has been closed by the Close method
ErrAlreadyClosed = errors.New("hcsshim: the handle has already been closed")
ErrAlreadyClosed = hcs.ErrAlreadyClosed
// ErrInvalidNotificationType is an error encountered when an invalid notification type is used
ErrInvalidNotificationType = errors.New("hcsshim: invalid notification type")
ErrInvalidNotificationType = hcs.ErrInvalidNotificationType
// ErrInvalidProcessState is an error encountered when the process is not in a valid state for the requested operation
ErrInvalidProcessState = errors.New("the process is in an invalid state for the attempted operation")
ErrInvalidProcessState = hcs.ErrInvalidProcessState
// ErrTimeout is an error encountered when waiting on a notification times out
ErrTimeout = errors.New("hcsshim: timeout waiting for notification")
ErrTimeout = hcs.ErrTimeout
// ErrUnexpectedContainerExit is the error encountered when a container exits while waiting for
// a different expected notification
ErrUnexpectedContainerExit = errors.New("unexpected container exit")
ErrUnexpectedContainerExit = hcs.ErrUnexpectedContainerExit
// ErrUnexpectedProcessAbort is the error encountered when communication with the compute service
// is lost while waiting for a notification
ErrUnexpectedProcessAbort = errors.New("lost communication with compute service")
ErrUnexpectedProcessAbort = hcs.ErrUnexpectedProcessAbort
// ErrUnexpectedValue is an error encountered when hcs returns an invalid value
ErrUnexpectedValue = errors.New("unexpected value returned from hcs")
ErrUnexpectedValue = hcs.ErrUnexpectedValue
// ErrVmcomputeAlreadyStopped is an error encountered when a shutdown or terminate request is made on a stopped container
ErrVmcomputeAlreadyStopped = syscall.Errno(0xc0370110)
ErrVmcomputeAlreadyStopped = hcs.ErrVmcomputeAlreadyStopped
// ErrVmcomputeOperationPending is an error encountered when the operation is being completed asynchronously
ErrVmcomputeOperationPending = syscall.Errno(0xC0370103)
ErrVmcomputeOperationPending = hcs.ErrVmcomputeOperationPending
// ErrVmcomputeOperationInvalidState is an error encountered when the compute system is not in a valid state for the requested operation
ErrVmcomputeOperationInvalidState = syscall.Errno(0xc0370105)
ErrVmcomputeOperationInvalidState = hcs.ErrVmcomputeOperationInvalidState
// ErrProcNotFound is an error encountered when the process cannot be found
ErrProcNotFound = syscall.Errno(0x7f)
ErrProcNotFound = hcs.ErrProcNotFound
// ErrVmcomputeOperationAccessIsDenied is an error which can be encountered when enumerating compute systems in RS1/RS2
// builds when the underlying silo might be in the process of terminating. HCS was fixed in RS3.
ErrVmcomputeOperationAccessIsDenied = syscall.Errno(0x5)
ErrVmcomputeOperationAccessIsDenied = hcs.ErrVmcomputeOperationAccessIsDenied
// ErrVmcomputeInvalidJSON is an error encountered when the compute system does not support/understand the messages sent by management
ErrVmcomputeInvalidJSON = syscall.Errno(0xc037010d)
ErrVmcomputeInvalidJSON = hcs.ErrVmcomputeInvalidJSON
// ErrVmcomputeUnknownMessage is an error encountered when the guest compute system doesn't support the message
ErrVmcomputeUnknownMessage = syscall.Errno(0xc037010b)
ErrVmcomputeUnknownMessage = hcs.ErrVmcomputeUnknownMessage
// ErrPlatformNotSupported is an error encountered when hcs doesn't support the request
ErrPlatformNotSupported = errors.New("unsupported platform request")
ErrPlatformNotSupported = hcs.ErrPlatformNotSupported
)
type EndpointNotFoundError struct {
EndpointName string
}
func (e EndpointNotFoundError) Error() string {
return fmt.Sprintf("Endpoint %s not found", e.EndpointName)
}
type NetworkNotFoundError struct {
NetworkName string
}
func (e NetworkNotFoundError) Error() string {
return fmt.Sprintf("Network %s not found", e.NetworkName)
}
type EndpointNotFoundError = hns.EndpointNotFoundError
type NetworkNotFoundError = hns.NetworkNotFoundError
// ProcessError is an error encountered in HCS during an operation on a Process object
type ProcessError struct {
@@ -94,6 +85,7 @@ type ProcessError struct {
Operation string
ExtraInfo string
Err error
Events []hcs.ErrorEvent
}
// ContainerError is an error encountered in HCS during an operation on a Container object
@@ -102,6 +94,7 @@ type ContainerError struct {
Operation string
ExtraInfo string
Err error
Events []hcs.ErrorEvent
}
func (e *ContainerError) Error() string {
@@ -113,7 +106,7 @@ func (e *ContainerError) Error() string {
return "unexpected nil container for error: " + e.Err.Error()
}
s := "container " + e.Container.id
s := "container " + e.Container.system.ID()
if e.Operation != "" {
s += " encountered an error during " + e.Operation
@@ -123,11 +116,15 @@ func (e *ContainerError) Error() string {
case nil:
break
case syscall.Errno:
s += fmt.Sprintf(": failure in a Windows system call: %s (0x%x)", e.Err, win32FromError(e.Err))
s += fmt.Sprintf(": failure in a Windows system call: %s (0x%x)", e.Err, hcserror.Win32FromError(e.Err))
default:
s += fmt.Sprintf(": %s", e.Err.Error())
}
for _, ev := range e.Events {
s += "\n" + ev.String()
}
if e.ExtraInfo != "" {
s += " extra info: " + e.ExtraInfo
}
@@ -153,12 +150,7 @@ func (e *ProcessError) Error() string {
return "Unexpected nil process for error: " + e.Err.Error()
}
s := fmt.Sprintf("process %d", e.Process.processID)
if e.Process.container != nil {
s += " in container " + e.Process.container.id
}
s := fmt.Sprintf("process %d in container %s", e.Process.p.Pid(), e.Process.p.SystemID())
if e.Operation != "" {
s += " encountered an error during " + e.Operation
}
@@ -167,11 +159,15 @@ func (e *ProcessError) Error() string {
case nil:
break
case syscall.Errno:
s += fmt.Sprintf(": failure in a Windows system call: %s (0x%x)", e.Err, win32FromError(e.Err))
s += fmt.Sprintf(": failure in a Windows system call: %s (0x%x)", e.Err, hcserror.Win32FromError(e.Err))
default:
s += fmt.Sprintf(": %s", e.Err.Error())
}
for _, ev := range e.Events {
s += "\n" + ev.String()
}
return s
}
@@ -189,37 +185,31 @@ func makeProcessError(process *process, operation string, extraInfo string, err
// already exited, or does not exist. Both IsAlreadyStopped and IsNotExist
// will currently return true when the error is ErrElementNotFound or ErrProcNotFound.
func IsNotExist(err error) bool {
err = getInnerError(err)
if _, ok := err.(EndpointNotFoundError); ok {
return true
}
if _, ok := err.(NetworkNotFoundError); ok {
return true
}
return err == ErrComputeSystemDoesNotExist ||
err == ErrElementNotFound ||
err == ErrProcNotFound
return hcs.IsNotExist(getInnerError(err))
}
// IsAlreadyClosed checks if an error is caused by the Container or Process having been
// already closed by a call to the Close() method.
func IsAlreadyClosed(err error) bool {
err = getInnerError(err)
return err == ErrAlreadyClosed
return hcs.IsAlreadyClosed(getInnerError(err))
}
// IsPending returns a boolean indicating whether the error is that
// the requested operation is being completed in the background.
func IsPending(err error) bool {
err = getInnerError(err)
return err == ErrVmcomputeOperationPending
return hcs.IsPending(getInnerError(err))
}
// IsTimeout returns a boolean indicating whether the error is caused by
// a timeout waiting for the operation to complete.
func IsTimeout(err error) bool {
err = getInnerError(err)
return err == ErrTimeout
return hcs.IsTimeout(getInnerError(err))
}
// IsAlreadyStopped returns a boolean indicating whether the error is caused by
@@ -228,10 +218,7 @@ func IsTimeout(err error) bool {
// already exited, or does not exist. Both IsAlreadyStopped and IsNotExist
// will currently return true when the error is ErrElementNotFound or ErrProcNotFound.
func IsAlreadyStopped(err error) bool {
err = getInnerError(err)
return err == ErrVmcomputeAlreadyStopped ||
err == ErrElementNotFound ||
err == ErrProcNotFound
return hcs.IsAlreadyStopped(getInnerError(err))
}
// IsNotSupported returns a boolean indicating whether the error is caused by
@@ -240,12 +227,7 @@ func IsAlreadyStopped(err error) bool {
// ErrVmcomputeInvalidJSON, ErrInvalidData, ErrNotSupported or ErrVmcomputeUnknownMessage
// is thrown from the Platform
func IsNotSupported(err error) bool {
err = getInnerError(err)
// If Platform doesn't recognize or support the request sent, below errors are seen
return err == ErrVmcomputeInvalidJSON ||
err == ErrInvalidData ||
err == ErrNotSupported ||
err == ErrVmcomputeUnknownMessage
return hcs.IsNotSupported(getInnerError(err))
}
func getInnerError(err error) error {
@@ -259,3 +241,17 @@ func getInnerError(err error) error {
}
return err
}
func convertSystemError(err error, c *container) error {
if serr, ok := err.(*hcs.SystemError); ok {
return &ContainerError{Container: c, Operation: serr.Op, ExtraInfo: serr.Extra, Err: serr.Err, Events: serr.Events}
}
return err
}
func convertProcessError(err error, p *process) error {
if perr, ok := err.(*hcs.ProcessError); ok {
return &ProcessError{Process: p, Operation: perr.Op, Err: perr.Err, Events: perr.Events}
}
return err
}


@@ -1,26 +0,0 @@
package hcsshim
import "github.com/sirupsen/logrus"
// ExpandSandboxSize expands the size of a layer to at least size bytes.
func ExpandSandboxSize(info DriverInfo, layerId string, size uint64) error {
title := "hcsshim::ExpandSandboxSize "
logrus.Debugf(title+"layerId=%s size=%d", layerId, size)
// Convert info to API calling convention
infop, err := convertDriverInfo(info)
if err != nil {
logrus.Error(err)
return err
}
err = expandSandboxSize(&infop, layerId, size)
if err != nil {
err = makeErrorf(err, title, "layerId=%s size=%d", layerId, size)
logrus.Error(err)
return err
}
logrus.Debugf(title+"- succeeded layerId=%s size=%d", layerId, size)
return nil
}


@@ -1,19 +0,0 @@
package hcsshim
import (
"crypto/sha1"
"fmt"
)
type GUID [16]byte
func NewGUID(source string) *GUID {
h := sha1.Sum([]byte(source))
var g GUID
copy(g[0:], h[0:16])
return &g
}
func (g *GUID) ToString() string {
return fmt.Sprintf("%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x-%02x", g[3], g[2], g[1], g[0], g[5], g[4], g[7], g[6], g[8:10], g[10:])
}


@@ -4,80 +4,20 @@
package hcsshim
import (
"fmt"
"syscall"
"unsafe"
"github.com/sirupsen/logrus"
"github.com/Microsoft/hcsshim/internal/hcserror"
)
//go:generate go run mksyscall_windows.go -output zhcsshim.go hcsshim.go safeopen.go
//go:generate go run mksyscall_windows.go -output zsyscall_windows.go hcsshim.go
//sys coTaskMemFree(buffer unsafe.Pointer) = ole32.CoTaskMemFree
//sys SetCurrentThreadCompartmentId(compartmentId uint32) (hr error) = iphlpapi.SetCurrentThreadCompartmentId
//sys activateLayer(info *driverInfo, id string) (hr error) = vmcompute.ActivateLayer?
//sys copyLayer(info *driverInfo, srcId string, dstId string, descriptors []WC_LAYER_DESCRIPTOR) (hr error) = vmcompute.CopyLayer?
//sys createLayer(info *driverInfo, id string, parent string) (hr error) = vmcompute.CreateLayer?
//sys createSandboxLayer(info *driverInfo, id string, parent string, descriptors []WC_LAYER_DESCRIPTOR) (hr error) = vmcompute.CreateSandboxLayer?
//sys expandSandboxSize(info *driverInfo, id string, size uint64) (hr error) = vmcompute.ExpandSandboxSize?
//sys deactivateLayer(info *driverInfo, id string) (hr error) = vmcompute.DeactivateLayer?
//sys destroyLayer(info *driverInfo, id string) (hr error) = vmcompute.DestroyLayer?
//sys exportLayer(info *driverInfo, id string, path string, descriptors []WC_LAYER_DESCRIPTOR) (hr error) = vmcompute.ExportLayer?
//sys getLayerMountPath(info *driverInfo, id string, length *uintptr, buffer *uint16) (hr error) = vmcompute.GetLayerMountPath?
//sys getBaseImages(buffer **uint16) (hr error) = vmcompute.GetBaseImages?
//sys importLayer(info *driverInfo, id string, path string, descriptors []WC_LAYER_DESCRIPTOR) (hr error) = vmcompute.ImportLayer?
//sys layerExists(info *driverInfo, id string, exists *uint32) (hr error) = vmcompute.LayerExists?
//sys nameToGuid(name string, guid *GUID) (hr error) = vmcompute.NameToGuid?
//sys prepareLayer(info *driverInfo, id string, descriptors []WC_LAYER_DESCRIPTOR) (hr error) = vmcompute.PrepareLayer?
//sys unprepareLayer(info *driverInfo, id string) (hr error) = vmcompute.UnprepareLayer?
//sys processBaseImage(path string) (hr error) = vmcompute.ProcessBaseImage?
//sys processUtilityImage(path string) (hr error) = vmcompute.ProcessUtilityImage?
//sys importLayerBegin(info *driverInfo, id string, descriptors []WC_LAYER_DESCRIPTOR, context *uintptr) (hr error) = vmcompute.ImportLayerBegin?
//sys importLayerNext(context uintptr, fileName string, fileInfo *winio.FileBasicInfo) (hr error) = vmcompute.ImportLayerNext?
//sys importLayerWrite(context uintptr, buffer []byte) (hr error) = vmcompute.ImportLayerWrite?
//sys importLayerEnd(context uintptr) (hr error) = vmcompute.ImportLayerEnd?
//sys exportLayerBegin(info *driverInfo, id string, descriptors []WC_LAYER_DESCRIPTOR, context *uintptr) (hr error) = vmcompute.ExportLayerBegin?
//sys exportLayerNext(context uintptr, fileName **uint16, fileInfo *winio.FileBasicInfo, fileSize *int64, deleted *uint32) (hr error) = vmcompute.ExportLayerNext?
//sys exportLayerRead(context uintptr, buffer []byte, bytesRead *uint32) (hr error) = vmcompute.ExportLayerRead?
//sys exportLayerEnd(context uintptr) (hr error) = vmcompute.ExportLayerEnd?
//sys hcsEnumerateComputeSystems(query string, computeSystems **uint16, result **uint16) (hr error) = vmcompute.HcsEnumerateComputeSystems?
//sys hcsCreateComputeSystem(id string, configuration string, identity syscall.Handle, computeSystem *hcsSystem, result **uint16) (hr error) = vmcompute.HcsCreateComputeSystem?
//sys hcsOpenComputeSystem(id string, computeSystem *hcsSystem, result **uint16) (hr error) = vmcompute.HcsOpenComputeSystem?
//sys hcsCloseComputeSystem(computeSystem hcsSystem) (hr error) = vmcompute.HcsCloseComputeSystem?
//sys hcsStartComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsStartComputeSystem?
//sys hcsShutdownComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsShutdownComputeSystem?
//sys hcsTerminateComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsTerminateComputeSystem?
//sys hcsPauseComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsPauseComputeSystem?
//sys hcsResumeComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsResumeComputeSystem?
//sys hcsGetComputeSystemProperties(computeSystem hcsSystem, propertyQuery string, properties **uint16, result **uint16) (hr error) = vmcompute.HcsGetComputeSystemProperties?
//sys hcsModifyComputeSystem(computeSystem hcsSystem, configuration string, result **uint16) (hr error) = vmcompute.HcsModifyComputeSystem?
//sys hcsRegisterComputeSystemCallback(computeSystem hcsSystem, callback uintptr, context uintptr, callbackHandle *hcsCallback) (hr error) = vmcompute.HcsRegisterComputeSystemCallback?
//sys hcsUnregisterComputeSystemCallback(callbackHandle hcsCallback) (hr error) = vmcompute.HcsUnregisterComputeSystemCallback?
//sys hcsCreateProcess(computeSystem hcsSystem, processParameters string, processInformation *hcsProcessInformation, process *hcsProcess, result **uint16) (hr error) = vmcompute.HcsCreateProcess?
//sys hcsOpenProcess(computeSystem hcsSystem, pid uint32, process *hcsProcess, result **uint16) (hr error) = vmcompute.HcsOpenProcess?
//sys hcsCloseProcess(process hcsProcess) (hr error) = vmcompute.HcsCloseProcess?
//sys hcsTerminateProcess(process hcsProcess, result **uint16) (hr error) = vmcompute.HcsTerminateProcess?
//sys hcsGetProcessInfo(process hcsProcess, processInformation *hcsProcessInformation, result **uint16) (hr error) = vmcompute.HcsGetProcessInfo?
//sys hcsGetProcessProperties(process hcsProcess, processProperties **uint16, result **uint16) (hr error) = vmcompute.HcsGetProcessProperties?
//sys hcsModifyProcess(process hcsProcess, settings string, result **uint16) (hr error) = vmcompute.HcsModifyProcess?
//sys hcsGetServiceProperties(propertyQuery string, properties **uint16, result **uint16) (hr error) = vmcompute.HcsGetServiceProperties?
//sys hcsRegisterProcessCallback(process hcsProcess, callback uintptr, context uintptr, callbackHandle *hcsCallback) (hr error) = vmcompute.HcsRegisterProcessCallback?
//sys hcsUnregisterProcessCallback(callbackHandle hcsCallback) (hr error) = vmcompute.HcsUnregisterProcessCallback?
//sys hcsModifyServiceSettings(settings string, result **uint16) (hr error) = vmcompute.HcsModifyServiceSettings?
//sys _hnsCall(method string, path string, object string, response **uint16) (hr error) = vmcompute.HNSCall?
const (
// Specific user-visible exit codes
WaitErrExecFailed = 32767
ERROR_GEN_FAILURE = syscall.Errno(31)
ERROR_GEN_FAILURE = hcserror.ERROR_GEN_FAILURE
ERROR_SHUTDOWN_IN_PROGRESS = syscall.Errno(1115)
WSAEINVAL = syscall.Errno(10022)
@@ -85,82 +25,4 @@ const (
TimeoutInfinite = 0xFFFFFFFF
)
type HcsError struct {
title string
rest string
Err error
}
type hcsSystem syscall.Handle
type hcsProcess syscall.Handle
type hcsCallback syscall.Handle
type hcsProcessInformation struct {
ProcessId uint32
Reserved uint32
StdInput syscall.Handle
StdOutput syscall.Handle
StdError syscall.Handle
}
func makeError(err error, title, rest string) error {
// Pass through DLL errors directly since they do not originate from HCS.
if _, ok := err.(*syscall.DLLError); ok {
return err
}
return &HcsError{title, rest, err}
}
func makeErrorf(err error, title, format string, a ...interface{}) error {
return makeError(err, title, fmt.Sprintf(format, a...))
}
func win32FromError(err error) uint32 {
if herr, ok := err.(*HcsError); ok {
return win32FromError(herr.Err)
}
if code, ok := err.(syscall.Errno); ok {
return uint32(code)
}
return uint32(ERROR_GEN_FAILURE)
}
func win32FromHresult(hr uintptr) uintptr {
if hr&0x1fff0000 == 0x00070000 {
return hr & 0xffff
}
return hr
}
func (e *HcsError) Error() string {
s := e.title
if len(s) > 0 && s[len(s)-1] != ' ' {
s += " "
}
s += fmt.Sprintf("failed in Win32: %s (0x%x)", e.Err, win32FromError(e.Err))
if e.rest != "" {
if e.rest[0] != ' ' {
s += " "
}
s += e.rest
}
return s
}
func convertAndFreeCoTaskMemString(buffer *uint16) string {
str := syscall.UTF16ToString((*[1 << 30]uint16)(unsafe.Pointer(buffer))[:])
coTaskMemFree(unsafe.Pointer(buffer))
return str
}
func convertAndFreeCoTaskMemBytes(buffer *uint16) []byte {
return []byte(convertAndFreeCoTaskMemString(buffer))
}
func processHcsResult(err error, resultp *uint16) error {
if resultp != nil {
result := convertAndFreeCoTaskMemString(resultp)
logrus.Debugf("Result: %s", result)
}
return err
}
type HcsError = hcserror.HcsError


@@ -1,29 +1,11 @@
package hcsshim
import (
"encoding/json"
"net"
"github.com/sirupsen/logrus"
"github.com/Microsoft/hcsshim/internal/hns"
)
// HNSEndpoint represents a network endpoint in HNS
type HNSEndpoint struct {
Id string `json:"ID,omitempty"`
Name string `json:",omitempty"`
VirtualNetwork string `json:",omitempty"`
VirtualNetworkName string `json:",omitempty"`
Policies []json.RawMessage `json:",omitempty"`
MacAddress string `json:",omitempty"`
IPAddress net.IP `json:",omitempty"`
DNSSuffix string `json:",omitempty"`
DNSServerList string `json:",omitempty"`
GatewayAddress string `json:",omitempty"`
EnableInternalDNS bool `json:",omitempty"`
DisableICC bool `json:",omitempty"`
PrefixLength uint8 `json:",omitempty"`
IsRemoteEndpoint bool `json:",omitempty"`
}
type HNSEndpoint = hns.HNSEndpoint
//SystemType represents the type of the system on which actions are done
type SystemType string
@@ -37,39 +19,19 @@ const (
// EndpointAttachDetachRequest is the structure used to send request to the container to modify the system
// Supported resource types are Network and Request Types are Add/Remove
type EndpointAttachDetachRequest struct {
ContainerID string `json:"ContainerId,omitempty"`
SystemType SystemType `json:"SystemType"`
CompartmentID uint16 `json:"CompartmentId,omitempty"`
VirtualNICName string `json:"VirtualNicName,omitempty"`
}
type EndpointAttachDetachRequest = hns.EndpointAttachDetachRequest
// EndpointResquestResponse is the object used to return an endpoint request response
type EndpointResquestResponse struct {
Success bool
Error string
}
type EndpointResquestResponse = hns.EndpointResquestResponse
// HNSEndpointRequest makes a HNS call to modify/query a network endpoint
func HNSEndpointRequest(method, path, request string) (*HNSEndpoint, error) {
endpoint := &HNSEndpoint{}
err := hnsCall(method, "/endpoints/"+path, request, &endpoint)
if err != nil {
return nil, err
}
return endpoint, nil
return hns.HNSEndpointRequest(method, path, request)
}
// HNSListEndpointRequest makes a HNS call to query the list of available endpoints
func HNSListEndpointRequest() ([]HNSEndpoint, error) {
var endpoint []HNSEndpoint
err := hnsCall("GET", "/endpoints/", "", &endpoint)
if err != nil {
return nil, err
}
return endpoint, nil
return hns.HNSListEndpointRequest()
}
// HotAttachEndpoint makes a HCS Call to attach the endpoint to the container
@@ -120,204 +82,10 @@ func modifyNetworkEndpoint(containerID string, endpointID string, request Reques
// GetHNSEndpointByID get the Endpoint by ID
func GetHNSEndpointByID(endpointID string) (*HNSEndpoint, error) {
return HNSEndpointRequest("GET", endpointID, "")
return hns.GetHNSEndpointByID(endpointID)
}
// GetHNSEndpointByName gets the endpoint filtered by Name
func GetHNSEndpointByName(endpointName string) (*HNSEndpoint, error) {
hnsResponse, err := HNSListEndpointRequest()
if err != nil {
return nil, err
}
for _, hnsEndpoint := range hnsResponse {
if hnsEndpoint.Name == endpointName {
return &hnsEndpoint, nil
}
}
return nil, EndpointNotFoundError{EndpointName: endpointName}
}
// Create Endpoint by sending EndpointRequest to HNS. TODO: Create a separate HNS interface to place all these methods
func (endpoint *HNSEndpoint) Create() (*HNSEndpoint, error) {
operation := "Create"
title := "HCSShim::HNSEndpoint::" + operation
logrus.Debugf(title+" id=%s", endpoint.Id)
jsonString, err := json.Marshal(endpoint)
if err != nil {
return nil, err
}
return HNSEndpointRequest("POST", "", string(jsonString))
}
// Delete Endpoint by sending EndpointRequest to HNS
func (endpoint *HNSEndpoint) Delete() (*HNSEndpoint, error) {
operation := "Delete"
title := "HCSShim::HNSEndpoint::" + operation
logrus.Debugf(title+" id=%s", endpoint.Id)
return HNSEndpointRequest("DELETE", endpoint.Id, "")
}
// Update Endpoint
func (endpoint *HNSEndpoint) Update() (*HNSEndpoint, error) {
operation := "Update"
title := "HCSShim::HNSEndpoint::" + operation
logrus.Debugf(title+" id=%s", endpoint.Id)
jsonString, err := json.Marshal(endpoint)
if err != nil {
return nil, err
}
err = hnsCall("POST", "/endpoints/"+endpoint.Id, string(jsonString), &endpoint)
return endpoint, err
}
// ContainerHotAttach attaches an endpoint to a running container
func (endpoint *HNSEndpoint) ContainerHotAttach(containerID string) error {
operation := "ContainerHotAttach"
title := "HCSShim::HNSEndpoint::" + operation
logrus.Debugf(title+" id=%s, containerId=%s", endpoint.Id, containerID)
return modifyNetworkEndpoint(containerID, endpoint.Id, Add)
}
// ContainerHotDetach detaches an endpoint from a running container
func (endpoint *HNSEndpoint) ContainerHotDetach(containerID string) error {
operation := "ContainerHotDetach"
title := "HCSShim::HNSEndpoint::" + operation
logrus.Debugf(title+" id=%s, containerId=%s", endpoint.Id, containerID)
return modifyNetworkEndpoint(containerID, endpoint.Id, Remove)
}
// ApplyACLPolicy applies a set of ACL Policies on the Endpoint
func (endpoint *HNSEndpoint) ApplyACLPolicy(policies ...*ACLPolicy) error {
operation := "ApplyACLPolicy"
title := "HCSShim::HNSEndpoint::" + operation
logrus.Debugf(title+" id=%s", endpoint.Id)
for _, policy := range policies {
if policy == nil {
continue
}
jsonString, err := json.Marshal(policy)
if err != nil {
return err
}
endpoint.Policies = append(endpoint.Policies, jsonString)
}
_, err := endpoint.Update()
return err
}
// ContainerAttach attaches an endpoint to container
func (endpoint *HNSEndpoint) ContainerAttach(containerID string, compartmentID uint16) error {
operation := "ContainerAttach"
title := "HCSShim::HNSEndpoint::" + operation
logrus.Debugf(title+" id=%s", endpoint.Id)
requestMessage := &EndpointAttachDetachRequest{
ContainerID: containerID,
CompartmentID: compartmentID,
SystemType: ContainerType,
}
response := &EndpointResquestResponse{}
jsonString, err := json.Marshal(requestMessage)
if err != nil {
return err
}
return hnsCall("POST", "/endpoints/"+endpoint.Id+"/attach", string(jsonString), &response)
}
// ContainerDetach detaches an endpoint from container
func (endpoint *HNSEndpoint) ContainerDetach(containerID string) error {
operation := "ContainerDetach"
title := "HCSShim::HNSEndpoint::" + operation
logrus.Debugf(title+" id=%s", endpoint.Id)
requestMessage := &EndpointAttachDetachRequest{
ContainerID: containerID,
SystemType: ContainerType,
}
response := &EndpointResquestResponse{}
jsonString, err := json.Marshal(requestMessage)
if err != nil {
return err
}
return hnsCall("POST", "/endpoints/"+endpoint.Id+"/detach", string(jsonString), &response)
}
// HostAttach attaches a nic on the host
func (endpoint *HNSEndpoint) HostAttach(compartmentID uint16) error {
operation := "HostAttach"
title := "HCSShim::HNSEndpoint::" + operation
logrus.Debugf(title+" id=%s", endpoint.Id)
requestMessage := &EndpointAttachDetachRequest{
CompartmentID: compartmentID,
SystemType: HostType,
}
response := &EndpointResquestResponse{}
jsonString, err := json.Marshal(requestMessage)
if err != nil {
return err
}
return hnsCall("POST", "/endpoints/"+endpoint.Id+"/attach", string(jsonString), &response)
}
// HostDetach detaches a nic on the host
func (endpoint *HNSEndpoint) HostDetach() error {
operation := "HostDetach"
title := "HCSShim::HNSEndpoint::" + operation
logrus.Debugf(title+" id=%s", endpoint.Id)
requestMessage := &EndpointAttachDetachRequest{
SystemType: HostType,
}
response := &EndpointResquestResponse{}
jsonString, err := json.Marshal(requestMessage)
if err != nil {
return err
}
return hnsCall("POST", "/endpoints/"+endpoint.Id+"/detach", string(jsonString), &response)
}
// VirtualMachineNICAttach attaches an endpoint to a virtual machine
func (endpoint *HNSEndpoint) VirtualMachineNICAttach(virtualMachineNICName string) error {
operation := "VirtualMachineNicAttach"
title := "HCSShim::HNSEndpoint::" + operation
logrus.Debugf(title+" id=%s", endpoint.Id)
requestMessage := &EndpointAttachDetachRequest{
VirtualNICName: virtualMachineNICName,
SystemType: VirtualMachineType,
}
response := &EndpointResquestResponse{}
jsonString, err := json.Marshal(requestMessage)
if err != nil {
return err
}
return hnsCall("POST", "/endpoints/"+endpoint.Id+"/attach", string(jsonString), &response)
}
// VirtualMachineNICDetach detaches an endpoint from a virtual machine
func (endpoint *HNSEndpoint) VirtualMachineNICDetach() error {
operation := "VirtualMachineNicDetach"
title := "HCSShim::HNSEndpoint::" + operation
logrus.Debugf(title+" id=%s", endpoint.Id)
requestMessage := &EndpointAttachDetachRequest{
SystemType: VirtualMachineType,
}
response := &EndpointResquestResponse{}
jsonString, err := json.Marshal(requestMessage)
if err != nil {
return err
}
return hnsCall("POST", "/endpoints/"+endpoint.Id+"/detach", string(jsonString), &response)
}
// GetHNSEndpointByName gets the endpoint filtered by Name
func GetHNSEndpointByName(endpointName string) (*HNSEndpoint, error) {
return hns.GetHNSEndpointByName(endpointName)
}

vendor/github.com/Microsoft/hcsshim/hnsglobals.go generated vendored Normal file
View File

@@ -0,0 +1,16 @@
package hcsshim
import (
"github.com/Microsoft/hcsshim/internal/hns"
)
type HNSGlobals = hns.HNSGlobals
type HNSVersion = hns.HNSVersion
var (
HNSVersion1803 = hns.HNSVersion1803
)
func GetHNSGlobals() (*HNSGlobals, error) {
return hns.GetHNSGlobals()
}
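
GetHNSGlobals lets callers gate behavior on the HNS version; a sketch assuming HNSGlobals carries a Version with integer Major/Minor fields (those field names are assumptions, not shown in this diff):

package hnsexample

import "github.com/Microsoft/hcsshim"

// atLeast1803 reports whether the host HNS is at least the 1803 level.
func atLeast1803() (bool, error) {
	globals, err := hcsshim.GetHNSGlobals()
	if err != nil {
		return false, err
	}
	v, min := globals.Version, hcsshim.HNSVersion1803 // Major/Minor fields assumed
	return v.Major > min.Major || (v.Major == min.Major && v.Minor >= min.Minor), nil
}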

View File

@@ -1,141 +1,36 @@
package hcsshim
import (
"encoding/json"
"net"
"github.com/sirupsen/logrus"
"github.com/Microsoft/hcsshim/internal/hns"
)
// Subnet is associated with a network and represents a list
// of subnets available to the network
type Subnet struct {
AddressPrefix string `json:",omitempty"`
GatewayAddress string `json:",omitempty"`
Policies []json.RawMessage `json:",omitempty"`
}
type Subnet = hns.Subnet
// MacPool is associated with a network and represents a list
// of MAC addresses available to the network
type MacPool struct {
StartMacAddress string `json:",omitempty"`
EndMacAddress string `json:",omitempty"`
}
type MacPool = hns.MacPool
// HNSNetwork represents a network in HNS
type HNSNetwork struct {
Id string `json:"ID,omitempty"`
Name string `json:",omitempty"`
Type string `json:",omitempty"`
NetworkAdapterName string `json:",omitempty"`
SourceMac string `json:",omitempty"`
Policies []json.RawMessage `json:",omitempty"`
MacPools []MacPool `json:",omitempty"`
Subnets []Subnet `json:",omitempty"`
DNSSuffix string `json:",omitempty"`
DNSServerList string `json:",omitempty"`
DNSServerCompartment uint32 `json:",omitempty"`
ManagementIP string `json:",omitempty"`
AutomaticDNS bool `json:",omitempty"`
}
type hnsNetworkResponse struct {
Success bool
Error string
Output HNSNetwork
}
type hnsResponse struct {
Success bool
Error string
Output json.RawMessage
}
type HNSNetwork = hns.HNSNetwork
// HNSNetworkRequest makes a call into HNS to update/query a single network
func HNSNetworkRequest(method, path, request string) (*HNSNetwork, error) {
var network HNSNetwork
err := hnsCall(method, "/networks/"+path, request, &network)
if err != nil {
return nil, err
}
return &network, nil
return hns.HNSNetworkRequest(method, path, request)
}
// HNSListNetworkRequest makes a HNS call to query the list of available networks
func HNSListNetworkRequest(method, path, request string) ([]HNSNetwork, error) {
var network []HNSNetwork
err := hnsCall(method, "/networks/"+path, request, &network)
if err != nil {
return nil, err
}
return network, nil
return hns.HNSListNetworkRequest(method, path, request)
}
// GetHNSNetworkByID gets the network filtered by ID
func GetHNSNetworkByID(networkID string) (*HNSNetwork, error) {
return HNSNetworkRequest("GET", networkID, "")
return hns.GetHNSNetworkByID(networkID)
}
// GetHNSNetworkByName gets the network filtered by Name
func GetHNSNetworkByName(networkName string) (*HNSNetwork, error) {
hsnnetworks, err := HNSListNetworkRequest("GET", "", "")
if err != nil {
return nil, err
}
for _, hnsnetwork := range hsnnetworks {
if hnsnetwork.Name == networkName {
return &hnsnetwork, nil
}
}
return nil, NetworkNotFoundError{NetworkName: networkName}
}
// Create Network by sending NetworkRequest to HNS.
func (network *HNSNetwork) Create() (*HNSNetwork, error) {
operation := "Create"
title := "HCSShim::HNSNetwork::" + operation
logrus.Debugf(title+" id=%s", network.Id)
jsonString, err := json.Marshal(network)
if err != nil {
return nil, err
}
return HNSNetworkRequest("POST", "", string(jsonString))
}
// Delete Network by sending NetworkRequest to HNS
func (network *HNSNetwork) Delete() (*HNSNetwork, error) {
operation := "Delete"
title := "HCSShim::HNSNetwork::" + operation
logrus.Debugf(title+" id=%s", network.Id)
return HNSNetworkRequest("DELETE", network.Id, "")
}
// NewEndpoint creates an endpoint on the network.
func (network *HNSNetwork) NewEndpoint(ipAddress net.IP, macAddress net.HardwareAddr) *HNSEndpoint {
return &HNSEndpoint{
VirtualNetwork: network.Id,
IPAddress: ipAddress,
MacAddress: string(macAddress),
}
}
func (network *HNSNetwork) CreateEndpoint(endpoint *HNSEndpoint) (*HNSEndpoint, error) {
operation := "CreateEndpoint"
title := "HCSShim::HNSNetwork::" + operation
logrus.Debugf(title+" id=%s, endpointId=%s", network.Id, endpoint.Id)
endpoint.VirtualNetwork = network.Id
return endpoint.Create()
}
func (network *HNSNetwork) CreateRemoteEndpoint(endpoint *HNSEndpoint) (*HNSEndpoint, error) {
operation := "CreateRemoteEndpoint"
title := "HCSShim::HNSNetwork::" + operation
logrus.Debugf(title+" id=%s", network.Id)
endpoint.IsRemoteEndpoint = true
return network.CreateEndpoint(endpoint)
return hns.GetHNSNetworkByName(networkName)
}
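
A sketch of creating and deleting a network through the aliased type; the name, type string, and address prefix are illustrative values:

package hnsexample

import "github.com/Microsoft/hcsshim"

// createScratchNetwork creates a NAT network with one subnet, then deletes it.
func createScratchNetwork() error {
	network := &hcsshim.HNSNetwork{
		Name: "scratch-nat", // illustrative
		Type: "NAT",
		Subnets: []hcsshim.Subnet{{
			AddressPrefix:  "172.20.0.0/16",
			GatewayAddress: "172.20.0.1",
		}},
	}
	created, err := network.Create() // POSTs the marshalled network to HNS
	if err != nil {
		return err
	}
	_, err = created.Delete()
	return err
}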

View File

@@ -1,94 +1,57 @@
package hcsshim
import (
"github.com/Microsoft/hcsshim/internal/hns"
)
// Type of Request Support in ModifySystem
type PolicyType string
type PolicyType = hns.PolicyType
// RequestType const
const (
Nat PolicyType = "NAT"
ACL PolicyType = "ACL"
PA PolicyType = "PA"
VLAN PolicyType = "VLAN"
VSID PolicyType = "VSID"
VNet PolicyType = "VNET"
L2Driver PolicyType = "L2Driver"
Isolation PolicyType = "Isolation"
QOS PolicyType = "QOS"
OutboundNat PolicyType = "OutBoundNAT"
ExternalLoadBalancer PolicyType = "ELB"
Route PolicyType = "ROUTE"
Nat = hns.Nat
ACL = hns.ACL
PA = hns.PA
VLAN = hns.VLAN
VSID = hns.VSID
VNet = hns.VNet
L2Driver = hns.L2Driver
Isolation = hns.Isolation
QOS = hns.QOS
OutboundNat = hns.OutboundNat
ExternalLoadBalancer = hns.ExternalLoadBalancer
Route = hns.Route
)
type NatPolicy struct {
Type PolicyType `json:"Type"`
Protocol string
InternalPort uint16
ExternalPort uint16
}
type NatPolicy = hns.NatPolicy
type QosPolicy struct {
Type PolicyType `json:"Type"`
MaximumOutgoingBandwidthInBytes uint64
}
type QosPolicy = hns.QosPolicy
type IsolationPolicy struct {
Type PolicyType `json:"Type"`
VLAN uint
VSID uint
InDefaultIsolation bool
}
type IsolationPolicy = hns.IsolationPolicy
type VlanPolicy struct {
Type PolicyType `json:"Type"`
VLAN uint
}
type VlanPolicy = hns.VlanPolicy
type VsidPolicy struct {
Type PolicyType `json:"Type"`
VSID uint
}
type VsidPolicy = hns.VsidPolicy
type PaPolicy struct {
Type PolicyType `json:"Type"`
PA string `json:"PA"`
}
type PaPolicy = hns.PaPolicy
type OutboundNatPolicy struct {
Policy
VIP string `json:"VIP,omitempty"`
Exceptions []string `json:"ExceptionList,omitempty"`
}
type OutboundNatPolicy = hns.OutboundNatPolicy
type ActionType string
type DirectionType string
type RuleType string
type ActionType = hns.ActionType
type DirectionType = hns.DirectionType
type RuleType = hns.RuleType
const (
Allow ActionType = "Allow"
Block ActionType = "Block"
Allow = hns.Allow
Block = hns.Block
In DirectionType = "In"
Out DirectionType = "Out"
In = hns.In
Out = hns.Out
Host RuleType = "Host"
Switch RuleType = "Switch"
Host = hns.Host
Switch = hns.Switch
)
type ACLPolicy struct {
Type PolicyType `json:"Type"`
Protocol uint16
InternalPort uint16
Action ActionType
Direction DirectionType
LocalAddresses string
RemoteAddresses string
LocalPort uint16
RemotePort uint16
RuleType RuleType `json:"RuleType,omitempty"`
Priority uint16
ServiceName string
}
type ACLPolicy = hns.ACLPolicy
type Policy struct {
Type PolicyType `json:"Type"`
}
type Policy = hns.Policy
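
These policy structs exist to marshal into the JSON documents HNS consumes; a sketch of producing a NAT rule (port values illustrative):

package hnsexample

import (
	"encoding/json"

	"github.com/Microsoft/hcsshim"
)

// natPolicyJSON renders a NAT policy mapping container port 80 to host
// port 8080, in the shape HNS expects.
func natPolicyJSON() ([]byte, error) {
	policy := &hcsshim.NatPolicy{
		Type:         hcsshim.Nat,
		Protocol:     "TCP",
		InternalPort: 80,
		ExternalPort: 8080,
	}
	return json.Marshal(policy) // e.g. {"Type":"NAT","Protocol":"TCP",...}
}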

View File

@@ -1,200 +1,47 @@
package hcsshim
import (
"encoding/json"
"github.com/sirupsen/logrus"
"github.com/Microsoft/hcsshim/internal/hns"
)
// RoutePolicy is a structure defining schema for Route based Policy
type RoutePolicy struct {
Policy
DestinationPrefix string `json:"DestinationPrefix,omitempty"`
NextHop string `json:"NextHop,omitempty"`
EncapEnabled bool `json:"NeedEncap,omitempty"`
}
type RoutePolicy = hns.RoutePolicy
// ELBPolicy is a structure defining schema for ELB LoadBalancing based Policy
type ELBPolicy struct {
LBPolicy
SourceVIP string `json:"SourceVIP,omitempty"`
VIPs []string `json:"VIPs,omitempty"`
ILB bool `json:"ILB,omitempty"`
}
type ELBPolicy = hns.ELBPolicy
// LBPolicy is a structure defining schema for LoadBalancing based Policy
type LBPolicy struct {
Policy
Protocol uint16 `json:"Protocol,omitempty"`
InternalPort uint16
ExternalPort uint16
}
type LBPolicy = hns.LBPolicy
// PolicyList is a structure defining schema for Policy list request
type PolicyList struct {
ID string `json:"ID,omitempty"`
EndpointReferences []string `json:"References,omitempty"`
Policies []json.RawMessage `json:"Policies,omitempty"`
}
type PolicyList = hns.PolicyList
// HNSPolicyListRequest makes a call into HNS to update/query a single policy list
func HNSPolicyListRequest(method, path, request string) (*PolicyList, error) {
var policy PolicyList
err := hnsCall(method, "/policylists/"+path, request, &policy)
if err != nil {
return nil, err
}
return &policy, nil
return hns.HNSPolicyListRequest(method, path, request)
}
// HNSListPolicyListRequest gets all the policy lists
func HNSListPolicyListRequest() ([]PolicyList, error) {
var plist []PolicyList
err := hnsCall("GET", "/policylists/", "", &plist)
if err != nil {
return nil, err
}
return plist, nil
return hns.HNSListPolicyListRequest()
}
// PolicyListRequest makes a HNS call to modify/query a network policy list
func PolicyListRequest(method, path, request string) (*PolicyList, error) {
policylist := &PolicyList{}
err := hnsCall(method, "/policylists/"+path, request, &policylist)
if err != nil {
return nil, err
}
return policylist, nil
return hns.PolicyListRequest(method, path, request)
}
// GetPolicyListByID get the policy list by ID
func GetPolicyListByID(policyListID string) (*PolicyList, error) {
return PolicyListRequest("GET", policyListID, "")
}
// Create PolicyList by sending PolicyListRequest to HNS.
func (policylist *PolicyList) Create() (*PolicyList, error) {
operation := "Create"
title := "HCSShim::PolicyList::" + operation
logrus.Debugf(title+" id=%s", policylist.ID)
jsonString, err := json.Marshal(policylist)
if err != nil {
return nil, err
}
return PolicyListRequest("POST", "", string(jsonString))
}
// Delete deletes PolicyList
func (policylist *PolicyList) Delete() (*PolicyList, error) {
operation := "Delete"
title := "HCSShim::PolicyList::" + operation
logrus.Debugf(title+" id=%s", policylist.ID)
return PolicyListRequest("DELETE", policylist.ID, "")
}
// AddEndpoint add an endpoint to a Policy List
func (policylist *PolicyList) AddEndpoint(endpoint *HNSEndpoint) (*PolicyList, error) {
operation := "AddEndpoint"
title := "HCSShim::PolicyList::" + operation
logrus.Debugf(title+" id=%s, endpointId:%s", policylist.ID, endpoint.Id)
_, err := policylist.Delete()
if err != nil {
return nil, err
}
// Add Endpoint to the Existing List
policylist.EndpointReferences = append(policylist.EndpointReferences, "/endpoints/"+endpoint.Id)
return policylist.Create()
}
// RemoveEndpoint removes an endpoint from the Policy List
func (policylist *PolicyList) RemoveEndpoint(endpoint *HNSEndpoint) (*PolicyList, error) {
operation := "RemoveEndpoint"
title := "HCSShim::PolicyList::" + operation
logrus.Debugf(title+" id=%s, endpointId:%s", policylist.ID, endpoint.Id)
_, err := policylist.Delete()
if err != nil {
return nil, err
}
elementToRemove := "/endpoints/" + endpoint.Id
var references []string
for _, endpointReference := range policylist.EndpointReferences {
if endpointReference == elementToRemove {
continue
}
references = append(references, endpointReference)
}
policylist.EndpointReferences = references
return policylist.Create()
return hns.GetPolicyListByID(policyListID)
}
// AddLoadBalancer creates a load balancer policy list for the specified endpoints
func AddLoadBalancer(endpoints []HNSEndpoint, isILB bool, sourceVIP, vip string, protocol uint16, internalPort uint16, externalPort uint16) (*PolicyList, error) {
operation := "AddLoadBalancer"
title := "HCSShim::PolicyList::" + operation
logrus.Debugf(title+" endpointId=%v, isILB=%v, sourceVIP=%s, vip=%s, protocol=%v, internalPort=%v, externalPort=%v", endpoints, isILB, sourceVIP, vip, protocol, internalPort, externalPort)
policylist := &PolicyList{}
elbPolicy := &ELBPolicy{
SourceVIP: sourceVIP,
ILB: isILB,
}
if len(vip) > 0 {
elbPolicy.VIPs = []string{vip}
}
elbPolicy.Type = ExternalLoadBalancer
elbPolicy.Protocol = protocol
elbPolicy.InternalPort = internalPort
elbPolicy.ExternalPort = externalPort
for _, endpoint := range endpoints {
policylist.EndpointReferences = append(policylist.EndpointReferences, "/endpoints/"+endpoint.Id)
}
jsonString, err := json.Marshal(elbPolicy)
if err != nil {
return nil, err
}
policylist.Policies = append(policylist.Policies, jsonString)
return policylist.Create()
return hns.AddLoadBalancer(endpoints, isILB, sourceVIP, vip, protocol, internalPort, externalPort)
}
// AddRoute adds a route policy list for the specified endpoints
func AddRoute(endpoints []HNSEndpoint, destinationPrefix string, nextHop string, encapEnabled bool) (*PolicyList, error) {
operation := "AddRoute"
title := "HCSShim::PolicyList::" + operation
logrus.Debugf(title+" destinationPrefix:%s", destinationPrefix)
policylist := &PolicyList{}
rPolicy := &RoutePolicy{
DestinationPrefix: destinationPrefix,
NextHop: nextHop,
EncapEnabled: encapEnabled,
}
rPolicy.Type = Route
for _, endpoint := range endpoints {
policylist.EndpointReferences = append(policylist.EndpointReferences, "/endpoints/"+endpoint.Id)
}
jsonString, err := json.Marshal(rPolicy)
if err != nil {
return nil, err
}
policylist.Policies = append(policylist.Policies, jsonString)
return policylist.Create()
return hns.AddRoute(endpoints, destinationPrefix, nextHop, encapEnabled)
}
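
A sketch of AddLoadBalancer publishing TCP port 8080 on a VIP across a set of endpoints; the VIP addresses are illustrative:

package hnsexample

import "github.com/Microsoft/hcsshim"

// publish creates an ELB policy list over the given endpoints.
func publish(endpoints []hcsshim.HNSEndpoint) (*hcsshim.PolicyList, error) {
	// isILB=false, protocol 6 = TCP, internal and external port 8080.
	return hcsshim.AddLoadBalancer(endpoints, false, "10.0.0.2", "10.0.0.10", 6, 8080, 8080)
}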

vendor/github.com/Microsoft/hcsshim/hnssupport.go generated vendored Normal file
View File

@@ -0,0 +1,13 @@
package hcsshim
import (
"github.com/Microsoft/hcsshim/internal/hns"
)
type HNSSupportedFeatures = hns.HNSSupportedFeatures
type HNSAclFeatures = hns.HNSAclFeatures
func GetHNSSupportedFeatures() HNSSupportedFeatures {
return hns.GetHNSSupportedFeatures()
}
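
A sketch of feature-gating on the reported capabilities; the Acl field and its AclPortRanges flag are assumptions about the hns types, not shown in this diff:

package hnsexample

import "github.com/Microsoft/hcsshim"

// canUsePortRanges reports whether port-range ACL rules are supported.
func canUsePortRanges() bool {
	features := hcsshim.GetHNSSupportedFeatures()
	return features.Acl.AclPortRanges // field names assumed
}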

View File

@@ -1,117 +1,27 @@
package hcsshim
import (
"encoding/json"
"io"
"time"
"github.com/Microsoft/hcsshim/internal/schema1"
)
// ProcessConfig is used as both the input of Container.CreateProcess
// and to convert the parameters to JSON for passing onto the HCS
type ProcessConfig struct {
ApplicationName string `json:",omitempty"`
CommandLine string `json:",omitempty"`
CommandArgs []string `json:",omitempty"` // Used by Linux Containers on Windows
User string `json:",omitempty"`
WorkingDirectory string `json:",omitempty"`
Environment map[string]string `json:",omitempty"`
EmulateConsole bool `json:",omitempty"`
CreateStdInPipe bool `json:",omitempty"`
CreateStdOutPipe bool `json:",omitempty"`
CreateStdErrPipe bool `json:",omitempty"`
ConsoleSize [2]uint `json:",omitempty"`
CreateInUtilityVm bool `json:",omitempty"` // Used by Linux Containers on Windows
OCISpecification *json.RawMessage `json:",omitempty"` // Used by Linux Containers on Windows
}
type ProcessConfig = schema1.ProcessConfig
type Layer struct {
ID string
Path string
}
type MappedDir struct {
HostPath string
ContainerPath string
ReadOnly bool
BandwidthMaximum uint64
IOPSMaximum uint64
CreateInUtilityVM bool
// LinuxMetadata - Support added in 1803/RS4+.
LinuxMetadata bool `json:",omitempty"`
}
type MappedPipe struct {
HostPath string
ContainerPipeName string
}
type HvRuntime struct {
ImagePath string `json:",omitempty"`
SkipTemplate bool `json:",omitempty"`
LinuxInitrdFile string `json:",omitempty"` // File under ImagePath on host containing an initrd image for starting a Linux utility VM
LinuxKernelFile string `json:",omitempty"` // File under ImagePath on host containing a kernel for starting a Linux utility VM
LinuxBootParameters string `json:",omitempty"` // Additional boot parameters for starting a Linux Utility VM in initrd mode
BootSource string `json:",omitempty"` // "Vhd" for Linux Utility VM booting from VHD
WritableBootSource bool `json:",omitempty"` // Linux Utility VM booting from VHD
}
type MappedVirtualDisk struct {
HostPath string `json:",omitempty"` // Path to VHD on the host
ContainerPath string // Platform-specific mount point path in the container
CreateInUtilityVM bool `json:",omitempty"`
ReadOnly bool `json:",omitempty"`
Cache string `json:",omitempty"` // "" (Unspecified); "Disabled"; "Enabled"; "Private"; "PrivateAllowSharing"
AttachOnly bool `json:",omitempty"`
}
// AssignedDevice represents a device that has been directly assigned to a container
//
// NOTE: Support added in RS5
type AssignedDevice struct {
// InterfaceClassGUID of the device to assign to container.
InterfaceClassGUID string `json:"InterfaceClassGuid,omitempty"`
}
type Layer = schema1.Layer
type MappedDir = schema1.MappedDir
type MappedPipe = schema1.MappedPipe
type HvRuntime = schema1.HvRuntime
type MappedVirtualDisk = schema1.MappedVirtualDisk
// ContainerConfig is used as both the input of CreateContainer
// and to convert the parameters to JSON for passing onto the HCS
type ContainerConfig struct {
SystemType string // HCS requires this to be hard-coded to "Container"
Name string // Name of the container. We use the docker ID.
Owner string `json:",omitempty"` // The management platform that created this container
VolumePath string `json:",omitempty"` // Windows volume path for scratch space. Used by Windows Server Containers only. Format \\?\\Volume{GUID}
IgnoreFlushesDuringBoot bool `json:",omitempty"` // Optimization hint for container startup in Windows
LayerFolderPath string `json:",omitempty"` // Where the layer folders are located. Used by Windows Server Containers only. Format %root%\windowsfilter\containerID
Layers []Layer // List of storage layers. Required for Windows Server and Hyper-V Containers. Format ID=GUID;Path=%root%\windowsfilter\layerID
Credentials string `json:",omitempty"` // Credentials information
ProcessorCount uint32 `json:",omitempty"` // Number of processors to assign to the container.
ProcessorWeight uint64 `json:",omitempty"` // CPU shares (relative weight to other containers with cpu shares). Range is from 1 to 10000. A value of 0 results in default shares.
ProcessorMaximum int64 `json:",omitempty"` // Specifies the portion of processor cycles that this container can use as a percentage times 100. Range is from 1 to 10000. A value of 0 results in no limit.
StorageIOPSMaximum uint64 `json:",omitempty"` // Maximum Storage IOPS
StorageBandwidthMaximum uint64 `json:",omitempty"` // Maximum Storage Bandwidth in bytes per second
StorageSandboxSize uint64 `json:",omitempty"` // Size in bytes that the container system drive should be expanded to if smaller
MemoryMaximumInMB int64 `json:",omitempty"` // Maximum memory available to the container in Megabytes
HostName string `json:",omitempty"` // Hostname
MappedDirectories []MappedDir `json:",omitempty"` // List of mapped directories (volumes/mounts)
MappedPipes []MappedPipe `json:",omitempty"` // List of mapped Windows named pipes
HvPartition bool // True if it is a Hyper-V Container
NetworkSharedContainerName string `json:",omitempty"` // Name (ID) of the container that we will share the network stack with.
EndpointList []string `json:",omitempty"` // List of networking endpoints to be attached to container
HvRuntime *HvRuntime `json:",omitempty"` // Hyper-V container settings. Used by Hyper-V containers only. Format ImagePath=%root%\BaseLayerID\UtilityVM
Servicing bool `json:",omitempty"` // True if this container is for servicing
AllowUnqualifiedDNSQuery bool `json:",omitempty"` // True to allow unqualified DNS name resolution
DNSSearchList string `json:",omitempty"` // Comma-separated list of DNS suffixes to use for name resolution
ContainerType string `json:",omitempty"` // "Linux" for Linux containers on Windows. Omitted otherwise.
TerminateOnLastHandleClosed bool `json:",omitempty"` // Should HCS terminate the container once all handles have been closed
MappedVirtualDisks []MappedVirtualDisk `json:",omitempty"` // Array of virtual disks to mount at start
AssignedDevices []AssignedDevice `json:",omitempty"` // Array of devices to assign. NOTE: Support added in RS5
}
type ContainerConfig = schema1.ContainerConfig
type ComputeSystemQuery struct {
IDs []string `json:"Ids,omitempty"`
Types []string `json:",omitempty"`
Names []string `json:",omitempty"`
Owners []string `json:",omitempty"`
}
type ComputeSystemQuery = schema1.ComputeSystemQuery
// Container represents a created (but not necessarily running) container.
type Container interface {

View File

@@ -0,0 +1,22 @@
package guid
import (
"crypto/rand"
"fmt"
"io"
)
type GUID [16]byte
func New() GUID {
g := GUID{}
_, err := io.ReadFull(rand.Reader, g[:])
if err != nil {
panic(err)
}
return g
}
func (g GUID) String() string {
return fmt.Sprintf("%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x-%02x", g[3], g[2], g[1], g[0], g[5], g[4], g[7], g[6], g[8:10], g[10:])
}
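
New fills the GUID with 16 random bytes, and String renders the usual 8-4-4-4-12 text form; the reshuffled indices account for Windows storing the first three GUID groups little-endian. A small sketch (the package is internal, so it is only importable from within the hcsshim module):

package guidexample

import (
	"fmt"

	"github.com/Microsoft/hcsshim/internal/guid"
)

func printGUID() {
	g := guid.New()
	fmt.Println(g.String()) // e.g. 1b4e28ba-2fa1-11d2-883f-0016d3cca427
}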

View File

@@ -1,8 +1,10 @@
package hcsshim
package hcs
import (
"sync"
"syscall"
"github.com/Microsoft/hcsshim/internal/interop"
)
var (
@@ -62,7 +64,7 @@ func closeChannels(channels notificationChannels) {
func notificationWatcher(notificationType hcsNotification, callbackNumber uintptr, notificationStatus uintptr, notificationData *uint16) uintptr {
var result error
if int32(notificationStatus) < 0 {
result = syscall.Errno(win32FromHresult(notificationStatus))
result = interop.Win32FromHresult(notificationStatus)
}
callbackMapLock.RLock()

View File

@@ -1,4 +1,4 @@
package hcsshim
package hcs
import "C"

View File

@@ -0,0 +1,279 @@
package hcs
import (
"encoding/json"
"errors"
"fmt"
"syscall"
"github.com/Microsoft/hcsshim/internal/interop"
"github.com/sirupsen/logrus"
)
var (
// ErrComputeSystemDoesNotExist is an error encountered when the container being operated on no longer exists
ErrComputeSystemDoesNotExist = syscall.Errno(0xc037010e)
// ErrElementNotFound is an error encountered when the object being referenced does not exist
ErrElementNotFound = syscall.Errno(0x490)
// ErrNotSupported is an error encountered when hcs doesn't support the request
ErrNotSupported = syscall.Errno(0x32)
// ErrInvalidData is an error encountered when the request being sent to hcs is invalid/unsupported
// decimal -2147024883 / hex 0x8007000d
ErrInvalidData = syscall.Errno(0xd)
// ErrHandleClose is an error encountered when the handle generating the notification being waited on has been closed
ErrHandleClose = errors.New("hcsshim: the handle generating this notification has been closed")
// ErrAlreadyClosed is an error encountered when using a handle that has been closed by the Close method
ErrAlreadyClosed = errors.New("hcsshim: the handle has already been closed")
// ErrInvalidNotificationType is an error encountered when an invalid notification type is used
ErrInvalidNotificationType = errors.New("hcsshim: invalid notification type")
// ErrInvalidProcessState is an error encountered when the process is not in a valid state for the requested operation
ErrInvalidProcessState = errors.New("the process is in an invalid state for the attempted operation")
// ErrTimeout is an error encountered when waiting on a notification times out
ErrTimeout = errors.New("hcsshim: timeout waiting for notification")
// ErrUnexpectedContainerExit is the error encountered when a container exits while waiting for
// a different expected notification
ErrUnexpectedContainerExit = errors.New("unexpected container exit")
// ErrUnexpectedProcessAbort is the error encountered when communication with the compute service
// is lost while waiting for a notification
ErrUnexpectedProcessAbort = errors.New("lost communication with compute service")
// ErrUnexpectedValue is an error encountered when hcs returns an invalid value
ErrUnexpectedValue = errors.New("unexpected value returned from hcs")
// ErrVmcomputeAlreadyStopped is an error encountered when a shutdown or terminate request is made on a stopped container
ErrVmcomputeAlreadyStopped = syscall.Errno(0xc0370110)
// ErrVmcomputeOperationPending is an error encountered when the operation is being completed asynchronously
ErrVmcomputeOperationPending = syscall.Errno(0xC0370103)
// ErrVmcomputeOperationInvalidState is an error encountered when the compute system is not in a valid state for the requested operation
ErrVmcomputeOperationInvalidState = syscall.Errno(0xc0370105)
// ErrProcNotFound is an error encountered when the process cannot be found
ErrProcNotFound = syscall.Errno(0x7f)
// ErrVmcomputeOperationAccessIsDenied is an error which can be encountered when enumerating compute systems in RS1/RS2
// builds when the underlying silo might be in the process of terminating. HCS was fixed in RS3.
ErrVmcomputeOperationAccessIsDenied = syscall.Errno(0x5)
// ErrVmcomputeInvalidJSON is an error encountered when the compute system does not support/understand the messages sent by management
ErrVmcomputeInvalidJSON = syscall.Errno(0xc037010d)
// ErrVmcomputeUnknownMessage is an error encountered when the guest compute system doesn't support the message
ErrVmcomputeUnknownMessage = syscall.Errno(0xc037010b)
// ErrPlatformNotSupported is an error encountered when a request is made for an unsupported platform
ErrPlatformNotSupported = errors.New("unsupported platform request")
)
type ErrorEvent struct {
Message string `json:"Message,omitempty"` // Fully formatted error message
StackTrace string `json:"StackTrace,omitempty"` // Stack trace in string form
Provider string `json:"Provider,omitempty"`
EventID uint16 `json:"EventId,omitempty"`
Flags uint32 `json:"Flags,omitempty"`
Source string `json:"Source,omitempty"`
//Data []EventData `json:"Data,omitempty"` // Omit this as HCS doesn't encode this well. It's more confusing to include. It is however logged in debug mode (see processHcsResult function)
}
type hcsResult struct {
Error int32
ErrorMessage string
ErrorEvents []ErrorEvent `json:"ErrorEvents,omitempty"`
}
func (ev *ErrorEvent) String() string {
evs := "[Event Detail: " + ev.Message
if ev.StackTrace != "" {
evs += " Stack Trace: " + ev.StackTrace
}
if ev.Provider != "" {
evs += " Provider: " + ev.Provider
}
if ev.EventID != 0 {
evs = fmt.Sprintf("%s EventID: %d", evs, ev.EventID)
}
if ev.Flags != 0 {
evs = fmt.Sprintf("%s flags: %d", evs, ev.Flags)
}
if ev.Source != "" {
evs += " Source: " + ev.Source
}
evs += "]"
return evs
}
func processHcsResult(resultp *uint16) []ErrorEvent {
if resultp != nil {
resultj := interop.ConvertAndFreeCoTaskMemString(resultp)
logrus.Debugf("Result: %s", resultj)
result := &hcsResult{}
if err := json.Unmarshal([]byte(resultj), result); err != nil {
logrus.Warnf("Could not unmarshal HCS result %s: %s", resultj, err)
return nil
}
return result.ErrorEvents
}
return nil
}
type HcsError struct {
Op string
Err error
Events []ErrorEvent
}
func (e *HcsError) Error() string {
s := e.Op + ": " + e.Err.Error()
for _, ev := range e.Events {
s += "\n" + ev.String()
}
return s
}
// ProcessError is an error encountered in HCS during an operation on a Process object
type ProcessError struct {
SystemID string
Pid int
Op string
Err error
Events []ErrorEvent
}
// SystemError is an error encountered in HCS during an operation on a Container object
type SystemError struct {
ID string
Op string
Err error
Extra string
Events []ErrorEvent
}
func (e *SystemError) Error() string {
s := e.Op + " " + e.ID + ": " + e.Err.Error()
for _, ev := range e.Events {
s += "\n" + ev.String()
}
if e.Extra != "" {
s += "\n(extra info: " + e.Extra + ")"
}
return s
}
func makeSystemError(system *System, op string, extra string, err error, events []ErrorEvent) error {
// Don't double wrap errors
if _, ok := err.(*SystemError); ok {
return err
}
return &SystemError{
ID: system.ID(),
Op: op,
Extra: extra,
Err: err,
Events: events,
}
}
func (e *ProcessError) Error() string {
s := fmt.Sprintf("%s %s:%d: %s", e.Op, e.SystemID, e.Pid, e.Err.Error())
for _, ev := range e.Events {
s += "\n" + ev.String()
}
return s
}
func makeProcessError(process *Process, op string, err error, events []ErrorEvent) error {
// Don't double wrap errors
if _, ok := err.(*ProcessError); ok {
return err
}
return &ProcessError{
Pid: process.Pid(),
SystemID: process.SystemID(),
Op: op,
Err: err,
Events: events,
}
}
// IsNotExist checks if an error is caused by the Container or Process not existing.
// Note: Currently, ErrElementNotFound can mean that a Process has either
// already exited, or does not exist. Both IsAlreadyStopped and IsNotExist
// will currently return true when the error is ErrElementNotFound or ErrProcNotFound.
func IsNotExist(err error) bool {
err = getInnerError(err)
return err == ErrComputeSystemDoesNotExist ||
err == ErrElementNotFound ||
err == ErrProcNotFound
}
// IsAlreadyClosed checks if an error is caused by the Container or Process having been
// already closed by a call to the Close() method.
func IsAlreadyClosed(err error) bool {
err = getInnerError(err)
return err == ErrAlreadyClosed
}
// IsPending returns a boolean indicating whether the error is that
// the requested operation is being completed in the background.
func IsPending(err error) bool {
err = getInnerError(err)
return err == ErrVmcomputeOperationPending
}
// IsTimeout returns a boolean indicating whether the error is caused by
// a timeout waiting for the operation to complete.
func IsTimeout(err error) bool {
err = getInnerError(err)
return err == ErrTimeout
}
// IsAlreadyStopped returns a boolean indicating whether the error is caused by
// a Container or Process being already stopped.
// Note: Currently, ErrElementNotFound can mean that a Process has either
// already exited, or does not exist. Both IsAlreadyStopped and IsNotExist
// will currently return true when the error is ErrElementNotFound or ErrProcNotFound.
func IsAlreadyStopped(err error) bool {
err = getInnerError(err)
return err == ErrVmcomputeAlreadyStopped ||
err == ErrElementNotFound ||
err == ErrProcNotFound
}
// IsNotSupported returns a boolean indicating whether the error is caused by
// unsupported platform requests
// Note: Currently an unsupported platform request can surface as any of
// ErrVmcomputeInvalidJSON, ErrInvalidData, ErrNotSupported, or
// ErrVmcomputeUnknownMessage returned from the platform
func IsNotSupported(err error) bool {
err = getInnerError(err)
// If Platform doesn't recognize or support the request sent, below errors are seen
return err == ErrVmcomputeInvalidJSON ||
err == ErrInvalidData ||
err == ErrNotSupported ||
err == ErrVmcomputeUnknownMessage
}
func getInnerError(err error) error {
switch pe := err.(type) {
case nil:
return nil
case *HcsError:
err = pe.Err
case *SystemError:
err = pe.Err
case *ProcessError:
err = pe.Err
}
return err
}
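
A sketch of how the classification helpers compose during teardown, tolerating the benign cases:

package hcsexample

import "github.com/Microsoft/hcsshim/internal/hcs"

// ensureStopped terminates a compute system, treating "already stopped"
// and "does not exist" as success.
func ensureStopped(system *hcs.System) error {
	err := system.Terminate()
	if hcs.IsPending(err) {
		err = system.Wait()
	}
	if err != nil && !hcs.IsAlreadyStopped(err) && !hcs.IsNotExist(err) {
		return err
	}
	return nil
}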

View File

@@ -0,0 +1,47 @@
// Shim for the Host Compute Service (HCS) to manage Windows Server
// containers and Hyper-V containers.
package hcs
import (
"syscall"
)
//go:generate go run ../../mksyscall_windows.go -output zsyscall_windows.go hcs.go
//sys hcsEnumerateComputeSystems(query string, computeSystems **uint16, result **uint16) (hr error) = vmcompute.HcsEnumerateComputeSystems?
//sys hcsCreateComputeSystem(id string, configuration string, identity syscall.Handle, computeSystem *hcsSystem, result **uint16) (hr error) = vmcompute.HcsCreateComputeSystem?
//sys hcsOpenComputeSystem(id string, computeSystem *hcsSystem, result **uint16) (hr error) = vmcompute.HcsOpenComputeSystem?
//sys hcsCloseComputeSystem(computeSystem hcsSystem) (hr error) = vmcompute.HcsCloseComputeSystem?
//sys hcsStartComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsStartComputeSystem?
//sys hcsShutdownComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsShutdownComputeSystem?
//sys hcsTerminateComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsTerminateComputeSystem?
//sys hcsPauseComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsPauseComputeSystem?
//sys hcsResumeComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsResumeComputeSystem?
//sys hcsGetComputeSystemProperties(computeSystem hcsSystem, propertyQuery string, properties **uint16, result **uint16) (hr error) = vmcompute.HcsGetComputeSystemProperties?
//sys hcsModifyComputeSystem(computeSystem hcsSystem, configuration string, result **uint16) (hr error) = vmcompute.HcsModifyComputeSystem?
//sys hcsRegisterComputeSystemCallback(computeSystem hcsSystem, callback uintptr, context uintptr, callbackHandle *hcsCallback) (hr error) = vmcompute.HcsRegisterComputeSystemCallback?
//sys hcsUnregisterComputeSystemCallback(callbackHandle hcsCallback) (hr error) = vmcompute.HcsUnregisterComputeSystemCallback?
//sys hcsCreateProcess(computeSystem hcsSystem, processParameters string, processInformation *hcsProcessInformation, process *hcsProcess, result **uint16) (hr error) = vmcompute.HcsCreateProcess?
//sys hcsOpenProcess(computeSystem hcsSystem, pid uint32, process *hcsProcess, result **uint16) (hr error) = vmcompute.HcsOpenProcess?
//sys hcsCloseProcess(process hcsProcess) (hr error) = vmcompute.HcsCloseProcess?
//sys hcsTerminateProcess(process hcsProcess, result **uint16) (hr error) = vmcompute.HcsTerminateProcess?
//sys hcsGetProcessInfo(process hcsProcess, processInformation *hcsProcessInformation, result **uint16) (hr error) = vmcompute.HcsGetProcessInfo?
//sys hcsGetProcessProperties(process hcsProcess, processProperties **uint16, result **uint16) (hr error) = vmcompute.HcsGetProcessProperties?
//sys hcsModifyProcess(process hcsProcess, settings string, result **uint16) (hr error) = vmcompute.HcsModifyProcess?
//sys hcsGetServiceProperties(propertyQuery string, properties **uint16, result **uint16) (hr error) = vmcompute.HcsGetServiceProperties?
//sys hcsRegisterProcessCallback(process hcsProcess, callback uintptr, context uintptr, callbackHandle *hcsCallback) (hr error) = vmcompute.HcsRegisterProcessCallback?
//sys hcsUnregisterProcessCallback(callbackHandle hcsCallback) (hr error) = vmcompute.HcsUnregisterProcessCallback?
type hcsSystem syscall.Handle
type hcsProcess syscall.Handle
type hcsCallback syscall.Handle
type hcsProcessInformation struct {
ProcessId uint32
Reserved uint32
StdInput syscall.Handle
StdOutput syscall.Handle
StdError syscall.Handle
}

View File

@@ -0,0 +1,386 @@
package hcs
import (
"encoding/json"
"io"
"sync"
"syscall"
"time"
"github.com/Microsoft/hcsshim/internal/interop"
"github.com/sirupsen/logrus"
)
// Process represents a process in an HCS compute system
type Process struct {
handleLock sync.RWMutex
handle hcsProcess
processID int
system *System
cachedPipes *cachedPipes
callbackNumber uintptr
}
type cachedPipes struct {
stdIn syscall.Handle
stdOut syscall.Handle
stdErr syscall.Handle
}
type processModifyRequest struct {
Operation string
ConsoleSize *consoleSize `json:",omitempty"`
CloseHandle *closeHandle `json:",omitempty"`
}
type consoleSize struct {
Height uint16
Width uint16
}
type closeHandle struct {
Handle string
}
type ProcessStatus struct {
ProcessID uint32
Exited bool
ExitCode uint32
LastWaitResult int32
}
const (
stdIn string = "StdIn"
stdOut string = "StdOut"
stdErr string = "StdErr"
)
const (
modifyConsoleSize string = "ConsoleSize"
modifyCloseHandle string = "CloseHandle"
)
// Pid returns the process ID of the process within the container.
func (process *Process) Pid() int {
return process.processID
}
// SystemID returns the ID of the process's compute system.
func (process *Process) SystemID() string {
return process.system.ID()
}
// Kill signals the process to terminate but does not wait for it to finish terminating.
func (process *Process) Kill() error {
process.handleLock.RLock()
defer process.handleLock.RUnlock()
operation := "Kill"
title := "hcsshim::Process::" + operation
logrus.Debugf(title+" processid=%d", process.processID)
if process.handle == 0 {
return makeProcessError(process, operation, ErrAlreadyClosed, nil)
}
var resultp *uint16
err := hcsTerminateProcess(process.handle, &resultp)
events := processHcsResult(resultp)
if err != nil {
return makeProcessError(process, operation, err, events)
}
logrus.Debugf(title+" succeeded processid=%d", process.processID)
return nil
}
// Wait waits for the process to exit.
func (process *Process) Wait() error {
operation := "Wait"
title := "hcsshim::Process::" + operation
logrus.Debugf(title+" processid=%d", process.processID)
err := waitForNotification(process.callbackNumber, hcsNotificationProcessExited, nil)
if err != nil {
return makeProcessError(process, operation, err, nil)
}
logrus.Debugf(title+" succeeded processid=%d", process.processID)
return nil
}
// WaitTimeout waits for the process to exit or the duration to elapse. It
// returns ErrTimeout if the timeout elapses.
func (process *Process) WaitTimeout(timeout time.Duration) error {
operation := "WaitTimeout"
title := "hcsshim::Process::" + operation
logrus.Debugf(title+" processid=%d", process.processID)
err := waitForNotification(process.callbackNumber, hcsNotificationProcessExited, &timeout)
if err != nil {
return makeProcessError(process, operation, err, nil)
}
logrus.Debugf(title+" succeeded processid=%d", process.processID)
return nil
}
// ResizeConsole resizes the console of the process.
func (process *Process) ResizeConsole(width, height uint16) error {
process.handleLock.RLock()
defer process.handleLock.RUnlock()
operation := "ResizeConsole"
title := "hcsshim::Process::" + operation
logrus.Debugf(title+" processid=%d", process.processID)
if process.handle == 0 {
return makeProcessError(process, operation, ErrAlreadyClosed, nil)
}
modifyRequest := processModifyRequest{
Operation: modifyConsoleSize,
ConsoleSize: &consoleSize{
Height: height,
Width: width,
},
}
modifyRequestb, err := json.Marshal(modifyRequest)
if err != nil {
return err
}
modifyRequestStr := string(modifyRequestb)
var resultp *uint16
err = hcsModifyProcess(process.handle, modifyRequestStr, &resultp)
events := processHcsResult(resultp)
if err != nil {
return makeProcessError(process, operation, err, events)
}
logrus.Debugf(title+" succeeded processid=%d", process.processID)
return nil
}
func (process *Process) Properties() (*ProcessStatus, error) {
process.handleLock.RLock()
defer process.handleLock.RUnlock()
operation := "Properties"
title := "hcsshim::Process::" + operation
logrus.Debugf(title+" processid=%d", process.processID)
if process.handle == 0 {
return nil, makeProcessError(process, operation, ErrAlreadyClosed, nil)
}
var (
resultp *uint16
propertiesp *uint16
)
err := hcsGetProcessProperties(process.handle, &propertiesp, &resultp)
events := processHcsResult(resultp)
if err != nil {
return nil, makeProcessError(process, operation, err, events)
}
if propertiesp == nil {
return nil, ErrUnexpectedValue
}
propertiesRaw := interop.ConvertAndFreeCoTaskMemBytes(propertiesp)
properties := &ProcessStatus{}
if err := json.Unmarshal(propertiesRaw, properties); err != nil {
return nil, makeProcessError(process, operation, err, nil)
}
logrus.Debugf(title+" succeeded processid=%d, properties=%s", process.processID, propertiesRaw)
return properties, nil
}
// ExitCode returns the exit code of the process. The process must have
// already terminated.
func (process *Process) ExitCode() (int, error) {
operation := "ExitCode"
properties, err := process.Properties()
if err != nil {
return 0, makeProcessError(process, operation, err, nil)
}
if !properties.Exited {
return 0, makeProcessError(process, operation, ErrInvalidProcessState, nil)
}
if properties.LastWaitResult != 0 {
return 0, makeProcessError(process, operation, syscall.Errno(properties.LastWaitResult), nil)
}
return int(properties.ExitCode), nil
}
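
The usual sequencing is Wait first, then ExitCode, since ExitCode returns ErrInvalidProcessState until the process has exited; a sketch:

package hcsexample

import "github.com/Microsoft/hcsshim/internal/hcs"

// runToCompletion blocks until the process exits and returns its exit code.
func runToCompletion(p *hcs.Process) (int, error) {
	if err := p.Wait(); err != nil {
		return 0, err
	}
	return p.ExitCode()
}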
// Stdio returns the stdin, stdout, and stderr pipes, respectively. Closing
// these pipes does not close the underlying pipes; it should be possible to
// call this multiple times to get multiple interfaces.
func (process *Process) Stdio() (io.WriteCloser, io.ReadCloser, io.ReadCloser, error) {
process.handleLock.RLock()
defer process.handleLock.RUnlock()
operation := "Stdio"
title := "hcsshim::Process::" + operation
logrus.Debugf(title+" processid=%d", process.processID)
if process.handle == 0 {
return nil, nil, nil, makeProcessError(process, operation, ErrAlreadyClosed, nil)
}
var stdIn, stdOut, stdErr syscall.Handle
if process.cachedPipes == nil {
var (
processInfo hcsProcessInformation
resultp *uint16
)
err := hcsGetProcessInfo(process.handle, &processInfo, &resultp)
events := processHcsResult(resultp)
if err != nil {
return nil, nil, nil, makeProcessError(process, operation, err, events)
}
stdIn, stdOut, stdErr = processInfo.StdInput, processInfo.StdOutput, processInfo.StdError
} else {
// Use cached pipes
stdIn, stdOut, stdErr = process.cachedPipes.stdIn, process.cachedPipes.stdOut, process.cachedPipes.stdErr
// Invalidate the cache
process.cachedPipes = nil
}
pipes, err := makeOpenFiles([]syscall.Handle{stdIn, stdOut, stdErr})
if err != nil {
return nil, nil, nil, makeProcessError(process, operation, err, nil)
}
logrus.Debugf(title+" succeeded processid=%d", process.processID)
return pipes[0], pipes[1], pipes[2], nil
}
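
A sketch wiring the returned pipes to the parent's standard streams (error handling on the copies elided):

package hcsexample

import (
	"io"
	"os"

	"github.com/Microsoft/hcsshim/internal/hcs"
)

// pipeStdio connects a process's stdio to the current process's streams.
func pipeStdio(p *hcs.Process) error {
	stdin, stdout, stderr, err := p.Stdio()
	if err != nil {
		return err
	}
	go io.Copy(stdin, os.Stdin)   // feed our stdin into the process
	go io.Copy(os.Stdout, stdout) // stream its stdout back out
	go io.Copy(os.Stderr, stderr)
	return nil
}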
// CloseStdin closes the write side of the stdin pipe so that the process is
// notified on the read side that there is no more data in stdin.
func (process *Process) CloseStdin() error {
process.handleLock.RLock()
defer process.handleLock.RUnlock()
operation := "CloseStdin"
title := "hcsshim::Process::" + operation
logrus.Debugf(title+" processid=%d", process.processID)
if process.handle == 0 {
return makeProcessError(process, operation, ErrAlreadyClosed, nil)
}
modifyRequest := processModifyRequest{
Operation: modifyCloseHandle,
CloseHandle: &closeHandle{
Handle: stdIn,
},
}
modifyRequestb, err := json.Marshal(modifyRequest)
if err != nil {
return err
}
modifyRequestStr := string(modifyRequestb)
var resultp *uint16
err = hcsModifyProcess(process.handle, modifyRequestStr, &resultp)
events := processHcsResult(resultp)
if err != nil {
return makeProcessError(process, operation, err, events)
}
logrus.Debugf(title+" succeeded processid=%d", process.processID)
return nil
}
// Close cleans up any state associated with the process but does not kill
// or wait on it.
func (process *Process) Close() error {
process.handleLock.Lock()
defer process.handleLock.Unlock()
operation := "Close"
title := "hcsshim::Process::" + operation
logrus.Debugf(title+" processid=%d", process.processID)
// Don't double free this
if process.handle == 0 {
return nil
}
if err := process.unregisterCallback(); err != nil {
return makeProcessError(process, operation, err, nil)
}
if err := hcsCloseProcess(process.handle); err != nil {
return makeProcessError(process, operation, err, nil)
}
process.handle = 0
logrus.Debugf(title+" succeeded processid=%d", process.processID)
return nil
}
func (process *Process) registerCallback() error {
context := &notifcationWatcherContext{
channels: newChannels(),
}
callbackMapLock.Lock()
callbackNumber := nextCallback
nextCallback++
callbackMap[callbackNumber] = context
callbackMapLock.Unlock()
var callbackHandle hcsCallback
err := hcsRegisterProcessCallback(process.handle, notificationWatcherCallback, callbackNumber, &callbackHandle)
if err != nil {
return err
}
context.handle = callbackHandle
process.callbackNumber = callbackNumber
return nil
}
func (process *Process) unregisterCallback() error {
callbackNumber := process.callbackNumber
callbackMapLock.RLock()
context := callbackMap[callbackNumber]
callbackMapLock.RUnlock()
if context == nil {
return nil
}
handle := context.handle
if handle == 0 {
return nil
}
// hcsUnregisterProcessCallback has its own synchronization
// to wait for all callbacks to complete. We must NOT hold the callbackMapLock.
err := hcsUnregisterProcessCallback(handle)
if err != nil {
return err
}
closeChannels(context.channels)
callbackMapLock.Lock()
callbackMap[callbackNumber] = nil
callbackMapLock.Unlock()
handle = 0
return nil
}

View File

@@ -0,0 +1,547 @@
package hcs
import (
"encoding/json"
"os"
"strconv"
"sync"
"syscall"
"time"
"github.com/Microsoft/hcsshim/internal/interop"
"github.com/Microsoft/hcsshim/internal/schema1"
"github.com/Microsoft/hcsshim/internal/timeout"
"github.com/sirupsen/logrus"
)
// currentContainerStarts is used to limit the number of concurrent container
// starts.
var currentContainerStarts containerStarts
type containerStarts struct {
maxParallel int
inProgress int
sync.Mutex
}
func init() {
mpsS := os.Getenv("HCSSHIM_MAX_PARALLEL_START")
if len(mpsS) > 0 {
mpsI, err := strconv.Atoi(mpsS)
if err != nil || mpsI < 0 {
return
}
currentContainerStarts.maxParallel = mpsI
}
}
type System struct {
handleLock sync.RWMutex
handle hcsSystem
id string
callbackNumber uintptr
}
// CreateComputeSystem creates a new compute system with the given configuration but does not start it.
func CreateComputeSystem(id string, hcsDocumentInterface interface{}) (*System, error) {
operation := "CreateComputeSystem"
title := "hcsshim::" + operation
computeSystem := &System{
id: id,
}
hcsDocumentB, err := json.Marshal(hcsDocumentInterface)
if err != nil {
return nil, err
}
hcsDocument := string(hcsDocumentB)
logrus.Debugf(title+" ID=%s config=%s", id, hcsDocument)
var (
resultp *uint16
identity syscall.Handle
)
createError := hcsCreateComputeSystem(id, hcsDocument, identity, &computeSystem.handle, &resultp)
if createError == nil || IsPending(createError) {
if err := computeSystem.registerCallback(); err != nil {
// Terminate the compute system if it still exists. We're okay to
// ignore a failure here.
computeSystem.Terminate()
return nil, makeSystemError(computeSystem, operation, "", err, nil)
}
}
events, err := processAsyncHcsResult(createError, resultp, computeSystem.callbackNumber, hcsNotificationSystemCreateCompleted, &timeout.Duration)
if err != nil {
if err == ErrTimeout {
// Terminate the compute system if it still exists. We're okay to
// ignore a failure here.
computeSystem.Terminate()
}
return nil, makeSystemError(computeSystem, operation, hcsDocument, err, events)
}
logrus.Debugf(title+" succeeded id=%s handle=%d", id, computeSystem.handle)
return computeSystem, nil
}
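
A sketch of the typical lifecycle built from these calls; the configuration document is schema-dependent and elided:

package hcsexample

import "github.com/Microsoft/hcsshim/internal/hcs"

// launch creates, starts, and waits on a compute system.
func launch(id string, doc interface{}) error {
	system, err := hcs.CreateComputeSystem(id, doc)
	if err != nil {
		return err
	}
	defer system.Close()
	if err := system.Start(); err != nil {
		system.Terminate() // best effort, mirroring the create path above
		return err
	}
	return system.Wait()
}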
// OpenComputeSystem opens an existing compute system by ID.
func OpenComputeSystem(id string) (*System, error) {
operation := "OpenComputeSystem"
title := "hcsshim::" + operation
logrus.Debugf(title+" ID=%s", id)
computeSystem := &System{
id: id,
}
var (
handle hcsSystem
resultp *uint16
)
err := hcsOpenComputeSystem(id, &handle, &resultp)
events := processHcsResult(resultp)
if err != nil {
return nil, makeSystemError(computeSystem, operation, "", err, events)
}
computeSystem.handle = handle
if err := computeSystem.registerCallback(); err != nil {
return nil, makeSystemError(computeSystem, operation, "", err, nil)
}
logrus.Debugf(title+" succeeded id=%s handle=%d", id, handle)
return computeSystem, nil
}
// GetComputeSystems gets a list of the compute systems on the system that match the query
func GetComputeSystems(q schema1.ComputeSystemQuery) ([]schema1.ContainerProperties, error) {
operation := "GetComputeSystems"
title := "hcsshim::" + operation
queryb, err := json.Marshal(q)
if err != nil {
return nil, err
}
query := string(queryb)
logrus.Debugf(title+" query=%s", query)
var (
resultp *uint16
computeSystemsp *uint16
)
err = hcsEnumerateComputeSystems(query, &computeSystemsp, &resultp)
events := processHcsResult(resultp)
if err != nil {
return nil, &HcsError{Op: operation, Err: err, Events: events}
}
if computeSystemsp == nil {
return nil, ErrUnexpectedValue
}
computeSystemsRaw := interop.ConvertAndFreeCoTaskMemBytes(computeSystemsp)
computeSystems := []schema1.ContainerProperties{}
if err := json.Unmarshal(computeSystemsRaw, &computeSystems); err != nil {
return nil, err
}
logrus.Debugf(title + " succeeded")
return computeSystems, nil
}
// Start synchronously starts the computeSystem.
func (computeSystem *System) Start() error {
computeSystem.handleLock.RLock()
defer computeSystem.handleLock.RUnlock()
title := "hcsshim::ComputeSystem::Start ID=" + computeSystem.ID()
logrus.Debugf(title)
if computeSystem.handle == 0 {
return makeSystemError(computeSystem, "Start", "", ErrAlreadyClosed, nil)
}
// This is a very simple backoff-retry loop to limit the number
// of parallel container starts if environment variable
// HCSSHIM_MAX_PARALLEL_START is set to a positive integer.
// It should generally only be used as a workaround to various
// platform issues that exist between RS1 and RS4 as of Aug 2018
if currentContainerStarts.maxParallel > 0 {
for {
currentContainerStarts.Lock()
if currentContainerStarts.inProgress < currentContainerStarts.maxParallel {
currentContainerStarts.inProgress++
currentContainerStarts.Unlock()
break
}
if currentContainerStarts.inProgress == currentContainerStarts.maxParallel {
currentContainerStarts.Unlock()
time.Sleep(100 * time.Millisecond)
}
}
// Make sure we decrement the count when we are done.
defer func() {
currentContainerStarts.Lock()
currentContainerStarts.inProgress--
currentContainerStarts.Unlock()
}()
}
var resultp *uint16
err := hcsStartComputeSystem(computeSystem.handle, "", &resultp)
events, err := processAsyncHcsResult(err, resultp, computeSystem.callbackNumber, hcsNotificationSystemStartCompleted, &timeout.Duration)
if err != nil {
return makeSystemError(computeSystem, "Start", "", err, events)
}
logrus.Debugf(title + " succeeded")
return nil
}
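
The gate above is a counting semaphore built from a mutex and a sleep loop; the same throttling can be sketched with a buffered channel (an alternative pattern, not the shim's implementation):

package hcsexample

import "github.com/Microsoft/hcsshim/internal/hcs"

// startSlots bounds concurrent starts; the capacity stands in for the
// value parsed from HCSSHIM_MAX_PARALLEL_START.
var startSlots = make(chan struct{}, 5)

func throttledStart(system *hcs.System) error {
	startSlots <- struct{}{}        // acquire a slot (blocks when all are taken)
	defer func() { <-startSlots }() // release the slot when done
	return system.Start()
}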
// ID returns the compute system's identifier.
func (computeSystem *System) ID() string {
return computeSystem.id
}
// Shutdown requests a compute system shutdown, if IsPending() on the error returned is true,
// it may not actually be shut down until Wait() succeeds.
func (computeSystem *System) Shutdown() error {
computeSystem.handleLock.RLock()
defer computeSystem.handleLock.RUnlock()
title := "hcsshim::ComputeSystem::Shutdown"
logrus.Debugf(title)
if computeSystem.handle == 0 {
return makeSystemError(computeSystem, "Shutdown", "", ErrAlreadyClosed, nil)
}
var resultp *uint16
err := hcsShutdownComputeSystem(computeSystem.handle, "", &resultp)
events := processHcsResult(resultp)
if err != nil {
return makeSystemError(computeSystem, "Shutdown", "", err, events)
}
logrus.Debugf(title + " succeeded")
return nil
}
// Terminate requests a compute system terminate, if IsPending() on the error returned is true,
// it may not actually be shut down until Wait() succeeds.
func (computeSystem *System) Terminate() error {
computeSystem.handleLock.RLock()
defer computeSystem.handleLock.RUnlock()
title := "hcsshim::ComputeSystem::Terminate ID=" + computeSystem.ID()
logrus.Debugf(title)
if computeSystem.handle == 0 {
return makeSystemError(computeSystem, "Terminate", "", ErrAlreadyClosed, nil)
}
var resultp *uint16
err := hcsTerminateComputeSystem(computeSystem.handle, "", &resultp)
events := processHcsResult(resultp)
if err != nil {
return makeSystemError(computeSystem, "Terminate", "", err, events)
}
logrus.Debugf(title + " succeeded")
return nil
}
// Wait synchronously waits for the compute system to shutdown or terminate.
func (computeSystem *System) Wait() error {
title := "hcsshim::ComputeSystem::Wait ID=" + computeSystem.ID()
logrus.Debugf(title)
err := waitForNotification(computeSystem.callbackNumber, hcsNotificationSystemExited, nil)
if err != nil {
return makeSystemError(computeSystem, "Wait", "", err, nil)
}
logrus.Debugf(title + " succeeded")
return nil
}
// WaitTimeout synchronously waits for the compute system to terminate or the duration to elapse.
// If the timeout expires, IsTimeout(err) == true
func (computeSystem *System) WaitTimeout(timeout time.Duration) error {
title := "hcsshim::ComputeSystem::WaitTimeout ID=" + computeSystem.ID()
logrus.Debugf(title)
err := waitForNotification(computeSystem.callbackNumber, hcsNotificationSystemExited, &timeout)
if err != nil {
return makeSystemError(computeSystem, "WaitTimeout", "", err, nil)
}
logrus.Debugf(title + " succeeded")
return nil
}
func (computeSystem *System) Properties(types ...schema1.PropertyType) (*schema1.ContainerProperties, error) {
computeSystem.handleLock.RLock()
defer computeSystem.handleLock.RUnlock()
queryj, err := json.Marshal(schema1.PropertyQuery{types})
if err != nil {
return nil, makeSystemError(computeSystem, "Properties", "", err, nil)
}
var resultp, propertiesp *uint16
err = hcsGetComputeSystemProperties(computeSystem.handle, string(queryj), &propertiesp, &resultp)
events := processHcsResult(resultp)
if err != nil {
return nil, makeSystemError(computeSystem, "Properties", "", err, events)
}
if propertiesp == nil {
return nil, ErrUnexpectedValue
}
propertiesRaw := interop.ConvertAndFreeCoTaskMemBytes(propertiesp)
properties := &schema1.ContainerProperties{}
if err := json.Unmarshal(propertiesRaw, properties); err != nil {
return nil, makeSystemError(computeSystem, "Properties", "", err, nil)
}
return properties, nil
}
// Pause pauses the execution of the computeSystem. This feature is not enabled in TP5.
func (computeSystem *System) Pause() error {
computeSystem.handleLock.RLock()
defer computeSystem.handleLock.RUnlock()
title := "hcsshim::ComputeSystem::Pause ID=" + computeSystem.ID()
logrus.Debugf(title)
if computeSystem.handle == 0 {
return makeSystemError(computeSystem, "Pause", "", ErrAlreadyClosed, nil)
}
var resultp *uint16
err := hcsPauseComputeSystem(computeSystem.handle, "", &resultp)
events, err := processAsyncHcsResult(err, resultp, computeSystem.callbackNumber, hcsNotificationSystemPauseCompleted, &timeout.Duration)
if err != nil {
return makeSystemError(computeSystem, "Pause", "", err, events)
}
logrus.Debugf(title + " succeeded")
return nil
}
// Resume resumes the execution of the computeSystem. This feature is not enabled in TP5.
func (computeSystem *System) Resume() error {
computeSystem.handleLock.RLock()
defer computeSystem.handleLock.RUnlock()
title := "hcsshim::ComputeSystem::Resume ID=" + computeSystem.ID()
logrus.Debugf(title)
if computeSystem.handle == 0 {
return makeSystemError(computeSystem, "Resume", "", ErrAlreadyClosed, nil)
}
var resultp *uint16
err := hcsResumeComputeSystem(computeSystem.handle, "", &resultp)
events, err := processAsyncHcsResult(err, resultp, computeSystem.callbackNumber, hcsNotificationSystemResumeCompleted, &timeout.Duration)
if err != nil {
return makeSystemError(computeSystem, "Resume", "", err, events)
}
logrus.Debugf(title + " succeeded")
return nil
}
// CreateProcess launches a new process within the computeSystem.
func (computeSystem *System) CreateProcess(c interface{}) (*Process, error) {
computeSystem.handleLock.RLock()
defer computeSystem.handleLock.RUnlock()
title := "hcsshim::ComputeSystem::CreateProcess ID=" + computeSystem.ID()
var (
processInfo hcsProcessInformation
processHandle hcsProcess
resultp *uint16
)
if computeSystem.handle == 0 {
return nil, makeSystemError(computeSystem, "CreateProcess", "", ErrAlreadyClosed, nil)
}
configurationb, err := json.Marshal(c)
if err != nil {
return nil, makeSystemError(computeSystem, "CreateProcess", "", err, nil)
}
configuration := string(configurationb)
logrus.Debugf(title+" config=%s", configuration)
err = hcsCreateProcess(computeSystem.handle, configuration, &processInfo, &processHandle, &resultp)
events := processHcsResult(resultp)
if err != nil {
return nil, makeSystemError(computeSystem, "CreateProcess", configuration, err, events)
}
process := &Process{
handle: processHandle,
processID: int(processInfo.ProcessId),
system: computeSystem,
cachedPipes: &cachedPipes{
stdIn: processInfo.StdInput,
stdOut: processInfo.StdOutput,
stdErr: processInfo.StdError,
},
}
if err := process.registerCallback(); err != nil {
return nil, makeSystemError(computeSystem, "CreateProcess", "", err, nil)
}
logrus.Debugf(title+" succeeded processid=%d", process.processID)
return process, nil
}
// OpenProcess gets an interface to an existing process within the computeSystem.
func (computeSystem *System) OpenProcess(pid int) (*Process, error) {
computeSystem.handleLock.RLock()
defer computeSystem.handleLock.RUnlock()
title := "hcsshim::ComputeSystem::OpenProcess ID=" + computeSystem.ID()
logrus.Debugf(title+" processid=%d", pid)
var (
processHandle hcsProcess
resultp *uint16
)
if computeSystem.handle == 0 {
return nil, makeSystemError(computeSystem, "OpenProcess", "", ErrAlreadyClosed, nil)
}
err := hcsOpenProcess(computeSystem.handle, uint32(pid), &processHandle, &resultp)
events := processHcsResult(resultp)
if err != nil {
return nil, makeSystemError(computeSystem, "OpenProcess", "", err, events)
}
process := &Process{
handle: processHandle,
processID: pid,
system: computeSystem,
}
if err := process.registerCallback(); err != nil {
return nil, makeSystemError(computeSystem, "OpenProcess", "", err, nil)
}
logrus.Debugf(title+" succeeded processid=%s", process.processID)
return process, nil
}
// Close cleans up any state associated with the compute system but does not terminate or wait for it.
func (computeSystem *System) Close() error {
computeSystem.handleLock.Lock()
defer computeSystem.handleLock.Unlock()
title := "hcsshim::ComputeSystem::Close ID=" + computeSystem.ID()
logrus.Debugf(title)
// Don't double-free this handle
if computeSystem.handle == 0 {
return nil
}
if err := computeSystem.unregisterCallback(); err != nil {
return makeSystemError(computeSystem, "Close", "", err, nil)
}
if err := hcsCloseComputeSystem(computeSystem.handle); err != nil {
return makeSystemError(computeSystem, "Close", "", err, nil)
}
computeSystem.handle = 0
logrus.Debugf(title + " succeeded")
return nil
}
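Because Close zeroes the handle and returns nil on a second call, deferring it is safe even on paths that also close explicitly. A sketch (withSystem is a hypothetical helper):

func withSystem(computeSystem *System, fn func(*System) error) error {
	defer computeSystem.Close() // a repeat Close is a no-op: handle == 0 short-circuits
	return fn(computeSystem)
}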
func (computeSystem *System) registerCallback() error {
context := &notifcationWatcherContext{
channels: newChannels(),
}
callbackMapLock.Lock()
callbackNumber := nextCallback
nextCallback++
callbackMap[callbackNumber] = context
callbackMapLock.Unlock()
var callbackHandle hcsCallback
err := hcsRegisterComputeSystemCallback(computeSystem.handle, notificationWatcherCallback, callbackNumber, &callbackHandle)
if err != nil {
return err
}
context.handle = callbackHandle
computeSystem.callbackNumber = callbackNumber
return nil
}
func (computeSystem *System) unregisterCallback() error {
callbackNumber := computeSystem.callbackNumber
callbackMapLock.RLock()
context := callbackMap[callbackNumber]
callbackMapLock.RUnlock()
if context == nil {
return nil
}
handle := context.handle
if handle == 0 {
return nil
}
// hcsUnregisterComputeSystemCallback has its own synchronization
// to wait for all callbacks to complete. We must NOT hold the callbackMapLock.
err := hcsUnregisterComputeSystemCallback(handle)
if err != nil {
return err
}
closeChannels(context.channels)
callbackMapLock.Lock()
callbackMap[callbackNumber] = nil
callbackMapLock.Unlock()
return nil
}
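registerCallback and unregisterCallback implement the usual workaround for OS callbacks in Go: a Go pointer must not be handed to the system, so an incrementing uintptr key stands in for the context and is resolved through a mutex-guarded map when the notification fires. A self-contained sketch of the same pattern, with hypothetical names and the "sync" import assumed:

var (
	registryLock sync.Mutex
	registry     = map[uintptr]interface{}{}
	nextKey      uintptr
)

// registerContext stores v and returns an opaque token safe to pass to the OS.
func registerContext(v interface{}) uintptr {
	registryLock.Lock()
	defer registryLock.Unlock()
	nextKey++
	registry[nextKey] = v
	return nextKey
}

// unregisterContext drops the mapping once no more callbacks can arrive.
func unregisterContext(key uintptr) {
	registryLock.Lock()
	defer registryLock.Unlock()
	delete(registry, key)
}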
// Modify modifies the System by sending a request to the HCS.
func (computeSystem *System) Modify(config interface{}) error {
computeSystem.handleLock.RLock()
defer computeSystem.handleLock.RUnlock()
title := "hcsshim::Modify ID=" + computeSystem.id
if computeSystem.handle == 0 {
return makeSystemError(computeSystem, "Modify", "", ErrAlreadyClosed, nil)
}
requestJSON, err := json.Marshal(config)
if err != nil {
return err
}
requestString := string(requestJSON)
logrus.Debugf(title + " " + requestString)
var resultp *uint16
err = hcsModifyComputeSystem(computeSystem.handle, requestString, &resultp)
events := processHcsResult(resultp)
if err != nil {
return makeSystemError(computeSystem, "Modify", requestString, err, events)
}
logrus.Debugf(title + " succeeded")
return nil
}
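Like CreateProcess, Modify marshals an arbitrary request document to JSON. A hedged example of the call shape (addMappedDirectory is hypothetical; the ResourceType/RequestType/Settings fields are illustrative of an HCS modification request, not asserted by this diff):

func addMappedDirectory(computeSystem *System) error {
	request := map[string]interface{}{
		"ResourceType": "MappedDirectory",
		"RequestType":  "Add",
		"Settings": map[string]interface{}{
			"HostPath":      `C:\data`,
			"ContainerPath": `C:\data`,
		},
	}
	return computeSystem.Modify(request)
}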

@@ -1,4 +1,4 @@
-package hcsshim
+package hcs
 import (
 	"io"

@@ -1,4 +1,4 @@
-package hcsshim
+package hcs
 import (
 	"time"
@@ -6,13 +6,13 @@ import (
 	"github.com/sirupsen/logrus"
 )
-func processAsyncHcsResult(err error, resultp *uint16, callbackNumber uintptr, expectedNotification hcsNotification, timeout *time.Duration) error {
-	err = processHcsResult(err, resultp)
+func processAsyncHcsResult(err error, resultp *uint16, callbackNumber uintptr, expectedNotification hcsNotification, timeout *time.Duration) ([]ErrorEvent, error) {
+	events := processHcsResult(resultp)
 	if IsPending(err) {
-		return waitForNotification(callbackNumber, expectedNotification, timeout)
+		return nil, waitForNotification(callbackNumber, expectedNotification, timeout)
 	}
-	return err
+	return events, err
 }
 func waitForNotification(callbackNumber uintptr, expectedNotification hcsNotification, timeout *time.Duration) error {
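The hunk above changes processAsyncHcsResult to return the parsed ErrorEvents alongside the error, so every call site moves from a single-value to a two-value assignment, exactly as the Pause and Resume bodies earlier in this diff show:

// before
err = processAsyncHcsResult(err, resultp, computeSystem.callbackNumber, hcsNotificationSystemPauseCompleted, &timeout.Duration)
// after
events, err := processAsyncHcsResult(err, resultp, computeSystem.callbackNumber, hcsNotificationSystemPauseCompleted, &timeout.Duration)
if err != nil {
	return makeSystemError(computeSystem, "Pause", "", err, events)
}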

@@ -0,0 +1,441 @@
// MACHINE GENERATED BY 'go generate' COMMAND; DO NOT EDIT
package hcs
import (
"syscall"
"unsafe"
"github.com/Microsoft/hcsshim/internal/interop"
"golang.org/x/sys/windows"
)
var _ unsafe.Pointer
// Do the interface allocations only once for common
// Errno values.
const (
errnoERROR_IO_PENDING = 997
)
var (
errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
)
// errnoErr returns common boxed Errno values, to prevent
// allocations at runtime.
func errnoErr(e syscall.Errno) error {
switch e {
case 0:
return nil
case errnoERROR_IO_PENDING:
return errERROR_IO_PENDING
}
// TODO: add more here, after collecting data on the common
// error values seen on Windows. (perhaps when running
// all.bat?)
return e
}
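The boxing matters because converting a syscall.Errno to error allocates a fresh interface value; returning the shared errERROR_IO_PENDING instance lets hot paths compare cheaply. A sketch (isPendingErrno is hypothetical; the hcs package separately exposes its own IsPending helper for HCS errors):

func isPendingErrno(e syscall.Errno) bool {
	return errnoErr(e) == errERROR_IO_PENDING
}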
var (
modvmcompute = windows.NewLazySystemDLL("vmcompute.dll")
procHcsEnumerateComputeSystems = modvmcompute.NewProc("HcsEnumerateComputeSystems")
procHcsCreateComputeSystem = modvmcompute.NewProc("HcsCreateComputeSystem")
procHcsOpenComputeSystem = modvmcompute.NewProc("HcsOpenComputeSystem")
procHcsCloseComputeSystem = modvmcompute.NewProc("HcsCloseComputeSystem")
procHcsStartComputeSystem = modvmcompute.NewProc("HcsStartComputeSystem")
procHcsShutdownComputeSystem = modvmcompute.NewProc("HcsShutdownComputeSystem")
procHcsTerminateComputeSystem = modvmcompute.NewProc("HcsTerminateComputeSystem")
procHcsPauseComputeSystem = modvmcompute.NewProc("HcsPauseComputeSystem")
procHcsResumeComputeSystem = modvmcompute.NewProc("HcsResumeComputeSystem")
procHcsGetComputeSystemProperties = modvmcompute.NewProc("HcsGetComputeSystemProperties")
procHcsModifyComputeSystem = modvmcompute.NewProc("HcsModifyComputeSystem")
procHcsRegisterComputeSystemCallback = modvmcompute.NewProc("HcsRegisterComputeSystemCallback")
procHcsUnregisterComputeSystemCallback = modvmcompute.NewProc("HcsUnregisterComputeSystemCallback")
procHcsCreateProcess = modvmcompute.NewProc("HcsCreateProcess")
procHcsOpenProcess = modvmcompute.NewProc("HcsOpenProcess")
procHcsCloseProcess = modvmcompute.NewProc("HcsCloseProcess")
procHcsTerminateProcess = modvmcompute.NewProc("HcsTerminateProcess")
procHcsGetProcessInfo = modvmcompute.NewProc("HcsGetProcessInfo")
procHcsGetProcessProperties = modvmcompute.NewProc("HcsGetProcessProperties")
procHcsModifyProcess = modvmcompute.NewProc("HcsModifyProcess")
procHcsGetServiceProperties = modvmcompute.NewProc("HcsGetServiceProperties")
procHcsRegisterProcessCallback = modvmcompute.NewProc("HcsRegisterProcessCallback")
procHcsUnregisterProcessCallback = modvmcompute.NewProc("HcsUnregisterProcessCallback")
)
func hcsEnumerateComputeSystems(query string, computeSystems **uint16, result **uint16) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(query)
if hr != nil {
return
}
return _hcsEnumerateComputeSystems(_p0, computeSystems, result)
}
func _hcsEnumerateComputeSystems(query *uint16, computeSystems **uint16, result **uint16) (hr error) {
if hr = procHcsEnumerateComputeSystems.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsEnumerateComputeSystems.Addr(), 3, uintptr(unsafe.Pointer(query)), uintptr(unsafe.Pointer(computeSystems)), uintptr(unsafe.Pointer(result)))
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
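Every wrapper in this file repeats the same two-layer shape: an exported function converts Go strings to UTF-16 (a step that can fail, e.g. on interior NUL bytes), and an underscore-prefixed sibling locates the proc lazily, issues the raw Syscall, and maps a negative HRESULT to a Go error via interop.Win32FromHresult. Condensed, with hcsFoo as a placeholder name:

func hcsFoo(arg string, result **uint16) (hr error) {
	var _p0 *uint16
	_p0, hr = syscall.UTF16PtrFromString(arg) // fails if arg contains a NUL
	if hr != nil {
		return
	}
	return _hcsFoo(_p0, result) // Find() + Syscall + HRESULT mapping live here
}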
func hcsCreateComputeSystem(id string, configuration string, identity syscall.Handle, computeSystem *hcsSystem, result **uint16) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(id)
if hr != nil {
return
}
var _p1 *uint16
_p1, hr = syscall.UTF16PtrFromString(configuration)
if hr != nil {
return
}
return _hcsCreateComputeSystem(_p0, _p1, identity, computeSystem, result)
}
func _hcsCreateComputeSystem(id *uint16, configuration *uint16, identity syscall.Handle, computeSystem *hcsSystem, result **uint16) (hr error) {
if hr = procHcsCreateComputeSystem.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall6(procHcsCreateComputeSystem.Addr(), 5, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(configuration)), uintptr(identity), uintptr(unsafe.Pointer(computeSystem)), uintptr(unsafe.Pointer(result)), 0)
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsOpenComputeSystem(id string, computeSystem *hcsSystem, result **uint16) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(id)
if hr != nil {
return
}
return _hcsOpenComputeSystem(_p0, computeSystem, result)
}
func _hcsOpenComputeSystem(id *uint16, computeSystem *hcsSystem, result **uint16) (hr error) {
if hr = procHcsOpenComputeSystem.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsOpenComputeSystem.Addr(), 3, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(computeSystem)), uintptr(unsafe.Pointer(result)))
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsCloseComputeSystem(computeSystem hcsSystem) (hr error) {
if hr = procHcsCloseComputeSystem.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsCloseComputeSystem.Addr(), 1, uintptr(computeSystem), 0, 0)
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsStartComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(options)
if hr != nil {
return
}
return _hcsStartComputeSystem(computeSystem, _p0, result)
}
func _hcsStartComputeSystem(computeSystem hcsSystem, options *uint16, result **uint16) (hr error) {
if hr = procHcsStartComputeSystem.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsStartComputeSystem.Addr(), 3, uintptr(computeSystem), uintptr(unsafe.Pointer(options)), uintptr(unsafe.Pointer(result)))
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsShutdownComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(options)
if hr != nil {
return
}
return _hcsShutdownComputeSystem(computeSystem, _p0, result)
}
func _hcsShutdownComputeSystem(computeSystem hcsSystem, options *uint16, result **uint16) (hr error) {
if hr = procHcsShutdownComputeSystem.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsShutdownComputeSystem.Addr(), 3, uintptr(computeSystem), uintptr(unsafe.Pointer(options)), uintptr(unsafe.Pointer(result)))
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsTerminateComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(options)
if hr != nil {
return
}
return _hcsTerminateComputeSystem(computeSystem, _p0, result)
}
func _hcsTerminateComputeSystem(computeSystem hcsSystem, options *uint16, result **uint16) (hr error) {
if hr = procHcsTerminateComputeSystem.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsTerminateComputeSystem.Addr(), 3, uintptr(computeSystem), uintptr(unsafe.Pointer(options)), uintptr(unsafe.Pointer(result)))
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsPauseComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(options)
if hr != nil {
return
}
return _hcsPauseComputeSystem(computeSystem, _p0, result)
}
func _hcsPauseComputeSystem(computeSystem hcsSystem, options *uint16, result **uint16) (hr error) {
if hr = procHcsPauseComputeSystem.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsPauseComputeSystem.Addr(), 3, uintptr(computeSystem), uintptr(unsafe.Pointer(options)), uintptr(unsafe.Pointer(result)))
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsResumeComputeSystem(computeSystem hcsSystem, options string, result **uint16) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(options)
if hr != nil {
return
}
return _hcsResumeComputeSystem(computeSystem, _p0, result)
}
func _hcsResumeComputeSystem(computeSystem hcsSystem, options *uint16, result **uint16) (hr error) {
if hr = procHcsResumeComputeSystem.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsResumeComputeSystem.Addr(), 3, uintptr(computeSystem), uintptr(unsafe.Pointer(options)), uintptr(unsafe.Pointer(result)))
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsGetComputeSystemProperties(computeSystem hcsSystem, propertyQuery string, properties **uint16, result **uint16) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(propertyQuery)
if hr != nil {
return
}
return _hcsGetComputeSystemProperties(computeSystem, _p0, properties, result)
}
func _hcsGetComputeSystemProperties(computeSystem hcsSystem, propertyQuery *uint16, properties **uint16, result **uint16) (hr error) {
if hr = procHcsGetComputeSystemProperties.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall6(procHcsGetComputeSystemProperties.Addr(), 4, uintptr(computeSystem), uintptr(unsafe.Pointer(propertyQuery)), uintptr(unsafe.Pointer(properties)), uintptr(unsafe.Pointer(result)), 0, 0)
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsModifyComputeSystem(computeSystem hcsSystem, configuration string, result **uint16) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(configuration)
if hr != nil {
return
}
return _hcsModifyComputeSystem(computeSystem, _p0, result)
}
func _hcsModifyComputeSystem(computeSystem hcsSystem, configuration *uint16, result **uint16) (hr error) {
if hr = procHcsModifyComputeSystem.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsModifyComputeSystem.Addr(), 3, uintptr(computeSystem), uintptr(unsafe.Pointer(configuration)), uintptr(unsafe.Pointer(result)))
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsRegisterComputeSystemCallback(computeSystem hcsSystem, callback uintptr, context uintptr, callbackHandle *hcsCallback) (hr error) {
if hr = procHcsRegisterComputeSystemCallback.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall6(procHcsRegisterComputeSystemCallback.Addr(), 4, uintptr(computeSystem), uintptr(callback), uintptr(context), uintptr(unsafe.Pointer(callbackHandle)), 0, 0)
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsUnregisterComputeSystemCallback(callbackHandle hcsCallback) (hr error) {
if hr = procHcsUnregisterComputeSystemCallback.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsUnregisterComputeSystemCallback.Addr(), 1, uintptr(callbackHandle), 0, 0)
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsCreateProcess(computeSystem hcsSystem, processParameters string, processInformation *hcsProcessInformation, process *hcsProcess, result **uint16) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(processParameters)
if hr != nil {
return
}
return _hcsCreateProcess(computeSystem, _p0, processInformation, process, result)
}
func _hcsCreateProcess(computeSystem hcsSystem, processParameters *uint16, processInformation *hcsProcessInformation, process *hcsProcess, result **uint16) (hr error) {
if hr = procHcsCreateProcess.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall6(procHcsCreateProcess.Addr(), 5, uintptr(computeSystem), uintptr(unsafe.Pointer(processParameters)), uintptr(unsafe.Pointer(processInformation)), uintptr(unsafe.Pointer(process)), uintptr(unsafe.Pointer(result)), 0)
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsOpenProcess(computeSystem hcsSystem, pid uint32, process *hcsProcess, result **uint16) (hr error) {
if hr = procHcsOpenProcess.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall6(procHcsOpenProcess.Addr(), 4, uintptr(computeSystem), uintptr(pid), uintptr(unsafe.Pointer(process)), uintptr(unsafe.Pointer(result)), 0, 0)
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsCloseProcess(process hcsProcess) (hr error) {
if hr = procHcsCloseProcess.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsCloseProcess.Addr(), 1, uintptr(process), 0, 0)
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsTerminateProcess(process hcsProcess, result **uint16) (hr error) {
if hr = procHcsTerminateProcess.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsTerminateProcess.Addr(), 2, uintptr(process), uintptr(unsafe.Pointer(result)), 0)
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsGetProcessInfo(process hcsProcess, processInformation *hcsProcessInformation, result **uint16) (hr error) {
if hr = procHcsGetProcessInfo.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsGetProcessInfo.Addr(), 3, uintptr(process), uintptr(unsafe.Pointer(processInformation)), uintptr(unsafe.Pointer(result)))
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsGetProcessProperties(process hcsProcess, processProperties **uint16, result **uint16) (hr error) {
if hr = procHcsGetProcessProperties.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsGetProcessProperties.Addr(), 3, uintptr(process), uintptr(unsafe.Pointer(processProperties)), uintptr(unsafe.Pointer(result)))
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsModifyProcess(process hcsProcess, settings string, result **uint16) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(settings)
if hr != nil {
return
}
return _hcsModifyProcess(process, _p0, result)
}
func _hcsModifyProcess(process hcsProcess, settings *uint16, result **uint16) (hr error) {
if hr = procHcsModifyProcess.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsModifyProcess.Addr(), 3, uintptr(process), uintptr(unsafe.Pointer(settings)), uintptr(unsafe.Pointer(result)))
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsGetServiceProperties(propertyQuery string, properties **uint16, result **uint16) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(propertyQuery)
if hr != nil {
return
}
return _hcsGetServiceProperties(_p0, properties, result)
}
func _hcsGetServiceProperties(propertyQuery *uint16, properties **uint16, result **uint16) (hr error) {
if hr = procHcsGetServiceProperties.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsGetServiceProperties.Addr(), 3, uintptr(unsafe.Pointer(propertyQuery)), uintptr(unsafe.Pointer(properties)), uintptr(unsafe.Pointer(result)))
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsRegisterProcessCallback(process hcsProcess, callback uintptr, context uintptr, callbackHandle *hcsCallback) (hr error) {
if hr = procHcsRegisterProcessCallback.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall6(procHcsRegisterProcessCallback.Addr(), 4, uintptr(process), uintptr(callback), uintptr(context), uintptr(unsafe.Pointer(callbackHandle)), 0, 0)
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
func hcsUnregisterProcessCallback(callbackHandle hcsCallback) (hr error) {
if hr = procHcsUnregisterProcessCallback.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsUnregisterProcessCallback.Addr(), 1, uintptr(callbackHandle), 0, 0)
if int32(r0) < 0 {
hr = interop.Win32FromHresult(r0)
}
return
}
