Compare commits


544 Commits

Author SHA1 Message Date
Cory Snider
f415784c1a Merge pull request #51575 from smerkviladze/25.0-add-windows-integration-tests
[25.0 backport] integration: add Windows network driver and isolation tests
2025-11-26 18:55:24 -05:00
Sopho Merkviladze
4ef26e4c35 integration: add Windows network driver and isolation tests
Add integration tests for Windows container functionality focusing on network drivers and container isolation modes.

Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-11-26 13:24:13 +04:00
Sebastiaan van Stijn
2b409606ac Merge pull request #51344 from smerkviladze/50179-25.0-windows-gha-updates
[25.0 backport] gha: update to windows 2022 / 2025
2025-10-30 22:20:37 +01:00
Sebastiaan van Stijn
00fbff3423 integration/networking: increase context timeout for attach
The TestBridgeICCWindows test was failing on Windows due to a context timeout:

=== FAIL: github.com/docker/docker/integration/networking TestBridgeICCWindows/User_defined_nat_network (9.02s)
    bridge_test.go:243: assertion failed: error is not nil: Post "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.44/containers/62a4ed964f125e023cc298fde2d4d2f8f35415da970fd163b24e181b8c0c6654/start": context deadline exceeded
    panic.go:635: assertion failed: error is not nil: Error response from daemon: error while removing network: network mynat id 25066355c070294c1d8d596c204aa81f056cc32b3e12bf7c56ca9c5746a85b0c has active endpoints

=== FAIL: github.com/docker/docker/integration/networking TestBridgeICCWindows (17.65s)

Windows appears to be slower to start, so these timeouts are expected.
Increase the context timeout to give it a little more time.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0ea28fede0)
Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-30 21:58:10 +04:00
Sebastiaan van Stijn
92df858a5b integration-cli: TestCopyFromContainerPathIsNotDir: adjust for win 2025
It looks like the error returned by Windows changed in Windows 2025; before
Windows 2025, this produced an `ERROR_INVALID_NAME`:

    The filename, directory name, or volume label syntax is incorrect.

But Windows 2025 produces an `ERROR_DIRECTORY` ("The directory name is invalid."):

    CreateFile \\\\?\\Volume{d9f06b05-0405-418b-b3e5-4fede64f3cdc}\\windows\\system32\\drivers\\etc\\hosts\\: The directory name is invalid.

Docs: https://learn.microsoft.com/en-us/windows/win32/debug/system-error-codes--0-499-

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d3d20b9195)
Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-30 14:19:54 +04:00
Sebastiaan van Stijn
00f9f839c6 gha: run windows 2025 on PRs, 2022 scheduled
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9316396db0)
Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-30 13:37:45 +04:00
Sebastiaan van Stijn
acd2546285 gha: update to windows 2022 / 2025
The hosted Windows 2019 runners reach EOL on June 30:
https://github.com/actions/runner-images/issues/12045

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6f484d0d4c)
Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-30 13:07:40 +04:00
Sebastiaan van Stijn
d334795adb Merge pull request #51129 from smerkviladze/25.0-bump-swarmkit-to-v2.1.1
[25.0 backport] vendor: github.com/moby/swarmkit/v2 v2.1.1
2025-10-09 20:51:21 +02:00
Sopho Merkviladze
71967c3a82 vendor: github.com/moby/swarmkit/v2 v2.1.1
- Afford NetworkAllocator passing per-app state in Control API responses
- manager: restore NewServer to v2 signature
- api: afford passing params to OnGetNetwork hook
- Remove weak TLS cipher suites

full diff: https://github.com/moby/swarmkit/compare/v2.0.0...v2.1.1

Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
(cherry picked from commit ca9c5c6f7b)
Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-09 21:52:39 +04:00
Sebastiaan van Stijn
f06fd6d3c9 Merge pull request #51117 from austinvazquez/cherry-pick-fix-go-validation-checks-to-25.0
[25.0] Rework Go mod tidy/vendor checks
2025-10-07 23:09:20 +02:00
Sebastiaan van Stijn
ce61e5777b Merge pull request #51084 from smerkviladze/25.0-enable-strong-ciphers
[25.0] daemon: add support for DOCKER_DISABLE_WEAK_CIPHERS env-var to enforce strong TLS ciphers
2025-10-07 20:49:07 +02:00
Sopho Merkviladze
26d6c35b1b daemon: optionally enforce strong TLS ciphers via environment variable
Introduce the DOCKER_DISABLE_WEAK_CIPHERS environment variable to allow
disabling weak TLS ciphers. When set to true, the daemon restricts
TLS to a modern, secure subset of cipher suites, disabling known weak
ciphers such as CBC-mode ciphers.

This is intended as an edge-case option and is not exposed via a CLI flag or
config option. By default, weak ciphers remain enabled for backward compatibility.

Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-07 21:24:07 +04:00
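To illustrate how such an opt-in gate can work, here is a minimal Go sketch. The helper name `tlsCipherSuites` and the exact suite list are assumptions for this example, not the daemon's actual code; the only behaviour taken from the commit message is that the env-var defaults to off and, when enabled, restricts TLS to a modern subset.

```
package main

import (
	"crypto/tls"
	"os"
	"strconv"
)

// tlsCipherSuites is a hypothetical helper: when DOCKER_DISABLE_WEAK_CIPHERS
// is set to true, restrict TLS to an AEAD-only subset; otherwise return nil
// so the Go defaults (including CBC-mode suites) stay in effect for
// backward compatibility.
func tlsCipherSuites() []uint16 {
	disableWeak, _ := strconv.ParseBool(os.Getenv("DOCKER_DISABLE_WEAK_CIPHERS"))
	if !disableWeak {
		return nil // nil lets crypto/tls pick its default suite list
	}
	return []uint16{
		tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
		tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
		tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
		tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
	}
}

func main() {
	cfg := &tls.Config{CipherSuites: tlsCipherSuites(), MinVersion: tls.VersionTLS12}
	_ = cfg // a daemon would pass a config like this to its TLS listeners
}
```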
Sebastiaan van Stijn
a14b16e1f3 Merge pull request #51123 from crazy-max/25.0_ci-cache-fixes
[25.0] ci: update gha cache attributes
2025-10-07 19:19:53 +02:00
CrazyMax
3ea40f50ef ci: update gha cache attributes
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-10-07 12:42:46 +02:00
Austin Vazquez
7c47f6d831 Rework Go mod tidy/vendor checks
This change reworks the Go mod tidy/vendor checks to run for all Go modules tracked by the project and to fail on any uncommitted changes.

Signed-off-by: Austin Vazquez <austin.vazquez@docker.com>
(cherry picked from commit f6e1bf2808)
Signed-off-by: Austin Vazquez <austin.vazquez@docker.com>
2025-10-06 19:32:59 -05:00
Austin Vazquez
0847330073 Add existence check for go.mod and go.sum files
Signed-off-by: Austin Vazquez <austin.vazquez@docker.com>
(cherry picked from commit 0ad35e3ef0)
Signed-off-by: Austin Vazquez <austin.vazquez@docker.com>
2025-10-06 19:25:50 -05:00
Paweł Gronowski
b4c0ebf6d4 Merge pull request #50939 from vvoland/50936-25.0
[25.0 backport] Dockerfile.windows: remove deprecated 7Zip4Powershell
2025-09-09 19:35:21 +02:00
Paweł Gronowski
00f6814357 Dockerfile.windows: remove deprecated 7Zip4Powershell
The `tar` utility is included in Windows 10 (17063+) and Windows Server
2019+ so we can use it directly.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 8c8324b37f)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2025-09-09 17:35:49 +02:00
Cory Snider
165516eb47 Merge pull request #50551 from corhere/backport-25.0/libn/all-the-overlay-fixes
[25.0] libnetwork/overlay: backport all the fixes
2025-08-11 16:52:52 -04:00
Cory Snider
f099e911bd libnetwork: handle coalesced endpoint events
The eventually-consistent nature of NetworkDB means we cannot depend on
events being received in the same order that they were sent. Nor can we
depend on receiving events for all intermediate states. It is possible
for a series of entry UPDATEs, or a DELETE followed by a CREATE with the
same key, to get coalesced into a single UPDATE event on the receiving
node. Watchers of NetworkDB tables therefore need to be prepared to
gracefully handle arbitrary UPDATEs of a key, including those where the
new value may have nothing in common with the previous value.

The libnetwork controller naively handled events for endpoint_table
assuming that an endpoint leave followed by a rejoin of the same
endpoint would always be expressed as a DELETE event followed by a
CREATE. It would handle a coalesced UPDATE as a CREATE, adding a new
service binding without removing the old one. This would have various side
effects, such as the supposedly "transient state" of multiple conflicting
service bindings, where more than one endpoint is assigned an IP address,
never settling.

Modify the libnetwork controller to handle an UPDATE by removing the
previous service binding then adding the new one.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 4538a1de0a)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
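A minimal Go sketch of the event-handling shape described in the commit above, using illustrative stand-in types (`binding`, `event`) rather than libnetwork's real API: an UPDATE is treated as remove-old-then-add-new, because NetworkDB may coalesce a DELETE followed by a CREATE into a single UPDATE.

```
package main

import "fmt"

// binding is a stand-in for libnetwork's service-binding state.
type binding struct{ endpointID, ip string }

type event struct {
	Prev, New *binding // Prev==nil => create, New==nil => delete
}

func handleEndpointEvent(ev event, remove, add func(binding)) {
	switch {
	case ev.Prev == nil && ev.New != nil: // CREATE
		add(*ev.New)
	case ev.Prev != nil && ev.New == nil: // DELETE
		remove(*ev.Prev)
	case ev.Prev != nil && ev.New != nil: // coalesced UPDATE
		remove(*ev.Prev) // drop the stale binding first...
		add(*ev.New)     // ...then install the new one
	}
}

func main() {
	handleEndpointEvent(
		event{Prev: &binding{"ep1", "10.0.0.2"}, New: &binding{"ep1", "10.0.0.9"}},
		func(b binding) { fmt.Println("remove", b) },
		func(b binding) { fmt.Println("add", b) },
	)
}
```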
Cory Snider
bace1b8a3b libnetwork/d/overlay: handle coalesced peer updates
The eventually-consistent nature of NetworkDB means we cannot depend on
events being received in the same order that they were sent. Nor can we
depend on receiving events for all intermediate states. It is possible
for a series of entry UPDATEs, or a DELETE followed by a CREATE with the
same key, to get coalesced into a single UPDATE event on the receiving
node. Watchers of NetworkDB tables therefore need to be prepared to
gracefully handle arbitrary UPDATEs of a key, including those where the
new value may have nothing in common with the previous value.

The overlay driver naively handled events for overlay_peer_table
assuming that an endpoint leave followed by a rejoin of the same
endpoint would always be expressed as a DELETE event followed by a
CREATE. It would handle a coalesced UPDATE as a CREATE, inserting a new
entry into peerDB without removing the old one. This would have various side
effects, such as the supposedly "transient state" of multiple peerDB entries
with the same peer IP never settling.

Update driverapi to pass both the previous and new value of a table
entry into the driver. Modify the overlay driver to handle an UPDATE by
removing the previous peer entry from peerDB then adding the new one.
Modify the Windows overlay driver to match.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit e1a586a9a7)

libn/d/overlay: don't deref nil PeerRecord on error

If unmarshaling the peer record fails, there is no need to check if it's
a record for a local peer. Attempting to do so anyway will result in a
nil-dereference panic. Don't do that.

The Windows overlay driver has a typo: prevPeer is being checked twice
for whether it was a local-peer record. Check prevPeer once and newPeer
once each, as intended.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 12c6345d3a)

Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
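A small sketch of the ordering the nil-dereference fix enforces: bail out on an unmarshal error before inspecting the record at all. `peerRecord` and the function signatures here are hypothetical stand-ins, not the overlay driver's actual types.

```
package main

import (
	"errors"
	"fmt"
)

// peerRecord stands in for the driver's protobuf PeerRecord.
type peerRecord struct{ EndpointIP string }

// decodePeer returns early on unmarshal failure: there is no record to
// inspect, so checking whether it belongs to a local peer would be a
// nil dereference.
func decodePeer(value []byte, unmarshal func([]byte) (*peerRecord, error), isLocal func(*peerRecord) bool) (*peerRecord, error) {
	rec, err := unmarshal(value)
	if err != nil {
		return nil, fmt.Errorf("failed to unmarshal peer record: %w", err)
	}
	if isLocal(rec) {
		return nil, errors.New("ignoring event for local peer")
	}
	return rec, nil
}

func main() {
	_, err := decodePeer(nil,
		func([]byte) (*peerRecord, error) { return nil, errors.New("truncated") },
		func(*peerRecord) bool { return false })
	fmt.Println(err)
}
```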
Cory Snider
f9e54290b5 libn/d/win/overlay: dedupe NetworkDB definitions
Windows and Linux overlay driver instances are interoperable, working
from the same NetworkDB table for peer discovery. As both drivers
produce and consume serialized data through the table, they both need to
have a shared understanding of the shape and semantics of that data.
The Windows overlay driver contains a duplicate copy of the protobuf
definitions used for marshaling and unmarshaling the NetworkDB peer
entries for dubious reasons. It gives us the flexibility to have the
definitions diverge, which is only really useful for shooting ourselves
in the foot.

Make libnetwork/drivers/overlay the source of truth for the peer record
definitions and the name of the NetworkDB table for distributing peer
records.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 8340e109de)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
fc3df55230 libn/d/overlay: extract hashable address types
The macAddr and ipmac types are generally useful within libnetwork. Move
them to a dedicated package and overhaul the API to be more like that of
the net/netip package.

Update the overlay driver to utilize these types, adapting to the new
API.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit c7b93702b9)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
b22872af60 libnetwork/driverapi: make EventNotify optional
Overlay is the only driver which makes use of the EventNotify facility,
yet all other driver implementations are forced to provide a stub
implementation. Move the EventNotify and DecodeTableEntry methods into a
new optional TableWatcher interface and remove the stubs from all the
other drivers.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 844023f794)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
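The optional-interface pattern this refactor relies on, in a minimal Go sketch; the `TableWatcher`/`EventNotify` names come from the commit message, but the method signatures below are simplified assumptions rather than the real driverapi ones.

```
package main

import "fmt"

// Driver is a trimmed stand-in for libnetwork's driverapi.Driver.
type Driver interface {
	Type() string
}

// TableWatcher is the optional interface: only drivers that watch
// NetworkDB tables implement it.
type TableWatcher interface {
	EventNotify(tableName, key string, prev, value []byte)
}

// notifyDriver forwards table events only to drivers that opt in;
// everyone else no longer needs stub EventNotify methods.
func notifyDriver(d Driver, table, key string, prev, value []byte) {
	if tw, ok := d.(TableWatcher); ok {
		tw.EventNotify(table, key, prev, value)
	}
}

type nullDriver struct{}

func (nullDriver) Type() string { return "null" }

func main() {
	notifyDriver(nullDriver{}, "overlay_peer_table", "k", nil, []byte("v"))
	fmt.Println("no stub EventNotify required for drivers that don't watch tables")
}
```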
Cory Snider
c7e17ae65d libn/networkdb: report prev value in update events
When handling updates to existing entries, it is often necessary to know
what the previous value was. NetworkDB knows the previous and new values
when it broadcasts an update event for an entry. Include both values in
the update event so the watchers do not have to do their own parallel
bookkeeping.

Unify the event types under WatchEvent, since representing the operation kind
in the type system has proven inconvenient rather than useful. The operation is
now implied by the nilness of the Value and Prev event fields.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 69c3c56eba)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
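A sketch of what a unified event type along these lines could look like; the `Value`/`Prev` field names follow the commit message, while the remaining fields and types are assumed for illustration.

```
package main

import "fmt"

// WatchEvent: a single event type whose operation is implied by which of
// Value/Prev are set, as described above.
type WatchEvent struct {
	Table, NetworkID, Key string
	Value                 []byte // nil for deletes
	Prev                  []byte // nil for creates
}

func opOf(ev WatchEvent) string {
	switch {
	case ev.Prev == nil && ev.Value != nil:
		return "create"
	case ev.Prev != nil && ev.Value == nil:
		return "delete"
	default:
		return "update" // both set: the watcher sees the old and the new value
	}
}

func main() {
	fmt.Println(opOf(WatchEvent{Key: "ep1", Prev: []byte("old"), Value: []byte("new")}))
}
```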
Cory Snider
d60c71a9d7 libnetwork/d/overlay: fix logical race conditions
The concurrency control in the overlay driver is logically unsound.
While the use of mutexes is sufficient to prevent data races --
violations of the Go memory model -- many operations which need to be
atomic are performed with unbounded concurrency.

Overhaul the use of locks in the overlay network driver. Implement sound
locking at the network granularity: operations may proceed concurrently
iff they are being applied to distinct networks. Push the responsibility
of locking up to the code which calls methods or accesses struct fields
to avoid deadlock situations like we had previously with
d.initSandboxPeerDB() and to make the code easier to reason about.

Each overlay network has a distinct peer db. The NetworkDB watch for the
overlay peer table for the network will only start after
(*driver).CreateNetwork returns and will be stopped before libnetwork
calls (*driver).DeleteNetwork, therefore the lifetime of the peer db for
a network is constrained to the lifetime of the network itself. Yet the
peer db for a network is tracked in a dedicated map, separately from the
network objects themselves. This has resulted in a parallel set of
mutexes to manage concurrency of the peer db distinct from the mutexes
for the driver and networks. Move the peer db for a network into a field
of the network struct and guard it from concurrent access using the
per-network lock. Move the methods for manipulating the peer db into the
network struct so that the methods can only be called if the caller has
a reference to the network object.

Network creation and deletion are synchronized using the driver-scope
mutex, but some of the kernel programming is performed outside of the
critical section. It is possible for network deletion to race with
recreating the network, interleaving the kernel programming for the
network creation and deletion, resulting in inconsistent kernel state.
Parallelize network creation and deletion soundly. Use a double-checked
locking scheme to soundly handle the case of concurrent CreateNetwork
and DeleteNetwork for the same network id without blocking operations
on other networks. Synchronize operations on a network so that
operations on the network such as adding a neighbor to the peer db are
performed atomically, not interleaved with deleting the network.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 89d3419093)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
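A simplified Go sketch of the double-checked locking shape described above, with hypothetical `driver`/`network` types standing in for the overlay driver's real structures: look the network up under the driver lock, then re-check under the per-network lock so a concurrent delete of the same id cannot interleave with the kernel programming.

```
package main

import (
	"fmt"
	"sync"
)

type network struct {
	mu      sync.Mutex
	deleted bool
}

type driver struct {
	mu       sync.Mutex
	networks map[string]*network
}

func (d *driver) createNetwork(id string, program func()) error {
	d.mu.Lock()
	n, ok := d.networks[id]
	if !ok {
		n = &network{}
		d.networks[id] = n
	}
	d.mu.Unlock()

	n.mu.Lock()
	defer n.mu.Unlock()
	if n.deleted { // re-check: torn down between the two locks
		return fmt.Errorf("network %s concurrently deleted", id)
	}
	program() // kernel programming is atomic with respect to this network
	return nil
}

func main() {
	d := &driver{networks: map[string]*network{}}
	fmt.Println(d.createNetwork("n1", func() { fmt.Println("programming n1") }))
}
```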
Cory Snider
ad54b8f9ce libn/d/overlay: fix encryption race conditions
There is a dedicated mutex for synchronizing access to the encrMap.
Separately, the main driver mutex is used for synchronizing access to
the encryption keys. Their use is sufficient to prevent data races (if
used correctly, which is not the case) but not logical race conditions.
Programming the encryption parameters for a peer can race with
encryption keys being updated, which could lead to inconsistencies
between the parameters programmed into the kernel and the desired state.

Introduce a new mutex for synchronizing encryption operations. Use that
mutex to synchronize access to both encrMap and keys. Handle encryption
key updates in a critical section so they can no longer be interleaved
with kernel programming of encryption parameters.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 843cd96725)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
8075689abd libn/d/overlay: inline secMapWalk into only caller
func (*driver) secMapWalk is a curious beast. It is named walk, yet it
also mutates the collection being iterated over. It returns an error,
but that error is always nil. It takes a callback that can break
iteration, yet the only caller makes no use of that affordance. Its
utility is limited and the abstraction hinders readability more than it
helps. Open-code the d.secMap.nodes loop into
func (*driver) updateKeys(), the only caller.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit a1d299749c)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
480dfaef06 libnetwork/d/overlay: un-embed mutexes
It is easier to find all references when they are struct fields rather
than embedded structs.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 74713e1a7d)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
e604d70e22 libnetwork/d/overlay: ref-count encryption params
The IPsec encryption parameters (Security Association Database and
Security Policy Database entries) for a particular overlay network peer
(VTEP) are shared global state as they have to be programmed into the
root network namespace. The same parameters are used when encrypting
VXLAN traffic to a particular VTEP for all overlay networks. Deleting
the entries for a VTEP will break encryption to that VTEP across all
encrypted overlay networks, therefore the decision of when to delete the
entries must take the state of all overlay networks into account.
Unfortunately this is not the case.

The overlay driver uses local per-network state to decide when to
program and delete the parameters for a VTEP. In practice, the
parameters for all VTEPs participating in an encrypted overlay network
are deleted when the network is deleted. Encryption to that VTEP over
all other active encrypted overlay networks would be broken until some
other incidental peerDB event triggered a re-programming of the
parameters for that VTEP.

Change the setupEncryption and removeEncryption functions to be
reference-counted. The removeEncryption function needs to be called the
same number of times as addEncryption before the parameters are deleted
from the kernel.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 057e35dd65)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
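A reference-counting sketch in Go matching the behaviour described above, with hypothetical `program`/`unprogram` callbacks standing in for the actual kernel SA/SP programming: parameters for a VTEP are installed on the first reference and removed only when the last reference is dropped.

```
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type encrTable struct {
	mu   sync.Mutex
	refs map[netip.Addr]int
}

func (t *encrTable) setupEncryption(vtep netip.Addr, program func()) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.refs[vtep]++
	if t.refs[vtep] == 1 {
		program() // first user of this VTEP: program the kernel
	}
}

func (t *encrTable) removeEncryption(vtep netip.Addr, unprogram func()) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.refs[vtep] == 0 {
		return
	}
	t.refs[vtep]--
	if t.refs[vtep] == 0 {
		unprogram() // last user gone: now it is safe to delete the entries
		delete(t.refs, vtep)
	}
}

func main() {
	t := &encrTable{refs: map[netip.Addr]int{}}
	vtep := netip.MustParseAddr("192.0.2.10")
	t.setupEncryption(vtep, func() { fmt.Println("program SA/SP for", vtep) })
	t.setupEncryption(vtep, func() { fmt.Println("should not run twice") })
	t.removeEncryption(vtep, func() { fmt.Println("should not run yet") })
	t.removeEncryption(vtep, func() { fmt.Println("delete SA/SP for", vtep) })
}
```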
Sebastiaan van Stijn
b6b13b20af libnetwork/drivers/overlay: fix naked returns, output variables
libnetwork/drivers/overlay/encryption.go:370:2: naked return in func `programSA` with 64 lines of code (nakedret)
        return
        ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 02b4c7cc52)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
b539aea3cd libnetwork/d/overlay: properly model peer db
The overlay driver assumes that the peer table in NetworkDB will always
converge to a 1:1:1 mapping from peer endpoint IP address to MAC address
to VTEP. While this currently holds true in practice most of the time,
it is not an invariant and there are ways that users can violate this
assumption.

The driver detects whether peer entries conflict with each other by
matching up (IP, MAC) tuples. In the common case this works out fine as
the MAC address for an endpoint is generally derived from the assigned
IP address. If an IP address gets reassigned to a container on another
node the MAC address will follow, so the driver's conflict resolution
logic will behave as intended. However users may explicitly configure
the MAC address for a container's network endpoints. If an IP address
gets reassigned from a container with an auto-generated MAC address to a
container with a manually-configured MAC, or vice versa, the driver
would not detect the conflict as the (IP, MAC) tuples won't match up. It
would attempt to program the kernel's neighbor table with two
conflicting MAC addresses for one IP, which will fail. And since it
does not realize that there is a conflict, the driver won't reprogram
the kernel from the remaining entry when the other entry is deleted.

The assumption that only one IP address may resolve to a given MAC
address is violated if multiple IP addresses are assigned to an
endpoint. This rarely comes up in practice today as the overlay driver
only supports IPv4 single-stack connectivity for endpoints. If multiple
distinct peer entries exist with the same MAC address, the driver will
delete the MAC->VTEP mapping from the kernel's forwarding database when
any entry is deleted, even if other entries remain active. This
limitation is one of the biggest obstacles in the way of supporting IPv6
and dual-stack connectivity for endpoints attached to overlay networks.

Modify the peer db logic to correctly handle the cases where peer
entries have non-unique MAC or VTEP values. Treat any set of entries
with non-unique IP addresses as a conflict, irrespective of the entries'
MAC addresses. Maintain a reference count of forwarding database entries
and only delete the MAC->VTEP mapping from the kernel when there are no
longer any neighbor entries which resolve to that MAC.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 1c2b744ca2)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
e43e322a3b libnetwork/d/overlay: refactor peer db impl
The peer db implementation is more complex than it needs to be.
Notably, the peerCRUD / peerCRUDOp function split is a vestige of its
evolution from a worker goroutine receiving commands over a channel.

Refactor the peer db operations to be easier to read, understand and
modify. Factor the kernel-programming operations out into dedicated
addNeighbor and deleteNeighbor functions. Inline the rest of the
peerCRUDOp functions into their respective peerCRUD wrappers.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 59437f56f9)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
89ea2469df libnetwork/d/overlay: drop initEncryption function
The (*driver).Join function does many things to set up overlay
networking. One of the first things it does is call
(*network).joinSandbox, which in turn calls (*driver).initSandboxPeerDB.
The initSandboxPeerDB function iterates through the peer db to add
entries to the VXLAN FDB, neighbor table and IPsec security association
database in the kernel for all known peers on the overlay network.

One of the last things the (*driver).Join function does is call
(*driver).initEncryption. The initEncryption function iterates through
the peer db to add entries to the IPsec security association database in
the kernel for all known peers on the overlay network. But the preceding
initSandboxPeerDB call already did that! The initEncryption function is
redundant and can safely be removed.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit df6b405796)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
f69e64ab12 libnetwork/d/overlay: drop checkEncryption function
In addition to being three functions in a trenchcoat, the
checkEncryption function has a very subtle implementation which is
difficult to reason about. That is not a good property for security
relevant code to have.

Replace two of the three calls to checkEncryption with conditional calls
to setupEncryption and removeEncryption, lifting the conditional logic
which was hidden away in checkEncryption into the call sites to make it
easier to reason about the code. Replace the third call with a call to a
new initEncryption function.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 713f887698)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
67fbdf3c28 libnetwork/d/overlay: make setupEncryption a method
The setupEncryption and removeEncryption functions take several
parameters, but all call sites pass the same values for all the
parameters aside from remoteIP: values taken from fields of the driver
struct. Refactor these functions to be methods of the driver struct and
drop the redundant parameters.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit cb4e7b2f03)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
33a7e83e6d libnetwork/d/overlay: checkEncryption: drop isLocal param
Since it is not meaningful to add or remove encryption between the local
node and itself, the isLocal parameter is redundant. Setting up
encryption for all network peers is now invoked by calling

    checkEncryption(nid, netip.Addr{}, true)

Calling checkEncryption with isLocal=true, add=false is now more
explicitly a no-op. It always was effectively a no-op, but that was not
easy to spot by inspection. In the world with the isLocal flag,
calls to checkEncryption where isLocal=true and add=false would have rIP
set to d.advertiseAddr. In other words, it was a request to remove
encryption parameters between the local peer and itself if peerDB had no
remote-peer entries for the network. So either the call would do
nothing, or it would remove encryption parameters that aren't used for
anything. Now the equivalent call always does nothing.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 0d893252ac)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
684b2688d2 libnetwork/d/overlay: peerdb: drop isLocal param
Drop the isLocal boolean parameters from the peerDB functions. Local
peers have vtep == netip.Addr{}.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 4b1c1236b9)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
b61930cc82 libnetwork/d/overlay: elide vtep for local peers
The VTEP value for a peer in peerDB is only accurate for a remote peer.
The VTEP for a local peer would be the driver's advertise address, which
is not necessarily constant for the lifetime of the driver instance.
The VTEP values persisted in the peerDB entries for local peers could be
stale or missing if not kept in sync with the advertise address. And the
peerDB could get polluted with duplicate entries for local peers if the
advertise address was to change, as entries which differ only by VTEP
are considered distinct by SetMatrix. Persisting the advertise address
as the VTEP for local peers creates lots of problems that are not easy
to solve.

Stop persisting the VTEP for local peers in peerDB. Any code that needs
to know the VTEP for local peers can look that up from the source of
truth: the driver's advertise address. Use the lack of a VTEP in peerDB
entries to signify local peers, making the isLocal flag redundant.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 48e0b24ff7)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
1db0510301 libnetwork/d/overlay: filter local peers explicitly
The overlay driver's checkEncryption function configures the IPSec
parameters for the VXLAN tunnels to peer nodes. When called with
isLocal=true, it configures encryption for all peer nodes with at least
one peerDB entry. Since the local peers are also included in the peerDB,
it needs to filter those entries out. It does so by filtering out any
peer entries whose VTEP address is equal to the current local advertise
address. Trouble is, the local advertise address is not necessarily
constant. The driver tries to handle this case by calling
peerDBUpdateSelf() when the advertise address changes. This function
iterates through the peerDB and tries to update the VTEP address for all
local peer entries, but it does not actually do anything: it mutates a
temporary copy of the entry which is not persisted back into the peerDB.
(It used to be functional, but was broken when the peerDB was extended
to use SetMatrix.) So there may be cases where local peer entries are
not filtered out properly, resulting in spurious encryption parameters
being programmed into the kernel.

Filter out local peers when walking the peerDB by filtering on whether
the entry has the isLocal flag set. Remove the no-op code which attempts
to update local entries in the peerDB. No other code takes any interest
in the VTEP value for isLocal peer entries.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit a9e2d6d06e)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
9ff06c515c libn/d/overlay: use netip types more
The netip types are really useful for tracking state in the overlay
driver as they are hashable, unlike net.IP and friends, making them
directly useable as map keys. Converting between netip and net types is
fairly trivial, but fewer conversions is more ergonomic.

The NetworkDB entries for the overlay peer table encode the IP addresses
as strings. We need to parse them to some representation before
processing them further. Parse directly into netip types and pass those
values around to cut down on the number of conversions needed.

The peerDB needs to marshal the keys and entries to structs of hashable
values to be able to insert them into the SetMatrix. Use netip.Addr in
peerEntry so that peerEntry values can be directly inserted into the
SetMatrix without conversions. Use a hashable struct type as the
SetMatrix key to avoid having to marshal the whole struct to a string
and parse it back out.

Use netip.Addr as the map key for the driver's encryption map so the
values do not need to be converted to and from strings. Change the
encryption configuration methods to take netip types so the peerDB code
can pass netip values directly.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit d188df0039)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
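A tiny illustration of why the netip types help here: they are comparable, so they can be used directly as map keys, and the zero `netip.Addr{}` can mark a local peer. The `peerKey`/`peerEntry` shapes below are illustrative only, not the driver's actual structs.

```
package main

import (
	"fmt"
	"net/netip"
)

type peerKey struct {
	ip  netip.Addr
	mac [6]byte
}

type peerEntry struct {
	vtep netip.Addr // zero value (netip.Addr{}) can denote a local peer
}

func main() {
	db := map[peerKey]peerEntry{}
	k := peerKey{ip: netip.MustParseAddr("10.0.1.5"), mac: [6]byte{0x02, 0x42, 0x0a, 0x00, 0x01, 0x05}}
	db[k] = peerEntry{vtep: netip.MustParseAddr("192.0.2.7")}
	fmt.Println(db[k].vtep, db[k].vtep.IsValid()) // IsValid()==false would mean "local peer"
}
```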
Cory Snider
8f0a803fc6 libnetwork/internal/setmatrix: make keys generic
Make the SetMatrix key's type generic so that e.g. netip.Addr values can
be used as matrix keys.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 0317f773a6)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
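A minimal generic sketch in the spirit of this change; the real `libnetwork/internal/setmatrix` API is richer, and this version only shows that a comparable key type parameter lets values like `netip.Addr` be used as keys directly.

```
package main

import (
	"fmt"
	"net/netip"
)

// setMatrix maps each key to a set of values; both type parameters must be
// comparable so they can serve as map keys.
type setMatrix[K comparable, V comparable] struct {
	m map[K]map[V]struct{}
}

func (s *setMatrix[K, V]) Insert(k K, v V) {
	if s.m == nil {
		s.m = map[K]map[V]struct{}{}
	}
	if s.m[k] == nil {
		s.m[k] = map[V]struct{}{}
	}
	s.m[k][v] = struct{}{}
}

func (s *setMatrix[K, V]) Cardinality(k K) int { return len(s.m[k]) }

func main() {
	var sm setMatrix[netip.Addr, string]
	sm.Insert(netip.MustParseAddr("10.0.0.2"), "ep1")
	sm.Insert(netip.MustParseAddr("10.0.0.2"), "ep2")
	fmt.Println(sm.Cardinality(netip.MustParseAddr("10.0.0.2"))) // 2
}
```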
Cory Snider
7d8c7c21f2 libnetwork/osl: stop tracking neighbor entries
The Namespace keeps some state for each inserted neighbor-table entry
which is used to delete the entry (and any related entries) given only
the IP and MAC address of the entry to delete. This state is not
strictly required as the retained data is a pure function of the
parameters passed to AddNeighbor(), and the kernel can inform us whether
an attempt to add a neighbor entry would conflict with an existing
entry. Get rid of the neighbor state in Namespace. It's just one more
piece of state that can cause lots of grief if it falls out of sync with
ground truth. Require callers to call DeleteNeighbor() with the same
arguments as they had passed to AddNeighbor(). Push the responsibility
for detecting attempts to insert conflicting entries into the neighbor
table onto the kernel by using (*netlink.Handle).NeighAdd() instead of
NeighSet().

Modernize the error messages and logging in DeleteNeighbor() and
AddNeighbor().

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 0d6e7cd983)

libn/d/overlay: delete FDB entry from AF_BRIDGE

Starting with commit 0d6e7cd983
DeleteNeighbor() needs to be called with the same options as the
AddNeighbor() call that created the neighbor entry. The calls in peerdb
were modified incorrectly, resulting in the deletes failing and leaking
neighbor entries. Fix up the DeleteNeighbor calls so that the FDB entry
is deleted from the FDB instead of the neighbor table, and the neighbor
is deleted from the neighbor table instead of the FDB.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 7a12bbe5d3)

Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
9cd4021dae libnetwork/osl: remove superfluous locks in Namespace
The isDefault and nlHandle fields are immutable once the Namespace is
constructed.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 9866738736)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
4d6c4e44d7 libn/osl: refactor func (*Namespace) AddNeighbor
Scope local variables as narrowly as possible.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit b6d76eb572)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
e5b652add3 libn/osl: drop unused AddNeighbor force parameter
func (*Namespace) AddNeighbor is only ever called with the force
parameter set to false. Remove the parameter and eliminate dead code.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 3bdf99d127)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
ca41647695 libn/d/overlay: drop miss flags from peerAddOp
as all callers unconditionally set them to false.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit a8e8a4cdad)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:24 -04:00
Cory Snider
199b2496e7 libnetwork/d/overlay: drop miss flags from peerAdd
as all callers unconditionally set them to false.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 6ee58c2d29)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:24 -04:00
Cory Snider
65ec8c89a6 libn/d/overlay: drop obsolete writeToStore comment
The writeToStore() call was removed from CreateNetwork in
commit 0fa873c0fe. The comment about
undoing the write is no longer applicable.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit d90277372f)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:24 -04:00
Austin Vazquez
c447682dee Merge pull request #50693 from corhere/backport-25.0/fix-frozen
[25.0 backport] Fix download-frozen-image-v2
2025-08-11 12:11:09 -07:00
Paweł Gronowski
a749f055d9 download-frozen-image-v2: Use curl -L
Passing the Auth to the redirected location was fixed in curl 7.58:
https://curl.se/changes.html#7_58_0 so we no longer need the extra
handling and can just use `-L` to let curl handle redirects.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2025-08-11 13:40:18 -04:00
Paweł Gronowski
5a12eaf718 download-frozen-image-v2: handle 307 responses without decimal
Correctly parse an HTTP response that doesn't contain an HTTP version with a decimal place:

```
< HTTP/2 307
```

The previous version would only match strings like `HTTP/2.0 307`.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2025-08-11 13:40:18 -04:00
Cory Snider
59f062b233 Merge pull request #50511 from corhere/backport-25.0/libn/all-the-networkdb-fixes
[25.0] libnetwork/networkdb: backport all the fixes
2025-08-07 11:44:08 -04:00
Paweł Gronowski
842a9c522a Merge commit from fork
[25.0] Restore INC rules on firewalld reload
2025-07-29 10:01:01 +00:00
Rob Murray
651b2feb27 Restore INC iptables rules on firewalld reload
Signed-off-by: Rob Murray <rob.murray@docker.com>
2025-07-28 12:06:58 -04:00
Cory Snider
a43c1eef18 Merge pull request #50445 from corhere/backport-25.0/fix-firewalld-reload
[25.0 backport] libnetwork/d/{bridge,overlay}: fix firewalld reload handling
2025-07-28 12:05:05 -04:00
Cory Snider
728de37428 libnetwork/networkdb: improve quality of randomness
The property test for the mRandomNodes function revealed that it may
sometimes pick out a sample of fewer than m nodes even when the number
of nodes to pick from (excluding the local node) is >= m. Rewrite it
using a random shuffle or permutation so that it always picks a
uniformly-distributed sample of the requested size whenever the
population is large enough.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit ac5f464649)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:34 -04:00
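A shuffle-based selection sketch in Go along the lines described above; names and types are illustrative rather than NetworkDB's. Excluding the local node and then taking the first m elements of a random permutation yields a uniformly distributed sample whenever enough nodes exist.

```
package main

import (
	"fmt"
	"math/rand"
)

func pickRandomNodes(m int, localID string, nodes []string) []string {
	candidates := make([]string, 0, len(nodes))
	for _, id := range nodes {
		if id != localID {
			candidates = append(candidates, id)
		}
	}
	rand.Shuffle(len(candidates), func(i, j int) {
		candidates[i], candidates[j] = candidates[j], candidates[i]
	})
	if len(candidates) > m {
		candidates = candidates[:m]
	}
	return candidates
}

func main() {
	fmt.Println(pickRandomNodes(2, "self", []string{"self", "a", "b", "c", "d"}))
}
```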
Cory Snider
5bf90ded7a libnetwork/networkdb: test quality of mRandomNodes
TestNetworkDBAlwaysConverges will occasionally find a failure where one
entry is missing on one node even after waiting a full five minutes. One
possible explanation is that the selection of nodes to gossip with is
biased in some way. Test that the mRandomNodes function picks a
uniformly distributed sample of node IDs of sufficient length.

The new test reveals that mRandomNodes may sometimes pick out a sample
of fewer than m nodes even when the number of nodes to pick from
(excluding the local node) is >= m. Put the test behind an xfail tag so
it is opt-in to run, without interfering with CI or bisecting.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 5799deb853)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:34 -04:00
Cory Snider
51d13163c5 libnetwork/networkdb: add convergence test
Add a property-based test which asserts that a cluster of NetworkDB
nodes always eventually converges to a consistent state. As this test
takes a long time to run it is build-tagged to be excluded from CI.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit d8730dc1d3)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:34 -04:00
Cory Snider
9ca52f5fb9 libnetwork/networkdb: log encryption keys to file
Add a feature to NetworkDB to log the encryption keys to a file for the
Wireshark memberlist plugin to consume, configured using an environment
variable.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit ebfafa1561)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
ec820662de libn/networkdb: stop forging tombstone entries
When a node leaves a network, all entries owned by that node are
implicitly deleted. The other NetworkDB nodes handle the leave by
setting the deleted flag on the entries owned by the left node in their
local stores. This behaviour is problematic as it results in two
conflicting entries with the same Lamport timestamp propagating
through the cluster.

Consider two NetworkDB nodes, A, and B, which are both joined to some
network. Node A in quick succession leaves the network, immediately
rejoins it, then creates an entry. If Node B processes the
entry-creation event first, it will add the entry to its local store
then set the deleted flag upon processing the network-leave. No matter
how many times B bulk-syncs with A, B will ignore the live entry for
having the same timestamp as its local tombstone entry. Once this
situation occurs, the only way to recover is for the entry to get
updated by A with a new timestamp.

There is no need for a node to store forged tombstones for another
node's entries. All nodes will purge the entries naturally when they
process the network-leave or node-leave event. Simply delete the
non-owned entries from the local store so there is no inconsistent state
to interfere with convergence when nodes rejoin a network. Have nodes
update their local store with tombstones for entries when leaving a
network so that after a rapid leave-then-rejoin the entry deletions
propagate to nodes which may have missed the leave event.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 21d9109750)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
f3f1e091a8 libnetwork/networkdb: fix broadcast queue deadlocks
NetworkDB's JoinNetwork function enqueues a message onto a
TransmitLimitedQueue while holding the NetworkDB mutex locked for
writing. The TransmitLimitedQueue has its own synchronization;
it locks its mutex when enqueueing a message. Locking order:
  1. (NetworkDB).RWMutex.Lock()
  2. (TransmitLimitedQueue).mu.Lock()

NetworkDB's gossip periodic task calls GetBroadcasts on the same
TransmitLimitedQueue to retrieve the enqueued messages. GetBroadcasts
invokes the queue's NumNodes callback while the mutex is locked. The
NumNodes callback function that NetworkDB sets locks the NetworkDB mutex
for reading to take the length of the nodes map. Locking order:
  1. (TransmitLimitedQueue).mu.Lock()
  2. (NetworkDB).RWMutex.RLock()

If one goroutine calls GetBroadcasts on the queue concurrently with
another goroutine calling JoinNetwork on the NetworkDB, the goroutines
may deadlock due to the lock inversion.

Fix the deadlock by caching the number of nodes in an atomic variable so
that the NumNodes callback can load the value without blocking or
violating Go's memory model. And fix a similar deadlock situation with
the table-event broadcast queues.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 08bde5edfa)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
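A Go sketch of the atomic-counter fix described above, with illustrative names: the node count is mirrored into an atomic whenever the guarded map changes, so a `NumNodes`-style callback invoked by the broadcast queue never has to take the NetworkDB lock, breaking the lock-order inversion.

```
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type clusterState struct {
	mu       sync.Mutex
	nodes    map[string]struct{}
	estNodes atomic.Int64 // mirror of len(nodes), updated under mu
}

func (c *clusterState) addNode(id string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.nodes[id] = struct{}{}
	c.estNodes.Store(int64(len(c.nodes)))
}

// numNodes is safe to call while the broadcast queue holds its own lock:
// it never touches c.mu.
func (c *clusterState) numNodes() int { return int(c.estNodes.Load()) }

func main() {
	c := &clusterState{nodes: map[string]struct{}{}}
	c.addNode("a")
	c.addNode("b")
	fmt.Println(c.numNodes())
}
```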
Cory Snider
16dc168388 libn/networkdb: make TestNetworkDBIslands not flaky
With rejoinClusterBootStrap fixed in tests, split clusters should
reliably self-heal in tests as well as production. Work around the other
source of flakiness in TestNetworkDBIslands: timing out waiting for a
failed node to transition to gracefully left. This flake happens when
one of the leaving nodes sends its NodeLeft message to the other leaving
node, and the second is shut down before it has a chance to rebroadcast
the message to the remaining nodes. The proper fix would be to leverage
memberlist's own bookkeeping instead of duplicating it poorly with user
messages, but doing so requires a change in the memberlist module.
Instead, have the test check that the sum of failed+left nodes is as
expected, rather than waiting for all nodes to have failed==3 && left==0.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit aff444df86)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
12aaf29287 libn/networkdb: prevent spurious rejoins in tests
The rejoinClusterBootStrap periodic task rejoins with the bootstrap
nodes if none of them are members of the cluster. It correlates the
cluster nodes with the bootstrap list by comparing IP addresses,
ignoring ports. In normal operation this works out fine as every node
has a unique IP address, but in unit tests every node listens on a
distinct port of 127.0.0.1. This situation causes the check to
incorrectly filter out all nodes from the list, mistaking them for the
local node.

Filter out the local node using pointer equality of the *node to avoid
any ambiguity. Correlate the remote nodes by IP:port so that the check
behaves the same in tests and in production.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 1e1be54d3e)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
ca5250dc9f libnetwork/networkdb: prioritize local broadcasts
A network node is responsible for both broadcasting table events for
entries it owns and for rebroadcasting table events from other nodes it
has received. Table events to be broadcast are added to a single queue
per network, including events for rebroadcasting. As the memberlist
TransmitLimitedQueue is (to a first approximation) LIFO, a flood of
events from other nodes could delay the broadcasting of
locally-generated events indefinitely. Prioritize broadcasting local
events by splitting up the queues and only pulling from the rebroadcast
queue if there is free space in the gossip packet after draining the
local-broadcast queue.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 6ec6e0991a)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
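A toy sketch of the two-queue priority scheme described above: drain the local-broadcast queue first and give the rebroadcast queue only whatever space remains in the gossip packet. The `queue` type here is a trivial stand-in for memberlist's TransmitLimitedQueue.

```
package main

import "fmt"

type queue struct{ msgs [][]byte }

// take removes messages from the front of the queue until the byte limit
// would be exceeded.
func (q *queue) take(limit int) (out [][]byte) {
	used := 0
	for len(q.msgs) > 0 && used+len(q.msgs[0]) <= limit {
		used += len(q.msgs[0])
		out = append(out, q.msgs[0])
		q.msgs = q.msgs[1:]
	}
	return out
}

// getGossip fills the packet from the local queue first, then spends any
// leftover space on the rebroadcast queue.
func getGossip(local, rebroadcast *queue, limit int) [][]byte {
	msgs := local.take(limit)
	used := 0
	for _, m := range msgs {
		used += len(m)
	}
	return append(msgs, rebroadcast.take(limit-used)...)
}

func main() {
	local := &queue{msgs: [][]byte{[]byte("local-1"), []byte("local-2")}}
	re := &queue{msgs: [][]byte{[]byte("rebroadcast-1")}}
	for _, m := range getGossip(local, re, 30) {
		fmt.Println(string(m))
	}
}
```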
Cory Snider
c912e5278b libnetwork/networkdb: improve TestCRUDTableEntries
Log more details when assertions fail to provide a more complete picture
of what went wrong when TestCRUDTableEntries fails. Log the state of
each NetworkDB instance at various points in TestCRUDTableEntries to
provide an even more complete picture.

Increase the global logger verbosity in tests so warnings and debug logs
are printed to the test log.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit e9a7154909)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
e9ed499888 libn/networkdb: use distinct type for own networks
NetworkDB uses a multi-dimensional map of struct network to keep track of
network attachments for both remote nodes and the local node. Only a
subset of the struct fields are used for remote nodes' network
attachments. The tableBroadcasts pointer field in particular is
always initialized for network values representing local attachments
(read: nDB.networks[nDB.config.NodeID]) and always nil for remote
attachments. Consequently, unnecessary defensive nil-pointer checks are
peppered throughout the code despite the aforementioned invariant.

Enshrine the invariant that tableBroadcasts is initialized iff the
network attachment is for the local node in the type system. Pare down
struct network to only the fields needed for remote network attachments
and move the local-only fields into a new struct thisNodeNetwork. Elide
the unnecessary nil-checks.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit dbb0d88109)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
6856a17655 libnetwork/networkdb: don't clear queue on rejoin
When joining a network that was previously joined but not yet reaped,
NetworkDB replaces the network struct value with a zeroed-out one with
the entries count copied over. This is also the case when joining a
network that is currently joined! Consequently, joining a network has
the side effect of clearing the broadcast queue. If the queue is cleared
while messages are still pending broadcast, convergence may be delayed
until the next bulk sync cycle.

Make it an error to join a network twice without leaving. Retain the
existing broadcast queue when rejoining a network that has not yet been
reaped.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 51f31826ee)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
31f4c5914e libnetwork/networkdb: drop id field from network
The map key for nDB.networks is the network ID. The struct field is not
actually used anywhere in practice.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 30b27ab6ea)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
35f7b1d7c9 libn/networkdb: take most tests off flaky list
The loopback-test fixes seem to be sufficient to resolve the flakiness
of all the tests aside from TestFlakyNetworkDBIslands.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 697c17ca95)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
743a0df9ec libnetwork/networkdb: always shut down memberlist
Gracefully leaving the memberlist cluster is a best-effort operation.
Failing to successfully broadcast the leave message to a peer should not
prevent NetworkDB from cleaning up the memberlist instance on close. But
that was not the case in practice. Log the error returned from
(*memberlist.Memberlist).Leave instead of returning it and proceed with
shutting down irrespective of whether Leave() returns an error.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 16ed51d864)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
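A sketch of the resulting shutdown behaviour using the memberlist API (`Leave`, `Shutdown`); the wrapper itself is illustrative, not NetworkDB's actual Close path, and the five-second leave timeout is an assumption.

```
package main

import (
	"log"
	"time"

	"github.com/hashicorp/memberlist"
)

// closeCluster treats leaving the cluster as best-effort: a failed Leave is
// logged, and Shutdown runs regardless so the memberlist instance is always
// cleaned up.
func closeCluster(ml *memberlist.Memberlist) error {
	if err := ml.Leave(5 * time.Second); err != nil {
		log.Printf("failed to broadcast leave message: %v", err)
	}
	return ml.Shutdown() // always release the memberlist resources
}

func main() {
	ml, err := memberlist.Create(memberlist.DefaultLocalConfig())
	if err != nil {
		log.Fatal(err)
	}
	if err := closeCluster(ml); err != nil {
		log.Fatal(err)
	}
}
```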
Matthieu MOREL
bacba3726f fix redefines-builtin-id from revive
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-07-25 16:20:29 -04:00
Andrey Epifanov
f93d90cee3 overlay: Reload Ingress iptables rules in swarm mode
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit a1f68bf5a6)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:17:16 -04:00
Andrey Epifanov
00232ac981 libnetwork: split programIngress() and dependent functions into Add and Del functions
- refactor programIngressPorts to use Rule.Insert/Append/Delete for improved rule management
- split programIngress() and dependent functions into Add and Del functions

Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 8b208f1b95)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:17:16 -04:00
Andrey Epifanov
88d0ed889d libnetwork: refactor ingress chain management for improved rule handling and initialization
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 50e6f4c4cb)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:17:16 -04:00
Andrey Epifanov
32c814a85f libnetwork: add FlushChain methods for improved iptables management
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 4f0485e45f)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 15:17:16 -04:00
Andrey Epifanov
fb8e5d85f6 libnetwork: refactor rule management to use Ensure method for Append and Insert operations
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 262c32565b)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Andrey Epifanov
fb6695de75 libnetwork: refactor iptable functions to include table parameter for improved rule management
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 19a8083866)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Andrey Epifanov
089d70f3c8 libnetwork: extract plumpIngressProxy steps into a separate function
- Extract plumpIngressProxy steps into a separate function
- Don't create a new listener if there's already one in ingressProxyTbl

Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit c2e2e7fe24)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Andrey Epifanov
2710c239df libnetwork: extract programIngressPorts steps into separate functions
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 51ed289b06)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Andrey Epifanov
7982904677 libnetwork: extract creation/initiation of INGRESS-DOCKER chains into a separate function
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 752758ae77)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Rob Murray
fbffa88b76 Restore legacy links along with other iptables rules
On firewalld reload, all the iptables rules are deleted. Legacy
links use iptables.OnReloaded to re-create their rules - but there's no
way to delete an OnReloaded callback. So, a firewalld reload
after the linked containers are deleted results in zombie rules
being re-created.

Legacy links are created by ProgramExternalConnectivity, but
removed in Leave (rather than RevokeExternalConnectivity).

So, restore legacy links for current endpoints, along with the
other per-network/per-port rules.

Move link-removal to RevokeExternalConnectivity, so that it
happens with the configNetwork lock held.

Signed-off-by: Rob Murray <rob.murray@docker.com>
2025-07-25 15:15:50 -04:00
Rob Murray
41f080df25 Restore iptables for current networks on firewalld reload
Using iptables.OnReloaded to restore individual per-network rules
on firewalld reload means rules for deleted networks pop back into
existence (because there was no way to delete the callbacks on
network-delete).

So, on firewalld reload, walk over current networks and ask them
to restore their iptables rules.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit a527e5a546)

Test that firewalld reload doesn't re-create deleted iptables rules

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit c3fa7c1779)

Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Rob Murray
c64e8a8117 Add test util "WithPortMap"
Backport the WithPortMap() function through a partial cherry-pick.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 20c99e4156)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Albin Kerouanton
0316eaaa23 Install and run firewalld for CI's firewalld tests
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit 8883db20c5)
Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit adfed82ab8)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Rob Murray
270166cbe5 Fix TestPassthrough
Doesn't look like it would ever have worked, but:
- init the dbus connection to avoid a segv
- include the chain name when creating the rule
- remove the test rule if it's created

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 0ab6f07c31)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 15:15:50 -04:00
Rob Murray
a012739c2c Add test util "FirewalldRunning"
Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit b8cacdf324)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 14:46:24 -04:00
Matthieu MOREL
e53cf6bc02 fix(ST1016): Use consistent method receiver names
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 70139978d3)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-24 14:01:41 -04:00
Sebastiaan van Stijn
4c5a99d08c Merge pull request #50230 from corhere/backport-25.0/libn/fix-networkdb-dns-update-delete
[25.0 backport] libnetwork: fix flaky Swarm service DNS
2025-06-19 10:30:26 +02:00
Cory Snider
f2126bfc7f libnetwork: fix flaky Swarm service DNS
When libnetwork receives a watch event for a driver table entry from
NetworkDB it passes the event along to the interested driver. This code
contains a subtle bug: update events from NetworkDB are passed along to
the driver as Delete events! This bug was lying dormant as driver-table
entries can only be added by the driver, not updated. Now that NetworkDB
broadcasts an UpdateEvent to watchers if the entry is already known to
the local NetworkDB, irrespective of whether the event received from the
remote peer was a CREATE or UPDATE event, the bug is causing problems.
Whenever a remote node replaces an entry in the overlay_peer_table but
the intermediate delete state was not received by the local node, the
new CREATE event would be translated to an UpdateEvent by NetworkDB and
subsequently handled by the overlay driver as if the entry was deleted!

Bubble table UPDATE events up to the network driver as Update events.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit a7f01d238e)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-06-18 19:20:40 -04:00
Sebastiaan van Stijn
95aff4f75c Merge pull request #50203 from corhere/backport-25.0/cycle-free-swarmkit
[25.0] vendor: github.com/moby/swarmkit/v2 v2.0.0
2025-06-17 11:30:15 +02:00
Cory Snider
4d168615cc vendor: github.com/moby/swarmkit/v2 v2.0.0
- add Unwrap error to custom error types
- removes dependency on github.com/rexray/gocsi
- fix CSI plugin load issue
- add ALPN next protos configuration to gRPC server
- fix task scheduler infinite loop

full diff: https://github.com/moby/swarmkit/compare/911c97650f2e...v2.0.0

Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-06-16 18:49:51 -04:00
Cory Snider
614ecc8201 hack: block imports of vendored testify packages
While github.com/stretchr/testify is not used directly by any of the
repository code, it is a transitive dependency via Swarmkit and
therefore still easy to use without having to revendor. Add lint rules
to ban importing testify packages to make sure nobody does.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 7ebd88d2d9)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-06-16 18:49:51 -04:00
Cory Snider
26a318189b libn/cnmallocator: migrate tests to gotest.tools/v3
Apply command gotest.tools/v3/assert/cmd/gty-migrate-from-testify to the
cnmallocator package to be consistent with the assertion library used
elsewhere in moby.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry-picked from commit 4f30a930ad)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-06-16 18:49:51 -04:00
Cory Snider
4e2e8fe181 Vendor dependency cycle-free swarmkit
Moby imports Swarmkit; Swarmkit no longer imports Moby. In order to
accomplish this feat, Swarmkit has introduced a new plugin.Getter
interface so it could stop importing our pkg/plugingetter package. This
new interface is not entirely compatible with our
plugingetter.PluginGetter interface, necessitating a thin adapter.

Swarmkit had to jettison the CNM network allocator to stop having to
import libnetwork as the cnmallocator package is deeply tied to
libnetwork. Move the CNM network allocator into libnetwork, where it
belongs. The package had a short and uninteresting Git history in the
Swarmkit repository, so no effort was made to retain history.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry-picked from commit 7b0ab1011c)

d/cluster/convert: expose Addr() on plugins

The swarmPlugin type does not implement the Swarm plugin.AddrPlugin
interface because it embeds an interface value which does not include
that method in its method set. (You can type-assert an interface value
to another interface which the concrete type implements, but a struct
embedding an interface value is not itself an interface value.) Wrap the
plugin with a different adapter type which exposes the Addr() method if
the concrete plugin implements it.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 8b6d6b9ad5)

libnetwork/cnmallocator: fix non-constant format string in call (govet)

    libnetwork/cnmallocator/drivers_ipam.go:43:31: printf: non-constant format string in call to (*github.com/docker/docker/vendor/github.com/sirupsen/logrus.Entry).Infof (govet)
            log.G(context.TODO()).Infof("Swarm initialized global default address pool to: " + str.String())
                                        ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7b60a7047d)

Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-06-16 18:49:33 -04:00
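The embedded-interface limitation described in the commit above is a general Go property. A minimal sketch with hypothetical Plugin/AddrPlugin interfaces (not the actual swarmkit ones):

```
package main

import "fmt"

// Hypothetical plugin interfaces, standing in for the swarmkit ones.
type Plugin interface {
	Name() string
}

type AddrPlugin interface {
	Plugin
	Addr() string
}

// plainAdapter embeds the Plugin interface value. Even if the concrete
// plugin has an Addr() method, plainAdapter does NOT satisfy AddrPlugin:
// only the embedded interface's method set is promoted.
type plainAdapter struct {
	Plugin
}

// addrAdapter additionally exposes Addr() by delegating to the concrete
// plugin, so it does satisfy AddrPlugin.
type addrAdapter struct {
	Plugin
	addr func() string
}

func (a addrAdapter) Addr() string { return a.addr() }

// adapt picks the adapter that preserves the optional Addr() method.
func adapt(p Plugin) Plugin {
	if ap, ok := p.(AddrPlugin); ok {
		return addrAdapter{Plugin: p, addr: ap.Addr}
	}
	return plainAdapter{Plugin: p}
}

type tcpPlugin struct{}

func (tcpPlugin) Name() string { return "tcp-plugin" }
func (tcpPlugin) Addr() string { return "127.0.0.1:7946" }

func main() {
	_, ok := interface{}(plainAdapter{Plugin: tcpPlugin{}}).(AddrPlugin)
	fmt.Println("plain adapter exposes Addr:", ok) // false

	wrapped := adapt(tcpPlugin{})
	ap, ok := wrapped.(AddrPlugin)
	fmt.Println("addr adapter exposes Addr:", ok, ap.Addr()) // true 127.0.0.1:7946
}
```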
Austin Vazquez
4f2d0e656b Merge pull request #50056 from thaJeztah/25.0_fluentd_migrate
[25.0] daemon: restore: migrate deprecated fluentd-async-connect
2025-06-12 11:02:37 -07:00
Sebastiaan van Stijn
8b44d5e80a [25.0] daemon: restore: migrate deprecated fluentd-async-connect
The "fluentd-async-connect" option was deprecated in 20.10 through
cc1f3c750e, and removed in 28.0 through
49ec488036, which added migration code
on daemon startup.

This patch ports the migration code to the 25.0 branch to prevent future
disruption when upgrading existing containers to a new version of the
daemon.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-05-23 14:54:39 +02:00
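A minimal sketch of the option migration described above, assuming the container's log options are exposed as a plain map; the helper name and map shape are illustrative, the option keys are the real ones:

```
package main

import "fmt"

// migrateFluentdAsync rewrites the deprecated "fluentd-async-connect"
// log option to "fluentd-async" on daemon restore, keeping an explicit
// "fluentd-async" value if one is already present.
func migrateFluentdAsync(opts map[string]string) {
	v, ok := opts["fluentd-async-connect"]
	if !ok {
		return
	}
	if _, exists := opts["fluentd-async"]; !exists {
		opts["fluentd-async"] = v
	}
	delete(opts, "fluentd-async-connect")
}

func main() {
	opts := map[string]string{"fluentd-async-connect": "true"}
	migrateFluentdAsync(opts)
	fmt.Println(opts) // map[fluentd-async:true]
}
```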
Cory Snider
4749b46391 Merge pull request #50053 from aepifanov/dev/go1.23.9/25.0
[25.0] Update to go1.23.9
2025-05-22 18:05:54 -04:00
Andrey Epifanov
3acb76ef2f update to go1.23.9
https://github.com/golang/go/issues?q=milestone%3AGo1.23.9
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 40b0fcd12f)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/test.yml
#	Dockerfile.e2e
#	Dockerfile.windows
#	vendor.mod
2025-05-22 11:45:04 -07:00
Cory Snider
d4f1fb1db2 Merge pull request #50005 from dperny/25.0-backport-network-fixes
[25.0] Backport network fixes
2025-05-15 18:46:42 -04:00
Cory Snider
b7346c5fb5 libn/networkdb: stop table events from racing network leaves
When a node leaves a network or the cluster, or memberlist considers the
node as failed, NetworkDB atomically deletes all table entries (for the
left network) owned by the node. This maintains the invariant that table
entries owned by a node are present in the local database indices iff
that node is an active cluster member which is participating in the
network the entries pertain to.

(*NetworkDB).handleTableEvent() is written in a way which attempts to
minimize the amount of time it is in a critical section with the mutex
locked for writing. It first checks under a read-lock whether both the
local node and the node where the event originated are participating in
the network which the event pertains to. If the check passes, the mutex
is unlocked for reading and locked for writing so the local database
state is mutated in a critical section. That leaves a window of time
between the participation check and the acquisition of the write lock in
which a network or node event can arrive and be processed. If a table event for a
node+network races a node or network event which triggers the purge of
all table entries for the same node+network, the invariant could be
violated. The table entry described by the table event may be reinserted
into the local database state after being purged by the node's leaving,
resulting in an orphaned table entry which the local node will bulk-sync
to other nodes indefinitely.

It's not completely wrong to perform a pre-flight check outside of the
critical section. It allows for an early return in the no-op case
without having to bear the cost of synchronization. But such optimistic
concurrency control is only sound if the condition is double-checked
inside the critical section. It is tricky to get right, and this
instance of optimistic concurrency control smells like a case of
premature optimization. Move the pre-flight check into the critical
section to ensure that the invariant is maintained.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 270a4d41dc)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-05-15 15:37:49 -04:00
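A minimal sketch of the double-checked pattern the commit above converges on, using hypothetical names in place of the NetworkDB state:

```
package main

import (
	"fmt"
	"sync"
)

// db is a toy stand-in for NetworkDB's indexed state.
type db struct {
	mu      sync.RWMutex
	members map[string]bool   // node/network participation
	entries map[string]string // table entries
}

// handleTableEvent applies a table event only if the originating node
// is still participating, checking the condition while holding the
// write lock so it cannot race a concurrent leave.
func (d *db) handleTableEvent(node, key, value string) bool {
	// Optional cheap pre-flight check for the common no-op case.
	d.mu.RLock()
	ok := d.members[node]
	d.mu.RUnlock()
	if !ok {
		return false
	}

	d.mu.Lock()
	defer d.mu.Unlock()
	// Double-check inside the critical section: a leave may have
	// purged this node's entries since the pre-flight check.
	if !d.members[node] {
		return false
	}
	d.entries[key] = value
	return true
}

func main() {
	d := &db{members: map[string]bool{"n1": true}, entries: map[string]string{}}
	fmt.Println(d.handleTableEvent("n1", "k", "v")) // true
}
```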
Cory Snider
545e84c7ff libn/networkdb: Watch() without race conditions
NetworkDB's Watch() facility is problematic to use in practice. The
stream of events begins when the watch is started, so the watch cannot
be used to process table entries that existed beforehand. Either option
to process existing table entries is racy: walking the table before
starting the watch leaves a race window where events could be missed,
and walking the table after starting the watch leaves a race window
where created/updated entries could be processed twice.

Modify Watch() to initialize the channel with synthetic CREATE events
for all existing entries owned by remote nodes before hooking it up to
the live event stream. This way watchers observe an equivalent sequence
of events irrespective of whether the watch was started before or after
entries from remote nodes are added to the database. Remove the bespoke
and racy synthetic event replay logic for driver watches from the
libnetwork agent.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit a3aea15257)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-05-15 15:37:48 -04:00
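A minimal sketch of the replay-then-subscribe idea from the commit above, with hypothetical event and store types: existing entries are emitted as synthetic CREATE events before the channel joins the live stream, all under one lock so no event is missed or duplicated.

```
package main

import (
	"fmt"
	"sync"
)

type event struct {
	Kind  string // "create", "update", "delete"
	Key   string
	Value string
}

// store is a toy stand-in for NetworkDB's table state plus watchers.
type store struct {
	mu       sync.Mutex
	entries  map[string]string
	watchers []chan event
}

// Watch returns a channel pre-populated with synthetic CREATE events
// for all existing entries, then hooked up to the live event stream.
func (s *store) Watch() <-chan event {
	s.mu.Lock()
	defer s.mu.Unlock()
	ch := make(chan event, len(s.entries)+64)
	for k, v := range s.entries {
		ch <- event{Kind: "create", Key: k, Value: v}
	}
	s.watchers = append(s.watchers, ch)
	return ch
}

// put updates an entry and broadcasts to watchers under the same lock.
func (s *store) put(k, v string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	kind := "update"
	if _, ok := s.entries[k]; !ok {
		kind = "create"
	}
	s.entries[k] = v
	for _, ch := range s.watchers {
		ch <- event{Kind: kind, Key: k, Value: v}
	}
}

func main() {
	s := &store{entries: map[string]string{"ep1": "10.0.0.2"}}
	ch := s.Watch()
	s.put("ep2", "10.0.0.3")
	fmt.Println(<-ch) // synthetic create for ep1
	fmt.Println(<-ch) // live create for ep2
}
```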
Cory Snider
bc97de45b4 libn/networkdb: record tombstones for all deletes
The gossip protocol which powers NetworkDB does not guarantee in-order
reception of events. This poses a problem with deleting entries: without
some mechanism to discard stale CREATE or UPDATE events received after a
DELETE, out-of-order reception of events could result in a deleted entry
being spuriously resurrected in the local NetworkDB state! NetworkDB
handles this situation by storing "tombstone" entries for a period of
time with the Lamport timestamps of the entries' respective DELETE
events. Out-of-order CREATE or UPDATE events will be ignored by virtue
of having older timestamps than the tombstone entry, just like how it
works for entries that have not yet been deleted.

NetworkDB was only storing a tombstone if the entry was already present
in the local database at the time of the DELETE event. If the first
event received for an entry is a DELETE, no tombstone is stored. If a
stale CREATE/UPDATE event for the entry (with an older timestamp than
the DELETE) is subsequently received, NetworkDB erroneously creates a
live entry in the local state with stale data. Modify NetworkDB to store
tombstones for DELETE events irrespective of whether the entry was known
to NetworkDB beforehand so that it correctly discards out-of-order
CREATEs and UPDATEs in all cases.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit ada8bc3695)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-05-15 15:33:32 -04:00
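A minimal sketch of tombstoning with Lamport timestamps (hypothetical types, not the NetworkDB implementation): a DELETE always records a tombstone, even for an unknown key, so a later-arriving but older CREATE/UPDATE is discarded.

```
package main

import "fmt"

type entry struct {
	ltime     uint64 // Lamport timestamp of the last event applied
	value     string
	tombstone bool
}

type table map[string]entry

// applyUpsert applies a CREATE/UPDATE unless an equal-or-newer event
// (including a tombstone) has already been seen for the key.
func (t table) applyUpsert(key, value string, ltime uint64) bool {
	if cur, ok := t[key]; ok && cur.ltime >= ltime {
		return false // stale or duplicate event, ignore
	}
	t[key] = entry{ltime: ltime, value: value}
	return true
}

// applyDelete records a tombstone even if the key was never seen, so
// out-of-order upserts with older timestamps cannot resurrect it.
func (t table) applyDelete(key string, ltime uint64) {
	if cur, ok := t[key]; ok && cur.ltime >= ltime {
		return
	}
	t[key] = entry{ltime: ltime, tombstone: true}
}

func main() {
	t := table{}
	t.applyDelete("ep1", 10)                  // DELETE arrives first
	fmt.Println(t.applyUpsert("ep1", "x", 7)) // false: older CREATE is discarded
}
```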
Cory Snider
e78b2bdb84 libn/networkdb: b'cast watch events from local POV
NetworkDB gossips changes to table entries to other nodes using distinct
CREATE, UPDATE and DELETE events. It is unfortunate that the wire
protocol distinguishes CREATEs from UPDATEs as nothing useful can be
done with this information. Newer events for an entry invalidate older
ones, so there is no guarantee that a CREATE event is broadcast to any
node before an UPDATE is broadcast. And due to the nature of gossip
protocols, even if the CREATE event is broadcast from the originating
node, there is no guarantee that any particular node will receive the
CREATE before an UPDATE. Any code which handles an UPDATE event
differently from a CREATE event is therefore going to behave in
unexpected ways in less than perfect conditions.

NetworkDB table watchers also receive CREATE, UPDATE and DELETE events.
Since the watched tables are local to the node, the events could all
have well-defined meanings that are actually useful. Unfortunately
NetworkDB is just bubbling up the wire-protocol event types to the
watchers. Redefine the table-watch events such that a CREATE event is
broadcast when an entry pops into existence in the local NetworkDB, an
UPDATE event is broadcast when an entry which was already present in the
NetworkDB state is modified, and a DELETE event is broadcast when an
entry which was already present in the NetworkDB state is marked for
deletion. DELETE events are broadcast with the same value as the most
recent CREATE or UPDATE event for the entry.

The handler for endpoint table events in the libnetwork agent assumed
incorrectly that CREATE events always correspond to adding a new active
endpoint and that UPDATE events always correspond to disabling an
endpoint. Fix up the handler to handle CREATE and UPDATE events using
the same code path, checking the table entry's ServiceDisabled flag to
determine which action to take.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit c68671d908)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-05-15 15:32:00 -04:00
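A minimal sketch of the unified handler described above, branching on the entry's ServiceDisabled flag rather than on the event type (hypothetical types; the real handler lives in the libnetwork agent):

```
package main

import "fmt"

// endpointRecord is a toy stand-in for the endpoint table entry.
type endpointRecord struct {
	Name            string
	ServiceDisabled bool
}

// handleEndpointEvent treats CREATE and UPDATE identically: the entry's
// own state, not the event type, decides whether the endpoint is
// activated or disabled.
func handleEndpointEvent(kind string, rec endpointRecord) string {
	switch kind {
	case "create", "update":
		if rec.ServiceDisabled {
			return "disable " + rec.Name
		}
		return "activate " + rec.Name
	case "delete":
		return "remove " + rec.Name
	default:
		return "ignore"
	}
}

func main() {
	fmt.Println(handleEndpointEvent("update", endpointRecord{Name: "web.1"}))
	fmt.Println(handleEndpointEvent("create", endpointRecord{Name: "web.2", ServiceDisabled: true}))
}
```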
Cory Snider
5d123a0ef8 libn/networkdb: fix data race in GetTableByNetwork
The function was accessing the index map without holding the mutex, so
it would race any mutation to the database indexes. Fetch the reference
to the tree's root while holding a read lock. Since the radix tree is
immutable, taking a reference to the root is equivalent to starting a
read-only database transaction, providing a consistent view of the data
at a snapshot in time, even as the live state is mutated concurrently.

Also optimize the WalkTable function by leveraging the immutability of
the radix tree.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit ec65f2d21b)
2025-05-15 15:17:19 -04:00
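A minimal sketch of the snapshot idea, with a hypothetical immutable map standing in for the immutable radix tree: only taking the root reference needs the lock; the walk then sees a consistent view even if the live pointer is swapped concurrently.

```
package main

import (
	"fmt"
	"sync"
)

// snapshot is an immutable view: writers never modify an existing map,
// they build a new one and swap the pointer under the write lock.
type snapshot map[string]string

type db struct {
	mu   sync.RWMutex
	root snapshot
}

// walkTable takes the root reference under a read lock, then iterates
// without holding the lock: the snapshot cannot change underneath it.
func (d *db) walkTable(fn func(k, v string)) {
	d.mu.RLock()
	root := d.root
	d.mu.RUnlock()
	for k, v := range root {
		fn(k, v)
	}
}

// put copies the current snapshot, adds the entry, and swaps the root.
func (d *db) put(k, v string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	next := make(snapshot, len(d.root)+1)
	for ok, ov := range d.root {
		next[ok] = ov
	}
	next[k] = v
	d.root = next
}

func main() {
	d := &db{root: snapshot{"ep1": "10.0.0.2"}}
	d.put("ep2", "10.0.0.3")
	d.walkTable(func(k, v string) { fmt.Println(k, v) })
}
```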
Cory Snider
9c1b0fb58f libn/networkdb: SetPrimaryKey() under a write lock
(*NetworkDB).SetPrimaryKey() acquires a read lock on the NetworkDB
instance. That seems sound on the surface as it is only reading from the
NetworkDB struct, not mutating it. However, concurrent calls to
(*memberlist.Keyring).UseKey() would get flagged by Go's race detector
due to some questionable locking in its implementation. Acquire an
exclusive lock in SetPrimaryKey so concurrent calls don't race each
other.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit c9b01e0c4c)
Signed-off-by: Drew Erny <derny@mirantis.com>
2025-05-15 13:18:05 -05:00
Cory Snider
5c192650eb Fix possible overlapping IPs when ingressNA == nil
Logic was added to the Swarm executor in commit 0d9b0ed678
to clean up managed networks whenever the node's load-balancer IP
address is removed or changed in order to free up the address in the
case where the container fails to start entirely. Unfortunately, due to
an oversight the function returns early if the Swarm is lacking
an ingress network. Remove the early return so that load-balancer IP
addresses for all the other networks are freed as appropriate,
irrespective of whether an ingress network exists in the Swarm.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 56ad941564)
Signed-off-by: Drew Erny <derny@mirantis.com>
2025-05-15 13:17:57 -05:00
Cory Snider
00d8bed6cf libn/networkdb: don't exceed broadcast size limit
NetworkDB uses a hierarchy of queues to prioritize messages for
broadcast. Unfortunately the logic to pull from multiple queues is
flawed. The length of the messages pulled from the first queue is not
taken into account when pulling messages from the second queue. A list
of messages up to twice as long as the limit could be returned! Messages
beyond the limit will be truncated unceremoniously by memberlist.

Memberlist broadcast queues assume that all messages returned from a
GetBroadcasts call will be broadcasted to other nodes in the cluster.
Messages are popped from the queue once they have hit their retransmit
limit. On a busy system messages may be broadcast fewer times than
intended, possibly even being dropped without ever being broadcast!

Subtract the length of messages pulled from the first queue from the
broadcast size limit so the limit is not exceeded when pulling from the
second queue.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit dacf445614)
Signed-off-by: Drew Erny <derny@mirantis.com>
2025-05-15 13:10:03 -05:00
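A minimal sketch of the budgeting fix with toy queues: the bytes consumed by the first queue are subtracted from the limit handed to the second, so the combined batch never exceeds the limit.

```
package main

import "fmt"

// queue is a toy priority queue of pending broadcast messages.
type queue struct{ msgs [][]byte }

// getBroadcasts removes and returns messages whose combined size
// (plus a per-message overhead) fits within limit.
func (q *queue) getBroadcasts(overhead, limit int) [][]byte {
	var out, keep [][]byte
	used := 0
	for _, m := range q.msgs {
		if used+overhead+len(m) <= limit {
			out = append(out, m)
			used += overhead + len(m)
		} else {
			keep = append(keep, m)
		}
	}
	q.msgs = keep
	return out
}

// pullBroadcasts drains the high-priority queue first, then hands only
// the *remaining* byte budget to the low-priority queue, so the
// combined batch never exceeds limit. The bug described above passed
// the full limit to both queues.
func pullBroadcasts(hi, lo *queue, overhead, limit int) [][]byte {
	out := hi.getBroadcasts(overhead, limit)
	used := 0
	for _, m := range out {
		used += overhead + len(m)
	}
	return append(out, lo.getBroadcasts(overhead, limit-used)...)
}

func main() {
	hi := &queue{msgs: [][]byte{make([]byte, 500)}}
	lo := &queue{msgs: [][]byte{make([]byte, 800)}}
	fmt.Println(len(pullBroadcasts(hi, lo, 2, 1000))) // 1: the 800-byte message no longer fits
}
```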
Cory Snider
c31faaed8c libn/networkdb: listen only on loopback in tests
NetworkDB defaults to binding to the unspecified address for gossip
communications, with no advertise address set. In this configuration,
the memberlist instance listens on all network interfaces and picks one
of the host's public IP addresses as the advertise address.
The NetworkDB unit tests don't override this default, leaving them
vulnerable to flaking out as a result of rogue network traffic
perturbing the test, or the inferred advertise address not being usable
for loopback testing. And macOS prompts for permission to allow the test
executable to listen on public interfaces every time it is rebuilt.

Modify the NetworkDB tests to explicitly bind to, advertise, and join
ports on 127.0.0.1 to make the tests more robust to flakes in CI and
more convenient to run locally.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 90ec2c209b)
Signed-off-by: Drew Erny <derny@mirantis.com>
2025-05-15 13:10:03 -05:00
Cory Snider
e5e7d89092 libn/networkdb: advertise the configured bind port
The NetworkDB unit tests instantiate clusters which communicate over
loopback where every "node" listens on a distinct localhost port. The
tests make use of a NetworkDB configuration knob to set the port. When
the NetworkDB configuration's BindPort field is set to a nonzero value,
its memberlist instance is configured to bind to the specified port
number. However, the advertise port is left at the
memberlist.DefaultLANConfig() default value of 7946. Because of this,
nodes would be unable to contact any of the other nodes in the cluster
learned by gossip, as the gossiped addresses specify the wrong ports!
The flaky tests passed as often as they did thanks to the robustness of
the memberlist module: NetworkDB gossip and memberlist node
liveness-probe pings to unreachable nodes can all be relayed through
the reachable nodes, the nodes on the bootstrap join list.

Make the NetworkDB unit tests less flaky by setting each node's
advertise port to the bind port.

The daemon is unaffected by this oversight as it unconditionally uses
the default listen port of 7946, which aligns with the advertise port.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit e3f9edd348)
Signed-off-by: Drew Erny <derny@mirantis.com>
2025-05-15 13:10:03 -05:00
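A minimal sketch of the test configuration described above, assuming the github.com/hashicorp/memberlist API that NetworkDB builds on: bind to loopback on a per-node port and advertise that same address and port so gossiped contact information is correct.

```
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/memberlist"
)

// loopbackConfig builds a config for a test node that binds *and*
// advertises 127.0.0.1:<port>, so addresses learned via gossip are
// reachable and no traffic leaves the host.
func loopbackConfig(name string, port int) *memberlist.Config {
	conf := memberlist.DefaultLANConfig()
	conf.Name = name
	conf.BindAddr = "127.0.0.1"
	conf.BindPort = port
	conf.AdvertiseAddr = "127.0.0.1"
	// Without this, the advertise port stays at the default (7946) and
	// peers would gossip the wrong contact port for this node.
	conf.AdvertisePort = port
	return conf
}

func main() {
	a, err := memberlist.Create(loopbackConfig("node-a", 17946))
	if err != nil {
		log.Fatal(err)
	}
	defer a.Shutdown()
	b, err := memberlist.Create(loopbackConfig("node-b", 17947))
	if err != nil {
		log.Fatal(err)
	}
	defer b.Shutdown()
	if _, err := b.Join([]string{"127.0.0.1:17946"}); err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(a.Members()), len(b.Members())) // both nodes should see each other
}
```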
Cory Snider
f19a4f7d9e Merge pull request #49970 from aepifanov/bump-containerd
[25.0] Dockerfile: update containerd binary to v1.7.27
2025-05-13 17:40:48 -04:00
Paweł Gronowski
74d2c7fd53 libnetwork: Mark flaky tests
Mark the following tests as flaky:
- TestNetworkDBCRUDTableEntry
- TestNetworkDBCRUDTableEntries
- TestNetworkDBIslands
- TestNetworkDBNodeLeave

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 9893520c62)
2025-05-13 13:48:14 -07:00
Paweł Gronowski
cad60979a1 hack/unit: Rerun failed flaky libnetwork tests
libnetwork tests tend to be flaky (namely `TestNetworkDBIslands` and
`TestNetworkDBCRUDTableEntries`).

Move execution of tests whose names have the `TestFlaky` prefix to a
separate gotestsum pass, which allows them to be rerun up to 4 times.

On Windows, the libnetwork test execution is not split into a separate
pass.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit d0d8d5d97d)
2025-05-13 13:48:14 -07:00
Austin Vazquez
7656928264 Dockerfile: update containerd binary to v1.7.27
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
(cherry picked from commit 35766af7d2)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	Dockerfile
#	Dockerfile.windows
#	hack/dockerfile/install/containerd.installer
2025-05-13 11:56:58 -07:00
Cory Snider
c96356750f Merge pull request #49967 from aepifanov/bump-golang-1.23
[25.0]Bump golang 1.23
2025-05-13 13:18:49 -04:00
Sebastiaan van Stijn
c231772a5c update go:build tags to go1.23 to align with vendor.mod
Go maintainers started to unconditionally update the minimum go version
for golang.org/x/ dependencies to go1.23, which means that we'll no longer
be able to support any version below that when updating those dependencies;

> all: upgrade go directive to at least 1.23.0 [generated]
>
> By now Go 1.24.0 has been released, and Go 1.22 is no longer supported
> per the Go Release Policy (https://go.dev/doc/devel/release#policy).
>
> For golang/go#69095.

This updates our minimum version to go1.23, as we won't be able to maintain
compatibility with older versions because of the above.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7c52c4d92e)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	api/server/router/container/inspect.go
#	api/server/router/grpc/grpc.go
#	api/server/router/system/system.go
#	api/server/router/system/system_routes.go
#	api/types/registry/registry.go
#	api/types/registry/registry_test.go
#	builder/builder-next/adapters/containerimage/pull.go
#	container/view.go
#	daemon/container_operations.go
#	daemon/containerd/image_inspect.go
#	daemon/containerd/image_push_test.go
#	daemon/create.go
#	daemon/daemon.go
#	daemon/daemon_unix.go
#	daemon/info.go
#	daemon/inspect.go
#	daemon/logger/loggerutils/logfile.go
#	internal/gocompat/modulegenerator.go
#	internal/maputil/maputil.go
#	internal/platform/platform_linux.go
#	internal/sliceutil/sliceutil.go
#	libnetwork/config/config.go
#	libnetwork/drivers/bridge/port_mapping_linux.go
#	libnetwork/drivers/overlay/peerdb.go
#	libnetwork/endpoint.go
#	libnetwork/endpoint_store.go
#	libnetwork/internal/l2disco/unsol_arp_linux.go
#	libnetwork/internal/l2disco/unsol_na_linux.go
#	libnetwork/internal/nftables/nftables_linux.go
#	libnetwork/internal/resolvconf/resolvconf.go
#	libnetwork/internal/setmatrix/setmatrix.go
#	libnetwork/ipams/defaultipam/address_space.go
#	libnetwork/ipamutils/utils.go
#	libnetwork/iptables/iptables.go
#	libnetwork/netutils/utils_linux.go
#	libnetwork/network.go
#	libnetwork/network_store.go
#	libnetwork/networkdb/networkdb.go
#	libnetwork/options/options.go
#	libnetwork/osl/interface_linux.go
#	libnetwork/osl/route_linux.go
#	libnetwork/portallocator/portallocator.go
#	libnetwork/sandbox.go
#	libnetwork/service.go
#	oci/defaults.go
#	plugin/v2/plugin_linux.go
#	testutil/daemon/daemon.go
#	testutil/helpers.go
2025-05-13 08:50:07 -07:00
Sebastiaan van Stijn
6ac44a4973 vendor.mod: update minimum go version to go1.23
Go maintainers started to unconditionally update the minimum go version
for golang.org/x/ dependencies to go1.23, which means that we'll no longer
be able to support any version below that when updating those dependencies;

> all: upgrade go directive to at least 1.23.0 [generated]
>
> By now Go 1.24.0 has been released, and Go 1.22 is no longer supported
> per the Go Release Policy (https://go.dev/doc/devel/release#policy).
>
> For golang/go#69095.

This updates our minimum version to go1.23, as we won't be able to maintain
compatibility with older versions because of the above.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6e8eb8a90f)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	hack/with-go-mod.sh
#	vendor.mod
2025-05-13 08:50:07 -07:00
Sebastiaan van Stijn
da1d6a4cac update to go1.23.8 (fix CVE-2025-22871)
full diff: https://github.com/golang/go/compare/go1.23.7...go1.23.8
release notes: https://go.dev/doc/devel/release#go1.24.2

go1.23.8 (released 2025-04-01) includes security fixes to the net/http package,
as well as bug fixes to the runtime and the go command. See the Go 1.23.8
milestone on our issue tracker for details;

https://github.com/golang/go/issues?q=milestone%3AGo1.23.8+label%3ACherryPickApproved

From the mailing list:

Hello gophers,

We have just released Go versions 1.24.2 and 1.23.8, minor point releases.
These minor releases include 1 security fix following the security policy:

- net/http: request smuggling through invalid chunked data
  The net/http package accepted data in the chunked transfer encoding
  containing an invalid chunk-size line terminated by a bare LF.
  When used in conjunction with a server or proxy which incorrectly
  interprets a bare LF in a chunk extension as part of the extension,
  this could permit request smuggling.
  The net/http package now rejects chunk-size lines containing a bare LF.
  Thanks to Jeppe Bonde Weikop for reporting this issue.
  This is CVE-2025-22871 and Go issue https://go.dev/issue/71988.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 74b71c41ac)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.test.yml
#	.github/workflows/.windows.yml
#	.github/workflows/arm64.yml
#	.github/workflows/buildkit.yml
#	.github/workflows/codeql.yml
#	.github/workflows/test.yml
#	.golangci.yml
#	Dockerfile
#	Dockerfile.simple
#	Dockerfile.windows
#	hack/dockerfiles/generate-files.Dockerfile
#	hack/dockerfiles/govulncheck.Dockerfile
2025-05-13 08:49:03 -07:00
Cory Snider
f91f92463d Merge pull request #49804 from aepifanov/backport-25.0/ubuntu-22.04-gha
[25.0] Update remaining Ubuntu 20.04 GHA uses to 22.04 and 24.04 #49775
2025-05-07 13:10:03 -04:00
Sebastiaan van Stijn
b1bcc6be66 gha: update buildkit to fix integration tests
full diff: 0bfcd83e6d...d77361423c

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c42b304f62)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/buildkit.yml
2025-05-05 13:54:03 -07:00
Sebastiaan van Stijn
341bb5120f Dockerfile: fix linting warnings
The 'as' keyword should match the case of the 'from' keyword
    FromAsCasing: 'as' and 'FROM' keywords' casing do not match
    More info: https://docs.docker.com/go/dockerfile/rule/from-as-casing/

    Setting platform to predefined $TARGETPLATFORM in FROM is redundant as this is the default behavior
    RedundantTargetPlatform: Setting platform to predefined $TARGETPLATFORM in FROM is redundant as this is the default behavior
    More info: https://docs.docker.com/go/dockerfile/rule/redundant-target-platform/

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b2b55903d0)
2025-05-05 13:44:27 -07:00
Andrey Epifanov
87b232ecbc gha: integration: replace Ubuntu 20.04 with 24.04
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-05-05 13:44:27 -07:00
Derek McGowan
b88ec510b5 Run CLI tests with cgroups v2
Signed-off-by: Derek McGowan <derek@mcg.dev>
(cherry picked from commit cd89a35ea0)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	integration-cli/docker_cli_update_unix_test.go
2025-05-05 05:17:39 -07:00
Derek McGowan
c1992d0de0 Update remaining Ubuntu 20.04 uses to 22.04 and 24.04
Signed-off-by: Derek McGowan <derek@mcg.dev>
(cherry picked from commit 45f9d679f8)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.test.yml
2025-04-24 09:48:47 -07:00
Sebastiaan van Stijn
06ef6f04fd gha: test-prepare: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f87ae7c914)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:46 -07:00
Sebastiaan van Stijn
9dfebd37c2 gha: build, cross: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c41ed7c98c)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:45 -07:00
Sebastiaan van Stijn
14697bedd9 gha: integration-cli-prepare: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d29038d1cb)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:43 -07:00
Sebastiaan van Stijn
881e809b0a gha: integration-cli-report: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a23058e0d7)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:42 -07:00
Sebastiaan van Stijn
897614add1 gha: integration-report: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit de69b552ff)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:40 -07:00
Sebastiaan van Stijn
af633391ca gha: arm64: update Ubuntu 22.04 -> 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 651fb91c4d)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:39 -07:00
Sebastiaan van Stijn
6b3afa4c2d gha: arm64: test-integration-report: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f6a9ed5f0a)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:37 -07:00
Sebastiaan van Stijn
17c33f93ce gha: arm64: test-unit-report: update to ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 13e1ef6277)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:36 -07:00
Sebastiaan van Stijn
8de9fb4d4b gha: validate, build-dev: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 27404044a6)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:34 -07:00
Sebastiaan van Stijn
b67e668c0f gha: smoke: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3571982458)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:33 -07:00
Sebastiaan van Stijn
56885c08ba gha: docker-py: update to ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ee73f2e5da)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:31 -07:00
Sebastiaan van Stijn
3ea3d5c759 gha: unit: update to ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b9ca3d198e)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.test.yml
2025-04-24 09:48:30 -07:00
Sebastiaan van Stijn
5e024a24a0 gha: bin-image: update to ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1a0afb0f9e)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:29 -07:00
Sebastiaan van Stijn
fa10b16add gha: buildkit: update to ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4919bf9f41)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:28 -07:00
Sebastiaan van Stijn
b3b6bc9770 gha: validate-pr: update to ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7b1fd61864)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/validate-pr.yml
2025-04-24 09:48:27 -07:00
Sebastiaan van Stijn
b8551186f7 gha: dco: update to ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit eeffc099ef)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:26 -07:00
Sebastiaan van Stijn
3bd7323b96 gha: docker-py: set TEST_SKIP_INTEGRATION_CLI=1
These tests don't actually run the integration-cli suite, but
the global hack/xxx script errors because it's not set;

    ---> Making bundle: test-docker-py (in bundles/test-docker-py)
    ---> Making bundle: .integration-daemon-start (in bundles/test-docker-py)
    Using test binary /usr/local/cli-integration/docker
    # DOCKER_EXPERIMENTAL is set: starting daemon with experimental features enabled!
    # cgroup v2 requires TEST_SKIP_INTEGRATION_CLI to be set
    make: *** [Makefile:220: test-docker-py] Error 1
    Error: Process completed with exit code 2.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 06b87d80ee)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:24 -07:00
Sebastiaan van Stijn
ddc689206b Merge pull request #49790 from pendo324/update-jwt-v4.5.2
[25.0] vendor: github.com/golang-jwt/jwt/v4 v4.5.2
2025-04-10 21:53:32 +02:00
Justin Alvarez
ebbb4cad63 vendor: github.com/golang-jwt/jwt/v4 v4.5.2
Signed-off-by: Justin Alvarez <alvajus@amazon.com>
2025-04-10 17:51:07 +00:00
Sebastiaan van Stijn
a926bec8fc Merge pull request #49488 from austinvazquez/cherry-pick-838ae09a2337e6561b40d13be6ddf43005a92a9e-to-25.0
[25.0 backport] Dockerfile: update runc binary to v1.2.5
2025-02-18 18:07:24 +01:00
Sebastiaan van Stijn
89a48b65fc Dockerfile: update runc binary to v1.2.5
This is the fifth patch release in the 1.2.z series of runc. It primarily fixes
an issue caused by an upstream systemd bug.

* There was a regression in systemd v230 which made the way we define
  device rule restrictions require a systemctl daemon-reload for our
  transient units. This caused issues for workloads using NVIDIA GPUs.
  Workaround the upstream regression by re-arranging how the unit properties
  are defined.
* Dependency github.com/cyphar/filepath-securejoin is updated to v0.4.1,
  to allow projects that vendor runc to bump it as well.
* CI: fixed criu-dev compilation.
* Dependency golang.org/x/net is updated to 0.33.0.

full diff: https://github.com/opencontainers/runc/compare/v1.2.4...v1.2.5
release notes: https://github.com/opencontainers/runc/releases/tag/v1.2.5

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 838ae09a23)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2025-02-18 15:22:24 +00:00
Sebastiaan van Stijn
74360f99d7 Merge pull request #49400 from vvoland/49394-25.0
[25.0 backport] update to go1.22.12
2025-02-06 13:07:51 +01:00
Paweł Gronowski
aae4029600 update to go1.22.12
This minor release includes 1 security fix following the security policy:

- crypto/elliptic: timing sidechannel for P-256 on ppc64le

  Due to the usage of a variable time instruction in the assembly implementation
  of an internal function, a small number of bits of secret scalars are leaked on
  the ppc64le architecture. Due to the way this function is used, we do not
  believe this leakage is enough to allow recovery of the private key when P-256
  is used in any well known protocols.

This is CVE-2025-22866 and Go issue https://go.dev/issue/71383.

View the release notes for more information:
https://go.dev/doc/devel/release#go1.22.12

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit a584f0b227)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2025-02-05 12:35:42 +01:00
Sebastiaan van Stijn
822b2b6a1d Merge pull request #49345 from austinvazquez/cherry-pick-c83862c5419508bcdfafb07165b1a21ecb73c1e2-to-25.0
[25.0 backport] update to go1.22.11 (fix CVE-2024-45341, CVE-2024-45336)
2025-01-29 12:07:42 +01:00
Sebastiaan van Stijn
a2802d0746 update to go1.22.11 (fix CVE-2024-45341, CVE-2024-45336)
go1.22.11 (released 2025-01-16) includes security fixes to the crypto/x509 and
net/http packages, as well as bug fixes to the runtime. See the Go 1.22.11
milestone on our issue tracker for details.

- https://github.com/golang/go/issues?q=milestone%3AGo1.22.11+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.22.10...go1.22.11

Hello gophers,

We have just released Go versions 1.23.5 and 1.22.11, minor point releases.

These minor releases include 2 security fixes following the security policy:

- crypto/x509: usage of IPv6 zone IDs can bypass URI name constraints

  A certificate with a URI which has an IPv6 address with a zone ID may
  incorrectly satisfy a URI name constraint that applies to the certificate
  chain.

  Certificates containing URIs are not permitted in the web PKI, so this
  only affects users of private PKIs which make use of URIs.

  Thanks to Juho Forsén of Mattermost for reporting this issue.

  This is CVE-2024-45341 and Go issue https://go.dev/issue/71156.

- net/http: sensitive headers incorrectly sent after cross-domain redirect

  The HTTP client drops sensitive headers after following a cross-domain redirect.
  For example, a request to a.com/ containing an Authorization header which is
  redirected to b.com/ will not send that header to b.com.

  In the event that the client received a subsequent same-domain redirect, however,
  the sensitive headers would be restored. For example, a chain of redirects from
  a.com/, to b.com/1, and finally to b.com/2 would incorrectly send the Authorization
  header to b.com/2.

  Thanks to Kyle Seely for reporting this issue.

  This is CVE-2024-45336 and Go issue https://go.dev/issue/70530.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c83862c541)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2025-01-28 16:11:51 +00:00
Austin Vazquez
9281aea6ce ci: update base container to alpine20 for buildkit workflow
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2025-01-28 16:10:45 +00:00
Austin Vazquez
0e655eaff2 Merge pull request #49321 from thaJeztah/25.0_backport_backport_gha_arm64
[25.0 backport] ci: switch from jenkins to gha for arm64 build and tests (and set correct go version for branch)
2025-01-28 10:08:23 -06:00
Sebastiaan van Stijn
b1d6fd957d gha: set arm64 GO_VERSION to 1.22.10
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6c832d05c4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-01-28 13:43:02 +01:00
CrazyMax
7540f88434 ci: switch from jenkins to gha for arm64 build and tests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 8c236de735)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-01-28 13:43:02 +01:00
Sebastiaan van Stijn
19dd685407 Merge pull request #49346 from austinvazquez/cherry-pick-f8a973ba4e7d4e5b90d5a89bb4a8633ceae26985-to-25.0
[25.0 backport] ci: update bake-action to v6
2025-01-28 13:42:11 +01:00
CrazyMax
f8d9617c43 ci(bin-image): fix bake build
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit d86920b9b3)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2025-01-28 04:50:37 +00:00
CrazyMax
bec5e8eed1 ci: update bake-action to v6
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit f8a973ba4e)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2025-01-28 04:44:57 +00:00
Sebastiaan van Stijn
71907ca48e Merge pull request #49269 from austinvazquez/cherry-pick-update-runc-1.2.4-to-25.0
[25.0 backport] Dockerfile: update runc binary to v1.2.4
2025-01-14 12:58:14 +01:00
Sebastiaan van Stijn
72f6828fd3 Merge pull request #49268 from austinvazquez/cherry-pick-update-containerd-1.7.25-to-25.0
[25.0 backport] Dockerfile: update containerd to v1.7.25
2025-01-13 19:43:04 +01:00
Sebastiaan van Stijn
fcb50183e4 Dockerfile: update runc binary to v1.2.4
This is the fourth patch release of the 1.2.z release branch of runc. It
includes a fix for a regression introduced in 1.2.0 related to the
default device list.

- Re-add tun/tap devices to built-in allowed devices lists.

 In runc 1.2.0 we removed these devices from the default allow-list
 (which were added seemingly by accident early in Docker's history) as
 a precaution in order to try to reduce the attack surface of device
 inodes available to most containers. At the time we thought
 that the vast majority of users using tun/tap would already be
 specifying what devices they need (such as by using --device with
 Docker/Podman) as opposed to doing the mknod manually, and thus
 there would've been no user-visible change.

 Unfortunately, it seems that this regressed a noticeable number of
 users (and not all higher-level tools provide easy ways to specify
 devices to allow) and so this change needed to be reverted. Users
 that do not need these devices are recommended to explicitly disable
 them by adding deny rules in their container configuration.

full diff: https://github.com/opencontainers/runc/compare/v1.2.3...v1.2.4
release notes: https://github.com/opencontainers/runc/releases/tag/v1.2.4

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit aad7bcedd2)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2025-01-13 10:24:10 -07:00
Sebastiaan van Stijn
20af9f77a6 Dockerfile: update containerd to v1.7.25
release notes: https://github.com/containerd/containerd/releases/tag/v1.7.25
full diff: https://github.com/containerd/containerd/compare/v1.7.24...v1.7.25

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c12bfda3cd)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2025-01-13 10:14:14 -07:00
Austin Vazquez
eee2f6d0de Merge pull request #49173 from austinvazquez/cherry-pick-ec5c9e06e39a4e6d29700f4ca5376773fae57fa0-to-25.0
[25.0 backport] Dockerfile: update runc binary to v1.2.3
2024-12-31 12:32:51 -06:00
Sebastiaan van Stijn
7d20eee4fd Dockerfile: update runc binary to v1.2.3
This is the third patch release of the 1.2.z release branch of runc. It
primarily fixes some minor regressions introduced in 1.2.0.

- Fixed a regression in use of securejoin.MkdirAll, where multiple
  runc processes racing to create the same mountpoint in a shared rootfs
  would result in spurious EEXIST errors. In particular, this regression
  caused issues with BuildKit.
- Fixed a regression in eBPF support for pre-5.6 kernels after upgrading
  Cilium's eBPF library version to 0.16 in runc.

full diff: https://github.com/opencontainers/runc/compare/v1.2.2...v1.2.3
release notes: https://github.com/opencontainers/runc/releases/tag/v1.2.3

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ec5c9e06e3)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-12-28 22:11:11 -07:00
Albin Kerouanton
d86f1d1cde Merge pull request #49112 from thaJeztah/25.0_backport_fix_setupIPChains_defer
[25.0 backport] libnetwork/drivers/bridge: setupIPChains: fix defer checking wrong err
2024-12-16 21:26:06 +01:00
Sebastiaan van Stijn
eacc3610f9 libnetwork/drivers/bridge: setupIPChains: fix defer checking wrong err
The output variable was renamed in 0503cf2510,
but that commit failed to change this defer, which was now checking the
wrong error.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 01a55860c6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-16 16:54:55 +01:00
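The class of bug fixed above is easy to reproduce: a cleanup defer that inspects a named return value silently stops working if the variable it closes over is renamed. A minimal, hypothetical sketch (not the bridge driver code):

```
package main

import (
	"errors"
	"fmt"
)

// setupChains installs two rules and rolls back the first one if the
// second fails.
func setupChains(failSecond bool) (err error) {
	// rule 1 installed
	installed := true

	defer func() {
		// The closure reads the named return value `err`. If the output
		// variable were renamed (e.g. to retErr) without updating this
		// defer, the rollback would silently never run.
		if err != nil && installed {
			fmt.Println("rolling back rule 1")
		}
	}()

	// rule 2
	if failSecond {
		return errors.New("failed to install rule 2")
	}
	return nil
}

func main() {
	fmt.Println(setupChains(true))
}
```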
Akihiro Suda
5bd40c3b0a Merge pull request #49082 from thaJeztah/25.0_backport_bump_xx
[25.0 backport] update xx to v1.6.1 for compatibility with alpine 3.21
2024-12-16 13:52:25 +09:00
Sebastiaan van Stijn
842024e721 update xx to v1.6.1 for compatibility with alpine 3.21
This fixes compatibility with alpine 3.21

- Fix additional possible `xx-cc`/`xx-cargo` compatibility issue with Alpine 3.21
- Support for Alpine 3.21
- Fix `xx-verify` with `file` 5.46+
- Fix possible error taking lock in `xx-apk` in latest Alpine without `coreutils`

full diff: https://github.com/tonistiigi/xx/compare/v1.5.0...v1.6.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 89899b71a0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-13 00:44:42 +01:00
Sebastiaan van Stijn
96b8a34d2b Dockerfile: update xx to v1.5.0
full diff: https://github.com/tonistiigi/xx/compare/v1.4.0...v1.5.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c4ba1f4718)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-13 00:44:42 +01:00
Sebastiaan van Stijn
5ed63409a2 Dockerfile: update xx to v1.4.0
full diff: https://github.com/tonistiigi/xx/compare/v1.2.1...v1.4.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4f46c44725)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-13 00:44:41 +01:00
Sebastiaan van Stijn
81ac5dace5 Merge pull request #49048 from austinvazquez/cherry-pick-0e34b3956b6e95324d67517305a3376d36896490-to-25.0
[25.0] update to go1.22.10
2024-12-07 23:38:31 +01:00
Sebastiaan van Stijn
03885ae2c0 update to go1.22.10
go1.22.10 (released 2024-12-03) includes fixes to the runtime and the syscall
package. See the Go 1.22.10 milestone on our issue tracker for details.

- https://github.com/golang/go/issues?q=milestone%3AGo1.22.10+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.22.9...go1.22.10

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0e34b3956b)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-12-07 12:32:08 -07:00
Sebastiaan van Stijn
a39b701d10 Merge pull request #49029 from thaJeztah/25.0_backport_cdi-rootless
[25.0 backport] Dockerd rootless: make {/etc,/var/run}/cdi available
2024-12-04 15:17:59 +01:00
Rafael Fernández López
ddc8a15eb5 Dockerd rootless: make {/etc,/var/run}/cdi available
When dockerd is executed with the `dockerd-rootless.sh` script, make
/etc/cdi and /var/run/cdi available to the daemon if they exist.

This makes it possible to enable the CDI integration in rootless mode.

Fixes: #47676

Signed-off-by: Rafael Fernández López <ereslibre@ereslibre.es>
(cherry picked from commit 4e30acb63f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-04 12:04:53 +01:00
Sebastiaan van Stijn
be15fac5cd Merge pull request #49011 from vvoland/49009-25.0
[25.0 backport] c8d/tag: Don't log a warning if the source image is not dangling
2024-12-02 13:32:21 +01:00
Paweł Gronowski
6648f3a10e c8d/tag: Don't log a warning if the source image is not dangling
After the image is tagged, the engine attempts to delete a dangling
image of the source image, so the image is no longer dangling.

When the source image is not dangling, the removal errors out (as
expected), but a warning is logged to the daemon log:

```
time="2024-12-02T10:44:25.386957553Z" level=warning msg="unexpected error when deleting dangling image" error="NotFound: image \"moby-dangling@sha256:54d8c2251c811295690b53af7767ecaf246f1186c36e4f2b2a63e0bfa42df045\": not found" imageID="sha256:54d8c2251c811295690b53af7767ecaf246f1186c36e4f2b2a63e0bfa42df045" spanID=bd10a21a07830d72 tag="docker.io/library/test:latest" traceID=4cf61671c2dc6da3dc7a09c0c6ac4e16
```

Remove that log as it causes unnecessary confusion, as the failure is
expected.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit a93f6c61db)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-12-02 11:52:01 +01:00
Sebastiaan van Stijn
5a7a2099b2 Merge pull request #48921 from austinvazquez/cherry-pick-runtime-updates-to-25.0
[25.0 backport] Dockerfile: update containerd v1.7.24, runc v1.2.2
2024-12-01 10:59:03 +01:00
Sebastiaan van Stijn
6f497b2d51 Dockerfile: update to runc v1.2.2
- 1.2.2 release notes: https://github.com/opencontainers/runc/releases/tag/v1.2.2
- 1.2.1 release notes: https://github.com/opencontainers/runc/releases/tag/v1.2.1
- 1.2.0 release notes: https://github.com/opencontainers/runc/releases/tag/v1.2.0

Breaking changes and deprecations are included below;

Breaking changes:

Several aspects of how mount options work have been adjusted in a way that
could theoretically break users that have very strange mount option strings.
This was necessary to fix glaring issues in how mount options were being
treated. The key changes are:

- Mount options on bind-mounts that clear a mount flag are now always
  applied. Previously, if a user requested a bind-mount with only clearing
  options (such as rw,exec,dev) the options would be ignored and the
  original bind-mount options would be set. Unfortunately this also means
  that container configurations which specified only clearing mount options
  will now actually get what they asked for, which could break existing
  containers (though it seems unlikely that a user who requested a specific
  mount option would consider it "broken" to get the mount options they
  asked for). This also allows us to
  silently add locked mount flags the user did not explicitly request to be
  cleared in rootless mode, allowing for easier use of bind-mounts for
  rootless containers.
- Container configurations using bind-mounts with superblock mount flags
  (i.e. filesystem-specific mount flags, referred to as "data" in
  mount(2), as opposed to VFS generic mount flags like MS_NODEV) will
  now return an error. This is because superblock mount flags will also
  affect the host mount (as the superblock is shared when bind-mounting),
  which is obviously not acceptable. Previously, these flags were silently
  ignored so this change simply tells users that runc cannot fulfil their
  request rather than just ignoring it.

Deprecated

- runc option --criu is now ignored (with a warning), and the option will
  be removed entirely in a future release. Users who need a non-standard
  criu binary should rely on the standard way of looking up binaries in
  $PATH.
- runc kill option -a is now deprecated. Previously, it had to be specified
  to kill a container (with SIGKILL) which does not have its own private PID
  namespace (so that runc would send SIGKILL to all processes). Now, this is
  done automatically.
- github.com/opencontainers/runc/libcontainer/user is now deprecated, please
  use github.com/moby/sys/user instead. It will be removed in a future
  release.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e257856116)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 09:23:18 -07:00
Austin Vazquez
01c163d4ee Dockerfile: update containerd to v1.7.24
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
(cherry picked from commit 8cecf3a71c)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 09:23:15 -07:00
Sebastiaan van Stijn
7812180193 Merge pull request #49001 from austinvazquez/cherry-pick-fb6e650ab9dec7f9e8a67b278104881f03f63d08-to-25.0
[25.0 backport] integration: add wait
2024-11-30 09:59:18 +01:00
Sebastiaan van Stijn
cd20907cc5 Merge pull request #49003 from austinvazquez/cherry-pick-ci-updates-to-25.0
[25.0 backport] gha: more limits, update alpine version, and some minor improvements
2024-11-30 09:51:10 +01:00
Sebastiaan van Stijn
708c8dc304 gha: shorter time limits for smoke, validate
- validate-prepare and smoke-prepare took 10 seconds; limiting to 10 minutes
- smoke tests took less than 3 minutes; limiting to 10 minutes
- validate: most took under a minute, but "deprecate-integration-cli" took
  14 minutes; limiting to 30 minutes

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a051aba82e)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 05:59:38 +00:00
Sebastiaan van Stijn
f6bcbab7a1 gha: use "ubuntu-24.04" instead of "ubuntu-latest"
To be more explicit on what we're using.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 91c448bfb5)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 05:59:27 +00:00
Sebastiaan van Stijn
2de8143fa6 gha: dco: small tweaks to running the container
- add `--quiet` to suppress pull progress output
- use `./` instead of `$(pwd)` now that relative paths are supported
- set the working directory on the container, so that we don't have to `cd`

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9a14299540)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 05:59:18 +00:00
Sebastiaan van Stijn
e0857ef530 gha: dco: update ALPINE_VERSION to 3.20
Alpine 3.16 has been EOL for some time. Update to the latest version.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3cb98d759d)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 05:59:03 +00:00
Sebastiaan van Stijn
1b7b596513 gha: build (binary), build (dynbinary): limit to 20 minutes
Regular runs are under 5 minutes.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit cfe0d2a131)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 05:58:53 +00:00
Sebastiaan van Stijn
2e43cd5450 gha: dco: limit to 10 minutes
Regular runs are under a minute.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e75f7aca2f)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 05:58:43 +00:00
Akihiro Suda
bdb21cd779 integration: add wait
Cherry-picked several WIP commits from
b0a592798f/

Originally-authored-by: Rodrigo Campos <rodrigoca@microsoft.com>
Co-Authored-by: Kir Kolyshkin <kolyshkin@gmail.com>
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fb6e650ab9)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 05:23:41 +00:00
Austin Vazquez
cf1608cf12 Merge pull request #48997 from thaJeztah/25.0_backport_modprobe_br_netfilter
[25.0 backport] Jenkinsfile: modprobe br_netfilter
2024-11-29 19:44:49 -08:00
Sebastiaan van Stijn
911478fb28 Jenkinsfile: modprobe br_netfilter
Make sure the module is loaded, as we're not able to load it from within
the dev-container;

    time="2024-11-29T20:40:42Z" level=error msg="Running modprobe br_netfilter failed with message: modprobe: WARNING: Module br_netfilter not found in directory /lib/modules/5.15.0-1072-aws\n" error="exit status 1"

Also moving these steps _before_ the "print info" step, so that docker info
doesn't show warnings that bridge-nf-call-iptables and bridge-nf-call-ip6tables
are not loaded.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit cce5dfe1e7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-11-29 23:28:49 +01:00
Austin Vazquez
1bb77b9532 Merge pull request #48928 from thaJeztah/25.0_backport_own-cgroup-path
[25.0 backport] daemon: use OwnCgroupPath in withCgroups
2024-11-22 08:24:21 -08:00
Kir Kolyshkin
2278d180a7 daemon: use OwnCgroupPath in withCgroups
Note: this usage comes from commit 56f77d5ade (part of PR 23430).

cgroups.InitCgroupPath is removed from runc (see [1]), and it is
suggested that users use OwnCgroupPath instead, because using init's is
problematic when in host PID namespace (see [2]) and is generally not
the right thing to do (see [3]).

[1]: https://github.com/opencontainers/runc/commit/fd5debf3
[2]: https://github.com/opencontainers/runc/commit/2b28b3c2
[3]: https://github.com/opencontainers/runc/commit/54e20217

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 6be2074aef)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-11-22 10:44:11 +01:00
Sebastiaan van Stijn
2440ce0527 Merge pull request #48920 from austinvazquez/cherry-pick-1eccc326deec9e39916c227b2684329b7f010bfd-to-25.0
[25.0 backport] vendor: github.com/golang-jwt/jwt/v4@v4.5.1
2024-11-22 10:18:06 +01:00
Austin Vazquez
a6d1d0693f vendor: github.com/golang-jwt/jwt/v4@v4.5.1
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
(cherry picked from commit 1eccc326de)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-21 18:37:27 +00:00
Akihiro Suda
5b6e0e970e Merge pull request #48876 from austinvazquez/cherry-pick-0e4ab47f232391954a4deb8b781cc8cb25d88469-to-25.0
[25.0 backport] update to go1.22.9
2024-11-14 22:30:49 -07:00
Paweł Gronowski
0ed4861f9c update to go1.22.9
- https://github.com/golang/go/issues?q=milestone%3AGo1.22.9+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.22.8...go1.22.9

go1.22.9 (released 2024-11-06) includes fixes to the linker. See the
[Go 1.22.9 milestone](https://github.com/golang/go/issues?q=milestone%3AGo1.22.9+label%3ACherryPickApproved)
for details.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 0e4ab47f23)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-14 14:27:27 -07:00
Sebastiaan van Stijn
1c354d1f7a Merge pull request #48803 from austinvazquez/cherry-pick-runc-updates-to-25.0
[25.0 backport] Dockerfile: update runc to v1.1.14
2024-10-31 08:56:37 +01:00
Sebastiaan van Stijn
2df019330c update runc binary to 1.1.14
Update the runc binary that's used in CI and for the static packages.

diff: https://github.com/opencontainers/runc/compare/v1.1.13...v1.1.14

Release Notes:

- Fix CVE-2024-45310, a low-severity attack that allowed maliciously configured containers to create empty files and directories on the host.
- Add support for Go 1.23.
- Revert "allow overriding VERSION value in Makefile" and add EXTRA_VERSION.
- rootfs: consolidate mountpoint creation logic.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2189aa2426)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-30 20:31:09 -07:00
Sebastiaan van Stijn
e6de0b8f3b update runc binary to v1.1.13
Update the runc binary that's used in CI and for the static packages.

full diff: https://github.com/opencontainers/runc/compare/v1.1.12...v1.1.13

Release notes:

* If building with Go 1.22.x, make sure to use 1.22.4 or a later version.

* Support go 1.22.4+.
* runc list: fix race with runc delete.
* Fix set nofile rlimit error.
* libct/cg/fs: fix setting rt_period vs rt_runtime.
* Fix a debug msg for user ns in nsexec.
* script/*: fix gpg usage wrt keyboxd.
* CI fixes and misc backports.
* Fix codespell warnings.

* Silence security false positives from golang/net.
* libcontainer: allow containers to make apps think fips is enabled/disabled for testing.
* allow overriding VERSION value in Makefile.
* Vagrantfile.fedora: bump Fedora to 39.
* ci/cirrus: rm centos stream 8.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9101392309)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-30 20:30:44 -07:00
Sebastiaan van Stijn
b7902b3391 Merge pull request #48787 from austinvazquez/cherry-pick-10d57fde4497fb1e141f2020697528acece38425-to-25.0
[25.0 backport] volume/mounts: fix anonymous volume not being labeled
2024-10-28 22:41:12 +01:00
Sebastiaan van Stijn
cb56070132 volume: VolumesService.Create: fix log-level for debug logs
These log-entries were added in 10d57fde44,
but it looks like I accidentally left them as Error-logs following some
debugging (whoops!).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 352b4ff2f1)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-28 15:22:15 +00:00
Sebastiaan van Stijn
480b01a532 volume/mounts: fix anonymous volume not being labeled
`Parser.ParseMountRaw()` labels anonymous volumes with an `AnonymousLabel`
(`com.docker.volume.anonymous`) label based on whether a volume has a name
(named volume) or no name (anonymous) (see [1]).

However both `VolumesService.Create()` (see [1]) and `Parser.ParseMountRaw()`
(see [2], [3]) were generating a random name for anonymous volumes. The latter
is called before `VolumesService.Create()` is called, resulting in such volumes
not being labeled as anonymous.

Generating the name was originally done in Create (fc7b904dce),
but duplicated in b3b7eb2723 with the introduction
of the new Mounts field in HostConfig. Duplicating this effort didn't have a
real effect until (`Create` would just skip generating the name), until
618f26ccbc introduced the `AnonymousLabel` in
(v24.0.0, backported to v23.0.0).

Parsing generally should not fill in defaults / generate names, so this patch;

- Removes generating volume names from  `Parser.ParseMountRaw()`
- Adds a debug-log entry to `VolumesService.Create()`
- Touches up some logs to use structured logs for easier correlating logs

With this patch applied:

    docker run --rm --mount=type=volume,target=/toto hello-world

    DEBU[2024-10-24T22:50:36.359990376Z] creating anonymous volume                     volume-name=0cfd63d4df363571e7b3e9c04e37c74054cc16ff1d00d9a005232d83e92eda02
    DEBU[2024-10-24T22:50:36.360069209Z] probing all drivers for volume                volume-name=0cfd63d4df363571e7b3e9c04e37c74054cc16ff1d00d9a005232d83e92eda02
    DEBU[2024-10-24T22:50:36.360341209Z] Registering new volume reference              driver=local volume-name=0cfd63d4df363571e7b3e9c04e37c74054cc16ff1d00d9a005232d83e92eda02

[1]: 032721ff75/volume/service/service.go (L72-L83)
[2]: 032721ff75/volume/mounts/linux_parser.go (L330-L336)
[3]: 032721ff75/volume/mounts/windows_parser.go (L394-L400)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 10d57fde44)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-28 15:21:40 +00:00
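A minimal sketch of the parse/create split described in the entry above. The `parseMount` and `createVolume` names, the struct shape, and the use of `crypto/rand` are illustrative assumptions, not the daemon's actual code:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// anonymousLabel stands in for the real com.docker.volume.anonymous label key.
const anonymousLabel = "com.docker.volume.anonymous"

type mount struct {
	Source string // empty for anonymous volumes
	Target string
}

// parseMount only records what the user asked for; it no longer invents a name.
func parseMount(spec mount) mount {
	return spec
}

// createVolume generates a name for anonymous volumes and labels them accordingly.
func createVolume(m mount) (name string, labels map[string]string) {
	labels = map[string]string{}
	if m.Source == "" {
		buf := make([]byte, 32)
		_, _ = rand.Read(buf)
		name = hex.EncodeToString(buf) // 64-char random name, like the names in the debug log above
		labels[anonymousLabel] = ""
	} else {
		name = m.Source
	}
	return name, labels
}

func main() {
	name, labels := createVolume(parseMount(mount{Target: "/toto"}))
	fmt.Println("creating anonymous volume", name, labels)
}
```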
Sebastiaan van Stijn
f7b7ec14b8 volume/service: change some logs to use structured logs
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4e840b9e29)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-28 15:21:27 +00:00
Albin Kerouanton
7de3a1f2ac Merge pull request #48717 from pendo324/25.x-backport/48560-setup-user-chains
[25.0 backport] Fix: setup user chains during libnetwork controller initialization
2024-10-21 21:31:53 +02:00
Andrés Maldonado
60eece38cd Fix: setup user chains even if there are running containers
Currently, the DOCKER-USER chains are set up on firewall reload or network
creation. If there are running containers at startup, configureNetworking won't
be called (daemon/daemon_unix.go), so the user chains won't be set up.

This commit moves the setup logic into a separate function and calls it both in
the original place and in initNetworkController.

Signed-off-by: Andrés Maldonado <maldonado@codelutin.com>
(cherry picked from commit a8bfa83667)
Signed-off-by: Justin Alvarez <alvajus@amazon.com>
2024-10-21 11:24:50 -04:00
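A rough sketch of the restructuring in the entry above; the function names mirror the commit message, but the body (shelling out to `iptables -N DOCKER-USER`) is only a stand-in for libnetwork's own firewall setup code:

```go
package main

import (
	"log"
	"os/exec"
)

// setupUserChains is the extracted helper: it ensures the DOCKER-USER chain
// exists, so it can be called from more than one initialization path.
func setupUserChains() {
	// `iptables -N` fails if the chain already exists; that is fine to ignore here.
	if out, err := exec.Command("iptables", "-t", "filter", "-N", "DOCKER-USER").CombinedOutput(); err != nil {
		log.Printf("DOCKER-USER chain not created (may already exist): %s", out)
	}
}

// configureNetworking was previously the only caller; it is skipped when
// running containers exist at daemon startup.
func configureNetworking() { setupUserChains() }

// initNetworkController now also calls the helper, so the chain is created
// even when configureNetworking is skipped.
func initNetworkController() { setupUserChains() }

func main() {
	initNetworkController()
	configureNetworking()
}
```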
Sebastiaan van Stijn
9dc7e0b2ae Merge pull request #48711 from austinvazquez/cherry-pick-cca708546415fd3f2baaad2b5a86b51cb8668fc2-to-25.0
[25.0 backport] cmd/dockerd: Add workaround for OTEL meter leak
2024-10-21 10:20:03 +02:00
Paweł Gronowski
54ac8bbe37 cmd/dockerd: Add workaround for OTEL meter leak
The OTEL meter implementation has a memory leak: when no meter provider is
set, each meter counter invocation creates a new instrument.

Also add a test, which will fail once a fixed OTEL is vendored.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit cca7085464)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-20 18:04:19 -07:00
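One way to sidestep that class of leak is to always register a concrete (or no-op) meter provider. The sketch below uses the public OTEL API and is an assumption about the shape of the workaround, not the exact dockerd change:

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/metric/noop"
)

func main() {
	// With no provider configured, the global meter falls back to a delegating
	// implementation; registering an explicit no-op provider avoids
	// accumulating per-call instruments in affected OTEL versions.
	otel.SetMeterProvider(noop.NewMeterProvider())

	meter := otel.Meter("dockerd-example")
	counter, _ := meter.Int64Counter("requests")
	counter.Add(context.Background(), 1) // a no-op; no new instrument is created per call
}
```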
Sebastiaan van Stijn
f8383fa45e Merge pull request #48648 from austinvazquez/cherry-pick-c68c9aed8cb3916669de6d7f2c564279ec83663f-to-25.0
[25.0 backport] gha: add guardrails timeouts on all jobs
2024-10-12 16:40:30 +02:00
Sebastiaan van Stijn
6e1af3d5d8 gha: remove stray double empty line
Accidentally introduced in 6b7e2783d1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 037bac89fc)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-12 02:07:34 +00:00
Sebastiaan van Stijn
0eae0850ac gha: restrict cross and bin-image to 20 minutes
We had a couple of runs where these jobs got stuck and GitHub
Actions didn't allow terminating them, so they were only
terminated after 120 minutes.

These jobs usually complete in 5 minutes, so let's give them
a shorter timeout. 20 minutes should be enough (don't @ me).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c68c9aed8c)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-12 02:02:50 +00:00
Sebastiaan van Stijn
e6a2c9bebb gha: add guardrails timeouts on all jobs
We had a few "runaway jobs" recently, where a job got stuck and kept
running for 6 hours (in one case even 24 hours, probably due to a GitHub
outage). Some of those jobs could not be terminated.

While running these actions on public repositories doesn't cost us, it's
still not desirable to have jobs running for that long (as they can still
hold up the queue).

This patch adds a blanket "2 hours" time limit to all jobs that didn't
have a limit set. We should look at tweaking those limits to match the
actually expected duration, but having a default at least is a start.

Also changed the position of some existing timeouts so that they are set in a
consistent order, making it easier to spot locations where no limit is defined.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6b7e2783d1)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-12 02:02:23 +00:00
Sebastiaan van Stijn
5ff6cef316 Merge pull request #48626 from thaJeztah/25.0_backport_fix_buildkit_go_version
[25.0 backport] gha: buildkit: make sure expected Go version is installed
2024-10-10 13:47:33 +02:00
Sebastiaan van Stijn
4b98bfd07d gha: buildkit: make sure expected Go version is installed
The buildkit workflow uses Go to determine the version of BuildKit to run
integration tests for. It currently relies on the default Go version that's
installed on the GitHub Actions runners (currently 1.21.13), but this fails
if go.mod/vendor.mod specify a higher version of Go as the required version.

If that happens, the BUILDKIT_REF and REPO env-vars are left unset/empty,
resulting in the workflow checking out the current (moby) repository instead
of buildkit, which fails.

This patch adds a step to explicitly install the expected version of Go.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 02d4fc3234)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-10 11:44:05 +02:00
Sebastiaan van Stijn
bd777a5806 Merge pull request #48582 from austinvazquez/cherry-pick-ca4c68ab956993b47fd0046b4d96eceab8b9a261-to-25.0
[25.0 backport] update to go1.22.8
2024-10-07 17:32:35 +02:00
Sebastiaan van Stijn
ae548176dc update to go1.22.8
go1.22.8 (released 2024-10-01) includes fixes to cgo, and the maps and syscall
packages. See the Go 1.22.8 milestone on our issue tracker for details;

- https://github.com/golang/go/issues?q=milestone%3AGo1.22.8+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.22.7...go1.22.8

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ca4c68ab95)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-04 19:51:58 +00:00
Sebastiaan van Stijn
0dd255c6f7 Merge pull request #48548 from dperny/25.0_backport_bump_containerd
[25.0 backport] bump containerd v1.7.22
2024-09-26 19:21:08 +02:00
Sebastiaan van Stijn
122682205f Dockerfile: update containerd binary to v1.7.22
Update the containerd binary that's used in CI and static binaries

- Update to go1.22.7, go1.23.1
- CRI: Cumulative stats can't decrease
- Fix bug where init exits were being dropped
- Update runc binary to 1.1.14

- diff: https://github.com/containerd/containerd/compare/v1.7.21...v1.7.22
- release notes: https://github.com/containerd/containerd/releases/tag/v1.7.22

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 92195c1333)
Signed-off-by: Drew Erny <derny@mirantis.com>
2024-09-25 14:07:47 -05:00
Paweł Gronowski
9f102b3b5b Dockerfile: update containerd binary to v1.7.21 (static binaries and CI only)
Update the containerd binary that's used in CI and static binaries

- full diff: https://github.com/containerd/containerd/compare/v1.7.20...v1.7.21
- release notes: https://github.com/containerd/containerd/releases/tag/v1.7.21

```markdown changelog
Update containerd (static binaries only) to [v1.7.21](https://github.com/containerd/containerd/releases/tag/v1.7.21)
```

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit de4fc1c927)
Signed-off-by: Drew Erny <derny@mirantis.com>
2024-09-25 13:58:22 -05:00
Sebastiaan van Stijn
6aa6d461da Merge pull request #48507 from thaJeztah/25.0_backport_man_dockerd_logformat
[25.0 backport] man: dockerd: add description for --log-format option
2024-09-16 16:52:06 +02:00
Sebastiaan van Stijn
58af0513c0 Merge pull request #48501 from austinvazquez/cherry-pick-8b0e94ffaf7ea7d42391a3961e795b33976256c9-25.0
[25.0 backport] Update dlv in the dev-env
2024-09-16 14:19:55 +02:00
Sebastiaan van Stijn
75891766e4 man: dockerd: add description for --log-format option
This option was added in a08abec9f8,
as part of Docker v25.0, but the docs and manpage were not updated.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 45a9dde660)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-09-16 14:16:58 +02:00
Rob Murray
3ec9003a14 Update dlv in the dev-env
Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 8b0e94ffaf)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-14 07:35:33 -07:00
Sebastiaan van Stijn
bffbf551fc Merge pull request #48493 from austinvazquez/cherry-pick-51280071161a2319efae8e02a4373bb70e170587-to-25.0
[25.0 backport] Explicitly disable nvidia device injection for --gpus=0
2024-09-13 21:28:25 +02:00
Evan Lezar
caef5cc70c Explicitly disable nvidia device injection for --gpus=0
This change ensures that when --gpus=0 is selected, the injection of
NVIDIA device nodes and libraries is disabled by setting the
NVIDIA_VISIBLE_DEVICES environment variable to void instead of
leaving it unspecified.

Signed-off-by: Evan Lezar <elezar@nvidia.com>
(cherry picked from commit 5128007116)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-13 14:25:25 +00:00
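The gist of the behaviour described above, as a standalone sketch with a hypothetical `gpuRequest` type (the real change lives in the daemon's NVIDIA device-driver handling):

```go
package main

import "fmt"

// gpuRequest is a stand-in for the parsed --gpus value.
type gpuRequest struct {
	Count int // 0 means "no GPUs requested", -1 means "all"
}

// nvidiaEnv returns the environment entry to inject into the container.
// Setting NVIDIA_VISIBLE_DEVICES=void explicitly disables device/library
// injection instead of leaving the behaviour up to the runtime's defaults.
func nvidiaEnv(req gpuRequest) string {
	switch {
	case req.Count == 0:
		// Explicitly disable injection rather than leaving it unspecified.
		return "NVIDIA_VISIBLE_DEVICES=void"
	case req.Count < 0:
		return "NVIDIA_VISIBLE_DEVICES=all" // --gpus=all
	default:
		// Selecting specific devices is out of scope for this sketch.
		return ""
	}
}

func main() {
	fmt.Println(nvidiaEnv(gpuRequest{Count: 0}))  // NVIDIA_VISIBLE_DEVICES=void
	fmt.Println(nvidiaEnv(gpuRequest{Count: -1})) // NVIDIA_VISIBLE_DEVICES=all
}
```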
Sebastiaan van Stijn
5a91b941b8 Merge pull request #48465 from gdams/25
[25.0 backport] seccomp: add riscv64 mapping to seccomp_linux.go
2024-09-10 14:15:02 +02:00
George Adams
34471d3259 seccomp: add riscv64 mapping to seccomp_linux.go
Signed-off-by: George Adams <georgeadams1995@gmail.com>
(cherry picked from commit 1161b790cf)
Signed-off-by: George Adams <georgeadams1995@gmail.com>
2024-09-10 11:40:36 +01:00
Sebastiaan van Stijn
782843c0d1 Merge pull request #48437 from austinvazquez/cherry-pick-go1.22.7-to-25.0
[25.0 backport] update to go1.22.7
2024-09-09 08:35:33 +02:00
Paweł Gronowski
bec84c9c31 update to go1.22.7
- https://github.com/golang/go/issues?q=milestone%3AGo1.22.7+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.22.6...go1.22.7

These minor releases include 3 security fixes following the security policy:

- go/parser: stack exhaustion in all Parse* functions

    Calling any of the Parse functions on Go source code which contains deeply nested literals can cause a panic due to stack exhaustion.

    This is CVE-2024-34155 and Go issue https://go.dev/issue/69138.

- encoding/gob: stack exhaustion in Decoder.Decode

    Calling Decoder.Decode on a message which contains deeply nested structures can cause a panic due to stack exhaustion.

    This is a follow-up to CVE-2022-30635.

    Thanks to Md Sakib Anwar of The Ohio State University (anwar.40@osu.edu) for reporting this issue.

    This is CVE-2024-34156 and Go issue https://go.dev/issue/69139.

- go/build/constraint: stack exhaustion in Parse

    Calling Parse on a "// +build" build tag line with deeply nested expressions can cause a panic due to stack exhaustion.

    This is CVE-2024-34158 and Go issue https://go.dev/issue/69141.

View the release notes for more information:
https://go.dev/doc/devel/release#go1.22.7

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit a2e14dd8bd)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-06 22:24:21 +00:00
Sebastiaan van Stijn
2166ac395a Merge pull request #48428 from austinvazquez/cherry-pick-go1.22.6-to-25.0
[25.0 backport] update to go1.22.6
2024-09-06 09:18:48 +02:00
Sebastiaan van Stijn
d0315c9824 golangci-lint: temporarily disable G115: integer overflow conversion
It produces many hits, some of which may be false positives, but we need to
look into these, e.g.:

    container/container.go:517:72: G115: integer overflow conversion int -> uint32 (gosec)
        shouldRestart, _, _ := container.RestartManager().ShouldRestart(uint32(container.ExitCode()), container.HasBeenManuallyStopped, container.FinishedAt.Sub(container.StartedAt))
                                                                              ^
    container/view.go:401:25: G115: integer overflow conversion int -> uint16 (gosec)
                        PrivatePort: uint16(p),
                                           ^
    container/view.go:413:25: G115: integer overflow conversion int -> uint16 (gosec)
                        PrivatePort: uint16(p),
                                           ^
    container/view.go:414:25: G115: integer overflow conversion int -> uint16 (gosec)
                        PublicPort:  uint16(h),
                                           ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f5108e9c6b)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:43 +00:00
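For context, this is the kind of conversion G115 flags, together with a bounds-checked variant that avoids the silent wrap-around (illustrative only, not the flagged code itself):

```go
package main

import (
	"fmt"
	"math"
)

// portFromInt shows why a blind int -> uint16 conversion gets flagged: a value
// outside 0..65535 silently wraps around instead of failing.
func portFromInt(p int) (uint16, error) {
	if p < 0 || p > math.MaxUint16 {
		return 0, fmt.Errorf("port %d out of range for uint16", p)
	}
	return uint16(p), nil
}

func main() {
	fmt.Println(portFromInt(8080))  // 8080 <nil>
	fmt.Println(portFromInt(70000)) // error: unchecked uint16(70000) would wrap to 4464
}
```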
Sebastiaan van Stijn
ff546aff14 update golangci-lint to v1.60.2
Update to add go1.23 support

full diff: https://github.com/golangci/golangci-lint/compare/v1.59.1...v1.60.2
Changelog: https://golangci-lint.run/product/changelog/#1602

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9b11bb507b)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:42 +00:00
Sebastiaan van Stijn
15db81eeaa update to go1.22.6
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3df59c9dcf)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:17 +00:00
Cory Snider
23af4b75e9 hack/make/.binary: set CGO_LDFLAGS=-latomic for arm/v5
cross-compiling for arm/v5 was failing;

    #56 84.12 /usr/bin/arm-linux-gnueabi-clang -marm -o $WORK/b001/exe/a.out -Wl,--export-dynamic-symbol=_cgo_panic -Wl,--export-dynamic-symbol=_cgo_topofstack -Wl,--export-dynamic-symbol=crosscall2 -Qunused-arguments -Wl,--compress-debug-sections=zlib /tmp/go-link-759578347/go.o /tmp/go-link-759578347/000000.o /tmp/go-link-759578347/000001.o /tmp/go-link-759578347/000002.o /tmp/go-link-759578347/000003.o /tmp/go-link-759578347/000004.o /tmp/go-link-759578347/000005.o /tmp/go-link-759578347/000006.o /tmp/go-link-759578347/000007.o /tmp/go-link-759578347/000008.o /tmp/go-link-759578347/000009.o /tmp/go-link-759578347/000010.o /tmp/go-link-759578347/000011.o /tmp/go-link-759578347/000012.o /tmp/go-link-759578347/000013.o /tmp/go-link-759578347/000014.o /tmp/go-link-759578347/000015.o /tmp/go-link-759578347/000016.o /tmp/go-link-759578347/000017.o /tmp/go-link-759578347/000018.o -O2 -g -O2 -g -O2 -g -lpthread -O2 -g -no-pie -static
    #56 84.12 ld.lld: error: undefined symbol: __atomic_load_4
    #56 84.12 >>> referenced by gcc_libinit.c
    #56 84.12 >>>               /tmp/go-link-759578347/000009.o:(_cgo_wait_runtime_init_done)
    #56 84.12 >>> referenced by gcc_libinit.c
    #56 84.12 >>>               /tmp/go-link-759578347/000009.o:(_cgo_wait_runtime_init_done)
    #56 84.12 >>> referenced by gcc_libinit.c
    #56 84.12 >>>               /tmp/go-link-759578347/000009.o:(_cgo_wait_runtime_init_done)
    #56 84.12 >>> referenced 2 more times
    #56 84.12
    #56 84.12 ld.lld: error: undefined symbol: __atomic_store_4
    #56 84.12 >>> referenced by gcc_libinit.c
    #56 84.12 >>>               /tmp/go-link-759578347/000009.o:(_cgo_wait_runtime_init_done)
    #56 84.12 >>> referenced by gcc_libinit.c
    #56 84.12 >>>               /tmp/go-link-759578347/000009.o:(x_cgo_notify_runtime_init_done)
    #56 84.12 >>> referenced by gcc_libinit.c
    #56 84.12 >>>               /tmp/go-link-759578347/000009.o:(x_cgo_set_context_function)
    #56 84.12 clang: error: linker command failed with exit code 1 (use -v to see invocation)

From discussion on GitHub;
https://github.com/moby/moby/pull/46982#issuecomment-2206992611

The arm/v5 build failure looks to be due to libatomic not being included
in the link. For reasons probably buried in mailing list archives,
[gcc](https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81358) and clang don't
bother to implicitly auto-link libatomic. This is not a big deal on many
modern platforms with atomic intrinsics as the compiler generates inline
instruction sequences, avoiding any libcalls into libatomic. ARMv5 is not
one of those platforms: all atomic operations require a libcall.

In theory, adding `CGO_LDFLAGS=-latomic` should fix arm/v5 builds.

While it could be argued that cgo should automatically link against
libatomic in the same way that it automatically links against libpthread,
the Go maintainers would have a valid counter-argument that it should be
the C toolchain's responsibility to link against libatomic automatically,
just like it does with libgcc or compiler-rt.

Co-authored-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 4cd5c2b643)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:17 +00:00
Cory Snider
da8bfd963e hack/make/.binary: set CGO_CFLAGS=-Wno-atomic-alignment for arm/v5
cross-compiling for arm/v5 fails on go1.22; a fix is included for this
in go1.23 (https://github.com/golang/go/issues/65290), but for go1.22
we can set the correct option manually.

    1.189 + go build -mod=vendor -modfile=vendor.mod -o /tmp/bundles/binary-daemon/dockerd -tags 'netgo osusergo static_build journald' -ldflags '-w -X "github.com/docker/docker/dockerversion.Version=dev" -X "github.com/docker/docker/dockerversion.GitCommit=HEAD" -X "github.com/docker/docker/dockerversion.BuildTime=2024-08-29T16:59:57.000000000+00:00" -X "github.com/docker/docker/dockerversion.PlatformName=" -X "github.com/docker/docker/dockerversion.ProductName=" -X "github.com/docker/docker/dockerversion.DefaultProductLicense=" -extldflags -static ' -gcflags= github.com/docker/docker/cmd/dockerd
    67.78 # runtime/cgo
    67.78 gcc_libinit.c:44:8: error: large atomic operation may incur significant performance penalty; the access size (4 bytes) exceeds the max lock-free size (0 bytes) [-Werror,-Watomic-alignment]
    67.78 gcc_libinit.c:47:6: error: large atomic operation may incur significant performance penalty; the access size (4 bytes) exceeds the max lock-free size (0 bytes) [-Werror,-Watomic-alignment]
    67.78 gcc_libinit.c:49:10: error: large atomic operation may incur significant performance penalty; the access size (4 bytes) exceeds the max lock-free size (0 bytes) [-Werror,-Watomic-alignment]
    67.78 gcc_libinit.c:69:9: error: large atomic operation may incur significant performance penalty; the access size (4 bytes) exceeds the max lock-free size (0 bytes) [-Werror,-Watomic-alignment]
    67.78 gcc_libinit.c:71:3: error: large atomic operation may incur significant performance penalty; the access size (4 bytes) exceeds the max lock-free size (0 bytes) [-Werror,-Watomic-alignment]
    78.20 + rm -f /go/src/github.com/docker/docker/go.mod

Co-authored-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit e853c093bf)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:17 +00:00
Sebastiaan van Stijn
0ce4415ff2 daemon: fix non-constant format string in call (govet)
daemon/daemon.go:942:21: printf: non-constant format string in call to (*github.com/docker/docker/vendor/github.com/sirupsen/logrus.Entry).Errorf (govet)
            log.G(ctx).Errorf(err.Error())
                              ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1ad5b5abb2)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:17 +00:00
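The same govet finding recurs in the next several entries; the general fix is to stop passing a runtime string as the format argument. An illustrative example (not the daemon code):

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

func main() {
	msg := "upload failed after using 100% of retries" // note the '%'

	// Flagged by govet ("printf: non-constant format string") and unsafe:
	// the '%' in msg would be interpreted as a formatting verb.
	//
	//	_ = fmt.Errorf(msg)

	// Preferred fixes: errors.New for plain messages, or an explicit "%s" verb.
	err1 := errors.New(msg)
	err2 := fmt.Errorf("%s", msg)
	log.Println(err1)
	log.Println(err2)
}
```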
Sebastiaan van Stijn
14a48ac308 api/types: fix non-constant format string in call (govet)
api/types/container/hostconfig.go:328:43: printf: non-constant format string in call to fmt.Errorf (govet)
                return &errInvalidParameter{fmt.Errorf(msg)}
                                                       ^
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 005b488506)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:17 +00:00
Sebastiaan van Stijn
c50e7e6ca2 api/server/router: fix non-constant format string in call (govet)
api/server/router/container/container_routes.go:943:22: printf: non-constant format string in call to fmt.Fprintf (govet)
                fmt.Fprintf(conn, "HTTP/1.1 101 UPGRADED\r\nContent-Type: "+contentType+"\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n\r\n")
                                  ^
    api/server/router/image/image_routes.go:144:50: printf: non-constant format string in call to github.com/docker/docker/pkg/streamformatter.FormatStatus (govet)
                output.Write(streamformatter.FormatStatus("", id.String()))
                                                              ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0fd3a53c12)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:17 +00:00
Sebastiaan van Stijn
2a4ea4749d container/stream: fix non-constant format string in call (govet)
container/stream/streams.go:111:21: printf: non-constant format string in call to fmt.Errorf (govet)
            return fmt.Errorf(strings.Join(errors, "\n"))
                              ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4a93233b88)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:17 +00:00
Sebastiaan van Stijn
b536253047 libnetwork/drivers/bridge: fix non-constant format string in call (govet)
libnetwork/drivers/bridge/setup_ip_tables_linux.go:385:23: printf: non-constant format string in call to fmt.Errorf (govet)
                    return fmt.Errorf(msg)
                                      ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 068c1bf3be)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:17 +00:00
Sebastiaan van Stijn
3216abd8db volume/testutils: fix non-constant format string in call (govet)
volume/testutils/testutils.go:98:26: printf: non-constant format string in call to fmt.Errorf (govet)
            return nil, fmt.Errorf(opts["error"])
                                   ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f434cdd14a)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:16 +00:00
Sebastiaan van Stijn
dd5a6fdbac builder/dockerfile: parseChownFlag: fix non-constant format string in call (govet)
builder/dockerfile/internals_linux.go:38:48: printf: non-constant format string in call to github.com/docker/docker/vendor/github.com/pkg/errors.Wrapf (govet)
            return idtools.Identity{}, errors.Wrapf(err, "can't find uid for user "+userStr)
                                                         ^
    builder/dockerfile/internals_linux.go:42:48: printf: non-constant format string in call to github.com/docker/docker/vendor/github.com/pkg/errors.Wrapf (govet)
            return idtools.Identity{}, errors.Wrapf(err, "can't find gid for group "+grpStr)
                                                         ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 81a1ca0217)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:16 +00:00
Sebastiaan van Stijn
0c5e131330 layer: ignore G602: slice index out of range (gosec)
This looks to be a false positive;

    layer/layer.go:202:47: G602: slice index out of range (gosec)
            return createChainIDFromParent(ChainID(dgsts[0]), dgsts[1:]...)
                                                        ^
    layer/layer.go:205:69: G602: slice index out of range (gosec)
        dgst := digest.FromBytes([]byte(string(parent) + " " + string(dgsts[0])))
                                                                           ^
    layer/layer.go:206:53: G602: slice bounds out of range (gosec)
        return createChainIDFromParent(ChainID(dgst), dgsts[1:]...)
                                                           ^
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b56c58a860)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:16 +00:00
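The pattern that trips G602 here looks roughly like the following simplified stand-in: the length check guards the later indexing, which gosec does not track, hence the false positive.

```go
package main

import "fmt"

type ChainID string

// chainID mirrors the recursive shape of createChainIDFromParent: ids[0] is
// only read after the len(ids) == 0 guard, so the access is safe even though
// gosec reports G602 for it. (The real code hashes with a digest; this
// stand-in just concatenates strings.)
func chainID(parent ChainID, ids ...ChainID) ChainID {
	if len(ids) == 0 {
		return parent
	}
	if parent == "" {
		return chainID(ids[0], ids[1:]...)
	}
	next := ChainID(string(parent) + " " + string(ids[0])) // safe: len(ids) > 0 here
	return chainID(next, ids[1:]...)
}

func main() {
	fmt.Println(chainID("", "sha256:aaa", "sha256:bbb"))
}
```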
Sebastiaan van Stijn
b50a85d0ed cmd/dockerd: fix non-constant format string in call (govet)
cmd/dockerd/required.go:17:24: printf: non-constant format string in call to github.com/docker/docker/vendor/github.com/pkg/errors.Errorf (govet)
            return errors.Errorf("\n" + strings.TrimRight(cmd.UsageString(), "\n"))
                                 ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 06bfe8bab3)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:16 +00:00
Sebastiaan van Stijn
8105391708 libnetwork: fix non-constant format string in call (govet)
libnetwork/controller.go:1054:32: printf: non-constant format string in call to github.com/docker/docker/libnetwork/types.NotFoundErrorf (govet)
                return types.NotFoundErrorf(err.Error())
                                            ^
    libnetwork/controller.go:1073:32: printf: non-constant format string in call to github.com/docker/docker/libnetwork/types.NotFoundErrorf (govet)
                return types.NotFoundErrorf(err.Error())
                                            ^
    libnetwork/sandbox_externalkey_unix.go:113:21: printf: non-constant format string in call to fmt.Errorf (govet)
            return fmt.Errorf(string(buf[0:n]))
                              ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6008c42ca2)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:16 +00:00
Sebastiaan van Stijn
6209d5bd68 integration-cli: fix non-constant format string in call (govet)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b79a4696ee)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:16 +00:00
Sebastiaan van Stijn
25cffb9dec integration-cli: DockerSwarmSuite: rm redundant Fprintf, handle errors
Also fix some unhandled errors.

    integration-cli/docker_cli_swarm_test.go:697:19: printf: non-constant format string in call to fmt.Fprintf (govet)
                fmt.Fprintf(w, `{"Error":"failed to add veth pair: `+err.Error()+`"}`)
                               ^
    integration-cli/docker_cli_swarm_test.go:731:18: printf: non-constant format string in call to fmt.Fprintf (govet)
            fmt.Fprintf(w, `{"LocalDefaultAddressSpace":"`+lAS+`", "GlobalDefaultAddressSpace": "`+gAS+`"}`)
                           ^
    integration-cli/docker_cli_swarm_test.go:742:19: printf: non-constant format string in call to fmt.Fprintf (govet)
                fmt.Fprintf(w, `{"Error":"Unknown address space in pool request: `+poolRequest.AddressSpace+`"}`)
                               ^
    integration-cli/docker_cli_swarm_test.go:746:19: printf: non-constant format string in call to fmt.Fprintf (govet)
                fmt.Fprintf(w, `{"PoolID":"`+poolID+`", "Pool":"`+pool+`"}`)
                               ^
    integration-cli/docker_cli_swarm_test.go:763:19: printf: non-constant format string in call to fmt.Fprintf (govet)
                fmt.Fprintf(w, `{"Address":"`+gw+`"}`)
                               ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6bbacbec26)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:16 +00:00
Sebastiaan van Stijn
21279f652e integration-cli: DockerNetworkSuite: rm redundant Fprintf, handle errors
Also rename some variables that shadowed imports, and fix some
unhandled errors.

    integration-cli/docker_cli_network_unix_test.go:102:19: printf: non-constant format string in call to fmt.Fprintf (govet)
                fmt.Fprintf(w, `{"Error":"failed to add veth pair: `+err.Error()+`"}`)
                               ^
    integration-cli/docker_cli_network_unix_test.go:136:18: printf: non-constant format string in call to fmt.Fprintf (govet)
            fmt.Fprintf(w, `{"LocalDefaultAddressSpace":"`+lAS+`", "GlobalDefaultAddressSpace": "`+gAS+`"}`)
                           ^
    integration-cli/docker_cli_network_unix_test.go:147:19: printf: non-constant format string in call to fmt.Fprintf (govet)
                fmt.Fprintf(w, `{"Error":"Unknown address space in pool request: `+poolRequest.AddressSpace+`"}`)
                               ^
    integration-cli/docker_cli_network_unix_test.go:151:19: printf: non-constant format string in call to fmt.Fprintf (govet)
                fmt.Fprintf(w, `{"PoolID":"`+poolID+`", "Pool":"`+pool+`"}`)
                               ^
    integration-cli/docker_cli_network_unix_test.go:168:19: printf: non-constant format string in call to fmt.Fprintf (govet)
                fmt.Fprintf(w, `{"Address":"`+gw+`"}`)
                               ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3ca38f0b5e)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:16 +00:00
Sebastiaan van Stijn
a27066d1ca integration-cli: use errors.New() instead of fmt.Errorf
integration-cli/benchmark_test.go:49:27: printf: non-constant format string in call to fmt.Errorf (govet)
                            chErr <- fmt.Errorf(out)
                                                ^
    integration-cli/benchmark_test.go:62:27: printf: non-constant format string in call to fmt.Errorf (govet)
                            chErr <- fmt.Errorf(out)
                                                ^
    integration-cli/benchmark_test.go:68:27: printf: non-constant format string in call to fmt.Errorf (govet)
                            chErr <- fmt.Errorf(out)
                                                ^
    integration-cli/benchmark_test.go:73:27: printf: non-constant format string in call to fmt.Errorf (govet)
                            chErr <- fmt.Errorf(out)
                                                ^
    integration-cli/benchmark_test.go:78:27: printf: non-constant format string in call to fmt.Errorf (govet)
                            chErr <- fmt.Errorf(out)
                                                ^
    integration-cli/benchmark_test.go:84:27: printf: non-constant format string in call to fmt.Errorf (govet)
                            chErr <- fmt.Errorf(out)
                                                ^
    integration-cli/benchmark_test.go:94:27: printf: non-constant format string in call to fmt.Errorf (govet)
                            chErr <- fmt.Errorf(out)
                                                ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2b7a687554)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:15 +00:00
Sebastiaan van Stijn
e88d4ea298 libnetwork: TestDNSOptions: remove redundant skip check
libnetwork/sandbox_dns_unix_test.go:17:13: SA4032: due to the file's build constraints, runtime.GOOS will never equal "windows" (staticcheck)
        skip.If(t, runtime.GOOS == "windows", "test only works on linux")
                   ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c7b36f8953)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:15 +00:00
Sebastiaan van Stijn
613d955d38 integration-cli: remove redundant platform checks
This condition was added in 0215a62d5b, which
removed pkg/homedir as an abstraction, but didn't consider that this test
is currently only run on Unix.

    integration-cli/docker_cli_run_unix_test.go:254:5: SA4032: due to the file's build constraints, runtime.GOOS will never equal "windows" (staticcheck)
        if runtime.GOOS == "windows" {
           ^
    integration-cli/docker_cli_run_unix_test.go:338:5: SA4032: due to the file's build constraints, runtime.GOOS will never equal "windows" (staticcheck)
        if runtime.GOOS == "windows" {
           ^

Added a TODO, because this functionality should also be tested on Windows,
probably as part of tests in docker/cli instead.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6bd7835cb6)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:15 +00:00
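A minimal illustration of the SA4032 situation: with a file-level build constraint excluding Windows, the runtime check can never be true.

```go
//go:build !windows

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// This file is never compiled for Windows because of the //go:build
	// constraint above, so the branch below is dead code; staticcheck
	// reports it as SA4032.
	if runtime.GOOS == "windows" {
		fmt.Println("unreachable on this file's build targets")
		return
	}
	fmt.Println("running on", runtime.GOOS)
}
```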
Paweł Gronowski
e962b3e06e update to go1.21.13
- https://github.com/golang/go/issues?q=milestone%3AGo1.21.13+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.21.12...go1.21.13

go1.21.13 (released 2024-08-06) includes fixes to the go command, the
covdata command, and the bytes package. See the Go 1.21.13 milestone on
our issue tracker for details.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit b24c2e95e5)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:15 +00:00
Sebastiaan van Stijn
33dbea3c37 vendor: github.com/Microsoft/go-winio v0.6.2
- fileinfo: internally fix FileBasicInfo memory alignment (fixes compatibility
  with go1.22)

full diff: https://github.com/Microsoft/go-winio/compare/v0.6.1...v0.6.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e3c59640d5)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:15 +00:00
Sebastiaan van Stijn
5e46424b29 vendor: golang.org/x/tools v0.16.0
It's not used in our code, but some dependencies have a "tools.go" to
force it; updating to a version that doesn't depend on golang.org/x/sys/execabs

full diff: https://github.com/golang/tools/compare/v0.14.0...v0.16.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2140e7e0f5)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:15 +00:00
Sebastiaan van Stijn
5ca50f5c24 vendor: golang.org/x/mod v0.17.0
no changes in vendored code

full diff: https://github.com/golang/mod/compare/v0.13.0...v0.17.0

- modfile: do not collapse if there are unattached comments within blocks
- modfile: fix crash on AddGoStmt in empty File
- modfile: improve directory path detection and error text consistency
- modfile: use new go version string format in WorkFile.add error
- sumdb: replace globsMatchPath with module.MatchPrefixPatterns
- sumdb/tlog: make NewTiles only generate strictly necessary tiles

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 85c9900377)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:15 +00:00
Sebastiaan van Stijn
a599caf7e9 update golangci-lint to v1.59.1
full diff: https://github.com/golangci/golangci-lint/compare/v1.55.2...v1.59.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 95fae036ae)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:51:07 +00:00
Sebastiaan van Stijn
89903672a7 pkg/archive: reformat code to make #nosec comment work again
Looks like the way gosec picks up #nosec comments changed, causing the
linter error to re-appear;

    pkg/archive/archive_linux.go:57:17: G305: File traversal when extracting zip/tar archive (gosec)
                    Name:       filepath.Join(hdr.Name, WhiteoutOpaqueDir),
                                ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d4160d5aa7)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:50:55 +00:00
Sebastiaan van Stijn
dbf6db9306 builder/remotecontext: reformat code to make #nosec comment work again
Looks like the way gosec picks up #nosec comments changed, causing the
linter error to re-appear;

    builder/remotecontext/remote.go:48:17: G107: Potential HTTP request made with variable url (gosec)
        if resp, err = http.Get(address); err != nil {
                       ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 04bf0e3d69)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-04 02:50:10 +00:00
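For reference, the general shape of a gosec suppression like the one being restored; the exact placement and justification in builder/remotecontext differ, so treat this as an illustrative pattern only:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// fetch takes a caller-supplied URL, which gosec flags as G107 ("Potential
// HTTP request made with variable url"). The #nosec annotation needs to sit
// on the flagged statement (or its enclosing block) for gosec to honour it.
func fetch(address string) error {
	resp, err := http.Get(address) // #nosec G107 -- address is validated by the caller
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = io.Copy(io.Discard, resp.Body)
	return err
}

func main() {
	fmt.Println(fetch("https://example.com"))
}
```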
Sebastiaan van Stijn
122e5e1442 Merge pull request #48379 from corhere/backport-25.0/dockerd-manpage
[25.0 backport] Move dockerd man page back from docker/cli
2024-08-30 14:14:12 +02:00
Cory Snider
55a4cadaa5 man: create parent directories in install recipe
Support the use of `make install` in packaging scripts, where the
$mandir tree might not exist under $DESTDIR.

For portability, create the parent directories using a separate install
command instead of relying on the non-portable `-D` flag.

Set errexit so the install target fails if any install step fails.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 88b118688e)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-29 16:21:04 -04:00
Cory Snider
042dad56d0 man: support bringing your own go-md2man
Set the GO_MD2MAN make variable to elide building go-md2man from
vendored sources and use the specified command instead.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit edfde78355)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Cory Snider
553d915ef4 man: build dockerd man pages using make
Vendor the go-md2man tool used to generate the man pages so that the
only dependency is a Go toolchain.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 05d7008419)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Grace Choi
c70f626351 Removed all mentions of "please" from docs and messages
Signed-off-by: Grace Choi <gracechoi@utexas.edu>
Signed-off-by: Pranjal Rai <pranjalrai@utexas.edu>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b4cee5c3ee)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
David Karlsson
5966382473 docs: add default-network-opt daemon option
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
(cherry picked from commit f1ec84314d)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Sebastiaan van Stijn
3edc25412a docs: remove devicemapper
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 23812190c3)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Kir Kolyshkin
65906e44b0 man/dockerd.8: assorted formatting fixes
Mostly, this makes sure that literals (such as true, false, host,
private, examples of options usage etc.) are typeset in bold, except for
filenames, which are typeset in italic.

While at it,
 - remove some default values from the synopsis, as they should not
   be there;
 - fix man pages references (page name in bold, volume number in
   regular).

This is not a complete fix, but a step in the right direction.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 690d166632)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Kir Kolyshkin
a298720e8f man/dockerd.8: escape asterisks and underscores
1. Escape asterisks and underscores, that have special meaning in
   Markdown. While most markdown processors are smart enough to
   distinguish whether it's a literal * or _ or a formatting directive,
   escaping makes things more explicit.

2. Fix using wrong level of headings in some dm options (most are ####,
   but some were #####).

3. Do not use sub-heading for examples in some dm options (this is how
   it's done in the rest of the man page).

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 374b779dd1)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Sebastiaan van Stijn
88a3e540c9 docs: update dockerd usage output for new proxy-options
Adds documentation for the options that were added in
427c7cc5f8

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 386d0c0fbc)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Ashly Mathew
90fc11f69a Fix styling of arguments
Signed-off-by: Ashly Mathew <ashlymathew93@gmail.com>
(cherry picked from commit 54971ac807)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Luis Henrique Mulinari
182df40d13 Fix the max-concurrent-downloads and max-concurrent-uploads configs documentation
This fix tries to address issues raised in moby/moby#44346.
The max-concurrent-downloads and max-concurrent-uploads limits are applied for the whole engine and not for each pull/push command.

Signed-off-by: Luis Henrique Mulinari <luis.mulinari@gmail.com>
(cherry picked from commit a8b8f9b288)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Sebastiaan van Stijn
2544c68655 docs: remove documentation about deprecated cluster-store
This removes documentation related to legacy overlay networks using
an external k/v store.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 68e9223289)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Anca Iordache
be77069539 Document --validate daemon option
Signed-off-by: Anca Iordache <anca.iordache@docker.com>
(cherry picked from commit 6c702167bf)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Sebastiaan van Stijn
0299ca1d73 Update man-page source MarkDown to work with go-md2man v2
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit af45195a21)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Akihiro Suda
aff4659c67 docs: update for cgroup v2 and rootless
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 562a6d2b13)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Rob Gulewich
c47231e5cf docker run: specify cgroup namespace mode with --cgroupns
Signed-off-by: Rob Gulewich <rgulewich@netflix.com>
(cherry picked from commit 7cf2132655)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Lukas Heeren
962f331e76 daemon: document --max-download-attempts option
update docs based on PR 39949

Signed-off-by: Lukas Heeren <lukas-heeren@hotmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1cbcd5d47a)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
taiji-tech
71f9bfe47f Update document links and title.
Signed-off-by: taiji-tech <csuhqg@foxmail.com>
(cherry picked from commit 3cfa74724c)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
selansen
017213c2b0 Allow user to specify default address pools for docker networks. This is a separate commit for CLI files to address PR 36054
Signed-off-by: selansen <elango.siva@docker.com>
(cherry picked from commit 462f38bd8b)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Sebastiaan van Stijn
210f03082b Update docs and completion-scripts for deprecated features
- the `--disable-legacy-registry` daemon flag was removed
- duplicate keys with conflicting values for engine labels
  now produce an error instead of a warning.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 13ff896b38)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Renaud Gaubert
2f78133a0a Added docs for dockerd
Signed-off-by: Renaud Gaubert <renaud.gaubert@gmail.com>
(cherry picked from commit f3c3b05b50)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Abdur Rehman
675593bb4f fix a number of minor typos
Fix 19 typos, grammatical errors and duplicated words.

These fixes have minimal impact on the code as these are either in the
doc files or in comments inside the code files.

Signed-off-by: Abdur Rehman <abdur_rehman@mentor.com>
(cherry picked from commit 20f8455562)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Kir Kolyshkin
9c291b1745 Introduce/document new IPC modes
This builds (and depends) on https://github.com/moby/moby/pull/34087

Version 2:
 - remove --ipc argument validation (it is now done by daemon)
 - add/document 'none' value
 - docs/reference/run.md: add a table with better modes description
 - dockerd(8) typesetting fixes

Version 3:
 - remove ipc mode tests from cli/command/container/opts_test.go

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit c23d4b017a)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Aleksa Sarai
a23ff1bb1a docs: add documentation for dm.libdm_log_level
This is a new option added specifically to allow for debugging of bugs
in Docker's storage drivers or libdm itself.

Signed-off-by: Aleksa Sarai <asarai@suse.de>
(cherry picked from commit 25baee8ab9)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Cory Snider
c78cecd77f Restore dockerd man page
Prepare to move the dockerd man page back to this repository from
docker/cli, retaining history.

This partially reverts commit b5579a4ce3.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 7d3f09a9c3)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:29:44 -04:00
Cory Snider
f95f4c7d22 Merge pull request #48253 from austinvazquez/backport-healthcheck-startinterval-swarm-to-25.0
[25.0 backport] api: adjust health start interval on swarm update
2024-08-09 18:04:20 -04:00
Sebastiaan van Stijn
508e20b4a0 Merge pull request #48296 from austinvazquez/cherry-pick-2b5ffa0b63c76e8bb4ebb253d7e4db5c7af918c0-to-25.0
[25.0 backport] gha: set permissions to read-only by default
2024-08-07 15:27:35 +02:00
Sebastiaan van Stijn
f14cf10618 gha: set permissions to read-only by default
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2b5ffa0b63)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-08-06 16:14:21 +00:00
Cory Snider
0cd951e4dd api: adjust health start interval on swarm update
The health-check start interval was added in API v1.44, and the start
interval option is ignored when creating a Swarm service using an older
API version. However, due to an oversight, the option is not ignored
when older API clients _update_ a Swarm service. Fix this oversight by
moving the adjustment code into the adjustForAPIVersion function used by
both the createService and updateService handler functions.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit c8e7fcf91a)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-27 05:21:31 +00:00
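A hedged sketch of the version-gating idea from the entry above; the struct, field names, and the toy `lessThan` helper are simplifications of the real swarm handler code:

```go
package main

import (
	"fmt"
	"time"
)

type healthConfig struct {
	StartPeriod   time.Duration
	StartInterval time.Duration // only meaningful for API >= 1.44
}

// adjustForAPIVersion strips fields the client's negotiated API version does
// not know about, so create *and* update take the same path.
func adjustForAPIVersion(apiVersion string, hc *healthConfig) {
	if hc == nil {
		return
	}
	if lessThan(apiVersion, "1.44") {
		hc.StartInterval = 0 // ignore the option for older clients
	}
}

// lessThan is a toy stand-in for the API version comparison helper.
func lessThan(a, b string) bool {
	var amaj, amin, bmaj, bmin int
	fmt.Sscanf(a, "%d.%d", &amaj, &amin)
	fmt.Sscanf(b, "%d.%d", &bmaj, &bmin)
	return amaj < bmaj || (amaj == bmaj && amin < bmin)
}

func main() {
	hc := &healthConfig{StartInterval: 5 * time.Second}
	adjustForAPIVersion("1.43", hc)
	fmt.Println(hc.StartInterval) // 0s: dropped for the old API version
}
```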
Sebastiaan van Stijn
b08a51fe16 Merge pull request #48231 from austinvazquez/backport-vendor-otel-v0.46.1-to-25.0
[25.0 backport] vendor: OTEL v0.46.1 / v1.21.0
2024-07-25 00:57:17 +02:00
Sebastiaan van Stijn
d151b0f87f vendor: OTEL v0.46.1 / v1.21.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c516804d6f)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-24 15:02:12 +00:00
Sebastiaan van Stijn
c6ba9a5124 Merge pull request #48225 from austinvazquez/backport-workflow-artifact-retention-policy-updates-to-25.0
[25.0 backport] ci: update workflow artifacts retention
2024-07-24 10:44:09 +02:00
Sebastiaan van Stijn
4673a3ca2c Merge pull request #48227 from austinvazquez/backport-backport-branch-check-to-25.0
[25.0 backport] github/ci: Check if backport is opened against the expected branch
2024-07-24 09:59:29 +02:00
Paweł Gronowski
30f8908102 github/ci: Check if backport is opened against the expected branch
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 61269e718f)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-24 04:41:41 +00:00
CrazyMax
7454d6a2e6 ci: update workflow artifacts retention
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit aff003139c)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-24 04:22:58 +00:00
Sebastiaan van Stijn
65cc597cea Merge commit from fork
[25.0] AuthZ plugin security fixes
2024-07-23 21:36:28 +02:00
Sebastiaan van Stijn
b722836927 Merge pull request #48199 from austinvazquez/update-containerd-binary-to-1.7.20
[25.0 backport] Update containerd binary to 1.7.20
2024-07-20 02:21:49 +02:00
Sebastiaan van Stijn
e8ecb9c76d update containerd binary to v1.7.20
Update the containerd binary that's used in CI and for the static packages.

release notes: https://github.com/containerd/containerd/releases/tag/v1.7.20
full diff: https://github.com/containerd/containerd/compare/v1.7.19...v1.7.20

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fbbda057ac)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-19 22:14:11 +00:00
Sebastiaan van Stijn
e6cae1f237 update containerd binary to v1.7.19
Update the containerd binary that's used in CI and for the static packages.

- release notes: https://github.com/containerd/containerd/releases/tag/v1.7.19
- full diff: https://github.com/containerd/containerd/compare/v1.7.18...v1.7.19

Welcome to the v1.7.19 release of containerd!

The nineteenth patch release for containerd 1.7 contains various updates and
splits the main module from the api module in preparation for the same change
in containerd 2.0. Splitting the modules will allow 1.7 and 2.x to both exist
as transitive dependencies without running into API registration errors.
Projects should use this version as the minimum 1.7 version in preparing to
use containerd 2.0 or to be imported alongside it.

Highlights

- Fix support for OTLP config
- Add API go module
- Remove overlayfs volatile option on temp mounts
- Update runc binary to v1.1.13
- Migrate platforms package to github.com/containerd/platforms
- Migrate reference/docker package to github.com/distribution/reference

Container Runtime Interface (CRI)

- Fix panic in NRI from nil CRI reference
- Fix Windows HPC working directory

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 398e15b7de)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-19 22:13:45 +00:00
Sebastiaan van Stijn
8ec448db6b update containerd binary to v1.7.18
Update the containerd binary that's used in CI and for the static packages.

- release notes: https://github.com/containerd/containerd/releases/tag/v1.7.18
- full diff: https://github.com/containerd/containerd/compare/v1.7.17...v1.7.18

Welcome to the v1.7.18 release of containerd!

The eighteenth patch release for containerd 1.7 contains various updates along
with an updated version of Go. Go 1.22.4 and 1.21.11 include a fix for a symlink
time of check to time of use race condition during directory removal.

Highlights

- Update Go version to 1.21.11
- Remove uses of platforms.Platform alias
- Migrate log imports to github.com/containerd/log
- Migrate errdefs package to github.com/containerd/errdefs
- Fix usage of "unknown" platform

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5318c38eae)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-19 22:13:10 +00:00
Paweł Gronowski
274310807e integration/TestDiskUsage: Make 4096 also an 'empty' value
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 3847da374b)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-19 22:13:05 +00:00
Paweł Gronowski
886e726984 Dockerfile: update containerd binary to v1.7.17 (static binaries and CI only)
Update the containerd binary that's used in CI and static binaries

- full diff: https://github.com/containerd/containerd/compare/v1.7.15...v1.7.17
- release notes: https://github.com/containerd/containerd/releases/tag/v1.7.17

```markdown changelog
Update containerd (static binaries only) to [v1.7.17](https://github.com/containerd/containerd/releases/tag/v1.7.17)
```

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 4f0cb7d964)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-19 22:10:27 +00:00
Paweł Gronowski
a0f0f7e77e update containerd binary to v1.7.15
Update the containerd binary that's used in CI

- full diff: https://github.com/containerd/containerd/compare/v1.7.13...v1.7.15
- release notes: https://github.com/containerd/containerd/releases/tag/v1.7.15

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 3485cfbb1e)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-19 22:09:38 +00:00
Jameson Hyde
91903e81ca If the URL includes a scheme, urlPath will drop the hostname, which would not match the auth check
Signed-off-by: Jameson Hyde <jameson.hyde@docker.com>
(cherry picked from commit 754fb8d9d03895ae3ab60d2ad778152b0d835206)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 5282cb25d0)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-07-17 13:09:47 +02:00
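The behaviour the fix relies on can be shown with the standard library alone: parsing the request target and matching on the path keeps the check consistent whether or not the client sent a full URL with scheme and host. This illustrates the mechanism only, not the authz middleware itself:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// isContainerCreate is a toy authz-style check that matches on the path only.
func isContainerCreate(rawURL string) bool {
	u, err := url.Parse(rawURL)
	if err != nil {
		return false
	}
	// u.Path drops the scheme and hostname, so both request forms below are
	// evaluated against the same rule.
	return strings.HasPrefix(u.Path, "/v1.44/containers/")
}

func main() {
	fmt.Println(isContainerCreate("/v1.44/containers/create"))                      // true
	fmt.Println(isContainerCreate("http://localhost:2375/v1.44/containers/create")) // true
}
```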
Jameson Hyde
ccfe0a41d4 Authz plugin security fixes for 0-length content and path validation
Signed-off-by: Jameson Hyde <jameson.hyde@docker.com>

fix comments

(cherry picked from commit 9659c3a52bac57e615b5fb49b0652baca448643e)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 2ac8a479c5)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-07-17 13:09:46 +02:00
Cory Snider
ed11c9c562 Merge pull request #48146 from austinvazquez/cherry-pick-go-updates-to-25.0
[25.0 backport] update to Go 1.21.12
2024-07-16 14:41:53 -04:00
Paweł Gronowski
d046451b34 update to go1.21.12 [part 2]
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 837289ba62)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-16 15:31:57 +00:00
Paweł Gronowski
e16a25e442 update to go1.21.12
- https://github.com/golang/go/issues?q=milestone%3AGo1.21.12+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.21.11...go1.21.12

These minor releases include 1 security fix following the security policy:

net/http: denial of service due to improper 100-continue handling

The net/http HTTP/1.1 client mishandled the case where a server responds to a request with an "Expect: 100-continue" header with a non-informational (200 or higher) status. This mishandling could leave a client connection in an invalid state, where the next request sent on the connection will fail.

An attacker sending a request to a net/http/httputil.ReverseProxy proxy can exploit this mishandling to cause a denial of service by sending "Expect: 100-continue" requests which elicit a non-informational response from the backend. Each such request leaves the proxy with an invalid connection, and causes one subsequent request using that connection to fail.

Thanks to Geoff Franks for reporting this issue.

This is CVE-2024-24791 and Go issue https://go.dev/issue/67555.

View the release notes for more information:
https://go.dev/doc/devel/release#go1.21.12

**- Description for the changelog**

```markdown changelog
Update Go runtime to 1.21.12
```

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 4d1d7c3ebe)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-16 15:31:33 +00:00
Sebastiaan van Stijn
b1aac1b134 update to go1.21.11
go1.21.11 (released 2024-06-04) includes security fixes to the archive/zip
and net/netip packages, as well as bug fixes to the compiler, the go command,
the runtime, and the os package. See the Go 1.21.11 milestone on our issue
tracker for details;

- https://github.com/golang/go/issues?q=milestone%3AGo1.21.11+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.21.10...go1.21.11

From the security announcement;

We have just released Go versions 1.22.4 and 1.21.11, minor point releases.
These minor releases include 2 security fixes following the security policy:

- archive/zip: mishandling of corrupt central directory record

  The archive/zip package's handling of certain types of invalid zip files
  differed from the behavior of most zip implementations. This misalignment
  could be exploited to create a zip file with contents that vary depending
  on the implementation reading the file. The archive/zip package now rejects
  files containing these errors.

  Thanks to Yufan You for reporting this issue.

  This is CVE-2024-24789 and Go issue https://go.dev/issue/66869.

- net/netip: unexpected behavior from Is methods for IPv4-mapped IPv6 addresses

  The various Is methods (IsPrivate, IsLoopback, etc) did not work as expected
  for IPv4-mapped IPv6 addresses, returning false for addresses which would
  return true in their traditional IPv4 forms.

  Thanks to Enze Wang of Alioth and Jianjun Chen of Zhongguancun Lab
  for reporting this issue.

  This is CVE-2024-24790 and Go issue https://go.dev/issue/67680.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 91e2c29865)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-16 15:30:07 +00:00
Paweł Gronowski
fffbe84ded Makefile: Pass PAGER/GIT_PAGER variable
Allow overriding the PAGER/GIT_PAGER variables inside the container.
Use `cat` as the pager when running in GitHub Actions (to avoid things
like `git diff` stalling the CI).

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 8761bffcaf)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-16 15:29:49 +00:00
Sebastiaan van Stijn
c55eeb3cfa Merge pull request #47987 from vvoland/v25.0-47985
[25.0 backport] builder/mobyexporter: Add missing nil check
2024-06-14 18:33:24 +02:00
Paweł Gronowski
9f6600deed builder/mobyexporter: Add missing nil check
Add a nil check to handle a case where the image config JSON would
deserialize into a nil map.
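
To illustrate the failure mode (a minimal standalone sketch, not the exporter's actual code): unmarshalling a JSON document such as `null` into a map yields a nil map without an error, so the result has to be checked before it is used.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var config map[string]json.RawMessage

	// A JSON document of "null" unmarshals into a nil map without an error.
	if err := json.Unmarshal([]byte(`null`), &config); err != nil {
		panic(err)
	}
	if config == nil {
		// Without a check like this, later code that assumes a usable map
		// (for example, writing to it) would panic.
		fmt.Println("image config deserialized into a nil map")
		return
	}
	fmt.Println(len(config))
}
```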

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 642242a26b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-06-14 16:18:36 +02:00
Sebastiaan van Stijn
f26fd4a73a Merge pull request #47890 from thaJeztah/25.0_backport_platforms_err_handling
[25.0 backport] don't depend on containerd platform.Parse to return a typed error
2024-06-03 17:06:58 +02:00
Sebastiaan van Stijn
70fe516b46 don't depend on containerd platform.Parse to return a typed error
We currently depend on the containerd platform-parsing to return typed
errdefs errors; the new containerd platforms module does not return such
errors, and documents that errors returned should not be used as sentinel
errors; c1438e911a/errors.go (L21-L30)

Let's type these errors ourselves, so that we don't depend on the error types
returned by containerd, and consider any platform string that results in an
error to be an invalid parameter.
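
A rough sketch of that pattern, assuming the containerd platforms package and Moby's errdefs helpers as vendored in this tree (the wrapper function name is illustrative):

```go
import (
	"github.com/containerd/containerd/platforms"
	"github.com/docker/docker/errdefs"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/pkg/errors"
)

// parsePlatform wraps platforms.Parse so that a parse failure surfaces as an
// invalid-parameter error of our own, instead of whatever error type the
// containerd module happens to return.
func parsePlatform(spec string) (ocispec.Platform, error) {
	p, err := platforms.Parse(spec)
	if err != nil {
		return ocispec.Platform{}, errdefs.InvalidParameter(errors.Wrapf(err, "invalid platform %q", spec))
	}
	return p, nil
}
```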

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit cd1ed46d73)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-03 13:05:09 +02:00
Sebastiaan van Stijn
303e26dce7 Merge pull request #47869 from dperny/25.0-47854
[25.0 backport] Fix issue where node promotion could fail
2024-05-29 15:04:14 +02:00
Drew Erny
f7ce828e9e Fix issue where node promotion could fail
If a node is promoted right after another node is demoted, there exists
the possibility of a race, by which the newly promoted manager attempts
to connect to the newly demoted manager for its initial Raft membership.
This connection fails, and the whole swarm Node object exits.

At this point, the daemon nodeRunner sees the exit and restarts the
Node.

However, if the address of the no-longer-manager is recorded in the
nodeRunner's config.joinAddr, the Node again attempts to connect to the
no-longer-manager, and crashes again. This repeats. The solution is to
remove the node entirely and rejoin the Swarm as a new node.

This change erases config.joinAddr from the restart of the nodeRunner,
if the node has previously become Ready. The node becoming Ready
indicates that at some point, it did successfully join the cluster, in
some fashion. If it has successfully joined the cluster, then Swarm has
its own persistent record of known manager addresses. If no joinAddr is
provided, then Swarm will choose from its persisted list of managers to
join, and will join a functioning manager.
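
A heavily simplified sketch of that idea; the types, fields, and callback below are hypothetical stand-ins, not swarmkit's actual API:

```go
type nodeConfig struct {
	joinAddr string
	// ... other swarm join parameters ...
}

type nodeRunner struct {
	config   nodeConfig
	wasReady bool // set once the node has reported Ready at least once
}

// restart is called after the swarm Node object exits unexpectedly. If the
// node has been Ready before, it has already joined the cluster successfully,
// so drop the remembered join address and let swarm pick a manager from its
// own persisted list rather than a possibly-demoted peer.
func (n *nodeRunner) restart(start func(nodeConfig) error) error {
	if n.wasReady {
		n.config.joinAddr = ""
	}
	return start(n.config)
}
```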

Signed-off-by: Drew Erny <derny@mirantis.com>
(cherry picked from commit 16e5c41591)
Signed-off-by: Drew Erny <derny@mirantis.com>
2024-05-24 12:29:33 -05:00
Sebastiaan van Stijn
085fa9bf66 Merge pull request #47808 from vvoland/v25.0-47805
[25.0 backport] update to go1.21.10
2024-05-22 17:38:50 +02:00
Sebastiaan van Stijn
577ca9b076 Merge pull request #47830 from vvoland/v25.0-47749
[25.0 backport] apparmor: Allow confined runc to kill containers
2024-05-15 11:13:28 +02:00
Tomáš Virtus
98ddccbbfe apparmor: Allow confined runc to kill containers
/usr/sbin/runc is confined with "runc" profile[1] introduced in AppArmor
v4.0.0. This change breaks stopping of containers, because the profile
assigned to containers doesn't accept signals from the "runc" peer.
AppArmor >= v4.0.0 is currently part of Ubuntu Mantic (23.10) and later.

In the case of Docker, this regression is hidden by the fact that
dockerd itself sends SIGKILL to the running container after runc fails
to stop it. It is still a regression, because graceful shutdowns of
containers via "docker stop" are no longer possible, as SIGTERM from
runc is not delivered to them. This can be seen in logs from dockerd
when run with debug logging enabled and also from tracing signals with
killsnoop utility from bcc[2] (in bpfcc-tools package in Debian/Ubuntu):

  Test commands:

    root@cloudimg:~# docker run -d --name test redis
    ba04c137827df8468358c274bc719bf7fc291b1ed9acf4aaa128ccc52816fe46
    root@cloudimg:~# docker stop test

  Relevant syslog messages (with wrapped long lines):

    Apr 23 20:45:26 cloudimg kernel: audit:
      type=1400 audit(1713905126.444:253): apparmor="DENIED"
      operation="signal" class="signal" profile="docker-default" pid=9289
      comm="runc" requested_mask="receive" denied_mask="receive"
      signal=kill peer="runc"
    Apr 23 20:45:36 cloudimg dockerd[9030]:
      time="2024-04-23T20:45:36.447016467Z"
      level=warning msg="Container failed to exit within 10s of kill - trying direct SIGKILL"
      container=ba04c137827df8468358c274bc719bf7fc291b1ed9acf4aaa128ccc52816fe46
      error="context deadline exceeded"

  Killsnoop output after "docker stop ...":

    root@cloudimg:~# killsnoop-bpfcc
    TIME      PID      COMM             SIG  TPID     RESULT
    20:51:00  9631     runc             3    9581     -13
    20:51:02  9637     runc             9    9581     -13
    20:51:12  9030     dockerd          9    9581     0

This change extends the docker-default profile with rules that allow
receiving signals from processes that run confined with either runc or
crun profile (crun[4] is an alternative OCI runtime that's also confined
in AppArmor >= v4.0.0, see [1]). It is backward compatible because the
peer value is a regular expression (AARE) so the referenced profile
doesn't have to exist for this profile to successfully compile and load.
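
For illustration only, an approximation of that kind of rule as it would appear in the generated docker-default profile; the exact rule text added by this change may differ:

```go
// Illustrative fragment: the real profile is generated from the Go template
// in profiles/apparmor, and the exact rule text may differ.
const signalRules = `
  # Allow receiving signals from runtimes confined by the "runc" or "crun"
  # profiles. The peer value is an AARE, so these profiles do not have to
  # exist for docker-default to compile and load.
  signal (receive) peer=runc,
  signal (receive) peer=crun,
`
```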

Note that the runc profile has an attachment to /usr/sbin/runc. This is
the path where the runc package in Debian/Ubuntu puts the binary. When
the docker-ce package is installed from the upstream repository[3], runc
is installed as part of the containerd.io package at /usr/bin/runc.
Therefore it's still running unconfined and has no issues sending
signals to containers.

[1] https://gitlab.com/apparmor/apparmor/-/commit/2594d936
[2] https://github.com/iovisor/bcc/blob/master/tools/killsnoop.py
[3] https://download.docker.com/linux/ubuntu
[4] https://github.com/containers/crun

Signed-off-by: Tomáš Virtus <nechtom@gmail.com>
(cherry picked from commit 5ebe2c0d6b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-05-14 11:41:56 +02:00
Paweł Gronowski
03ecc6f5e6 Merge pull request #47753 from austinvazquez/cherry-pick-1ca89d7eae84346a7241f9d7033a7f591ff3a1fa-to-25.0
[25.0 backport] vendor: google.golang.org/protobuf v1.33.0, github.com/golang/protobu…
2024-05-09 13:04:27 +02:00
Paweł Gronowski
637205391b update to go1.21.10
- https://github.com/golang/go/issues?q=milestone%3AGo1.21.10+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.21.9...go1.21.10

These minor releases include 2 security fixes following the security policy:

- cmd/go: arbitrary code execution during build on darwin
On Darwin, building a Go module which contains CGO can trigger arbitrary code execution when using the Apple version of ld, due to usage of the -lto_library flag in a "#cgo LDFLAGS" directive.
Thanks to Juho Forsén of Mattermost for reporting this issue.
This is CVE-2024-24787 and Go issue https://go.dev/issue/67119.

- net: malformed DNS message can cause infinite loop
A malformed DNS message in response to a query can cause the Lookup functions to get stuck in an infinite loop.
Thanks to long-name-let-people-remember-you on GitHub for reporting this issue, and to Mateusz Poliwczak for bringing the issue to our attention.
This is CVE-2024-24788 and Go issue https://go.dev/issue/66754.

View the release notes for more information:
https://go.dev/doc/devel/release#go1.22.3

**- Description for the changelog**

```markdown changelog
Update Go runtime to 1.21.10
```

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 6c97e0e0b5)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-05-08 09:45:39 +02:00
Paweł Gronowski
d16d8bd448 Merge pull request #47703 from vvoland/v25.0-47682
[25.0 backport] ci/validate-pr: Use `::error::` command to print errors
2024-05-07 10:38:43 +02:00
Paweł Gronowski
2ebb5ca1c0 Merge pull request #47759 from austinvazquez/cherry-pick-ab570ab3d62038b3d26f96a9bb585d0b6095b9b4-to-25.0
[25.0 backport] fix: avoid nil dereference on image history Created value
2024-04-25 17:43:08 +02:00
Sebastiaan van Stijn
3d56d734db vendor: google.golang.org/protobuf v1.33.0, github.com/golang/protobuf v1.5.4
full diffs:

- https://github.com/protocolbuffers/protobuf-go/compare/v1.31.0...v1.33.0
- https://github.com/golang/protobuf/compare/v1.5.3...v1.5.4

From the Go security announcement list;

> Version v1.33.0 of the google.golang.org/protobuf module fixes a bug in
> the google.golang.org/protobuf/encoding/protojson package which could cause
> the Unmarshal function to enter an infinite loop when handling some invalid
> inputs.
>
> This condition could only occur when unmarshaling into a message which contains
> a google.protobuf.Any value, or when the UnmarshalOptions.UnmarshalUnknown
> option is set. Unmarshal now correctly returns an error when handling these
> inputs.
>
> This is CVE-2024-24786.

In a follow-up post;

> A small correction: This vulnerability applies when the UnmarshalOptions.DiscardUnknown
> option is set (as well as when unmarshaling into any message which contains a
> google.protobuf.Any). There is no UnmarshalUnknown option.
>
> In addition, version 1.33.0 of google.golang.org/protobuf inadvertently
> introduced an incompatibility with the older github.com/golang/protobuf
> module. (https://github.com/golang/protobuf/issues/1596) Users of the older
> module should update to github.com/golang/protobuf@v1.5.4.

govulncheck results in our code:

    govulncheck ./...
    Scanning your code and 1221 packages across 204 dependent modules for known vulnerabilities...

    === Symbol Results ===

    Vulnerability #1: GO-2024-2611
        Infinite loop in JSON unmarshaling in google.golang.org/protobuf
      More info: https://pkg.go.dev/vuln/GO-2024-2611
      Module: google.golang.org/protobuf
        Found in: google.golang.org/protobuf@v1.31.0
        Fixed in: google.golang.org/protobuf@v1.33.0
        Example traces found:
          #1: daemon/logger/gcplogs/gcplogging.go:154:18: gcplogs.New calls logging.Client.Ping, which eventually calls json.Decoder.Peek
          #2: daemon/logger/gcplogs/gcplogging.go:154:18: gcplogs.New calls logging.Client.Ping, which eventually calls json.Decoder.Read
          #3: daemon/logger/gcplogs/gcplogging.go:154:18: gcplogs.New calls logging.Client.Ping, which eventually calls protojson.Unmarshal

    Your code is affected by 1 vulnerability from 1 module.
    This scan found no other vulnerabilities in packages you import or modules you
    require.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1ca89d7eae)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-04-24 20:24:46 -07:00
Sebastiaan van Stijn
0a2f5085ee vendor: cloud.google.com/go/logging v1.8.1
full diff: https://github.com/googleapis/google-cloud-go/compare/logging/v1.7.0...logging/v1.8.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 10a72f2504)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-04-24 20:24:46 -07:00
Sebastiaan van Stijn
3141ea5c8b vendor: golang.org/x/mod v0.13.0, golang.org/x/tools v0.13.0
full diff:

- https://github.com/golang/mod/compare/v0.11.0...v0.13.0
- https://github.com/golang/tools/compare/v0.10.0...v0.13.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2799417da1)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-04-24 20:24:45 -07:00
Sebastiaan van Stijn
4f25076181 vendor: golang.org/x/sync v0.5.0
full diff: https://github.com/golang/sync/compare/v0.3.0...v0.5.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-04-24 20:24:38 -07:00
Christopher Petito
d93cc7edc0 nil dereference fix on image history Created value
The issue was caused by the changes in https://github.com/moby/moby/pull/45504,
first released in v25.0.0-beta.1.

Signed-off-by: Christopher Petito <47751006+krissetto@users.noreply.github.com>
(cherry picked from commit ab570ab3d6)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-04-23 23:20:30 +00:00
Cory Snider
5beae56515 Merge pull request #47746 from austinvazquez/cherry-pick-d66589496e5ab42d31f3fddaf8075fb37f1b77c6-to-25.0
[25.0 backport] vendor: golang.org/x/net v0.23.0
2024-04-23 16:16:21 -04:00
Sebastiaan van Stijn
ee5909c2d0 vendor: golang.org/x/net v0.23.0
full diff: https://github.com/golang/net/compare/v0.22.0...v0.23.0

Includes a fix for CVE-2023-45288, which is also addressed in go1.22.2
and go1.21.9;

> http2: close connections when receiving too many headers
>
> Maintaining HPACK state requires that we parse and process
> all HEADERS and CONTINUATION frames on a connection.
> When a request's headers exceed MaxHeaderBytes, we don't
> allocate memory to store the excess headers but we do
> parse them. This permits an attacker to cause an HTTP/2
> endpoint to read arbitrary amounts of data, all associated
> with a request which is going to be rejected.
>
> Set a limit on the amount of excess header frames we
> will process before closing a connection.
>
> Thanks to Bartek Nowotarski for reporting this issue.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d66589496e)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-04-23 16:58:23 +00:00
Sebastiaan van Stijn
f37d6f5f48 vendor: golang.org/x/net v0.22.0, golang.org/x/crypto v0.21.0
full diffs; changes relevant to vendored code:

- https://github.com/golang/net/compare/v0.18.0...v0.22.0
    - websocket: add support for dialing with context
    - http2: remove suspicious uint32->v conversion in frame code
    - http2: send an error of FLOW_CONTROL_ERROR when exceed the maximum octets
- https://github.com/golang/crypto/compare/v0.17.0...v0.21.0
    - internal/poly1305: drop Go 1.12 compatibility
    - internal/poly1305: improve sum_ppc64le.s
    - ocsp: don't use iota for externally defined constants

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e1ca74361b)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-04-23 16:50:56 +00:00
Akihiro Suda
fd828b6766 go.mod: golang.org/x/sys v0.18.0
https://github.com/golang/sys/compare/v0.16.0...v0.18.0

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 83cda67f73)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-04-23 16:30:38 +00:00
Sebastiaan van Stijn
7ac688aa0f Merge pull request #47724 from austinvazquez/cherry-pick-aws-sdk-go-v2-update-to-25.0
[25.0 backport] vendor: bump github.com/aws/aws-sdk-go-v2 to v1.24.1
2024-04-22 20:11:32 +02:00
Paweł Gronowski
584a30c772 awslogs: Replace deprecated WithEndpointResolver usage
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 2aa13e950d)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-04-16 20:10:43 -07:00
Paweł Gronowski
60605eb1da vendor: bump github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs to v1.32.0
v1.33.0 is also available, but it would also cause
`github.com/aws/aws-sdk-go-v2` to change from v1.24.1 to v1.25.0

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 70a4a9c969)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-04-16 20:10:31 -07:00
Paweł Gronowski
71b8e0339c vendor: bump github.com/aws/aws-sdk-go-v2 to v1.24.1
In preparation for buildkit v0.13

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 999f90ac1c)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-04-16 20:09:47 -07:00
Paweł Gronowski
08e8912d7c ci/validate-pr: Use ::error:: command to print errors
This will make GitHub render the log line as an error.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit fb92caf2aa)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-04-09 13:54:29 +02:00
Paweł Gronowski
aee8b332bf Merge pull request #47681 from vvoland/v25.0-47423
[25.0 backport] ci: Require changelog description
2024-04-09 13:54:14 +02:00
Sebastiaan van Stijn
12c4e03288 Merge pull request #47697 from vvoland/v25.0-47658
[25.0 backport] Fix cases where we are wrapping a nil error
2024-04-09 13:30:27 +02:00
Brian Goff
e2e670299f Fix cases where we are wrapping a nil error
This was using `errors.Wrap` when there was no error to wrap, even though
we are supposed to be creating a new error.

Found this while investigating some log corruption issues and
unexpectedly getting a nil reader and a nil error from `getTailReader`.
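
The pitfall in its minimal form: `errors.Wrap` from github.com/pkg/errors returns nil when the error it is given is nil, so the intended message is silently lost.

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func main() {
	// Wrapping a nil error silently produces nil...
	err := errors.Wrap(nil, "could not find tail reader")
	fmt.Println(err) // <nil>

	// ...when what was intended here is a brand new error.
	err = errors.New("could not find tail reader")
	fmt.Println(err) // could not find tail reader
}
```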

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 0a48d26fbc)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-04-09 10:20:24 +02:00
Sebastiaan van Stijn
f42f65b464 Merge pull request #47695 from cpuguy83/25_oci_tar_no_platform
[25.0] save: Remove platform from config descriptor
2024-04-09 09:46:07 +02:00
Brian Goff
935787c19c save: Remove platform from config descriptor
bmitch brought up that it's not expected to have a platform
object in the config descriptor.
Also checked with tianon, who agreed: it's not _wrong_, but it is unexpected
and doesn't necessarily make sense to have it there.

Also, while its rejection is technically incorrect, ECR throws an error when
it sees this.
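
For reference, a sketch of what the exported config descriptor looks like with this change (the function and values are illustrative): the `Platform` field is simply left unset.

```go
import (
	"github.com/opencontainers/go-digest"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// configDescriptor builds the descriptor for an image config blob. Platform is
// deliberately left nil: it is unexpected on a config descriptor, and some
// registries (ECR, per the message above) reject it.
func configDescriptor(dgst digest.Digest, size int64) ocispec.Descriptor {
	return ocispec.Descriptor{
		MediaType: ocispec.MediaTypeImageConfig,
		Digest:    dgst,
		Size:      size,
		// Platform is intentionally omitted.
	}
}
```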

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 9160b9fda6)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2024-04-08 17:12:31 +00:00
Paweł Gronowski
bd19301d9e ci: Require changelog description
Any PR that is labeled with any `impact/*` label should have a
description for the changelog and an `area/*` label.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 1d473549e8)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-04-05 12:08:32 +02:00
Sebastiaan van Stijn
79c31c12fc Merge pull request #47672 from vvoland/v25.0-47670
[25.0 backport] update to go1.21.9
2024-04-04 14:30:45 +02:00
Paweł Gronowski
50bd133ad3 update to go1.21.9
go1.21.9 (released 2024-04-03) includes a security fix to the net/http
package, as well as bug fixes to the linker, and the go/types and
net/http packages. See the [Go 1.21.9 milestone](https://github.com/golang/go/issues?q=milestone%3AGo1.21.9+label%3ACherryPickApproved)
for more details.

These minor releases include 1 security fix following the security policy:

- http2: close connections when receiving too many headers

Maintaining HPACK state requires that we parse and process all HEADERS
and CONTINUATION frames on a connection. When a request's headers exceed
MaxHeaderBytes, we don't allocate memory to store the excess headers but
we do parse them. This permits an attacker to cause an HTTP/2 endpoint
to read arbitrary amounts of header data, all associated with a request
which is going to be rejected. These headers can include Huffman-encoded
data which is significantly more expensive for the receiver to decode
than for an attacker to send.

Set a limit on the amount of excess header frames we will process before
closing a connection.

Thanks to Bartek Nowotarski (https://nowotarski.info/) for reporting this issue.

This is CVE-2023-45288 and Go issue https://go.dev/issue/65051.

View the release notes for more information:
https://go.dev/doc/devel/release#go1.22.2

- https://github.com/golang/go/issues?q=milestone%3AGo1.21.9+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.21.8...go1.21.9

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 329d403e20)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-04-04 10:17:28 +02:00
Paweł Gronowski
e63daec867 Merge pull request #47589 from vvoland/v25.0-47538
[25.0 backport] libnet: Don't forward to upstream resolvers on internal nw
2024-03-19 15:12:29 +01:00
Paweł Gronowski
817bccb1c6 Merge pull request #47588 from vvoland/v25.0-47558
[25.0 backport] plugin: fix mounting /etc/hosts when running in UserNS
2024-03-19 15:12:15 +01:00
Bjorn Neergaard
2a0601e84e Merge pull request #47587 from vvoland/v25.0-47559
[25.0 backport] rootless: fix `open /etc/docker/plugins: permission denied`
2024-03-19 07:24:36 -06:00
Sebastiaan van Stijn
9df9ccc06f Merge pull request #47586 from vvoland/v25.0-47569
[25.0 backport] Makefile: generate-files: fix check for empty TMP_OUT
2024-03-19 12:45:46 +01:00
Albin Kerouanton
a987bc5ad0 libnet: Don't forward to upstream resolvers on internal nw
Commit cbc2a71c2 makes the `connect` syscall fail fast when a container is
only attached to an internal network. Thanks to that, if such a
container tries to resolve an "external" domain, the embedded resolver
returns an error immediately instead of waiting for a timeout.

This commit makes sure the embedded resolver doesn't even try to forward
to upstream servers.

Co-authored-by: Albin Kerouanton <albinker@gmail.com>
Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 790c3039d0)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-19 11:21:46 +00:00
Rob Murray
20c205fd3a Environment variable to override resolv.conf path.
If env var DOCKER_TEST_RESOLV_CONF_PATH is set, treat it as an override
for the 'resolv.conf' path.

Added as part of resolv.conf refactoring, but needed by back-ported test
TestInternalNetworkDNS.
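
A minimal sketch of that override (the helper name is illustrative; only the environment variable name comes from this change):

```go
import "os"

// resolvConfPath returns the path of the host's resolv.conf, honouring the
// DOCKER_TEST_RESOLV_CONF_PATH override used by the back-ported test.
func resolvConfPath() string {
	if p := os.Getenv("DOCKER_TEST_RESOLV_CONF_PATH"); p != "" {
		return p
	}
	return "/etc/resolv.conf"
}
```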

Signed-off-by: Rob Murray <rob.murray@docker.com>
2024-03-19 11:21:46 +00:00
Sebastiaan van Stijn
4be97233cc daemon: move getUnprivilegedMountFlags to internal package
This code is currently only used in the daemon, but is also needed in other
places. We should consider moving this code to github.com/moby/sys, so that
BuildKit can also use the same implementation instead of maintaining a fork;
moving it to internal allows us to reuse this code inside the repository, but
does not allow external consumers to depend on it (which we don't want as
it's not a permanent location).

As our code only uses this in linux files, I did not add a stub for other
platforms (but we may decide to do that in the moby/sys repository).
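
For context, a sketch approximating what such a helper does on Linux: it collects the flags already set on the mount, which an unprivileged remount must preserve.

```go
import "golang.org/x/sys/unix"

// getUnprivilegedMountFlags returns the mount flags set on path that a remount
// inside a user namespace is not allowed to clear, so callers can preserve
// them explicitly.
func getUnprivilegedMountFlags(path string) ([]string, error) {
	var statfs unix.Statfs_t
	if err := unix.Statfs(path, &statfs); err != nil {
		return nil, err
	}
	lockedFlags := map[uint64]string{
		unix.MS_RDONLY:     "ro",
		unix.MS_NODEV:      "nodev",
		unix.MS_NOEXEC:     "noexec",
		unix.MS_NOSUID:     "nosuid",
		unix.MS_NOATIME:    "noatime",
		unix.MS_RELATIME:   "relatime",
		unix.MS_NODIRATIME: "nodiratime",
	}
	var flags []string
	for mask, flag := range lockedFlags {
		if uint64(statfs.Flags)&mask == mask {
			flags = append(flags, flag)
		}
	}
	return flags, nil
}
```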

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7b414f5703)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-19 10:46:49 +01:00
Akihiro Suda
7ed7e6caf6 plugin: fix mounting /etc/hosts when running in UserNS
Fix `error mounting "/etc/hosts" to rootfs at "/etc/hosts": mount
/etc/hosts:/etc/hosts (via /proc/self/fd/6), flags: 0x5021: operation
not permitted`.

This error was introduced in 7d08d84b03
(`dockerd-rootless.sh: set rootlesskit --state-dir=DIR`), which changed
the filesystem of the state dir from /tmp to /run (in a typical setup).

Fix issue 47248

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 762ec4b60c)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-19 10:46:48 +01:00
Akihiro Suda
81ad7062f0 rootless: fix open /etc/docker/plugins: permission denied
Fix issue 47436

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit d742659877)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-19 10:12:38 +01:00
Sebastiaan van Stijn
02d4ee3f9a Makefile: generate-files: fix check for empty TMP_OUT
commit c655b7dc78 added a check to make sure
the TMP_OUT variable was not set to an empty value, as such a situation would
perform an `rm -rf /**` during cleanup.

However, it was a bit too eager, because Makefile conditionals (`ifeq`) are
evaluated when parsing the Makefile, which happens _before_ the make target
is executed.

As a result `$@_TMP_OUT` was always empty when the `ifeq` was evaluated,
making it not possible to execute the `generate-files` target.

This patch changes the check to use a shell command to evaluate if the var
is set to an empty value.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 25c9e6e8df)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-19 10:11:47 +01:00
Sebastiaan van Stijn
5901652edd Merge pull request #47533 from vvoland/v25.0-47530
[25.0 backport] volume: Don't decrement refcount below 0
2024-03-08 14:00:37 +01:00
Paweł Gronowski
478f6b097d volume: Don't decrement refcount below 0
With both rootless and live restore enabled, there's some race condition
which causes the container to be `Unmount`ed before the refcount is
restored.

This makes sure we don't underflow the refcount (uint64) when
decrementing it.

The root cause of this race condition still needs to be investigated and
fixed, but at least this de-flakes `TestLiveRestore`.
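
The guard itself is tiny; a sketch with hypothetical receiver and field names:

```go
type volumeWrapper struct {
	refCount uint64
}

// decrementRefCount lowers the reference count but never wraps it around:
// with rootless + live restore the Unmount can race the refcount restore, so
// the count may legitimately already be zero here.
func (v *volumeWrapper) decrementRefCount() {
	if v.refCount == 0 {
		return
	}
	v.refCount--
}
```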

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 294fc9762e)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-08 12:51:45 +01:00
Bjorn Neergaard
98b171fd4d Merge pull request #47527 from vvoland/v25.0-47523
[25.0 backport] builder-next: fix missing lock in ensurelayer
2024-03-07 07:08:59 -07:00
Tonis Tiigi
d250e13945 builder-next: fix missing lock in ensurelayer
When this was called concurrently from the moby image
exporter there could be a data race where a layer was
written to the refs map when it was already there.

In that case the reference count got mixed up and on
release only one of these layers was actually released.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 37545cc644)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-07 12:18:07 +01:00
Paweł Gronowski
061aa95809 Merge pull request #47513 from vvoland/v25.0-47498
[25.0 backport] daemon: overlay2: remove world writable permission from the lower file
2024-03-06 14:58:50 +01:00
Jaroslav Jindrak
d0d85f6438 daemon: overlay2: remove world writable permission from the lower file
In de2447c, the creation of the 'lower' file was changed from using
os.Create to using ioutils.AtomicWriteFile, which ignores the system's
umask. This means that even though the requested permission in the
source code was always 0666, it was 0644 on systems with default
umask of 0022 prior to de2447c, so the move to AtomicWriteFile potentially
increased the file's permissions.

This is not a security issue because the parent directory does not
allow writes into the file, but it can confuse security scanners on
Linux-based systems into giving false positives.
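
A sketch of the resulting call, assuming the fix simply requests the tighter mode directly (the wrapper name is illustrative): since `ioutils.AtomicWriteFile` ignores the umask, the desired permissions have to be passed explicitly.

```go
import (
	"path/filepath"

	"github.com/docker/docker/pkg/ioutils"
)

// writeLowerFile records the parent layers of an overlay2 layer.
// AtomicWriteFile does not apply the process umask, so request 0644 directly
// instead of relying on a 0666 request being masked down.
func writeLowerFile(layerDir string, lower []byte) error {
	return ioutils.AtomicWriteFile(filepath.Join(layerDir, "lower"), lower, 0o644)
}
```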

Signed-off-by: Jaroslav Jindrak <dzejrou@gmail.com>
(cherry picked from commit cadb124ab6)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-06 13:11:41 +01:00
Paweł Gronowski
5d6679345c Merge pull request #47508 from vvoland/v25.0-47504
[25.0 backport] update RootlessKit to 2.0.2
2024-03-06 12:55:15 +01:00
Paweł Gronowski
ef1fa235cd Merge pull request #47510 from akerouanton/25.0-47441_mac_addr_config_migration
[25.0 backport] Don't create endpoint config for MAC addr config migration
2024-03-06 12:15:50 +01:00
Rob Murray
0451b287dc Don't create endpoint config for MAC addr config migration
In a container-create API request, HostConfig.NetworkMode (the identity
of the "main" network) may be a name, id or short-id.

The configuration for that network, including preferred IP address etc,
may be keyed on network name or id - it need not match the NetworkMode.

So, when migrating the old container-wide MAC address to the new
per-endpoint field - it is not safe to create a new EndpointSettings
entry unless there is no possibility that it will duplicate settings
intended for the same network (because one of the duplicates will be
discarded later, dropping the settings it contains).

This change introduces a new API restriction: if the deprecated container-wide
field is used in the new API, and EndpointsConfig is provided for any network,
the NetworkMode and the key under which the EndpointsConfig is stored must be
the same - no mixing of ids and names.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit a580544d82)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-06 11:20:10 +01:00
Akihiro Suda
d27fe2558d dockerd-rootless-setuptool.sh: check RootlessKit functionality
RootlessKit will print hints if something is still unsatisfied.

e.g., `kernel.apparmor_restrict_unprivileged_userns` constraint
rootless-containers/rootlesskit@33c3e7ca6c

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit b32cfc3b3a)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-06 10:38:17 +01:00
Akihiro Suda
77de535364 Dockerfile: update RootlessKit to v2.0.2
https://github.com/rootless-containers/rootlesskit/compare/v2.0.1...v2.0.2

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 49fd8df9b9)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-06 10:38:15 +01:00
Paweł Gronowski
9e526bc394 Merge pull request #47503 from vvoland/v25.0-47502
[25.0 backport] update to go1.21.8
2024-03-05 21:58:50 +01:00
Paweł Gronowski
2d347024d1 update to go1.21.8
go1.21.8 (released 2024-03-05) includes 5 security fixes

- crypto/x509: Verify panics on certificates with an unknown public key algorithm (CVE-2024-24783, https://go.dev/issue/65390)
- net/http: memory exhaustion in Request.ParseMultipartForm (CVE-2023-45290, https://go.dev/issue/65383)
- net/http, net/http/cookiejar: incorrect forwarding of sensitive headers and cookies on HTTP redirect (CVE-2023-45289, https://go.dev/issue/65065)
- html/template: errors returned from MarshalJSON methods may break template escaping (CVE-2024-24785, https://go.dev/issue/65697)
- net/mail: comments in display names are incorrectly handled (CVE-2024-24784, https://go.dev/issue/65083)

View the release notes for more information:
https://go.dev/doc/devel/release#go1.22.1

- https://github.com/golang/go/issues?q=milestone%3AGo1.21.8+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.21.7...go1.21.8

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 57b7ffa7f6)
2024-03-05 19:23:00 +01:00
Albin Kerouanton
51e876cd96 Merge pull request #47493 from akerouanton/25.0-47370_windows_natnw_dns_test
[25.0 backport] Test DNS on Windows 'nat' networks
2024-03-01 17:02:36 +01:00
Sebastiaan van Stijn
3fa0cedce3 Merge pull request #47484 from akerouanton/25.0-c8d-pull-fslayer
[25.0 backport] c8d/pull: Progress fixes
2024-03-01 14:06:52 +01:00
Sebastiaan van Stijn
4e7d8531ed Merge pull request #47491 from akerouanton/25.0-c8d-skip-last-windows-tests
[25.0 backport] c8d/windows: Temporarily skip two failing tests
2024-03-01 14:04:41 +01:00
Rob Murray
f66b5f642e Test DNS on Windows 'nat' networks
Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 9083c2f10d)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 13:51:20 +01:00
Albin Kerouanton
41fde13f64 Merge pull request #47490 from akerouanton/25.0-47370_windows_nat_network_dns
[25.0 backport] Set up DNS names for Windows default network
2024-03-01 13:13:02 +01:00
Sebastiaan van Stijn
0db1c6d8bb Merge pull request #47488 from akerouanton/25.0-run_macvlan_ipvlan_tests
[25.0 backport] Run the macvlan/ipvlan integration tests
2024-03-01 12:59:32 +01:00
Paweł Gronowski
33a29c0135 Merge pull request #47489 from akerouanton/25.0-ci-codecov-token
[25.0 backport] ci: set codecov token
2024-03-01 12:44:46 +01:00
Albin Kerouanton
30545de83e Merge pull request #47393 from vvoland/rro-backwards-compatible-25
[25.0 backport] api/pre-1.44: Default `ReadOnlyNonRecursive` to true
2024-03-01 12:15:21 +01:00
Paweł Gronowski
fa4ea308f0 c8d/windows: Temporarily skip two failing tests
They're failing the CI and we have a tracking ticket: #47107

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 44167988c3)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 12:07:21 +01:00
Rob Murray
d66e0fb7b1 Set up DNS names for Windows default network
DNS names were only set up for user-defined networks. On Linux, none
of the built-in networks (bridge/host/none) have built-in DNS, so they
don't need DNS names.

But, on Windows, the default network is "nat" and it does need the DNS
names.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 443f56efb0)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 12:03:16 +01:00
Sebastiaan van Stijn
30ecc0ea8a Merge pull request #47482 from akerouanton/25.0-swarm-ipam-validation
[25.0 backport] Don't enforce new validation rules for existing networks
2024-03-01 11:58:18 +01:00
Sebastiaan van Stijn
06767446fe Merge pull request #47481 from akerouanton/25.0-internal-bridge
[25.0 backport] Make 'internal' bridge networks accessible from host
2024-03-01 11:57:38 +01:00
Sebastiaan van Stijn
7048a63686 Merge pull request #47486 from akerouanton/25.0-go-1.21.7
[25.0 backport] update to go1.21.7
2024-03-01 11:56:23 +01:00
Paweł Gronowski
81fb7f9986 Merge pull request #47487 from akerouanton/25.0-integration-testdaemonproxy-reset-otel
[25.0 backport] integration: Reset `OTEL_EXPORTER_OTLP_ENDPOINT` for sub-daemons
2024-03-01 11:46:42 +01:00
Sebastiaan van Stijn
b77bb69f87 Merge pull request #47483 from akerouanton/25.0-best-effort-xattrs-classic-builder
[25.0 backport] builder/dockerfile: ADD with best-effort xattrs
2024-03-01 11:03:34 +01:00
CrazyMax
7a4abb8c77 ci: set codecov token
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 38827ba290)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:38:53 +01:00
Rob Murray
81a83f0544 Simplify macvlan/ipvlan integration test structure
Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 9faf4855d5)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:34:52 +01:00
Rob Murray
abcd6f8a46 Run the macvlan/ipvlan integration tests
The problem was accidentally introduced in:
  e8dc902781

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 4eb95d01bc)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:34:52 +01:00
Paweł Gronowski
f7be6dcba6 integration: Reset OTEL_EXPORTER_OTLP_ENDPOINT for sub-daemons
When creating a new daemon in the `TestDaemonProxy`, reset the
`OTEL_EXPORTER_OTLP_ENDPOINT` to an empty value to disable OTEL
collection to avoid it hitting the proxy.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 5fe96e234d)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:30:13 +01:00
Sebastiaan van Stijn
10609544e5 update to go1.21.7
go1.21.7 (released 2024-02-06) includes fixes to the compiler, the go command,
the runtime, and the crypto/x509 package. See the Go 1.21.7 milestone on our
issue tracker for details:

- https://github.com/golang/go/issues?q=milestone%3AGo1.21.7+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.21.6...go1.21.7

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7c2975d2df)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:20:49 +01:00
Paweł Gronowski
be59afce2d c8d/pull: Output truncated id for Pulling fs layer
All other progress updates are emitted with a truncated id.

```diff
$ docker pull --platform linux/amd64 alpine
Using default tag: latest
latest: Pulling from library/alpine
-sha256:4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8: Pulling fs layer
+4abcf2066143: Download complete
Digest: sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b
Status: Image is up to date for alpine:latest
docker.io/library/alpine:latest
```

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 16aa7dd67f)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:10:04 +01:00
Paweł Gronowski
97951c39fb c8d/pull: Don't emit Downloading with 0 progress
To align with the graphdrivers' behavior and avoid sending unnecessary
progress messages.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 14df52b709)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:09:41 +01:00
Paweł Gronowski
2001813571 c8d/pull: Emit Pulling fs layer
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit ff5f780f2b)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:09:37 +01:00
Paweł Gronowski
8e3bcf1974 pkg/streamformatter: Make progressOutput concurrency safe
Sync access to the underlying `io.Writer` with a mutex.
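
A minimal sketch of that synchronization, simplified relative to the real progressOutput type:

```go
import (
	"io"
	"sync"
)

type progressOutput struct {
	mu  sync.Mutex
	out io.Writer
}

// writeProgress serializes writes so that concurrent progress updates cannot
// interleave their bytes on the shared writer.
func (p *progressOutput) writeProgress(line []byte) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	_, err := p.out.Write(line)
	return err
}
```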

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 5689dabfb3)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:09:32 +01:00
Cory Snider
27f36f42a4 builder/dockerfile: ADD with best-effort xattrs
Archives being unpacked by Dockerfiles may have been created on other
OSes with different conventions and semantics for xattrs, making them
impossible to apply when extracting. Restore the old best-effort xattr
behaviour users have come to depend on in the classic builder.

The (archive.Archiver).UntarPath function does not allow the options
passed to Untar to be customized. It also happens to be a trivial
wrapper around the Untar function. Inline the function body and add the
option.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 5bcd2f6860)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:01:09 +01:00
Rob Murray
1ae019fca2 Don't enforce new validation rules for existing networks
Non-swarm networks created before network-creation-time validation
was added in 25.0.0 continued working, because the checks are not
re-run.

But, swarm creates networks when needed (with 'agent=true'), to
ensure they exist on each agent - ignoring the NetworkNameError
that says the network already existed.

By ignoring validation errors on creation of a network with
agent=true, pre-existing swarm networks with IPAM config that would
fail the new checks will continue to work too.

New swarm (overlay) networks are still validated, because they are
initially created with 'agent=false'.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 571af915d5)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 09:54:46 +01:00
Rob Murray
c761353e7c Make 'internal' bridge networks accessible from host
Prior to release 25.0.0, the bridge in an internal network was assigned
an IP address - making the internal network accessible from the host,
giving containers on the network access to anything listening on the
bridge's address (or INADDR_ANY on the host).

This change restores that behaviour. It does not restore the default
route that was configured in the container, because packets sent outside
the internal network's subnet have always been dropped. So, a 'connect()'
to an address outside the subnet will still fail fast.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 419f5a6372)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 09:29:41 +01:00
Paweł Gronowski
00b2e1072b Merge pull request #47476 from vvoland/ci-report-timeout-25
[25.0 backport] ci: Update `teststat` to v0.1.25
2024-02-29 20:40:06 +01:00
Paweł Gronowski
10bc347b03 ci: Update teststat to v0.1.25
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit fc0e5401f2)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-29 16:01:20 +01:00
Sebastiaan van Stijn
6d675b429e Merge pull request #47471 from vvoland/ci-reports-better-find-25
[25.0 backport] ci: Make `find` for test reports more specific
2024-02-29 15:56:26 +01:00
Paweł Gronowski
9f1b47c597 Merge pull request #47470 from neersighted/backport/47440/25.0
[25.0 backport] client: fix connection-errors being shadowed by API version errors
2024-02-29 15:53:41 +01:00
Sebastiaan van Stijn
94137f6df5 client: fix connection-errors being shadowed by API version mismatch errors
Commit e6907243af applied a fix for situations
where the client was configured with API-version negotiation, but did not yet
negotiate a version.

However, the checkVersion() function that was implemented copied the semantics
of cli.NegotiateAPIVersion, which ignored connection failures with the
assumption that connection errors would still surface further down.

However, when using the result of a failed negotiation for NewVersionError,
an API version mismatch error would be produced, masking the actual connection
error.

This patch changes the signature of checkVersion to return unexpected errors,
including failures to connect to the API.

Before this patch:

    docker -H unix:///no/such/socket.sock secret ls
    "secret list" requires API version 1.25, but the Docker daemon API version is 1.24

With this patch applied:

    docker -H unix:///no/such/socket.sock secret ls
    Cannot connect to the Docker daemon at unix:///no/such/socket.sock. Is the docker daemon running?

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6aea26b431)
Conflicts: client/image_list.go
    client/image_list_test.go
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
2024-02-29 06:02:06 -07:00
Paweł Gronowski
dd5faa9d4f ci: Make find for test reports more specific
Don't use all `*.json` files blindly, take only these that are likely to
be reports from go test.
Also, use `find ... -exec` instead of piping results to `xargs`.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit e4de4dea5c)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-29 10:07:44 +01:00
Sebastiaan van Stijn
012bfd33e5 client: doRequest: make sure we return a connection-error
This function has various errors that are returned when failing to make a
connection (due to permission issues, TLS mis-configuration, or failing to
resolve the TCP address).

The errConnectionFailed error is currently used as a special case when
processing Ping responses. The current code did not consistently treat
connection errors, and because of that could either absorb the error,
or process the empty response.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 913478b428)
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
2024-02-29 01:17:13 -07:00
Sebastiaan van Stijn
3ec1946ce1 client: NegotiateAPIVersion: do not ignore (connection) errors from Ping
NegotiateAPIVersion was ignoring errors returned by Ping. The intent here
was to handle API responses from a daemon that may be in an unhealthy state,
however this case is already handled by Ping itself.

Ping only returns an error when either failing to connect to the API (daemon
not running or permissions errors), or when failing to parse the API response.

Neither of those should be ignored in this code, or considered a successful
"ping", so update the code to return

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 901b90593d)
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
2024-02-29 01:17:13 -07:00
Sebastiaan van Stijn
200a2c3576 client: fix TestPingWithError
This test was added in 27ef09a46f, which changed
the Ping handling to ignore internal server errors. That case is tested in
TestPingFail, which verifies that we accept the Ping response if a 500
status code was received.

The TestPingWithError test was added to verify behavior if a protocol
(connection) error occurred; however, the mock client returned both a
response and an error. The error returned would only happen if a connection
error occurred, which means that the server would not provide a reply.

Running the test also shows that returning a response is unexpected, and
ignored:

    === RUN   TestPingWithError
    2024/02/23 14:16:49 RoundTripper returned a response & error; ignoring response
    2024/02/23 14:16:49 RoundTripper returned a response & error; ignoring response
    --- PASS: TestPingWithError (0.00s)
    PASS

This patch updates the test to remove the response.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 349abc64ed)
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
2024-02-29 01:17:09 -07:00
Paweł Gronowski
cb66214dfd Merge pull request #47466 from huang-jl/25.0_backport_fix_restore_digest
[25.0 backport] libcontainerd: change the digest used when restoring
2024-02-28 17:42:18 +01:00
huang-jl
70c05fe10c libcontainerd: change the digest used when restoring
In the current implementation of Checkpoint/Restore (C/R) in Docker, the
checkpoint is written to the content store. However, when restoring,
libcontainerd uses .Digest().Encoded(), which strips the algorithm
information, leading to an error.
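
The difference, shown with go-digest: `Encoded()` drops the algorithm prefix that the content-store key needs, while `String()` keeps it.

```go
package main

import (
	"fmt"

	"github.com/opencontainers/go-digest"
)

func main() {
	d := digest.FromString("checkpoint data")
	fmt.Println(d.String())  // sha256:<hex> - keeps the algorithm
	fmt.Println(d.Encoded()) // <hex> only - the algorithm is lost
}
```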

Signed-off-by: huang-jl <1046678590@qq.com>
(cherry picked from commit da643c0b8a)
Signed-off-by: huang-jl <1046678590@qq.com>
2024-02-28 23:27:17 +08:00
Paweł Gronowski
e85cef89fa api/pre-1.44: Default ReadOnlyNonRecursive to true
Keep the same behavior for older clients. Otherwise the client can't opt
out (because `ReadOnlyNonRecursive` is unsupported before API 1.44).
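
A sketch of that version gate, assuming the field names from the 25.0 API types (the function itself is illustrative):

```go
import (
	"github.com/docker/docker/api/types/mount"
	"github.com/docker/docker/api/types/versions"
)

// applyPre144MountDefaults keeps the old behaviour for clients older than API
// 1.44, which cannot set ReadOnlyNonRecursive themselves.
func applyPre144MountDefaults(apiVersion string, m *mount.Mount) {
	if versions.LessThan(apiVersion, "1.44") && m.ReadOnly && m.BindOptions != nil {
		m.BindOptions.ReadOnlyNonRecursive = true
	}
}
```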

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 432390320e)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-28 10:00:22 +01:00
Paweł Gronowski
a72294a668 mounts/validate: Don't check source exists with CreateMountpoint
Don't error out when the mount source doesn't exist and the mount has the
`CreateMountpoint` option enabled.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 05b883bdc8)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-27 23:31:36 +01:00
Paweł Gronowski
9ee331235a integration: Add container.Output utility
Extracted from bfb810445c

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-27 23:31:35 +01:00
Bjorn Neergaard
6fb71a9764 Merge pull request #47451 from neersighted/image_created_omitempty_25.0
[25.0 backport] api: omit missing Created field from ImageInspect response
2024-02-26 15:09:06 -07:00
Bjorn Neergaard
5d9e13bc84 api: omit missing Created field from ImageInspect response
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
2024-02-26 10:44:35 -07:00
Bjorn Neergaard
36d02bf488 Merge pull request #47387 from neersighted/backport/47374/25.0
[25.0 backport] Set `Created` to `0001-01-01T00:00:00Z` on older API versions
2024-02-14 11:43:13 -07:00
Paweł Gronowski
bb66c3ca04 api/history: Mention empty Created
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 903412d0fc)
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
2024-02-14 09:11:28 -07:00
Tianon Gravi
fa3a64f2bc Set Created to 0001-01-01T00:00:00Z on older API versions
This matches the prior behavior before 2a6ff3c24f.

This also updates the Swagger documentation for the current version to note that the field might be the empty string and what that means.
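
For reference, that string is just Go's zero `time.Time` rendered as RFC 3339:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	var created time.Time // zero value when the image has no Created timestamp
	fmt.Println(created.Format(time.RFC3339)) // 0001-01-01T00:00:00Z
}
```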

Signed-off-by: Tianon Gravi <admwiggin@gmail.com>
(cherry picked from commit b4fbe226e8)
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
2024-02-14 09:11:21 -07:00
Sebastiaan van Stijn
f417435e5f Merge pull request #47348 from rumpl/25.0_backport-history-config
[25.0 backport]  c8d: Use the same logic to get the present images
2024-02-06 19:43:34 +01:00
Djordje Lukic
acd023d42b c8d: Use the same logic to get the present images
Inspect and history used two different ways to find the present images.
This made history fail in some cases where image inspect would work (if
a configuration of a manifest wasn't found in the content store).

With this change we now use the same logic for both inspect and history.

Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
2024-02-06 17:17:51 +01:00
Sebastiaan van Stijn
7a075cacf9 Merge pull request #47344 from thaJeztah/25.0_backport_seccomp_updates
[25.0 backport] profiles/seccomp: add syscalls for kernel v5.17 - v6.6, match containerd's profile
2024-02-06 16:46:41 +01:00
Sebastiaan van Stijn
aff7177ee7 Merge pull request #47337 from vvoland/cache-fix-older-windows-25
[25.0 backport] image/cache: Ignore Build and Revision on Windows
2024-02-06 16:18:02 +01:00
Sebastiaan van Stijn
ed7c26339e seccomp: add futex_wake syscall (kernel v6.7, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: 9f6c532f59

    futex: Add sys_futex_wake()

    To complement sys_futex_waitv() add sys_futex_wake(). This syscall
    implements what was previously known as FUTEX_WAKE_BITSET except it
    uses 'unsigned long' for the bitmask and takes FUTEX2 flags.

    The 'unsigned long' allows FUTEX2_SIZE_U64 on 64bit platforms.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d69729e053)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:25:59 +01:00
Sebastiaan van Stijn
74e3b4fb2e seccomp: add futex_wait syscall (kernel v6.7, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: cb8c4312af

    futex: Add sys_futex_wait()

    To complement sys_futex_waitv()/wake(), add sys_futex_wait(). This
    syscall implements what was previously known as FUTEX_WAIT_BITSET
    except it uses 'unsigned long' for the value and bitmask arguments,
    takes timespec and clockid_t arguments for the absolute timeout and
    uses FUTEX2 flags.

    The 'unsigned long' allows FUTEX2_SIZE_U64 on 64bit platforms.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 10d344d176)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:25:59 +01:00
Sebastiaan van Stijn
4cc0416534 seccomp: add futex_requeue syscall (kernel v6.7, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: 0f4b5f9722

    futex: Add sys_futex_requeue()

    Finish off the 'simple' futex2 syscall group by adding
    sys_futex_requeue(). Unlike sys_futex_{wait,wake}() its arguments are
    too numerous to fit into a regular syscall. As such, use struct
    futex_waitv to pass the 'source' and 'destination' futexes to the
    syscall.

    This syscall implements what was previously known as FUTEX_CMP_REQUEUE
    and uses {val, uaddr, flags} for source and {uaddr, flags} for
    destination.

    This design explicitly allows requeueing between different types of
    futex by having a different flags word per uaddr.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit df57a080b6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:25:59 +01:00
Sebastiaan van Stijn
f9f9e7ff9a seccomp: add map_shadow_stack syscall (kernel v6.6, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: c35559f94e

    x86/shstk: Introduce map_shadow_stack syscall

    When operating with shadow stacks enabled, the kernel will automatically
    allocate shadow stacks for new threads, however in some cases userspace
    will need additional shadow stacks. The main example of this is the
    ucontext family of functions, which require userspace allocating and
    pivoting to userspace managed stacks.

    Unlike most other user memory permissions, shadow stacks need to be
    provisioned with special data in order to be useful. They need to be setup
    with a restore token so that userspace can pivot to them via the RSTORSSP
    instruction. But, the security design of shadow stacks is that they
    should not be written to except in limited circumstances. This presents a
    problem for userspace, as to how userspace can provision this special
    data, without allowing for the shadow stack to be generally writable.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8826f402f9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:25:59 +01:00
Sebastiaan van Stijn
5fb4eb941d seccomp: add fchmodat2 syscall (kernel v6.6, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: 09da082b07

    fs: Add fchmodat2()

    On the userspace side fchmodat(3) is implemented as a wrapper
    function which implements the POSIX-specified interface. This
    interface differs from the underlying kernel system call, which does not
    have a flags argument. Most implementations require procfs [1][2].

    There doesn't appear to be a good userspace workaround for this issue
    but the implementation in the kernel is pretty straight-forward.

    The new fchmodat2() syscall allows to pass the AT_SYMLINK_NOFOLLOW flag,
    unlike existing fchmodat.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6f242f1a28)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:25:59 +01:00
Sebastiaan van Stijn
67e9aa6d4d seccomp: add cachestat syscall (kernel v6.5, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: cf264e1329

    NAME
        cachestat - query the page cache statistics of a file.

    SYNOPSIS
        #include <sys/mman.h>

        struct cachestat_range {
            __u64 off;
            __u64 len;
        };

        struct cachestat {
            __u64 nr_cache;
            __u64 nr_dirty;
            __u64 nr_writeback;
            __u64 nr_evicted;
            __u64 nr_recently_evicted;
        };

        int cachestat(unsigned int fd, struct cachestat_range *cstat_range,
            struct cachestat *cstat, unsigned int flags);

    DESCRIPTION
        cachestat() queries the number of cached pages, number of dirty
        pages, number of pages marked for writeback, number of evicted
        pages, number of recently evicted pages, in the bytes range given by
        `off` and `len`.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4d0d5ee10d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:25:58 +01:00
Sebastiaan van Stijn
61b82be580 seccomp: add set_mempolicy_home_node syscall (kernel v5.17, libseccomp v2.5.4)
This syscall is gated by CAP_SYS_NICE, matching the profile in containerd.

containerd: a6e52c74fa
libseccomp: d83cb7ac25
kernel: c6018b4b25

    mm/mempolicy: add set_mempolicy_home_node syscall
    This syscall can be used to set a home node for the MPOL_BIND and
    MPOL_PREFERRED_MANY memory policy.  Users should use this syscall after
    setting up a memory policy for the specified range as shown below.

      mbind(p, nr_pages * page_size, MPOL_BIND, new_nodes->maskp,
            new_nodes->size + 1, 0);
      sys_set_mempolicy_home_node((unsigned long)p, nr_pages * page_size,
                    home_node, 0);

    The syscall allows specifying a home node/preferred node from which
    kernel will fulfill memory allocation requests first.
    ...

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1251982cf7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:25:56 +01:00
Paweł Gronowski
0227d95f99 image/cache: Use Platform from ocispec
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 2c01d53d96)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-06 14:29:54 +01:00
Paweł Gronowski
fa9c5c55e1 image/cache: Ignore Build and Revision on Windows
The compatibility depends on whether `hyperv` or `process` container
isolation is used.
This fixes the cache not being used when building images based on older
Windows versions on a newer Windows host.
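
A sketch of that comparison (the helper is illustrative): only the `major.minor` part of an OS version such as `10.0.17763.4737` is compared.

```go
import "strings"

// sameOSVersionPrefix reports whether two Windows OS versions agree on the
// major.minor components, ignoring Build and Revision.
func sameOSVersionPrefix(a, b string) bool {
	trim := func(v string) string {
		parts := strings.SplitN(v, ".", 3)
		if len(parts) < 2 {
			return v
		}
		return parts[0] + "." + parts[1]
	}
	return trim(a) == trim(b)
}
```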

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 91ea04089b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-06 13:13:37 +01:00
Sebastiaan van Stijn
df96d8d0bd Merge pull request #47334 from thaJeztah/25.0_backport_rootlesskit_binary_2.0.1
[25.0 backport] Dockerfile: update RootlessKit to v2.0.1
2024-02-06 13:00:20 +01:00
Akihiro Suda
1652559be4 Dockerfile: update RootlessKit to v2.0.1
https://github.com/rootless-containers/rootlesskit/releases/tag/v2.0.1

Fix issue 47327 (`rootless lxc-user-nic: /etc/resolv.conf missing ip`)

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 7f1b700227)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 09:09:01 +01:00
Sebastiaan van Stijn
ab29279200 Merge pull request #47294 from vvoland/fix-save-manifests-25
[25.0 backport] image/save: Fix untagged images not present in index.json
2024-02-05 19:59:57 +01:00
Paweł Gronowski
147b5388dd integration/save: Add tests checking OCI archive output
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 2ef0b53e51)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-05 14:28:02 +01:00
Sebastiaan van Stijn
60103717bc Merge pull request #47316 from thaJeztah/25.0_backport_update_dev_cli_compose
[25.0 backport] Dockerfile: update docker-cli to v25.0.2, docker compose v2.24.5
2024-02-05 11:09:34 +01:00
Sebastiaan van Stijn
45dede440e Merge pull request #47323 from thaJeztah/25.0_backport_plugin-install-digest
[25.0 backport] plugins: Fix panic when fetching by digest
2024-02-05 10:42:20 +01:00
Laura Brehm
ba4a2dab16 plugins: fix panic installing from repo w/ digest
Only print the tag when the received reference has a tag; if
the received reference can't be cast to a `reference.Tagged`, then
skip printing the tag, as it's likely a digest.

Fixes panic when trying to install a plugin from a reference
with a digest such as
`vieux/sshfs@sha256:1d3c3e42c12138da5ef7873b97f7f32cf99fb6edde75fa4f0bcf9ed277855811`

Signed-off-by: Laura Brehm <laurabrehm@hey.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-05 09:40:57 +01:00
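A minimal sketch of the pattern described above, assuming the `github.com/distribution/reference` package: the tag is only printed when the type assertion to `reference.Tagged` succeeds, and a digested reference falls back to printing its digest instead of panicking.

```
package main

import (
	"fmt"

	"github.com/distribution/reference"
)

// describeRef prints the tag only when the parsed reference actually carries
// one; for digested references the type assertion fails and we fall back to
// the digest. Sketch of the pattern only.
func describeRef(name string) (string, error) {
	ref, err := reference.ParseNormalizedNamed(name)
	if err != nil {
		return "", err
	}
	if tagged, ok := ref.(reference.Tagged); ok {
		return fmt.Sprintf("%s (tag %q)", reference.FamiliarName(ref), tagged.Tag()), nil
	}
	if digested, ok := ref.(reference.Digested); ok {
		return fmt.Sprintf("%s (digest %s)", reference.FamiliarName(ref), digested.Digest()), nil
	}
	return reference.FamiliarString(ref), nil
}

func main() {
	for _, name := range []string{
		"vieux/sshfs:latest",
		"vieux/sshfs@sha256:1d3c3e42c12138da5ef7873b97f7f32cf99fb6edde75fa4f0bcf9ed277855811",
	} {
		s, err := describeRef(name)
		fmt.Println(s, err)
	}
}
```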
Laura Brehm
51133117fb tests: add plugin install test w/ digest
Adds a test case for installing a plugin from a remote in the form
of `plugin-content-trust@sha256:d98f2f8061...`, which is currently
causing the daemon to panic, as we found while running the CLI e2e
tests:

```
docker plugin install registry:5000/plugin-content-trust@sha256:d98f2f806144bf4ba62d4ecaf78fec2f2fe350df5a001f6e3b491c393326aedb
```

Signed-off-by: Laura Brehm <laurabrehm@hey.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-05 09:40:54 +01:00
Sebastiaan van Stijn
341a7978a5 Merge pull request #47313 from thaJeztah/25.0_backport_libc8d_fix_startup_data_race
[25.0 backport] libcontainerd/supervisor: fix data race
2024-02-03 14:37:57 +01:00
Sebastiaan van Stijn
10e3bfd0ac Merge pull request #47220 from thaJeztah/25.0_backport_more_gocompat
[25.0 backport] add more //go:build directives to prevent downgrading to go1.16 language
2024-02-03 14:36:55 +01:00
Sebastiaan van Stijn
269a0d8feb Dockerfile: update docker compose to v2.24.5
Update the version of compose used in CI to the latest version.

- full diff: https://github.com/docker/compose/compare/v2.24.3...v2.24.5
- release notes: https://github.com/docker/compose/releases/tag/v2.24.5

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 10d6f5213a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-03 13:54:14 +01:00
Sebastiaan van Stijn
876b1d1dcd Dockerfile: update dev-shell version of the cli to v25.0.2
Update the docker CLI that's available for debugging in the dev-shell
to the v25 release.

full diff: https://github.com/docker/cli/compare/v25.0.1...v25.0.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9c92c07acf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-03 13:54:13 +01:00
Sebastiaan van Stijn
0bcd64689b Dockerfile: update docker compose to v2.24.3
Update the version of compose used in CI to the latest version.

- full diff: https://github.com/docker/compose/compare/v2.24.2...v2.24.3
- release notes: https://github.com/docker/compose/releases/tag/v2.24.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 388ba9a69c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-03 13:54:13 +01:00
Sebastiaan van Stijn
8d454710cd Dockerfile: update dev-shell version of the cli to v25.0.1
Update the docker CLI that's available for debugging in the dev-shell
to the v25 release.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3eb1527fdb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-03 13:54:13 +01:00
Sebastiaan van Stijn
6cf694fe70 Merge pull request #47243 from corhere/backport-25.0/fix-journald-logs-systemd-255
[25.0 backport] logger/journald: fix tailing logs with systemd 255
2024-02-03 13:29:42 +01:00
Cory Snider
c12bbf549b libcontainerd/supervisor: fix data race
The monitorDaemon() goroutine calls startContainerd() then blocks on
<-daemonWaitCh to wait for it to exit. The startContainerd() function
would (re)initialize the daemonWaitCh so a restarted containerd could be
waited on. This implementation was race-free because startContainerd()
would synchronously initialize the daemonWaitCh before returning. When
the call to start the managed containerd process was moved into the
waiter goroutine, the code to initialize the daemonWaitCh struct field
was also moved into the goroutine. This introduced a race condition.

Move the daemonWaitCh initialization to guarantee that it happens before
the startContainerd() call returns.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit dd20bf4862)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-03 11:40:54 +01:00
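A rough sketch of the race-free shape described above (names are illustrative, not the actual libcontainerd types): the wait channel is created synchronously in the caller, and only then is the waiter goroutine spawned, so the parent can block on the channel immediately.

```
package main

import (
	"fmt"
	"time"
)

// daemon is a stand-in for the supervisor state; field and method names are
// illustrative only.
type daemon struct {
	waitCh chan struct{}
}

// start initializes waitCh *before* the waiter goroutine is spawned, so the
// caller can immediately block on <-d.waitCh without racing the goroutine
// that would otherwise assign the field.
func (d *daemon) start(run func()) {
	d.waitCh = make(chan struct{}) // synchronous init: race-free handoff
	go func() {
		defer close(d.waitCh) // signal exit to anyone waiting
		run()
	}()
}

func main() {
	d := &daemon{}
	d.start(func() { time.Sleep(50 * time.Millisecond) })
	<-d.waitCh // safe: the channel was created before start() returned
	fmt.Println("managed process exited")
}
```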
Sebastiaan van Stijn
1ae115175c Merge pull request #47311 from akerouanton/25.0-libnet-bridge-mtu-ignore-einval
[25.0 backport] libnet: bridge: ignore EINVAL when configuring bridge MTU
2024-02-03 11:36:47 +01:00
Sebastiaan van Stijn
a7f9907f5f Merge pull request #47310 from akerouanton/25.0-revert-automatically-enable-ipv6
[25.0 backport] Revert "daemon: automatically set network EnableIPv6 if needed"
2024-02-03 11:31:11 +01:00
Cory Snider
9150d0115e d/logger/journald: quit waiting when logger closes
If a reader has caught up to the logger and is waiting for the next
message, it should stop waiting when the logger is closed. Otherwise
the reader will unnecessarily wait the full closedDrainTimeout for no
log messages to arrive.

This case was overlooked when the journald reader was recently
overhauled to be compatible with systemd 255, and the reader tests only
failed when a logical race happened to settle in such a way as to exercise
the bugged code path. It was only after implicit flushing on close was
added to the journald test harness that the Follow tests would
repeatably fail due to this bug. (No new regression tests are needed.)

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 987fe37ed1)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-02-02 19:10:41 -05:00
Cory Snider
9af7c8ec0a d/logger/journald: sync logger on close in tests
The journald reader test harness injects an artificial asynchronous
delay into the logging pipeline: a logged message won't be written to
the journal until at least 150ms after the Log() call returns. If a test
returns while log messages are still in flight to be written, the logs
may attempt to be written after the TempDir has been cleaned up, leading
to spurious errors.

The logger read tests which interleave writing and reading have to
include explicit synchronization points to work reliably with this delay
in place. On the other hand, tests should not be required to sync the
logger explicitly before returning. Override the Close() method in the
test harness wrapper to wait for in-flight logs to be flushed to disk.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit d53b7d7e46)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-02-02 19:10:41 -05:00
Cory Snider
3344c502da d/logger/loggertest: improve TestConcurrent
- Check the return value when logging messages
- Log the stream (stdout/stderr) and list of messages that were not read
- Wait until the logger is closed before returning early (panic/fatal)

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 39c5c16521)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-02-02 19:10:41 -05:00
Cory Snider
6c9fafdda7 d/logger/journald: log journal-remote cmd output
Writing the systemd-journal-remote command output directly to os.Stdout
and os.Stderr makes it nearly impossible to tell which test case the
output is related to when the tests are not run in verbose mode. Extend
the journald sender fake to redirect output to the test log so they
interleave with the rest of the test output.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 5792bf7ab3)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-02-02 19:10:41 -05:00
Cory Snider
f8a8cdaf9e d/logger/journald: fix data race in test harness
The Go race detector was detecting a data race when running the
TestLogRead/Follow/Concurrent test against the journald logging driver.
The race was in the test harness, specifically syncLogger. The waitOn
field would be reassigned each time a log entry is sent to the journal,
which is not concurrency-safe. Make it concurrency-safe using the same
patterns that are used in the log follower implementation to synchronize
with the logger.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 982e777d49)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-02-02 19:10:41 -05:00
Albin Kerouanton
7a659049b8 libnet: bridge: ignore EINVAL when configuring bridge MTU
Since 964ab7158c, we explicitly set the bridge MTU if it was specified.
Unfortunately, kernels <v4.17 have a check preventing us from manually setting
the MTU to anything greater than 1500 if no links are attached to the
bridge, which is how we do things -- create the bridge, set its MTU, and
later on attach veths to it.

Relevant kernel commit: 804b854d37

As we still have to support CentOS/RHEL 7 (and their old v3.10 kernels)
for a few more months, we need to ignore EINVAL if the MTU is > 1500
(but <= 65535).

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit 89470a7114)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-02-02 19:51:11 +01:00
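A hedged sketch of the workaround, assuming the `vishvananda/netlink` and `golang.org/x/sys/unix` packages; the actual libnetwork code differs, but the error-handling shape is the same:

```
//go:build linux

package bridge

import (
	"errors"
	"fmt"

	"github.com/vishvananda/netlink"
	"golang.org/x/sys/unix"
)

// setBridgeMTU sets the MTU on a freshly created bridge. Kernels older than
// v4.17 reject MTUs above 1500 on a bridge with no attached links, so EINVAL
// is tolerated in that range; the MTU is effectively applied later when veths
// are attached. Sketch only; names are illustrative.
func setBridgeMTU(link netlink.Link, mtu int) error {
	if mtu == 0 {
		return nil // nothing requested
	}
	err := netlink.LinkSetMTU(link, mtu)
	if errors.Is(err, unix.EINVAL) && mtu > 1500 && mtu <= 65535 {
		// Old kernel without commit 804b854d37: ignore, veth attach will carry the MTU.
		return nil
	}
	if err != nil {
		return fmt.Errorf("failed to set bridge MTU to %d: %w", mtu, err)
	}
	return nil
}
```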
Albin Kerouanton
0ccf1c2a93 api/t/network: ValidateIPAM: ignore v6 subnet when IPv6 is disabled
Commit 4f47013feb introduced a new validation step to make sure no
IPv6 subnet is configured on a network which has EnableIPv6=false.

Commit 5d5eeac310 then removed that validation step and automatically
enabled IPv6 for networks with a v6 subnet. But this specific commit
was reverted in c59e93a67b and now the error introduced by 4f47013feb
is re-introduced.

But it turns out some users expect a network created with an IPv6
subnet and EnableIPv6=false to actually have no IPv6 connectivity.
This restores that behavior.

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit e37172c613)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-02-02 19:37:27 +01:00
Albin Kerouanton
28c1a8bc2b Revert "daemon: automatically set network EnableIPv6 if needed"
This reverts commit 5d5eeac310.

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit c59e93a67b)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-02-02 19:37:27 +01:00
Sebastiaan van Stijn
5b5a58b2cd Merge pull request #47304 from akerouanton/25.0-duplicate_mac_addrs2
[25.0 backport] Only restore a configured MAC addr on restart.
2024-02-02 19:04:09 +01:00
Sebastiaan van Stijn
282891f70c Merge pull request #47303 from akerouanton/25.0-backport-internal-bridge-firewalld
[25.0 backport] Add internal n/w bridge to firewalld docker zone
2024-02-02 19:02:57 +01:00
Rob Murray
bbe6f09afc No inspect 'Config.MacAddress' unless configured.
Do not set 'Config.MacAddress' in inspect output unless the MAC address
is configured.

Also, make sure it is filled in for a configured address on the default
network before the container is started (by translating the network name
from 'default' to 'config' so that the address lookup works).

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 8c64b85fb9)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-02-02 09:10:04 +01:00
Rob Murray
5b13a38144 Only restore a configured MAC addr on restart.
The API's EndpointConfig struct has a MacAddress field that's used for
both the configured address, and the current address (which may be generated).

A configured address must be restored when a container is restarted, but a
generated address must not.

The previous attempt to differentiate between the two, without adding a field
to the API's EndpointConfig that would show up in 'inspect' output, was a
field in the daemon's version of EndpointSettings, MACOperational. It did
not work, MACOperational was set to true when a configured address was
used. So, while it ensured addresses were regenerated, it failed to preserve
a configured address.

So, this change removes that code, and adds DesiredMacAddress to the wrapped
version of EndpointSettings, where it is persisted but does not appear in
'inspect' results. Its value is copied from MacAddress (the API field) when
a container is created.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit dae33031e0)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-02-02 09:10:04 +01:00
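Roughly, the wrapped settings could look like the sketch below (field and type names are assumptions, not the exact daemon types); the point is that the desired address is persisted alongside, not inside, the API struct, so it never shows up in 'inspect' output:

```
package daemon

import "github.com/docker/docker/api/types/network"

// endpointSettings wraps the API type with daemon-only state.
// DesiredMacAddress holds the user-configured address (if any) and is
// persisted with the container, but it is not part of the API struct.
// Illustrative sketch; names are assumptions.
type endpointSettings struct {
	*network.EndpointSettings
	// DesiredMacAddress is copied from EndpointSettings.MacAddress at
	// container-create time; a generated address is never written here.
	DesiredMacAddress string `json:"desired_mac_address,omitempty"`
}

// restoreMAC returns the address to re-apply on restart: only a configured
// address survives, a generated one is regenerated from scratch.
func (e *endpointSettings) restoreMAC() string {
	return e.DesiredMacAddress
}
```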
Rob Murray
990e95dcf0 Add internal n/w bridge to firewalld docker zone
Containers attached to an 'internal' bridge network are unable to
communicate when the host is running firewalld.

Non-internal bridges are added to a trusted 'docker' firewalld zone, but
internal bridges were not.

DOCKER-ISOLATION iptables rules are still configured for an internal
network; they block traffic to/from addresses outside the network's subnet.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 2cc627932a)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-02-02 08:38:44 +01:00
Sebastiaan van Stijn
a140d0d95f Merge pull request #47201 from thaJeztah/25.0_backport_swarm_rotate_key_flake
[25.0 backport] De-flake TestSwarmClusterRotateUnlockKey... again... maybe?
2024-02-02 00:13:42 +01:00
Sebastiaan van Stijn
91a8312fb7 Merge pull request #47295 from vvoland/api-build-version-25
[25.0 backport] api: Document `version` in `/build`
2024-02-01 21:07:57 +01:00
Sebastiaan van Stijn
cf03e96354 Merge pull request #47298 from thaJeztah/25.0_backport_rm_dash_rf
[25.0 backport] Assert temp output directory is not an empty string
2024-02-01 19:13:50 +01:00
Paweł Gronowski
c48b67160d api: Document version in /build
It was introduced in API v1.38 but wasn't documented.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 0c3b8ccda7)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-01 17:00:31 +01:00
Paweł Gronowski
225e043196 c8d/save: Handle digested reference same as ID
When saving an image, treat `image@sha256:abcdef...` the same as
`abcdef...`; this makes it:

- Not export the digested tag as the image name
- Not try to export all tags from the image repository

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 5e13f54f57)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-01 16:48:36 +01:00
Paweł Gronowski
78174d2e74 image/save: Fix untagged images not present in index.json
Saving an image via digested reference, ID or truncated ID doesn't store
the image reference in the archive. This also causes the save code to
not add the image's manifest to the index.json.
This commit explicitly adds the untagged manifests to the index.json if
no tagged manifests were added.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit d131f00fff)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-01 16:48:34 +01:00
Sebastiaan van Stijn
622e66684a Merge pull request #47296 from dvdksn/25.0_backport_api_docs_broken_links
[25.0 backport] docs: remove dead links from api version history
2024-02-01 16:09:01 +01:00
voloder
85f4e6151a Assert temp output directory is not an empty string
Signed-off-by: voloder <110066198+voloder@users.noreply.github.com>
(cherry picked from commit c655b7dc78)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 16:04:29 +01:00
Sebastiaan van Stijn
3e358447f5 Merge pull request #47291 from thaJeztah/25.0_backport_update_actions
[25.0 backport] gha: update actions to account for node 16 deprecation
2024-02-01 15:49:06 +01:00
David Karlsson
dd4de8f388 docs: remove dead links from api version history
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
(cherry picked from commit 7f94acb6ab)
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-02-01 15:02:46 +01:00
CrazyMax
f5ef4e76b3 ci: update to docker/bake-action@v4
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit a2026ee442)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:42 +01:00
CrazyMax
6c5e5271c1 ci: update to codecov/codecov-action@v4
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 5a3c463a37)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:42 +01:00
CrazyMax
693fca6199 ci: update to actions/download-artifact@v4 and actions/upload-artifact@v4
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 9babc02283)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:42 +01:00
CrazyMax
49487e996a ci: update to actions/cache@v3
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit a83557d747)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:41 +01:00
Sebastiaan van Stijn
0358f31dc2 gha: update to crazy-max/ghaction-github-runtime@v3
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff: https://github.com/crazy-max/ghaction-github-runtime/compare/v2.2.0...v3.0.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3a8191225a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:13 +01:00
Sebastiaan van Stijn
081cffb3fa gha: update to docker/login-action@v3
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff https://github.com/docker/login-action/compare/v2.2.0...v3.0.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 08251978a8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:13 +01:00
Sebastiaan van Stijn
9de19554c7 gha: update to docker/setup-qemu-action@v3
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff https://github.com/docker/setup-qemu-action/compare/v2.2.0...v3.0.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5d396e0533)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:13 +01:00
Sebastiaan van Stijn
2a80b8a7b2 gha: update to docker/bake-action@v4
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff https://github.com/docker/bake-action/compare/v2.3.0...v4.1.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4a1839ef1d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:13 +01:00
Sebastiaan van Stijn
61ffecfa3b gha: update to docker/setup-buildx-action@v3
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff: https://github.com/docker/setup-buildx-action/compare/v2.10.0...v3.0.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b7fd571b0a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:13 +01:00
Sebastiaan van Stijn
02cd8dec03 gha: update to docker/metadata-action@v5
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff: https://github.com/docker/metadata-action/compare/v4.6.0...v5.5.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 00a2626b56)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:12 +01:00
Sebastiaan van Stijn
1d7df5ecc0 gha: update to actions/setup-go@v5
- full diff: https://github.com/actions/setup-go/compare/v3.5.0...v5.0.0

v5

In scope of this release, we change Nodejs runtime from node16 to node20.
Moreover, we update some dependencies to the latest versions.

Besides, this release contains such changes as:

- Fix hosted tool cache usage on windows
- Improve documentation regarding dependencies caching

V4

The V4 edition of the action offers:

- Enabled caching by default
- The action will try to enable caching unless the cache input is explicitly
  set to false.

Please see "Caching dependency files and build outputs" for more information:
https://github.com/actions/setup-go#caching-dependency-files-and-build-outputs

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e27a785f43)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:12 +01:00
Sebastiaan van Stijn
4e68a265ed gha: update to actions/github-script@v7
- full diff: https://github.com/actions/github-script/compare/v6.4.1...v7.0.1

breaking changes: https://github.com/actions/github-script?tab=readme-ov-file#v7

> Version 7 of this action updated the runtime to Node 20
> https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#runs-for-javascript-actions
>
> All scripts are now run with Node 20 instead of Node 16 and are affected
> by any breaking changes between Node 16 and 20
>
> The previews input now only applies to GraphQL API calls as REST API previews
> are no longer necessary
> https://github.blog/changelog/2021-10-14-rest-api-preview-promotions/.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fb53ee6ba3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:12 +01:00
Sebastiaan van Stijn
e437f890ba gha: update to actions/checkout@v4
Release notes:

- https://github.com/actions/checkout/compare/v3.6.0...v4.1.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0ffddc6bb8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:09 +01:00
Sebastiaan van Stijn
5a0015f72c Merge pull request #47225 from s4ke/bump-swarmkit-generic-resources#25.0
[25.0 backport] Fix HasResource inverted boolean error - vendor swarmkit v2.0.0-20240125134710-dcda100a8261
2024-02-01 04:20:24 +01:00
Sebastiaan van Stijn
5babfee371 Merge pull request #47222 from vvoland/pkg-pools-close-noop-25
[25.0 backport] pkg/ioutils: Make subsequent Close attempts noop
2024-02-01 04:19:12 +01:00
Sebastiaan van Stijn
fce6e0ca9b Merge pull request from GHSA-xw73-rw38-6vjc
[25.0 backport] image/cache: Restrict cache candidates to locally built images
2024-02-01 01:12:24 +01:00
Sebastiaan van Stijn
d838e68300 Merge pull request #47269 from thaJeztah/25.0_backport_bump_runc_binary_1.1.12
[25.0 backport] update runc binary to v1.1.12
2024-02-01 00:05:10 +01:00
Sebastiaan van Stijn
fa0d4159c7 Merge pull request #47280 from thaJeztah/25.0_backport_bump_containerd_binary_1.7.13
[25.0 backport] update containerd binary to v1.7.13
2024-01-31 23:53:46 +01:00
Sebastiaan van Stijn
06e22dce46 Merge pull request #47275 from vvoland/vendor-bk-0.12.5-25
[25.0 backport] vendor: github.com/moby/buildkit v0.12.5
2024-01-31 22:47:49 +01:00
Sebastiaan van Stijn
b73ee94289 Merge pull request #47274 from thaJeztah/25.0_backport_bump_runc_1.1.12
[25.0 backport] vendor: github.com/opencontainers/runc v1.1.12
2024-01-31 22:46:35 +01:00
Sebastiaan van Stijn
fd6a419ad5 update containerd binary to v1.7.13
Update the containerd binary that's used in CI

- full diff: https://github.com/containerd/containerd/compare/v1.7.12...v1.7.13
- release notes: https://github.com/containerd/containerd/releases/tag/v1.7.13

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 835cdcac95)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-31 22:03:56 +01:00
Paweł Gronowski
13ce91825f vendor: github.com/moby/buildkit v0.12.5
full diff: https://github.com/moby/buildkit/compare/v0.12.4...v0.12.5

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-31 21:24:39 +01:00
Sebastiaan van Stijn
4b63c47c1e vendor: github.com/opencontainers/runc v1.1.12
- release notes: https://github.com/opencontainers/runc/releases/tag/v1.1.12
- full diff: https://github.com/opencontainers/runc/compare/v1.1.11...v1.1.12

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b20dccba5e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-31 21:19:02 +01:00
Sebastiaan van Stijn
4edb71bb83 update runc binary to v1.1.12
Update the runc binary that's used in CI and for the static packages, which
includes a fix for [CVE-2024-21626].

- release notes: https://github.com/opencontainers/runc/releases/tag/v1.1.12
- full diff: https://github.com/opencontainers/runc/compare/v1.1.11...v1.1.12

[CVE-2024-21626]: https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 44bf407d4d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-31 21:06:10 +01:00
Sebastiaan van Stijn
667bc3f803 Merge pull request #47265 from vvoland/ci-fix-makeps1-templatefail-25
[25.0 backport] hack/make.ps1: Fix go list pattern
2024-01-31 21:01:57 +01:00
Paweł Gronowski
1b47bfac02 hack/make.ps1: Fix go list pattern
The double quotes inside a single quoted string don't need to be
escaped.
It looks like different PowerShell versions treat this differently,
and it started failing unexpectedly without any changes on our side.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit ecb217cf69)
2024-01-31 19:55:37 +01:00
Cory Snider
f2d0d87c46 logger/journald: drop errDrainDone sentinel
errDrainDone is a sentinel error which is never supposed to escape the
package. Consequently, it needs to be filtered out of returns all over
the place, adding boilerplate. Forgetting to filter out these errors
would be a logic bug which the compiler would not help us catch. Replace
it with boolean multi-valued returns as they can't be accidentally
ignored or propagated.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 905477c8ae)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-01-29 11:28:34 -05:00
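The before/after shape of that change, sketched (not the actual reader code): a package-internal sentinel error has to be filtered out everywhere, while a dedicated boolean return cannot leak or be mistaken for a real error.

```
package journald

import "errors"

// Before: a sentinel that must never escape the package. Every caller has to
// remember to filter it out, and forgetting to do so still compiles.
var errDrainDone = errors.New("drain done")

func drainWithSentinel() error {
	// ... read entries ...
	return errDrainDone // easy to propagate by accident
}

// After: the "done" condition travels as a dedicated boolean, so it cannot be
// mistaken for a real error and cannot leak to callers outside the package.
func drainWithBool() (done bool, err error) {
	// ... read entries ...
	return true, nil
}
```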
Cory Snider
6ac38cdbeb logger/journald: wait no longer than the deadline
While it doesn't really matter if the reader waits for an extra
arbitrary period beyond an arbitrary hardcoded timeout, it's also
trivial and cheap to implement, and nice to have.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit d70fe8803c)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-01-29 11:28:34 -05:00
Cory Snider
d7bf237e29 logger/journald: use deadline for drain timeout
The journald reader uses a timer to set an upper bound on how long to
wait for the final log message of a stopped container. However, the
timer channel is only received from in non-blocking select statements!
There isn't enough benefit of using a timer to offset the cost of having
to manage the timer resource. Setting a deadline and comparing the
current time is just as effective, without having to manage the
lifecycle of any runtime resources.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit e94ec8068d)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-01-29 11:28:33 -05:00
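A small sketch of the deadline-based approach, assuming a hypothetical poll callback: the current time is compared against a precomputed deadline, so there is no timer resource to manage.

```
package journald

import "time"

// waitForFinalLog polls for the container's final log entry until a deadline,
// comparing the current time instead of managing a time.Timer whose channel
// would only ever be checked from non-blocking selects anyway.
// Illustrative sketch of the approach described above.
func waitForFinalLog(poll func() (done bool), drainTimeout time.Duration) bool {
	deadline := time.Now().Add(drainTimeout)
	for {
		if poll() {
			return true
		}
		if time.Now().After(deadline) {
			return false // give up: drain timeout exceeded
		}
		time.Sleep(10 * time.Millisecond) // small backoff between polls
	}
}
```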
Cory Snider
f41b342cbe l/journald: make tests compatible with systemd 255
Synthesize a boot ID for journal entries fed into
systemd-journal-remote, as required by systemd 255.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 71bfffdad1)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-01-29 11:28:33 -05:00
Cory Snider
f413ba6fdb daemon/logger/loggertest: expand log-follow tests
Following logs with a non-negative tail when the container log is empty
is broken on the journald driver when used with systemd 255. Add tests
which cover this edge case to our loggertest suite.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 931568032a)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-01-29 11:28:33 -05:00
Martin Braun
c2ef38f790 vendor swarmkit v2.0.0-20240125134710-dcda100a8261
Signed-off-by: Martin Braun <braun@neuroforge.de>
2024-01-25 16:28:59 +01:00
Paweł Gronowski
d5eebf9e19 builder/windows: Don't set ArgsEscaped for RUN cache probe
Previously this was done indirectly - the `compare` function didn't
check the `ArgsEscaped`.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 96d461d27e)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:07:14 +01:00
Paweł Gronowski
f3f5327b48 image/cache: Check image platform
Make sure the cache candidate platform matches the requested.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 877ebbe038)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:07:13 +01:00
Paweł Gronowski
05a370f52f image/cache: Restrict cache candidates to locally built images
Restrict cache candidates only to images that were built locally.
This doesn't affect builds using `--cache-from`.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 96ac22768a)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:07:11 +01:00
Paweł Gronowski
be7b60ef05 daemon/imageStore: Mark images built locally
Store additional image property which makes it possible to distinguish
if image was built locally.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit c6156dc51b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:07:10 +01:00
Paweł Gronowski
6d05b9b65b image/cache: Compare all config fields
Add checks for some image config fields that were missing.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 537348763f)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:07:08 +01:00
Paweł Gronowski
c01bbbddeb pkg/ioutils: Make subsequent Close attempts noop
Turn subsequent `Close` calls into a no-op and produce a warning with an
optional stack trace (if debug mode is enabled).

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 585d74bad1)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 15:15:01 +01:00
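A minimal sketch of the idea using `sync/atomic`: only the first Close reaches the wrapped closer, and later calls become no-ops (the real wrapper in pkg/ioutils also emits the optional warning described above).

```
package ioutils

import (
	"io"
	"sync/atomic"
)

// onceCloser turns repeated Close calls into no-ops: only the first call is
// forwarded to the underlying closer, subsequent calls return nil.
// Sketch of the idea only.
type onceCloser struct {
	io.Closer
	closed atomic.Bool
}

func (c *onceCloser) Close() error {
	if c.closed.Swap(true) {
		// Already closed: warn-and-ignore instead of double-closing.
		return nil
	}
	return c.Closer.Close()
}
```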
Sebastiaan van Stijn
32635850ed add more //go:build directives to prevent downgrading to go1.16 language
This is a follow-up to 2cf230951f, adding
more directives to adjust for some new code added since:

Before this patch:

    make -C ./internal/gocompat/
    GO111MODULE=off go generate .
    GO111MODULE=on go mod tidy
    GO111MODULE=on go test -v

    # github.com/docker/docker/internal/sliceutil
    internal/sliceutil/sliceutil.go:3:12: type parameter requires go1.18 or later (-lang was set to go1.16; check go.mod)
    internal/sliceutil/sliceutil.go:3:14: predeclared comparable requires go1.18 or later (-lang was set to go1.16; check go.mod)
    internal/sliceutil/sliceutil.go:4:19: invalid map key type T (missing comparable constraint)

    # github.com/docker/docker/libnetwork
    libnetwork/endpoint.go:252:17: implicit function instantiation requires go1.18 or later (-lang was set to go1.16; check go.mod)

    # github.com/docker/docker/daemon
    daemon/container_operations.go:682:9: implicit function instantiation requires go1.18 or later (-lang was set to go1.16; check go.mod)
    daemon/inspect.go:42:18: implicit function instantiation requires go1.18 or later (-lang was set to go1.16; check go.mod)

With this patch:

    make -C ./internal/gocompat/
    GO111MODULE=off go generate .
    GO111MODULE=on go mod tidy
    GO111MODULE=on go test -v
    === RUN   TestModuleCompatibllity
        main_test.go:321: all packages have the correct go version specified through //go:build
    --- PASS: TestModuleCompatibllity (0.00s)
    PASS
    ok  	gocompat	0.031s
    make: Leaving directory '/go/src/github.com/docker/docker/internal/gocompat'

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit bd4ff31775)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-25 14:57:05 +01:00
Brian Goff
2cf1c762f8 De-flake TestSwarmClusterRotateUnlockKey... again... maybe?
This hopefully makes the test less flakey (or removes any flake that
would be caused by the test itself).

1. Adds tail of cluster daemon logs when there is a test failure so we
   can more easily see what may be happening
2. Scans the daemon logs to check if the key is rotated before
   restarting the daemon. This is a little hacky but a little better
   than assuming it is done after a hard-coded 3 seconds.
3. Cleans up the `node ls` check such that it uses a poll function

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit fbdc02534a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-24 10:06:10 +01:00
Sebastiaan van Stijn
71fa3ab079 Merge pull request #47196 from akerouanton/25.0-fix-multiple-rename-error
[25.0] daemon: rename: don't reload endpoint from datastore
2024-01-23 23:56:39 +01:00
Albin Kerouanton
5295e88ceb daemon: rename: don't reload endpoint from datastore
Commit 8b7af1d0f added some code to update the DNSNames of all
endpoints attached to a sandbox by loading a new instance of each
affected endpoints from the datastore through a call to
`Network.EndpointByID()`.

This method then calls `Network.getEndpointFromStore()`, that in
turn calls `store.GetObject()`, which then calls `cache.get()`,
which calls `o.CopyTo(kvObject)`. This effectively creates a fresh
new instance of an Endpoint. However, endpoints are already kept in
memory by Sandbox, meaning we now have two in-memory instances of
the same Endpoint.

As it turns out, libnetwork is built around the idea that no two objects
representing the same thing should live in memory at the same time; otherwise
mutex locking and optimistic locking break (as both instances will have a drifting
version-tracking ID -- dbIndex in libnetwork parlance).

In this specific case, the bug materializes as container rename failing
when applied a second time to a given container. An integration test is
added to make sure this won't happen again.

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit 80c44b4b2e)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-01-23 22:53:43 +01:00
Paweł Gronowski
6eef840b8a Merge pull request #47191 from vvoland/volume-cifs-resolve-optout-25
[25.0 backport] volume/local: Make host resolution backwards compatible
2024-01-23 19:13:39 +01:00
Sebastiaan van Stijn
e2ab4718c8 Merge pull request #47182 from akerouanton/25.0-fix-aliases-on-default-bridge
[25.0] daemon: only add short cid to aliases for custom networks
2024-01-23 18:28:58 +01:00
Paweł Gronowski
3de920a0b1 volume/local: Fix cifs url containing spaces
Unescape the URL to avoid passing a URL-encoded address to the kernel.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 250886741b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-23 17:46:23 +01:00
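A small sketch of the unescaping step using only the standard library (`net/url`); the helper name is illustrative:

```
package main

import (
	"fmt"
	"net/url"
)

// unescapeDevice decodes a percent-encoded CIFS device path so the kernel
// receives the literal share name (e.g. with spaces) rather than the %XX form.
func unescapeDevice(device string) (string, error) {
	unescaped, err := url.PathUnescape(device)
	if err != nil {
		return "", fmt.Errorf("invalid device path %q: %w", device, err)
	}
	return unescaped, nil
}

func main() {
	dev, err := unescapeDevice("//fileserver/My%20Share")
	fmt.Println(dev, err) // //fileserver/My Share <nil>
}
```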
Paweł Gronowski
a445aa95e5 volume/local: Add tests for parsing nfs/cifs mounts
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit f4beb130b0)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-23 17:46:18 +01:00
Paweł Gronowski
cb77e48229 volume/local: Break early if addr was specified
I made a mistake in the last commit: after resolving the IP from the
passed `addr` for CIFS, it would still resolve the `device` part.

Apply only one name resolution.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit df43311f3d)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-23 17:45:09 +01:00
Albin Kerouanton
e8801fbe26 daemon: only add short cid to aliases for custom networks
Prior to 7a9b680a, the container short ID was added to the network
aliases only for custom networks. However, this logic wasn't preserved
in 6a2542d and now the cid is always added to the list of network
aliases.

This commit reintroduces the old logic.

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit 9f37672ca8)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-01-23 17:09:11 +01:00
Sebastiaan van Stijn
613b6a12c1 Merge pull request #47192 from thaJeztah/25.0_backport_fix_gateway_ip
[25.0 backport] fix "host-gateway-ip" label not set for builder workers
2024-01-23 17:05:02 +01:00
Sebastiaan van Stijn
1b6738369f Merge pull request #47189 from vvoland/c8d-prefer-default-platform-snapshot-25
[25.0 release] c8d/snapshot: Create any platform if not specified
2024-01-23 16:17:16 +01:00
Sebastiaan van Stijn
b8cc2e8c66 fix "host-gateway-ip" label not set for builder workers
Commit 21e50b89c9 added a label on the buildkit
worker to advertise the host-gateway-ip. This option can be either set by the
user in the daemon config, or otherwise defaults to the gateway-ip.

If no value is set by the user, discovery of the gateway-ip happens when
initializing the network-controller (`NewDaemon`, `daemon.restore()`).

However, d222bf097c changed how we handle the
daemon config. As a result, the `cli.Config` used when initializing the
builder only holds configuration information from the daemon config
(user-specified or defaults), but is not updated with information set
by `NewDaemon`.

This patch adds an accessor on the daemon to get the current daemon config.
An alternative could be to return the config by `NewDaemon` (which should
likely be a _copy_ of the config).

Before this patch:

    docker buildx inspect default
    Name:   default
    Driver: docker

    Nodes:
    Name:      default
    Endpoint:  default
    Status:    running
    Buildkit:  v0.12.4+3b6880d2a00f
    Platforms: linux/arm64, linux/amd64, linux/amd64/v2, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
    Labels:
     org.mobyproject.buildkit.worker.moby.host-gateway-ip: <nil>

After this patch:

    docker buildx inspect default
    Name:   default
    Driver: docker

    Nodes:
    Name:      default
    Endpoint:  default
    Status:    running
    Buildkit:  v0.12.4+3b6880d2a00f
    Platforms: linux/arm64, linux/amd64, linux/amd64/v2, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
    Labels:
     org.mobyproject.buildkit.worker.moby.host-gateway-ip: 172.18.0.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 00c9785e2e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-23 15:59:09 +01:00
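A hedged sketch of such an accessor (types and field names are illustrative, not the real daemon config): the daemon hands out a snapshot of its current configuration under a read lock, so the builder sees values discovered during `NewDaemon` rather than only what was in the config file.

```
package daemon

import (
	"net"
	"sync"
)

// config is a stand-in for the daemon configuration; only the field relevant
// to the example is shown.
type config struct {
	HostGatewayIP net.IP
}

// Daemon holds the live configuration, which may be updated after NewDaemon
// discovers values such as the default bridge gateway IP.
type Daemon struct {
	mu  sync.RWMutex
	cfg *config
}

// Config returns the daemon's current configuration so callers like the
// buildkit worker see values filled in during daemon initialization.
func (d *Daemon) Config() config {
	d.mu.RLock()
	defer d.mu.RUnlock()
	// Return a copy of the struct; nested reference fields should be treated
	// as read-only by callers.
	return *d.cfg
}
```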
Paweł Gronowski
fcccfeb811 c8d/snapshot: Create any platform if not specified
With containerd snapshotters enabled `docker run` currently fails when
creating a container from an image that doesn't have the default host
platform without an explicit `--platform` selection:

```
$ docker run image:amd64
Unable to find image 'asdf:amd64' locally
docker: Error response from daemon: pull access denied for asdf, repository does not exist or may require 'docker login'.
See 'docker run --help'.
```

This is confusing and the graphdriver behavior is much better here,
because it runs whatever platform the image has, but prints a warning:

```
$ docker run image:amd64
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
```

This commit changes the containerd snapshotter behavior to be the same
as the graphdriver. This doesn't affect container creation when platform
is specified explicitly.

```
$ docker run --rm --platform linux/arm64 asdf:amd64
Unable to find image 'asdf:amd64' locally
docker: Error response from daemon: pull access denied for asdf, repository does not exist or may require 'docker login'.
See 'docker run --help'.
```

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit e438db19d56bef55f9676af9db46cc04caa6330b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-23 15:05:54 +01:00
Sebastiaan van Stijn
f8eaa14a18 pkg/platforms: internalize in daemon/containerd
This matcher was only used internally in the containerd implementation of
the image store. Un-export it, and make it a local utility in that package
to prevent external use.

This package was introduced in 1616a09b61
(v24.0), and there are no known external consumers of this package, so there
should be no need to deprecate / alias the old location.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 94b4765363)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-23 15:05:50 +01:00
Paweł Gronowski
ac76925ff2 volume/local: Make host resolution backwards compatible
Commit 8ae94cafa5 added a DNS resolution
of the `device` part of the volume option.

The previous way to resolve the passed hostname was to use `addr`
option, which was handled by the same code path as the `nfs` mount type.

The issue is that `addr` is also an SMB module option handled by the kernel,
and passing a hostname as `addr` produces an invalid argument error.

To fix that, restore the old behavior to handle `addr` the same way as
before, and only perform the new DNS resolution of `device` if there is
no `addr` passed.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 0d51cf9db8)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-23 15:01:44 +01:00
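A sketch of the restored ordering, with hypothetical helper names; only one resolution runs, and an explicit `addr` keeps its legacy meaning while the newer `device`-host resolution only kicks in when no `addr` was given.

```
package main

import (
	"fmt"
	"net"
	"strings"
)

// resolveMountHost mirrors the backward-compatible ordering described above.
// Sketch only; option handling in volume/local is more involved.
func resolveMountHost(device string, opts map[string]string) (string, error) {
	if addr, ok := opts["addr"]; ok {
		// Legacy path: resolve the addr option, as the nfs code path did.
		ips, err := net.LookupHost(addr)
		if err != nil {
			return "", fmt.Errorf("resolving addr %q: %w", addr, err)
		}
		return ips[0], nil
	}
	// No addr option: resolve the host part of e.g. "//fileserver/share".
	host := strings.SplitN(strings.TrimPrefix(device, "//"), "/", 2)[0]
	ips, err := net.LookupHost(host)
	if err != nil {
		return "", err
	}
	return ips[0], nil
}

func main() {
	ip, err := resolveMountHost("//fileserver/share", map[string]string{"addr": "localhost"})
	fmt.Println(ip, err)
}
```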
Akihiro Suda
c7a1d928c0 Merge pull request #47180 from thaJeztah/25.0_backport_update_compose
[25.0 backport] Dockerfile: update docker compose to v2.24.2
2024-01-23 22:08:15 +09:00
Sebastiaan van Stijn
2672baefd7 Merge pull request #47178 from thaJeztah/25.0_backport_richer_xattr_errors
[25.0 backport] pkg/system: return even richer xattr errors
2024-01-23 10:56:46 +01:00
Sebastiaan van Stijn
ff15b49b47 Dockerfile: update docker compose to v2.24.2
Update the version of compose used in CI to the latest version.

- full diff: docker/compose@v2.24.1...v2.24.2
- release notes: https://github.com/docker/compose/releases/tag/v2.24.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 05d952b246)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-23 10:25:49 +01:00
Cory Snider
c0573b133f pkg/system: return even richer xattr errors
The names of extended attributes are not completely freeform. Attributes
are namespaced, and the kernel enforces (among other things) that only
attributes whose names are prefixed with a valid namespace are
permitted. The name of the attribute therefore needs to be known in
order to diagnose issues with lsetxattr. Include the name of the
extended attribute in the errors returned from the Lsetxattr and
Lgetxattr so users and us can more easily troubleshoot xattr-related
issues. Include the name in a separate rich-error field to provide code
handling the error enough information to determine whether or not the
failure can be ignored.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 43bf65c174)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-23 09:27:49 +01:00
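A minimal sketch of the idea, wrapping `unix.Lsetxattr` with an error type that keeps the attribute name in a dedicated field (the real implementation in pkg/system differs in detail):

```
package system

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// XattrError carries the attribute name alongside the underlying errno so
// callers can both print a useful message and decide programmatically whether
// the failure (e.g. on a "user.*" attribute) can be ignored.
type XattrError struct {
	Op   string // "lsetxattr" or "lgetxattr"
	Attr string // e.g. "security.capability"
	Path string
	Err  error
}

func (e *XattrError) Error() string {
	return fmt.Sprintf("%s %s %s: %v", e.Op, e.Path, e.Attr, e.Err)
}

func (e *XattrError) Unwrap() error { return e.Err }

// Lsetxattr sets the extended attribute, wrapping any failure with the
// attribute name so it is never lost in the error chain.
func Lsetxattr(path, attr string, data []byte, flags int) error {
	if err := unix.Lsetxattr(path, attr, data, flags); err != nil {
		return &XattrError{Op: "lsetxattr", Attr: attr, Path: path, Err: err}
	}
	return nil
}
```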
Sebastiaan van Stijn
c7466c0b52 Merge pull request #47170 from thaJeztah/25.0_backport_remove_deprecated_api_docs
[25.0 backport] docs: remove documentation for deprecated API versions (v1.23 and before)
2024-01-22 21:45:09 +01:00
Sebastiaan van Stijn
dde33d0dfe Merge pull request #47172 from thaJeztah/25.0_backport_fix-bad-http-code
[25.0 backport] daemon: return an InvalidParameter error when ep settings are wrong
2024-01-22 21:44:38 +01:00
Sebastiaan van Stijn
39fedb254b Merge pull request #47169 from thaJeztah/25.0_backport_test_fixes
[25.0 backport] backport test-fixes
2024-01-22 21:20:37 +01:00
Sebastiaan van Stijn
f0f5fc974a Merge pull request #47171 from thaJeztah/25.0_backport_47146-duplicate_mac_addrs
[25.0 backport] Remove generated MAC addresses on restart.
2024-01-22 20:53:52 +01:00
Albin Kerouanton
7c185a1e40 daemon: return an InvalidParameter error when ep settings are wrong
Since v25.0 (commit ff50388), we validate endpoint settings when
containers are created, instead of doing so when containers are started.
However, a container created prior to that release would still trigger
a validation error at start time. In such a case, the API returns a 500
status code because the Go error isn't wrapped into an InvalidParameter
error. This is now fixed.

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit fcc651972e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 20:50:10 +01:00
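A sketch of the fix's shape, using moby's `errdefs` helpers; the validation rule shown is a made-up stand-in, the point is only the wrapping that maps the error to a 400 instead of a 500.

```
package daemon

import (
	"fmt"

	"github.com/docker/docker/errdefs"
)

// validateEndpointSettings is a stand-in for the start-time validation of a
// container created before v25.0. Wrapping the error with
// errdefs.InvalidParameter makes the API return 400 Bad Request instead of a
// generic 500.
func validateEndpointSettings(macAddr string) error {
	if macAddr != "" && len(macAddr) != 17 {
		err := fmt.Errorf("invalid MAC address %q in endpoint settings", macAddr)
		return errdefs.InvalidParameter(err)
	}
	return nil
}
```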
Rob Murray
2b036fb1da Remove generated MAC addresses on restart.
The MAC address of a running container was stored in the same place as
the configured address for a container.

When starting a stopped container, a generated address was treated as a
configured address. If that generated address (based on an IPAM-assigned
IP address) had been reused, the containers ended up with duplicate MAC
addresses.

So, remember whether the MAC address was explicitly configured, and
clear it if not.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit cd53b7380c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:51:54 +01:00
Sebastiaan van Stijn
1f24da70d8 docs/api: remove version matrices from swagger files
These tables linked to deprecated API versions, and an up-to-date version of
the matrix is already included at https://docs.docker.com/engine/api/#api-version-matrix

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 521123944a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:44:48 +01:00
Sebastiaan van Stijn
358fecb566 docs: remove documentation for deprecated API versions < v1.23
These versions are deprecated in v25.0.0, and disabled by default,
see 08e4e88482.

Users that need to refer to documentation for older API versions,
can use archived versions of the documentation on GitHub:

- API v1.23 and before: https://github.com/moby/moby/tree/v25.0.0/docs/api
- API v1.17 and before: https://github.com/moby/moby/tree/v1.9.1/docs/reference/api

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d54be2ee6d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:44:48 +01:00
Sebastiaan van Stijn
f030b25770 Dockerfile: update docker compose to v2.24.1
Update the version of compose used in CI to the latest version.

- full diff: https://github.com/docker/compose/compare/v2.24.0...v2.24.1
- release notes: https://github.com/docker/compose/releases/tag/v2.24.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 307fe9c716)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:41:15 +01:00
Sebastiaan van Stijn
e07aed0f77 Dockerfile: update dev-shell version of the cli to v25.0.0
Update the docker CLI that's available for debugging in the dev-shell
to the v25 release.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit dfced4b557)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:41:15 +01:00
Sebastiaan van Stijn
cdf3611cff integration-cli: TestInspectAPIMultipleNetworks: use current version
This test was added in f301c5765a to test
inspect output for API > v1.21; however, it was pinned to API v1.21,
which is now deprecated.

Remove the fixed version, as the intent was to test "current" API versions
(API v1.21 and up).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a0466ca8e1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:40:57 +01:00
Sebastiaan van Stijn
05267e9e8c integration-cli: TestInspectAPIBridgeNetworkSettings121: use current version
This test was added in f301c5765a to test
inspect output for API > v1.21; however, it was pinned to API v1.21,
which is now deprecated.

Remove the fixed version, as the intent was to test "current" API versions
(API v1.21 and up).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 13a384a6fa)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:40:56 +01:00
Sebastiaan van Stijn
e5edf62bca integration-cli: TestPutContainerArchiveErrSymlinkInVolumeToReadOnlyRootfs: use current API
This test was added in 75f6929b44, but pinned
to the API version that was current at the time (v1.20), which is now
deprecated.

Update the test to use the current API version.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 52e3fff828)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:40:42 +01:00
Sebastiaan van Stijn
e14d121d49 Merge pull request #47163 from vvoland/25-fix-swarm-startinterval-25
[25.0 backport] daemon/cluster/executer: Add missing `StartInterval`
2024-01-22 18:39:43 +01:00
Paweł Gronowski
e0acf1cd70 daemon/cluster/executer: Add missing StartInterval
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 6100190e5c)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-22 15:23:27 +01:00
Sebastiaan van Stijn
c2847b2eb2 Merge pull request #47136 from thaJeztah/25.0_backport_git-url-regex
[25.0 backport] Fix isGitURL regular expression
2024-01-22 15:14:04 +01:00
Sebastiaan van Stijn
0894f7fe69 Merge pull request #47161 from vvoland/save-fix-oci-diffids-25
[25.0 backport] image/save: Fix layers order in OCI manifest
2024-01-22 15:05:20 +01:00
Sebastiaan van Stijn
d25aa32c21 Merge pull request #47135 from thaJeztah/25.0_backport_allow-container-ip-outside-subpool
[25.0 backport] libnetwork: loosen container IPAM validation
2024-01-22 14:05:17 +01:00
Paweł Gronowski
1e335cfa74 image/save: Fix layers order in OCI manifest
Order the layers in OCI manifest by their actual apply order. This is
required by the OCI image spec.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 17fd6562bf)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-22 13:53:02 +01:00
Paweł Gronowski
4d287e9267 image/save: Change layers type to DiffID
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 4979605212)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-22 13:52:57 +01:00
David Dooling
0240f5675b Fix isGitURL regular expression
Escape the period (.) so the regular expression does not match any character before "git".

Signed-off-by: David Dooling <david.dooling@docker.com>
(cherry picked from commit 768146b1b0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-20 12:19:07 +01:00
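An illustration of the bug class (the patterns here are simplified, not the exact moby regex): an unescaped "." is a wildcard, so any character followed by "git" matches, while the escaped form only matches a literal ".git".

```
package main

import (
	"fmt"
	"regexp"
)

// Unescaped "." matches any character, so "https://example.com/repoXgit"
// would wrongly be treated as a git URL; escaping the dot anchors the match
// to a literal ".git" suffix.
var (
	looseSuffix  = regexp.MustCompile(`^https?://.+.git$`)  // buggy: "." is a wildcard
	strictSuffix = regexp.MustCompile(`^https?://.+\.git$`) // fixed: literal ".git"
)

func main() {
	u := "https://example.com/repoXgit"
	fmt.Println(looseSuffix.MatchString(u))  // true (false positive)
	fmt.Println(strictSuffix.MatchString(u)) // false
}
```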
Cory Snider
13964248f1 libnetwork: loosen container IPAM validation
Permit container network attachments to set any static IP address within
the network's IPAM master pool, including when a subpool is configured.
Users have come to depend on being able to statically assign container
IP addresses which are guaranteed not to collide with automatically-
assigned container addresses.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 058b30023f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-20 11:33:04 +01:00
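A sketch of the loosened check using `net/netip` (names and signature are assumptions): the requested static address only has to fall inside the master pool; the allocation subpool is deliberately not consulted, since it only constrains automatic assignment.

```
package main

import (
	"fmt"
	"net/netip"
)

// allowStaticIP reports whether a user-requested static address is acceptable:
// it must fall inside the IPAM master pool, but it may sit outside the
// (smaller) subpool used for automatic allocation, so it cannot collide with
// auto-assigned addresses.
func allowStaticIP(masterPool, subPool string, ip netip.Addr) (bool, error) {
	master, err := netip.ParsePrefix(masterPool)
	if err != nil {
		return false, err
	}
	return master.Contains(ip), nil // subPool intentionally not consulted
}

func main() {
	ok, err := allowStaticIP("10.0.0.0/16", "10.0.1.0/24", netip.MustParseAddr("10.0.2.5"))
	fmt.Println(ok, err) // true <nil>
}
```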
8307 changed files with 356275 additions and 761268 deletions

View File

@@ -3,19 +3,11 @@
"build": {
"context": "..",
"dockerfile": "../Dockerfile",
"target": "devcontainer"
"target": "dev"
},
"workspaceFolder": "/go/src/github.com/docker/docker",
"workspaceMount": "source=${localWorkspaceFolder},target=/go/src/github.com/docker/docker,type=bind,consistency=cached",
"remoteUser": "root",
"runArgs": ["--privileged"],
"customizations": {
"vscode": {
"extensions": [
"golang.go"
]
}
}
"runArgs": ["--privileged"]
}

View File

@@ -1,6 +1,4 @@
/.git
# build artifacts
/bundles/
/cmd/dockerd/winresources/winres.json
/cmd/dockerd/*.syso
.git
bundles/
cli/winresources/**/winres.json
cli/winresources/**/*.syso

2
.gitattributes vendored
View File

@@ -1 +1,3 @@
Dockerfile* linguist-language=Dockerfile
vendor.mod linguist-language=Go-Module
vendor.sum linguist-language=Go-Checksums

View File

@@ -19,14 +19,11 @@ Please provide the following information:
**- How to verify it**
**- Human readable description for the release notes**
**- Description for the changelog**
<!--
Write a short (one line) summary that describes the changes in this
pull request for inclusion in the changelog.
It must be placed inside the below triple backticks section.
NOTE: Only fill this section if changes introduced in this PR are user-facing.
The PR must have a relevant impact/ label.
It must be placed inside the below triple backticks section:
-->
```markdown changelog

159
.github/labeler.yml vendored
View File

@@ -1,159 +0,0 @@
module/client:
- changed-files:
- any-glob-to-any-file: 'client/**'
module/api:
- changed-files:
- any-glob-to-any-file: 'api/**'
area/daemon:
- changed-files:
- any-glob-to-any-file: 'daemon/**'
area/builder/buildkit:
- changed-files:
- any-glob-to-any-file:
- '**/*buildkit*'
- 'daemon/internal/builder-next/**'
area/builder/classic-builder:
- changed-files:
- any-glob-to-any-file:
- 'daemon/images/*_build*'
- 'daemon/builder/**'
area/builder:
- labels:
- any-glob-to-any-file:
- '**/*buildkit*'
- 'daemon/internal/builder-next/**'
- 'daemon/images/*_build*'
- 'daemon/builder/**'
area/networking:
- changed-files:
- any-glob-to-any-file:
- 'daemon/network*'
- 'daemon/network/**'
- 'api/types/network/**'
- 'integration/network/**'
- 'integration/networking/**'
area/volumes:
- changed-files:
- any-glob-to-any-file:
- 'daemon/volume/**'
- 'api/types/volume/**'
- 'integration/volume/**'
area/swarm:
- changed-files:
- any-glob-to-any-file:
- 'daemon/cluster/**'
- 'api/types/swarm/**'
area/images:
- changed-files:
- any-glob-to-any-file:
- 'daemon/images/**'
- 'api/types/image/**'
- 'integration/image/**'
area/logging:
- changed-files:
- any-glob-to-any-file:
- 'daemon/logger/**'
- '**/*log*'
area/security:
- changed-files:
- any-glob-to-any-file:
- '**/*seccomp*'
- '**/*apparmor*'
- '**/*selinux*'
area/security/apparmor:
- changed-files:
- any-glob-to-any-file:
- '**/*apparmor*'
- 'contrib/apparmor/**'
area/security/selinux:
- changed-files:
- any-glob-to-any-file:
- '**/*selinux*'
- 'contrib/selinux/**'
area/security/seccomp:
- changed-files:
- any-glob-to-any-file: '**/*seccomp*'
area/systemd:
- changed-files:
- any-glob-to-any-file:
- '**/*systemd*'
- 'contrib/init/systemd/**'
area/contrib:
- changed-files:
- any-glob-to-any-file: 'contrib/**'
area/packaging:
- changed-files:
- any-glob-to-any-file:
# files used in packaging
- 'contrib/dockerd-rootless.sh'
- 'contrib/dockerd-rootless-setuptool.sh'
- 'contrib/init/systemd/**'
containerd-integration:
- changed-files:
- any-glob-to-any-file: 'daemon/containerd/**'
area/rootless:
- changed-files:
- any-glob-to-any-file:
- '**/*rootless*'
- 'contrib/dockerd-rootless*'
area/testing:
- changed-files:
- any-glob-to-all-files:
- 'integration/**'
- 'integration-cli/**'
- '**/*_test.go'
- 'internal/test*'
- 'internal/testutil/**'
area/docs:
- changed-files:
- any-glob-to-any-file:
- 'api/docs/*.yaml'
- 'docs/**'
- '**/*.md'
- 'man/**'
area/dependencies:
- all:
- changed-files:
- any-glob-to-any-file:
- 'go.mod'
- 'go.sum'
- 'vendor/**'
- all-globs-to-all-files:
- '!client/**'
- '!api/**'
area/ci:
- changed-files:
- any-glob-to-any-file: '.github/**'
platform/windows:
- changed-files:
- any-glob-to-any-file:
- '**/*_windows.go'
- 'Dockerfile.windows'
impact/changelog:
- changed-files:
- any-glob-to-any-file: 'api/docs/CHANGELOG.md'

View File

@@ -16,7 +16,7 @@ on:
workflow_call:
env:
ALPINE_VERSION: "3.22"
ALPINE_VERSION: "3.20"
jobs:
run:

45
.github/workflows/.test-prepare.yml vendored Normal file
View File

@@ -0,0 +1,45 @@
# reusable workflow
name: .test-prepare
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
workflow_call:
outputs:
matrix:
description: Test matrix
value: ${{ jobs.run.outputs.matrix }}
jobs:
run:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
outputs:
matrix: ${{ steps.set.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Create matrix
id: set
uses: actions/github-script@v7
with:
script: |
let matrix = ['graphdriver'];
if ("${{ contains(github.event.pull_request.labels.*.name, 'containerd-integration') || github.event_name != 'pull_request' }}" == "true") {
matrix.push('snapshotter');
}
await core.group(`Set matrix`, async () => {
core.info(`matrix: ${JSON.stringify(matrix)}`);
core.setOutput('matrix', JSON.stringify(matrix));
});
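Depending on the triggering event, the script above emits either ["graphdriver"] or ["graphdriver","snapshotter"] as the matrix output, which calling workflows consume with fromJson (see the windows.yml hunks further down in this compare). Below is a minimal shell re-statement of that decision, a sketch only, with hypothetical stand-in variables for the GitHub expressions:

#!/usr/bin/env bash
# Sketch only: mirrors the matrix logic of .test-prepare.yml outside of GitHub Actions.
event_name="pull_request"   # stand-in for github.event_name
has_label="false"           # stand-in for contains(...labels..., 'containerd-integration')

matrix='["graphdriver"]'
if [ "$event_name" != "pull_request" ] || [ "$has_label" = "true" ]; then
  matrix='["graphdriver","snapshotter"]'
fi
echo "matrix=$matrix"       # unlabeled PR -> ["graphdriver"]; push/schedule or labeled PR -> both entries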


@@ -1,123 +0,0 @@
# reusable workflow
name: .test-unit
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
workflow_call:
env:
GO_VERSION: "1.25.4"
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.25
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
unit:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
strategy:
fail-fast: false
matrix:
mode:
- ""
- firewalld
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Prepare
run: |
CACHE_DEV_SCOPE=dev
if [[ "${{ matrix.mode }}" == *"firewalld"* ]]; then
echo "FIREWALLD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}firewalld"
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=${{ env.CACHE_DEV_SCOPE }}
-
name: Test
run: |
make -o build test-unit
-
name: Prepare reports
if: always()
run: |
mkdir -p bundles /tmp/reports
find bundles -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C /tmp/reports
sudo chown -R $(id -u):$(id -g) /tmp/reports
tree -nh /tmp/reports
-
name: Send to Codecov
uses: codecov/codecov-action@v4
with:
directory: ./bundles
env_vars: RUNNER_OS
flags: unit
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-unit--${{ matrix.mode }}
path: /tmp/reports/*
retention-days: 1
unit-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always()
needs:
- unit
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Download reports
uses: actions/download-artifact@v4
with:
pattern: test-reports-unit-*
path: /tmp/reports
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY


@@ -21,17 +21,113 @@ on:
default: "graphdriver"
env:
GO_VERSION: "1.25.4"
GO_VERSION: "1.23.9"
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.25
ITG_CLI_MATRIX_SIZE: 6
DOCKER_EXPERIMENTAL: 1
DOCKER_GRAPHDRIVER: ${{ inputs.storage == 'snapshotter' && 'overlayfs' || 'overlay2' }}
TEST_INTEGRATION_USE_GRAPHDRIVER: ${{ inputs.storage == 'graphdriver' && '1' || '' }}
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
TEST_INTEGRATION_USE_SNAPSHOTTER: ${{ inputs.storage == 'snapshotter' && '1' || '' }}
jobs:
unit:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Prepare
run: |
CACHE_DEV_SCOPE=dev
if [[ "${{ matrix.mode }}" == *"rootless"* ]]; then
# In rootless mode, tests will run in the host's namespace, not the rootlesskit
# namespace. So, probably no different to non-rootless unit tests and can be
# removed from the test matrix.
echo "DOCKER_ROOTLESS=1" >> $GITHUB_ENV
fi
if [[ "${{ matrix.mode }}" == *"firewalld"* ]]; then
echo "FIREWALLD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}firewalld"
fi
if [[ "${{ matrix.mode }}" == *"systemd"* ]]; then
echo "SYSTEMD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}systemd"
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-unit
-
name: Prepare reports
if: always()
run: |
mkdir -p bundles /tmp/reports
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C /tmp/reports
sudo chown -R $(id -u):$(id -g) /tmp/reports
tree -nh /tmp/reports
-
name: Send to Codecov
uses: codecov/codecov-action@v4
with:
directory: ./bundles
env_vars: RUNNER_OS
flags: unit
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-unit-${{ inputs.storage }}
path: /tmp/reports/*
retention-days: 1
unit-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always()
needs:
- unit
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download reports
uses: actions/download-artifact@v4
with:
name: test-reports-unit-${{ inputs.storage }}
path: /tmp/reports
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
docker-py:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
@@ -49,10 +145,6 @@ jobs:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
@@ -103,10 +195,6 @@ jobs:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
@@ -121,52 +209,25 @@ jobs:
env:
TEST_SKIP_INTEGRATION_CLI: 1
integration-prepare:
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
outputs:
includes: ${{ steps.set.outputs.includes }}
steps:
-
name: Create matrix includes
id: set
uses: actions/github-script@v7
with:
script: |
let includes = [
{ os: 'ubuntu-22.04', mode: '' },
{ os: 'ubuntu-22.04', mode: 'rootless' },
{ os: 'ubuntu-22.04', mode: 'systemd' },
{ os: 'ubuntu-24.04', mode: '' },
// { os: 'ubuntu-24.04', mode: 'rootless' }, // FIXME: https://github.com/moby/moby/pull/49579#issuecomment-2698622223
{ os: 'ubuntu-24.04', mode: 'systemd' },
// { os: 'ubuntu-24.04', mode: 'rootless-systemd' }, // FIXME: https://github.com/moby/moby/issues/44084
];
if ("${{ inputs.storage }}" == "snapshotter") {
includes.push({ os: 'ubuntu-24.04', mode: 'iptables+firewalld' });
includes.push({ os: 'ubuntu-24.04', mode: 'nftables' });
includes.push({ os: 'ubuntu-24.04', mode: 'nftables+firewalld' });
}
await core.group(`Set matrix`, async () => {
core.info(`matrix: ${JSON.stringify(includes)}`);
core.setOutput('includes', JSON.stringify(includes));
});
-
name: Show matrix
run: |
echo ${{ steps.set.outputs.includes }}
integration:
runs-on: ${{ matrix.os }}
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
needs:
- integration-prepare
strategy:
fail-fast: false
matrix:
include: ${{ fromJson(needs.integration-prepare.outputs.includes) }}
os:
- ubuntu-22.04
- ubuntu-24.04
mode:
- ""
- rootless
- systemd
- firewalld
#- rootless-systemd FIXME: https://github.com/moby/moby/issues/44084
exclude:
- os: ubuntu-24.04 # FIXME: https://github.com/moby/moby/pull/49579#issuecomment-2698622223
mode: rootless
steps:
-
name: Checkout
@@ -192,17 +253,10 @@ jobs:
echo "FIREWALLD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}firewalld"
fi
if [[ "${{ matrix.mode }}" == *"nftables"* ]]; then
echo "DOCKER_FIREWALL_BACKEND=nftables" >> $GITHUB_ENV
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
@@ -270,7 +324,6 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Download reports
uses: actions/download-artifact@v4
@@ -292,7 +345,7 @@ jobs:
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
outputs:
matrix: ${{ steps.set.outputs.matrix }}
matrix: ${{ steps.tests.outputs.matrix }}
steps:
-
name: Checkout
@@ -302,13 +355,12 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Install gotestlist
run:
go install github.com/crazy-max/gotestlist/cmd/gotestlist@${{ env.GOTESTLIST_VERSION }}
-
name: Create test matrix
name: Create matrix
id: tests
working-directory: ./integration-cli
run: |
@@ -320,67 +372,9 @@ jobs:
matrix="$(gotestlist -d ${{ env.ITG_CLI_MATRIX_SIZE }} -o "./..." -o "DockerSwarmSuite" -o "DockerNetworkSuite|DockerExternalVolumeSuite" ./...)"
echo "matrix=$matrix" >> $GITHUB_OUTPUT
-
name: Create gha matrix
id: set
uses: actions/github-script@v7
with:
script: |
let matrix = {
test: ${{ steps.tests.outputs.matrix }},
include: [],
};
// For some reasons, GHA doesn't combine a dynamically defined
// 'include' with other matrix variables that aren't part of the
// include items.
// Moreover, since the goal is to run only relevant tests with
// firewalld/nftables enabled to minimize the number of CI jobs, we
// statically define the list of test suites that we want to run.
if ("${{ inputs.storage }}" == "snapshotter") {
matrix.include.push({
'mode': 'iptables+firewalld',
'test': 'DockerCLINetworkSuite|DockerCLIPortSuite|DockerDaemonSuite'
});
matrix.include.push({
'mode': 'iptables+firewalld',
'test': 'DockerSwarmSuite'
});
matrix.include.push({
'mode': 'iptables+firewalld',
'test': 'DockerNetworkSuite'
});
matrix.include.push({
'mode': 'nftables',
'test': 'DockerCLINetworkSuite|DockerCLIPortSuite|DockerDaemonSuite'
});
matrix.include.push({
'mode': 'nftables',
'test': 'DockerSwarmSuite'
});
matrix.include.push({
'mode': 'nftables',
'test': 'DockerNetworkSuite'
});
matrix.include.push({
'mode': 'nftables+firewalld',
'test': 'DockerCLINetworkSuite|DockerCLIPortSuite|DockerDaemonSuite'
});
matrix.include.push({
'mode': 'nftables+firewalld',
'test': 'DockerSwarmSuite'
});
matrix.include.push({
'mode': 'nftables+firewalld',
'test': 'DockerNetworkSuite'
});
}
await core.group(`Set matrix`, async () => {
core.info(`matrix: ${JSON.stringify(matrix)}`);
core.setOutput('matrix', JSON.stringify(matrix));
});
-
name: Show final gha matrix
name: Show matrix
run: |
echo ${{ steps.set.outputs.matrix }}
echo ${{ steps.tests.outputs.matrix }}
integration-cli:
runs-on: ubuntu-24.04
@@ -390,7 +384,8 @@ jobs:
- integration-cli-prepare
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.integration-cli-prepare.outputs.matrix) }}
matrix:
test: ${{ fromJson(needs.integration-cli-prepare.outputs.matrix) }}
steps:
-
name: Checkout
@@ -409,17 +404,10 @@ jobs:
echo "FIREWALLD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}firewalld"
fi
if [[ "${{ matrix.mode }}" == *"nftables"* ]]; then
echo "DOCKER_FIREWALL_BACKEND=nftables" >> $GITHUB_ENV
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
@@ -469,7 +457,7 @@ jobs:
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-integration-cli-${{ inputs.storage }}-${{ matrix.mode }}-${{ env.TESTREPORTS_NAME }}
name: test-reports-integration-cli-${{ inputs.storage }}-${{ env.TESTREPORTS_NAME }}
path: /tmp/reports/*
retention-days: 1
@@ -486,13 +474,12 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Download reports
uses: actions/download-artifact@v4
with:
path: /tmp/reports
pattern: test-reports-integration-cli-${{ inputs.storage }}-${{ matrix.mode }}-*
pattern: test-reports-integration-cli-${{ inputs.storage }}-*
merge-multiple: true
-
name: Install teststat


@@ -1,207 +0,0 @@
# reusable workflow
name: .vm
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
workflow_call:
inputs:
template:
required: true
type: string
env:
GO_VERSION: "1.25.4"
TESTSTAT_VERSION: v0.1.25
jobs:
integration:
runs-on: ubuntu-24.04
timeout-minutes: 60
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
strategy:
fail-fast: false
matrix:
mode:
- ""
- rootless
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Lima
uses: lima-vm/lima-actions/setup@03b96d61959e83b2c737e44162c3088e81de0886 # v1.0.1
id: lima-actions-setup
with:
version: v1.2.2
-
name: Cache ~/.cache/lima
uses: actions/cache@v4
with:
path: ~/.cache/lima
key: lima-${{ steps.lima-actions-setup.outputs.version }}-${{ inputs.template }}
-
name: Start the guest VM
run: |
# --plain is set because the built-in containerd support conflicts with Docker
limactl start \
--name=default \
--cpus=4 \
--memory=12 \
--plain \
${{ inputs.template }}
-
name: Load kernel modules in the guest VM
run: |
set -eux -o pipefail
cat <<-EOF | lima sudo tee /etc/modules-load.d/docker.conf
br_netfilter
bridge
ip6_tables
ip6table_filter
ip6table_nat
ip_tables
ip_vs
iptable_filter
iptable_nat
nf_tables
overlay
tap
tun
veth
x_tables
xt_addrtype
xt_comment
xt_conntrack
xt_mark
xt_multiport
xt_nat
xt_tcpudp
EOF
lima sudo systemctl restart systemd-modules-load.service
-
name: Install dockerd in the guest VM
run: |
set -eux -o pipefail
lima sudo mkdir -p /etc/systemd/system/docker.socket.d
cat <<-EOF | lima sudo tee /etc/systemd/system/docker.socket.d/override.conf
[Socket]
SocketUser=$(whoami)
EOF
# TODO: use native packages for AlmaLinux: https://github.com/docker/packaging/pull/138
lima sudo dnf config-manager --add-repo=https://download.docker.com/linux/rhel/docker-ce.repo
lima sudo dnf -q -y install --nobest docker-ce make
lima sudo systemctl enable --now docker
lima docker info
-
name: Copy the current directory
run: |
set -eux -o pipefail
limactl cp -r . default:/tmp/docker
-
name: Test
run: |
set -eux -o pipefail
DOCKER_ROOTLESS=
DOCKER_GRAPHDRIVER=overlay2
if [[ "${{ matrix.mode }}" == *"rootless"* ]]; then
DOCKER_ROOTLESS=1
if lima grep -q "AlmaLinux release 8" /etc/system-release; then
# kernel prior to 5.11 needs fuse-overlayfs
DOCKER_GRAPHDRIVER=fuse-overlayfs
fi
fi
DOCKER_IGNORE_BR_NETFILTER_ERROR=
if lima grep -q "AlmaLinux release 8" /etc/system-release; then
# DOCKER_IGNORE_BR_NETFILTER_ERROR=1 is set because /proc/sys/net/bridge does not appear in
# a container when the kernel is older than 5.3.
# https://web.archive.org/web/20201123224428/github.com/lxc/lxd/issues/3306#issuecomment-502857864
DOCKER_IGNORE_BR_NETFILTER_ERROR=1
fi
# TODO: just propagate the env from the host: https://github.com/lima-vm/lima/issues/3430
# TODO: enable GHA cache?
LIMA_WORKDIR=/tmp/docker lima \
TEST_SKIP_INTEGRATION_CLI=1 \
TEST_INTEGRATION_USE_GRAPHDRIVER=1 \
DOCKER_ROOTLESS=${DOCKER_ROOTLESS} \
DOCKER_GRAPHDRIVER=${DOCKER_GRAPHDRIVER} \
DOCKER_IGNORE_BR_NETFILTER_ERROR=${DOCKER_IGNORE_BR_NETFILTER_ERROR} \
make test-integration
-
name: Prepare reports
if: always()
run: |
set -eux -o pipefail
limactl cp -v -r default:/tmp/docker/bundles . || true
reportsName="$(basename ${{ inputs.template }})"
if [ -n "${{ matrix.mode }}" ]; then
reportsName="$reportsName-${{ matrix.mode }}"
fi
reportsPath="/tmp/reports/$reportsName"
echo "TESTREPORTS_NAME=$reportsName" >> $GITHUB_ENV
mkdir -p bundles $reportsPath
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C $reportsPath
sudo chown -R $(id -u):$(id -g) $reportsPath
tree -nh $reportsPath
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-integration/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-integration-${{ env.TESTREPORTS_NAME }}
path: /tmp/reports/*
retention-days: 1
integration-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always() && (github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only'))
needs:
- integration
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Prepare reports
run: echo "TESTREPORTS_NAME=$(basename ${{ inputs.template }})*" >> $GITHUB_ENV
-
name: Download reports
uses: actions/download-artifact@v4
with:
path: /tmp/reports
pattern: test-reports-integration-${{ env.TESTREPORTS_NAME }}
merge-multiple: true
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
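The Test step in this removed workflow picks its settings by distro because AlmaLinux 8 ships an older kernel: per the comments above, rootless runs need fuse-overlayfs below kernel 5.11, and DOCKER_IGNORE_BR_NETFILTER_ERROR is needed below 5.3. A rough standalone sketch of the same decisions keyed directly on the kernel version (hypothetical helper, not part of the workflow):

#!/usr/bin/env bash
# Sketch only: choose graphdriver/flags for a rootless test run based on kernel version,
# following the thresholds mentioned in the workflow comments (5.11 for rootless overlay2, 5.3 for br_netfilter).
read -r major minor < <(uname -r | awk -F. '{print $1, $2}')

DOCKER_GRAPHDRIVER=overlay2
if [ "$major" -lt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -lt 11 ]; }; then
  DOCKER_GRAPHDRIVER=fuse-overlayfs   # pre-5.11 kernels: rootless needs fuse-overlayfs
fi

DOCKER_IGNORE_BR_NETFILTER_ERROR=
if [ "$major" -lt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -lt 3 ]; }; then
  DOCKER_IGNORE_BR_NETFILTER_ERROR=1  # /proc/sys/net/bridge not visible in containers pre-5.3
fi

echo "DOCKER_GRAPHDRIVER=$DOCKER_GRAPHDRIVER"
echo "DOCKER_IGNORE_BR_NETFILTER_ERROR=$DOCKER_IGNORE_BR_NETFILTER_ERROR"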


@@ -28,7 +28,7 @@ on:
default: false
env:
GO_VERSION: "1.25.4"
GO_VERSION: "1.23.9"
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.25
WINDOWS_BASE_IMAGE: mcr.microsoft.com/windows/servercore
@@ -70,6 +70,18 @@ jobs:
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
-
name: Cache
uses: actions/cache@v4
with:
path: |
~\AppData\Local\go-build
~\go\pkg\mod
${{ github.workspace }}\go-build
${{ env.GOPATH }}\pkg\mod
key: ${{ inputs.os }}-${{ github.job }}-${{ hashFiles('**/vendor.sum') }}
restore-keys: |
${{ inputs.os }}-${{ github.job }}-
-
name: Docker info
run: |
@@ -80,12 +92,15 @@ jobs:
& docker build `
--build-arg WINDOWS_BASE_IMAGE `
--build-arg WINDOWS_BASE_IMAGE_TAG `
--build-arg GO_VERSION `
-t ${{ env.TEST_IMAGE_NAME }} `
-f Dockerfile.windows .
-
name: Build binaries
run: |
& docker run --name ${{ env.TEST_CTN_NAME }} -e "DOCKER_GITCOMMIT=${{ github.sha }}" `
-v "${{ github.workspace }}\go-build:C:\Users\ContainerAdministrator\AppData\Local\go-build" `
-v "${{ github.workspace }}\go\pkg\mod:C:\gopath\pkg\mod" `
${{ env.TEST_IMAGE_NAME }} hack\make.ps1 -Daemon -Client
-
name: Copy artifacts
@@ -135,6 +150,18 @@ jobs:
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
-
name: Cache
uses: actions/cache@v4
with:
path: |
~\AppData\Local\go-build
~\go\pkg\mod
${{ github.workspace }}\go-build
${{ env.GOPATH }}\pkg\mod
key: ${{ inputs.os }}-${{ github.job }}-${{ hashFiles('**/vendor.sum') }}
restore-keys: |
${{ inputs.os }}-${{ github.job }}-
-
name: Docker info
run: |
@@ -145,12 +172,15 @@ jobs:
& docker build `
--build-arg WINDOWS_BASE_IMAGE `
--build-arg WINDOWS_BASE_IMAGE_TAG `
--build-arg GO_VERSION `
-t ${{ env.TEST_IMAGE_NAME }} `
-f Dockerfile.windows .
-
name: Test
run: |
& docker run --name ${{ env.TEST_CTN_NAME }} -e "DOCKER_GITCOMMIT=${{ github.sha }}" `
-v "${{ github.workspace }}\go-build:C:\Users\ContainerAdministrator\AppData\Local\go-build" `
-v "${{ github.workspace }}\go\pkg\mod:C:\gopath\pkg\mod" `
-v "${{ env.GOPATH }}\src\github.com\docker\docker\bundles:C:\gopath\src\github.com\docker\docker\bundles" `
${{ env.TEST_IMAGE_NAME }} hack\make.ps1 -TestUnit
-
@@ -184,7 +214,6 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Download artifacts
uses: actions/download-artifact@v4
@@ -214,7 +243,6 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Install gotestlist
run:
@@ -267,12 +295,6 @@ jobs:
uses: actions/checkout@v4
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Set up Jaeger
run: |
@@ -325,12 +347,33 @@ jobs:
$ErrorActionPreference = "Stop"
Write-Host "Service removed"
}
-
name: Starting containerd
if: matrix.runtime == 'containerd'
run: |
Write-Host "Generating config"
& "${{ env.BIN_OUT }}\containerd.exe" config default | Out-File "$env:TEMP\ctn.toml" -Encoding ascii
Write-Host "Creating service"
New-Item -ItemType Directory "$env:TEMP\ctn-root" -ErrorAction SilentlyContinue | Out-Null
New-Item -ItemType Directory "$env:TEMP\ctn-state" -ErrorAction SilentlyContinue | Out-Null
Start-Process -Wait "${{ env.BIN_OUT }}\containerd.exe" `
-ArgumentList "--log-level=debug", `
"--config=$env:TEMP\ctn.toml", `
"--address=\\.\pipe\containerd-containerd", `
"--root=$env:TEMP\ctn-root", `
"--state=$env:TEMP\ctn-state", `
"--log-file=$env:TEMP\ctn.log", `
"--register-service"
Write-Host "Starting service"
Start-Service -Name containerd
Start-Sleep -Seconds 5
Write-Host "Service started successfully!"
-
name: Starting test daemon
run: |
Write-Host "Creating service"
If ("${{ matrix.runtime }}" -eq "containerd") {
$runtimeArg="--default-runtime=io.containerd.runhcs.v1"
$runtimeArg="--containerd=\\.\pipe\containerd-containerd"
echo "DOCKER_WINDOWS_CONTAINERD_RUNTIME=1" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
New-Item -ItemType Directory "$env:TEMP\moby-root" -ErrorAction SilentlyContinue | Out-Null
@@ -342,13 +385,10 @@ jobs:
"--exec-root=$env:TEMP\moby-exec", `
"--pidfile=$env:TEMP\docker.pid", `
"--register-service"
If ("${{ inputs.storage }}" -eq "graphdriver") {
If ("${{ inputs.storage }}" -eq "snapshotter") {
# Make the env-var visible to the service-managed dockerd, as there's no CLI flag for this option.
& reg add "HKLM\SYSTEM\CurrentControlSet\Services\docker" /v Environment /t REG_MULTI_SZ /s '@' /d "DOCKER_MIN_API_VERSION=1.24@TEST_INTEGRATION_USE_GRAPHDRIVER=1"
echo "TEST_INTEGRATION_USE_GRAPHDRIVER=1" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} Else {
# Make the env-var visible to the service-managed dockerd, as there's no CLI flag for this option.
& reg add "HKLM\SYSTEM\CurrentControlSet\Services\docker" /v Environment /t REG_MULTI_SZ /s '@' /d DOCKER_MIN_API_VERSION=1.24
& reg add "HKLM\SYSTEM\CurrentControlSet\Services\docker" /v Environment /t REG_MULTI_SZ /s '@' /d TEST_INTEGRATION_USE_SNAPSHOTTER=1
echo "TEST_INTEGRATION_USE_SNAPSHOTTER=1" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
Write-Host "Starting service"
Start-Service -Name docker
@@ -373,17 +413,6 @@ jobs:
Start-Sleep -Seconds 1
}
Write-Host "Test daemon started and replied!"
If ("${{ matrix.runtime }}" -eq "containerd") {
$containerdProcesses = Get-Process -Name containerd -ErrorAction:SilentlyContinue
If (-not $containerdProcesses) {
Throw "containerd process is not running"
} else {
foreach ($process in $containerdProcesses) {
$processPath = (Get-Process -Id $process.Id -FileVersionInfo).FileName
Write-Output "Running containerd instance binary Path: $($processPath)"
}
}
}
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
@@ -407,6 +436,11 @@ jobs:
& "${{ env.BIN_OUT }}\docker" images
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Test integration
if: matrix.test == './...'
@@ -414,6 +448,7 @@ jobs:
.\hack\make.ps1 -TestIntegration
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
GO111MODULE: "off"
TEST_CLIENT_BINARY: ${{ env.BIN_OUT }}\docker
-
name: Test integration-cli
@@ -422,6 +457,7 @@ jobs:
.\hack\make.ps1 -TestIntegrationCli
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
GO111MODULE: "off"
TEST_CLIENT_BINARY: ${{ env.BIN_OUT }}\docker
INTEGRATION_TESTRUN: ${{ matrix.test }}
-
@@ -440,6 +476,19 @@ jobs:
& "${{ env.BIN_OUT }}\docker" info
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Stop containerd
if: always() && matrix.runtime == 'containerd'
run: |
$ErrorActionPreference = "SilentlyContinue"
Stop-Service -Force -Name containerd
$ErrorActionPreference = "Stop"
-
name: Containerd logs
if: always() && matrix.runtime == 'containerd'
run: |
Copy-Item "$env:TEMP\ctn.log" -Destination ".\bundles\containerd.log"
Get-Content "$env:TEMP\ctn.log" | Out-Host
-
name: Stop daemon
if: always()
@@ -498,7 +547,6 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Download reports
uses: actions/download-artifact@v4


@@ -19,11 +19,10 @@ on:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
env:
GO_VERSION: "1.25.4"
GO_VERSION: "1.23.9"
TESTSTAT_VERSION: v0.1.25
DESTDIR: ./build
SETUP_BUILDX_VERSION: edge
@@ -37,7 +36,6 @@ jobs:
build:
runs-on: ubuntu-24.04-arm
timeout-minutes: 20 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
strategy:
@@ -71,7 +69,6 @@ jobs:
build-dev:
runs-on: ubuntu-24.04-arm
timeout-minutes: 120 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
steps:
@@ -95,7 +92,6 @@ jobs:
test-unit:
runs-on: ubuntu-24.04-arm
timeout-minutes: 120 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- build-dev
steps:
@@ -112,9 +108,6 @@ jobs:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Build dev image
uses: docker/bake-action@v6
@@ -156,7 +149,7 @@ jobs:
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always() && (github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only'))
if: always()
needs:
- test-unit
steps:
@@ -165,7 +158,7 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
cache-dependency-path: vendor.sum
-
name: Download reports
uses: actions/download-artifact@v4
@@ -185,7 +178,6 @@ jobs:
runs-on: ubuntu-24.04-arm
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- build-dev
steps:
@@ -205,9 +197,6 @@ jobs:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Build dev image
uses: docker/bake-action@v6
@@ -259,7 +248,7 @@ jobs:
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always() && (github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only'))
if: always()
needs:
- test-integration
steps:
@@ -268,7 +257,7 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
cache-dependency-path: vendor.sum
-
name: Download reports
uses: actions/download-artifact@v4


@@ -19,10 +19,8 @@ on:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
tags:
- 'v*'
- 'docker-v*'
pull_request:
env:
@@ -32,18 +30,15 @@ env:
PLATFORM: Moby Engine - Nightly
PRODUCT: moby-bin
PACKAGER_NAME: The Moby Project
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
validate-dco:
if: ${{ !startsWith(github.ref, 'refs/tags/') }}
if: ${{ !startsWith(github.ref, 'refs/tags/v') }}
uses: ./.github/workflows/.dco.yml
prepare:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
outputs:
platforms: ${{ steps.platforms.outputs.matrix }}
steps:
@@ -60,21 +55,20 @@ jobs:
### versioning strategy
## push semver tag v23.0.0
# moby/moby-bin:23.0.0
# moby/moby-bin:23.0
# moby/moby-bin:23
# moby/moby-bin:latest
## push semver prerelease tag v23.0.0-beta.1
## push semver prelease tag v23.0.0-beta.1
# moby/moby-bin:23.0.0-beta.1
## push on master
# moby/moby-bin:master
## push on 28.x branch
# moby/moby-bin:28.x
## push on 23.0 branch
# moby/moby-bin:23.0
## any push
# moby/moby-bin:sha-ad132f5
tags: |
type=semver,pattern={{version}}
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}},match=docker-(.*)
type=semver,pattern={{major}}.{{minor}},match=docker-(.*)
type=semver,pattern={{major}},match=docker-(.*)
type=sha
-
name: Rename meta bake definition file
# see https://github.com/docker/metadata-action/issues/381#issuecomment-1918607161
@@ -97,11 +91,11 @@ jobs:
build:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
if: ${{ always() && !contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && (github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only')) }}
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- validate-dco
- prepare
if: always() && !contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled')
strategy:
fail-fast: false
matrix:
@@ -129,10 +123,6 @@ jobs:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Login to Docker Hub
if: github.event_name != 'pull_request' && github.repository == 'moby/moby'
@@ -173,10 +163,10 @@ jobs:
merge:
runs-on: ubuntu-24.04
timeout-minutes: 40 # guardrails timeout for the whole job
if: ${{ always() && !contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && github.event_name != 'pull_request' && github.repository == 'moby/moby' }}
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- build
if: always() && !contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && github.event_name != 'pull_request' && github.repository == 'moby/moby'
steps:
-
name: Download meta bake definition
@@ -194,10 +184,6 @@ jobs:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Login to Docker Hub
uses: docker/login-action@v3


@@ -19,33 +19,26 @@ on:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
env:
GO_VERSION: "1.25.4"
GO_VERSION: "1.23.9"
ALPINE_VERSION: "3.20"
DESTDIR: ./build
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build-linux:
build:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v6
@@ -60,15 +53,11 @@ jobs:
if-no-files-found: error
retention-days: 1
test-linux:
test:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- build-linux
env:
TEST_IMAGE_BUILD: "0"
TEST_IMAGE_ID: "buildkit-tests"
- build
strategy:
fail-fast: false
matrix:
@@ -108,11 +97,13 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: BuildKit ref
run: |
echo "$(./hack/buildkit-ref)" >> $GITHUB_ENV
# FIXME(aepifanov) temporarily overriding version to use for tests; remove with the next release of buildkit
# echo "BUILDKIT_REF=$(./hack/buildkit-ref)" >> $GITHUB_ENV
echo "BUILDKIT_REPO=moby/buildkit" >> $GITHUB_ENV
echo "BUILDKIT_REF=b10aeed77fd8a370f6aec7ae4b212ab291914e08" >> $GITHUB_ENV
working-directory: moby
-
name: Checkout BuildKit ${{ env.BUILDKIT_REF }}
@@ -127,10 +118,6 @@ jobs:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Download binary artifacts
uses: actions/download-artifact@v4
@@ -144,15 +131,6 @@ jobs:
sudo service docker restart
docker version
docker info
-
name: Build test image
uses: docker/bake-action@v6
with:
source: .
workdir: ./buildkit
targets: integration-tests
set: |
*.output=type=docker,name=${{ env.TEST_IMAGE_ID }}
-
name: Test
run: |
@@ -164,210 +142,3 @@ jobs:
TESTPKGS: "./${{ matrix.pkg }}"
TESTFLAGS: "-v --parallel=1 --timeout=30m --run=//worker=${{ matrix.worker }}$"
working-directory: buildkit
build-windows:
runs-on: windows-2022
timeout-minutes: 120
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
BIN_OUT: ${{ github.workspace }}\out
WINDOWS_BASE_IMAGE: mcr.microsoft.com/windows/servercore
WINDOWS_BASE_TAG_2022: ltsc2022
TEST_IMAGE_NAME: moby:test
TEST_CTN_NAME: moby
defaults:
run:
working-directory: ${{ env.GOPATH }}/src/github.com/docker/docker
steps:
- name: Checkout
uses: actions/checkout@v4
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
- name: Env
run: |
Get-ChildItem Env: | Out-String
- name: Moby - Init
run: |
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go-build"
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go\pkg\mod"
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
- name: Docker info
run: |
docker info
- name: Build base image
run: |
& docker build `
--build-arg WINDOWS_BASE_IMAGE `
--build-arg WINDOWS_BASE_IMAGE_TAG `
-t ${{ env.TEST_IMAGE_NAME }} `
-f Dockerfile.windows .
- name: Build binaries
run: |
& docker run --name ${{ env.TEST_CTN_NAME }} -e "DOCKER_GITCOMMIT=${{ github.sha }}" `
-v "${{ github.workspace }}\go-build:C:\Users\ContainerAdministrator\AppData\Local\go-build" `
-v "${{ github.workspace }}\go\pkg\mod:C:\gopath\pkg\mod" `
${{ env.TEST_IMAGE_NAME }} hack\make.ps1 -Daemon -Client
go install github.com/distribution/distribution/v3/cmd/registry@latest
- name: Checkout BuildKit
uses: actions/checkout@v4
with:
repository: moby/buildkit
ref: master
path: buildkit
- name: Add buildctl to binaries
run: |
go install ./cmd/buildctl
working-directory: buildkit
- name: Copy artifacts
run: |
New-Item -ItemType "directory" -Path "${{ env.BIN_OUT }}"
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\src\github.com\docker\docker\bundles\docker.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\src\github.com\docker\docker\bundles\dockerd.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\bin\gotestsum.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\containerd\bin\containerd.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\containerd\bin\containerd-shim-runhcs-v1.exe" ${{ env.BIN_OUT }}\
cp ${{ env.GOPATH }}\bin\registry.exe ${{ env.BIN_OUT }}
cp ${{ env.GOPATH }}\bin\buildctl.exe ${{ env.BIN_OUT }}
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: build-windows
path: ${{ env.BIN_OUT }}/*
if-no-files-found: error
retention-days: 2
test-windows:
runs-on: windows-2022
timeout-minutes: 120 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- build-windows
env:
TEST_IMAGE_BUILD: "0"
TEST_IMAGE_ID: "buildkit-tests"
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
BIN_OUT: ${{ github.workspace }}\out
TESTFLAGS: "-v --timeout=90m"
TEST_DOCKERD: "1"
strategy:
fail-fast: false
matrix:
worker:
- dockerd-containerd
pkg:
- ./client#1-4
- ./client#2-4
- ./client#3-4
- ./client#4-4
- ./cmd/buildctl
- ./frontend
- ./frontend/dockerfile#1-12
- ./frontend/dockerfile#2-12
- ./frontend/dockerfile#3-12
- ./frontend/dockerfile#4-12
- ./frontend/dockerfile#5-12
- ./frontend/dockerfile#6-12
- ./frontend/dockerfile#7-12
- ./frontend/dockerfile#8-12
- ./frontend/dockerfile#9-12
- ./frontend/dockerfile#10-12
- ./frontend/dockerfile#11-12
- ./frontend/dockerfile#12-12
steps:
- name: Prepare
shell: bash
run: |
disabledFeatures="cache_backend_azblob,cache_backend_s3"
if [ "${{ matrix.worker }}" = "dockerd" ]; then
disabledFeatures="${disabledFeatures},merge_diff"
fi
echo "BUILDKIT_TEST_DISABLE_FEATURES=${disabledFeatures}" >> $GITHUB_ENV
- name: Expose GitHub Runtime
uses: crazy-max/ghaction-github-runtime@v3
- name: Checkout
uses: actions/checkout@v4
with:
path: moby
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
- name: BuildKit ref
shell: bash
run: |
echo "$(./hack/buildkit-ref)" >> $GITHUB_ENV
working-directory: moby
- name: Checkout BuildKit ${{ env.BUILDKIT_REF }}
uses: actions/checkout@v4
with:
repository: ${{ env.BUILDKIT_REPO }}
ref: ${{ env.BUILDKIT_REF }}
path: buildkit
- name: Download Moby artifacts
uses: actions/download-artifact@v4
with:
name: build-windows
path: ${{ env.BIN_OUT }}
- name: Add binaries to PATH
run: |
ls ${{ env.BIN_OUT }}
Write-Output "${{ env.BIN_OUT }}" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
- name: Test Prep
shell: bash
run: |
TESTPKG=$(echo "${{ matrix.pkg }}" | awk '-F#' '{print $1}')
echo "TESTPKG=$TESTPKG" >> $GITHUB_ENV
echo "TEST_REPORT_NAME=${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.worker }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_ENV
testFlags="${{ env.TESTFLAGS }}"
testSlice=$(echo "${{ matrix.pkg }}" | awk '-F#' '{print $2}')
testSliceOffset=""
if [ -n "$testSlice" ]; then
testSliceOffset="slice=$testSlice/"
fi
if [ -n "${{ matrix.worker }}" ]; then
testFlags="${testFlags} --run=TestIntegration/$testSliceOffset.*/worker=${{ matrix.worker }}"
fi
echo "TESTFLAGS=${testFlags}" >> $GITHUB_ENV
- name: Test
shell: bash
run: |
mkdir -p ./bin/testreports
gotestsum \
--jsonfile="./bin/testreports/go-test-report-${{ env.TEST_REPORT_NAME }}.json" \
--junitfile="./bin/testreports/junit-report-${{ env.TEST_REPORT_NAME }}.xml" \
--packages="${{ env.TESTPKG }}" \
-- \
"-mod=vendor" \
"-coverprofile" "./bin/testreports/coverage-${{ env.TEST_REPORT_NAME }}.txt" \
"-covermode" "atomic" ${{ env.TESTFLAGS }}
working-directory: buildkit
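The Test Prep step in the job above splits each matrix.pkg entry of the form "<package>#<slice>-<total>" into the package to test and a slice selector that is folded into the --run filter. A quick local illustration of that parsing, reusing the same awk invocations from the step (the sample values are made up):

#!/usr/bin/env bash
# Sketch only: walk through the Test Prep parsing for one hypothetical matrix entry.
pkg="./frontend/dockerfile#3-12"
worker="dockerd-containerd"

TESTPKG=$(echo "$pkg" | awk '-F#' '{print $1}')     # -> ./frontend/dockerfile
testSlice=$(echo "$pkg" | awk '-F#' '{print $2}')   # -> 3-12
testSliceOffset=""
if [ -n "$testSlice" ]; then
  testSliceOffset="slice=$testSlice/"
fi

TESTFLAGS="-v --timeout=90m --run=TestIntegration/$testSliceOffset.*/worker=$worker"
echo "TESTPKG=$TESTPKG"
echo "TESTFLAGS=$TESTFLAGS"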


@@ -19,13 +19,10 @@ on:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
env:
DESTDIR: ./build
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
validate-dco:
@@ -46,10 +43,6 @@ jobs:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v6
@@ -67,7 +60,6 @@ jobs:
prepare-cross:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
outputs:
@@ -90,7 +82,6 @@ jobs:
cross:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
- prepare-cross
@@ -107,10 +98,6 @@ jobs:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v6
@@ -126,55 +113,3 @@ jobs:
name: Check artifacts
run: |
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
govulncheck:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
# Always run security checks, even with 'ci/validate-only' label
permissions:
# required to write sarif report
security-events: write
# required to check out the repository
contents: read
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Run
uses: docker/bake-action@v6
with:
targets: govulncheck
env:
GOVULNCHECK_FORMAT: sarif
-
name: Upload SARIF report
if: ${{ github.event_name != 'pull_request' && github.repository == 'moby/moby' }}
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: ${{ env.DESTDIR }}/govulncheck.out
build-dind:
runs-on: ubuntu-24.04
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dind image
uses: docker/bake-action@v6
with:
targets: dind
set: |
*.output=type=cacheonly


@@ -1,67 +0,0 @@
name: codeql
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
tags:
- 'v*'
- 'docker-v*'
pull_request:
# The branches below must be a subset of the branches above
branches: ["master"]
schedule:
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
# │ │ │ │ │
# │ │ │ │ │
# │ │ │ │ │
# * * * * *
- cron: '0 9 * * 4'
env:
GO_VERSION: "1.25.4"
jobs:
codeql:
runs-on: ubuntu-24.04
timeout-minutes: 10
permissions:
actions: read
contents: read
security-events: write
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 2
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: go
- name: Autobuild
uses: github/codeql-action/autobuild@v3
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:go"


@@ -1,18 +0,0 @@
name: "Labeler"
on:
pull_request_target:
permissions:
contents: read
jobs:
labeler:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- name: Labels
uses: actions/labeler@v6
with:
sync-labels: false


@@ -19,15 +19,12 @@ on:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
env:
GO_VERSION: "1.25.4"
GO_VERSION: "1.23.9"
GIT_PAGER: "cat"
PAGER: "cat"
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
validate-dco:
@@ -55,10 +52,6 @@ jobs:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
@@ -70,7 +63,6 @@ jobs:
*.output=type=cacheonly
test:
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- build-dev
- validate-dco
@@ -85,14 +77,6 @@ jobs:
with:
storage: ${{ matrix.storage }}
test-unit:
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- build-dev
- validate-dco
uses: ./.github/workflows/.test-unit.yml
secrets: inherit
validate-prepare:
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
@@ -137,10 +121,6 @@ jobs:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
@@ -156,7 +136,6 @@ jobs:
smoke-prepare:
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
outputs:
@@ -179,7 +158,6 @@ jobs:
smoke:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- smoke-prepare
strategy:
@@ -198,10 +176,6 @@ jobs:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Test
uses: docker/bake-action@v6


@@ -11,31 +11,26 @@ permissions:
on:
pull_request:
types: [opened, edited, labeled, unlabeled, synchronize]
types: [opened, edited, labeled, unlabeled]
jobs:
check-labels:
check-area-label:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
steps:
- name: Missing `area/` label
if: always() && contains(join(github.event.pull_request.labels.*.name, ','), 'impact/') && !contains(join(github.event.pull_request.labels.*.name, ','), 'area/')
if: contains(join(github.event.pull_request.labels.*.name, ','), 'impact/') && !contains(join(github.event.pull_request.labels.*.name, ','), 'area/')
run: |
echo "::error::Every PR with an 'impact/*' label should also have an 'area/*' label"
exit 1
- name: Missing `kind/` label
if: always() && contains(join(github.event.pull_request.labels.*.name, ','), 'impact/') && !contains(join(github.event.pull_request.labels.*.name, ','), 'kind/')
run: |
echo "::error::Every PR with an 'impact/*' label should also have a 'kind/*' label"
exit 1
- name: OK
run: exit 0
check-changelog:
if: contains(join(github.event.pull_request.labels.*.name, ','), 'impact/')
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
env:
HAS_IMPACT_LABEL: ${{ contains(join(github.event.pull_request.labels.*.name, ','), 'impact/') }}
PR_BODY: |
${{ github.event.pull_request.body }}
steps:
@@ -47,23 +42,15 @@ jobs:
# Strip empty lines
desc=$(echo "$block" | awk NF)
if [ "$HAS_IMPACT_LABEL" = "true" ]; then
if [ -z "$desc" ]; then
echo "::error::Changelog section is empty. Please provide a description for the changelog."
exit 1
fi
if [ -z "$desc" ]; then
echo "::error::Changelog section is empty. Please provide a description for the changelog."
exit 1
fi
len=$(echo -n "$desc" | wc -c)
if [[ $len -le 6 ]]; then
echo "::error::Description looks too short: $desc"
exit 1
fi
else
if [ -n "$desc" ]; then
echo "::error::PR has a changelog description, but no changelog label"
echo "::error::Please add the relevant 'impact/' label to the PR or remove the changelog description"
exit 1
fi
len=$(echo -n "$desc" | wc -c)
if [[ $len -le 6 ]]; then
echo "::error::Description looks too short: $desc"
exit 1
fi
echo "This PR will be included in the release notes with the following note:"
@@ -78,16 +65,10 @@ jobs:
# Backports or PR that target a release branch directly should mention the target branch in the title, for example:
# [X.Y backport] Some change that needs backporting to X.Y
# [X.Y] Change directly targeting the X.Y branch
- name: Check release branch
- name: Get branch from PR title
id: title_branch
run: |
# get the intended major version prefix ("[27.1 backport]" -> "27.") from the PR title.
[[ "$PR_TITLE" =~ ^\[([0-9]*\.)[^]]*\] ]] && branch="${BASH_REMATCH[1]}"
run: echo "$PR_TITLE" | sed -n 's/^\[\([0-9]*\.[0-9]*\)[^]]*\].*/branch=\1/p' >> $GITHUB_OUTPUT
# get major version prefix from the release branch ("27.x -> "27.")
[[ "$GITHUB_BASE_REF" =~ ^([0-9]*\.) ]] && target_branch="${BASH_REMATCH[1]}" || target_branch="$GITHUB_BASE_REF"
if [[ "$target_branch" != "$branch" ]] && ! [[ "$GITHUB_BASE_REF" == "master" && "$branch" == "" ]]; then
echo "::error::PR is opened against the $GITHUB_BASE_REF branch, but its title suggests otherwise."
exit 1
fi
- name: Check release branch
if: github.event.pull_request.base.ref != steps.title_branch.outputs.branch && !(github.event.pull_request.base.ref == 'master' && steps.title_branch.outputs.branch == '')
run: echo "::error::PR title suggests targetting the ${{ steps.title_branch.outputs.branch }} branch, but is opened against ${{ github.event.pull_request.base.ref }}" && exit 1


@@ -1,46 +0,0 @@
name: vm
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
vm:
needs:
- validate-dco
uses: ./.github/workflows/.vm.yml
strategy:
fail-fast: false
matrix:
template:
# EL 8 is used for running the tests with cgroup v1.
# Do not upgrade this to EL 9 until formally deprecating the cgroup v1 support.
#
# FIXME: use almalinux-8, then probably no need to keep oraclelinux-8 here.
# On almalinux-8, port forwarding tests are failing:
# https://github.com/moby/moby/pull/49819#issuecomment-2815676000
- template://oraclelinux-8 # Oracle's kernel 5.15
# - template://almalinux-8 # kernel 4.18
with:
template: ${{ matrix.template }}


@@ -22,17 +22,20 @@ jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
test-prepare:
uses: ./.github/workflows/.test-prepare.yml
needs:
- validate-dco
run:
needs: validate-dco
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- test-prepare
uses: ./.github/workflows/.windows.yml
secrets: inherit
strategy:
fail-fast: false
matrix:
storage:
- graphdriver
- snapshotter
storage: ${{ fromJson(needs.test-prepare.outputs.matrix) }}
with:
os: windows-2022
storage: ${{ matrix.storage }}


@@ -26,17 +26,20 @@ jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
test-prepare:
uses: ./.github/workflows/.test-prepare.yml
needs:
- validate-dco
run:
needs: validate-dco
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- test-prepare
uses: ./.github/workflows/.windows.yml
secrets: inherit
strategy:
fail-fast: false
matrix:
storage:
- graphdriver
- snapshotter
storage: ${{ fromJson(needs.test-prepare.outputs.matrix) }}
with:
os: windows-2025
storage: ${{ matrix.storage }}

.gitignore

@@ -14,9 +14,10 @@ thumbs.db
.editorconfig
# build artifacts
/bundles/
/cmd/dockerd/winresources/winres.json
/cmd/dockerd/*.syso
bundles/
cli/winresources/*/*.syso
cli/winresources/*/winres.json
contrib/builder/rpm/*/changelog
# ci artifacts
*.exe


@@ -1,359 +1,135 @@
version: "2"
run:
# prevent golangci-lint from deducing the go version to lint for through go.mod,
# which causes it to fall back to go1.17 semantics.
go: "1.25.4"
concurrency: 2
# Only supported with go modules enabled (build flag -mod=vendor only valid when using modules)
# modules-download-mode: vendor
formatters:
enable:
- gofmt
- goimports
linters:
enable:
- asasalint # Detects "[]any" used as argument for variadic "func(...any)".
- copyloopvar # Detects places where loop variables are copied.
- depguard
- dogsled # Detects assignments with too many blank identifiers.
- dupword # Detects duplicate words.
- durationcheck # Detect cases where two time.Duration values are being multiplied in possibly erroneous ways.
- errorlint # Detects code that will cause problems with the error wrapping scheme introduced in Go 1.13.
- errchkjson # Detects unsupported types passed to json encoding functions and reports if checks for the returned error can be omitted.
- exhaustive # Detects missing options in enum switch statements.
- exptostd # Detects functions from golang.org/x/exp/ that can be replaced by std functions.
- fatcontext # Detects nested contexts in loops and function literals.
- forbidigo
- gocheckcompilerdirectives # Detects invalid go compiler directive comments (//go:).
- gocritic # Detects for bugs, performance and style issues.
- gosec # Detects security problems.
- goimports
- gosec
- gosimple
- govet
- iface # Detects incorrect use of interfaces. Currently only used for "identical" interfaces in the same package.
- importas
- ineffassign
- makezero # Finds slice declarations with non-zero initial length.
- mirror # Detects wrong mirror patterns of bytes/strings usage.
- misspell # Detects commonly misspelled English words in comments.
- nakedret # Detects uses of naked returns.
- nilnesserr # Detects returning nil errors. It combines the features of nilness and nilerr,
- nosprintfhostport # Detects misuse of Sprintf to construct a host with port in a URL.
- reassign # Detects reassigning a top-level variable in another package.
- revive # Metalinter; drop-in replacement for golint.
- spancheck # Detects mistakes with OpenTelemetry/Census spans.
- misspell
- revive
- staticcheck
- thelper
- unconvert # Detects unnecessary type conversions.
- typecheck
- unconvert
- unused
- usestdlibvars # Detects the possibility to use variables/constants from the Go standard library.
- wastedassign # Detects wasted assignment statements.
disable:
- errcheck
- spancheck # FIXME
settings:
depguard:
rules:
main:
deny:
- pkg: "github.com/stretchr/testify/assert"
desc: Use "gotest.tools/v3/assert" instead
- pkg: "github.com/stretchr/testify/require"
desc: Use "gotest.tools/v3/assert" instead
- pkg: "github.com/stretchr/testify/suite"
desc: Do not use
- pkg: "github.com/containerd/containerd/pkg/userns"
desc: Use github.com/moby/sys/userns instead.
- pkg: "github.com/tonistiigi/fsutil"
desc: The fsutil module does not have a stable API, so we should not have a direct dependency unless necessary.
- pkg: "github.com/hashicorp/go-multierror"
desc: "Use errors.Join instead"
run:
concurrency: 2
modules-download-mode: vendor
dupword:
ignore:
- "true" # some tests use this as expected output
- "false" # some tests use this as expected output
- "root" # for tests using "ls" output with files owned by "root:root"
skip-dirs:
- docs
errorlint:
# Check whether fmt.Errorf uses the %w verb for formatting errors.
# See https://github.com/polyfloyd/go-errorlint for caveats.
errorf: false
# Check for plain type assertions and type switches.
asserts: false
linters-settings:
importas:
# Do not allow unaliased imports of aliased packages.
no-unaliased: true
exhaustive:
# Program elements to check for exhaustiveness.
# Default: [ switch ]
check:
- switch
# - map # TODO(thaJeztah): also enable for maps
# Presence of "default" case in switch statements satisfies exhaustiveness,
# even if all enum members are not listed.
# Default: false
#
# TODO(thaJeztah): consider not allowing this to catch new values being added (and falling through to "default")
default-signifies-exhaustive: true
alias:
# Enforce alias to prevent it accidentally being used instead of our
# own errdefs package (or vice-versa).
- pkg: github.com/containerd/containerd/errdefs
alias: cerrdefs
- pkg: github.com/opencontainers/image-spec/specs-go/v1
alias: ocispec
forbidigo:
forbid:
- pkg: ^sync/atomic$
pattern: ^atomic\.(Add|CompareAndSwap|Load|Store|Swap).
msg: Go 1.19 atomic types should be used instead.
- pkg: ^regexp$
pattern: ^regexp\.MustCompile
msg: Use daemon/internal/lazyregexp.New instead.
- pkg: github.com/vishvananda/netlink$
pattern: ^netlink\.(Handle\.)?(AddrList|BridgeVlanList|ChainList|ClassList|ConntrackTableList|ConntrackDeleteFilter$|ConntrackDeleteFilters|DevLinkGetDeviceList|DevLinkGetAllPortList|DevlinkGetDeviceParams|FilterList|FouList|GenlFamilyList|GTPPDPList|LinkByName|LinkByAlias|LinkList|LinkSubscribeWithOptions|NeighList$|NeighProxyList|NeighListExecute|NeighSubscribeWithOptions|LinkGetProtinfo|QdiscList|RdmaLinkList|RdmaLinkByName|RdmaLinkDel|RouteList|RouteListFilteredIter|RuleListFiltered$|RouteSubscribeWithOptions|RuleList$|RuleListFiltered|SocketGet|SocketDiagTCPInfo|SocketDiagTCP|SocketDiagUDPInfo|SocketDiagUDP|UnixSocketDiagInfo|UnixSocketDiag|VDPAGetDevConfigList|VDPAGetDevList|VDPAGetMGMTDevList|XfrmPolicyList|XfrmStateList)
msg: Use internal nlwrap package for EINTR handling.
- pkg: github.com/moby/moby/v2/internal/nlwrap$
pattern: ^nlwrap.Handle.(BridgeVlanList|ChainList|ClassList|ConntrackDeleteFilter$|DevLinkGetDeviceList|DevLinkGetAllPortList|DevlinkGetDeviceParams|FilterList|FouList|GenlFamilyList|GTPPDPList|LinkByAlias|LinkSubscribeWithOptions|NeighList$|NeighProxyList|NeighListExecute|NeighSubscribeWithOptions|LinkGetProtinfo|QdiscList|RdmaLinkList|RdmaLinkByName|RdmaLinkDel|RouteListFilteredIter|RuleListFiltered$|RouteSubscribeWithOptions|RuleList$|RuleListFiltered|SocketGet|SocketDiagTCPInfo|SocketDiagTCP|SocketDiagUDPInfo|SocketDiagUDP|UnixSocketDiagInfo|UnixSocketDiag|VDPAGetDevConfigList|VDPAGetDevList|VDPAGetMGMTDevList)
msg: Add a wrapper to nlwrap.Handle for EINTR handling and update the list in .golangci.yml.
analyze-types: true
govet:
check-shadowing: false
gocritic:
disabled-checks:
- appendAssign
- appendCombine
- assignOp
- builtinShadow
- builtinShadowDecl
- captLocal
- commentedOutCode
- deferInLoop
- dupImport
- dupSubExpr
- elseif
- emptyFallthrough
- equalFold
- evalOrder
- exitAfterDefer
- exposedSyncMutex
- filepathJoin
- hexLiteral
- hugeParam
- ifElseChain
- importShadow
- indexAlloc
- methodExprCall
- nestingReduce
- nilValReturn
- octalLiteral
- paramTypeCombine
- preferStringWriter
- ptrToRefParam
- rangeValCopy
- redundantSprint
- regexpMust
- regexpSimplify
- singleCaseSwitch
- sloppyReassign
- stringXbytes
- typeAssertChain
- typeDefFirst
- typeUnparen
- uncheckedInlineErr
- unlambda
- unnamedResult
- unnecessaryDefer
- unslice
- valSwap
- whyNoLint
enable-all: true
gosec:
excludes:
- G115 # FIXME temporarily suppress 'G115: integer overflow conversion': it produces many hits, some of which may be false positives, and need to be looked at; see https://github.com/moby/moby/issues/48358
gosec:
excludes:
- G104 # G104: Errors unhandled; (TODO: reduce unhandled errors, or explicitly ignore)
- G115 # G115: integer overflow conversion; (TODO: verify these: https://github.com/moby/moby/issues/48358)
- G204 # G204: Subprocess launched with variable; too many false positives.
- G301 # G301: Expect directory permissions to be 0750 or less (also EXC0009); too restrictive
- G302 # G302: Expect file permissions to be 0600 or less (also EXC0009); too restrictive
- G304 # G304: Potential file inclusion via variable.
- G306 # G306: Expect WriteFile permissions to be 0600 or less (too restrictive; also flags "0o644" permissions)
- G307 # G307: Deferring unsafe method "*os.File" on type "Close" (also EXC0008); (TODO: evaluate these and fix where needed: G307: Deferring unsafe method "*os.File" on type "Close")
- G504 # G504: Blocklisted import net/http/cgi: Go versions < 1.6.3 are vulnerable to Httpoxy attack: (CVE-2016-5386); (only affects go < 1.6.3)
govet:
enable-all: true
disable:
- fieldalignment # TODO: evaluate which ones should be updated.
importas:
# Do not allow unaliased imports of aliased packages.
no-unaliased: true
alias:
# Enforce alias to prevent it accidentally being used instead of our
# own errdefs package (or vice-versa).
- pkg: github.com/containerd/errdefs
alias: cerrdefs
- pkg: github.com/containerd/containerd/images
alias: c8dimages
- pkg: github.com/opencontainers/image-spec/specs-go/v1
alias: ocispec
- pkg: github.com/moby/docker-image-spec/specs-go/v1
alias: dockerspec
- pkg: go.etcd.io/bbolt
alias: bolt
# Enforce that gotest.tools/v3/assert/cmp is always aliased as "is"
- pkg: gotest.tools/v3/assert/cmp
alias: is
nakedret:
# Disallow naked returns if func has more lines of code than this setting.
# Default: 30
max-func-lines: 0
revive:
# Only listed rules are applied
# https://github.com/mgechev/revive/blob/HEAD/RULES_DESCRIPTIONS.md
rules:
- name: increment-decrement
# FIXME make sure all packages have a description. Currently, there are many packages without one.
- name: package-comments
disabled: true
- name: redefines-builtin-id
- name: superfluous-else
arguments:
- preserve-scope
- name: use-any
- name: use-errors-new
- name: var-declaration
staticcheck:
checks:
- all
- -QF1008 # Omit embedded fields from selector expression; https://staticcheck.dev/docs/checks/#QF1008
- -ST1000 # Incorrect or missing package comment; https://staticcheck.dev/docs/checks/#ST1000
- -ST1003 # Poorly chosen identifier; https://staticcheck.dev/docs/checks/#ST1003
- -ST1005 # Incorrectly formatted error string; https://staticcheck.dev/docs/checks/#ST1005
spancheck:
# Default: ["end"]
checks:
- end # check that `span.End()` is called
- record-error # check that `span.RecordError(err)` is called when an error is returned
- set-status # check that `span.SetStatus(codes.Error, msg)` is called when an error is returned
thelper:
test:
# Check *testing.T is first param (or after context.Context) of helper function.
first: false
# Check t.Helper() begins helper function.
begin: false
benchmark:
# Check *testing.B is first param (or after context.Context) of helper function.
first: false
# Check b.Helper() begins helper function.
begin: false
tb:
# Check *testing.TB is first param (or after context.Context) of helper function.
first: false
# Check *testing.TB param has name tb.
name: false
# Check tb.Helper() begins helper function.
begin: false
fuzz:
# Check *testing.F is first param (or after context.Context) of helper function.
first: false
# Check f.Helper() begins helper function.
begin: false
usestdlibvars:
# Suggest the use of http.MethodXX.
http-method: true
# Suggest the use of http.StatusXX.
http-status-code: true
exclusions:
depguard:
rules:
# We prefer to use "linters.exclusions.rules" so that new "default" exclusions are not
# automatically inherited. We can decide whether or not to follow upstream
# defaults when updating golangci-lint versions.
# Unfortunately, this means we have to copy the whole exclusion pattern, as
# (unlike the "include" option), the "exclude" option does not take exclusion
# IDs.
#
# These exclusion patterns are copied from the default excludes at:
# https://github.com/golangci/golangci-lint/blob/v1.61.0/pkg/config/issues.go#L11-L104
#
# The default list of exclusions can be found at:
# https://golangci-lint.run/usage/false-positives/#default-exclusions
# Exclude some linters from running on tests files.
- path: _test\.go
linters:
- errcheck
- text: "G404: Use of weak random number generator"
path: _test\.go
linters:
- gosec
# Suppress golint complaining about generated types in api/types/
- text: "type name will be used as (container|volume)\\.(Container|Volume).* by other packages, and that stutters; consider calling this"
path: "api/types/(volume|container)/"
linters:
- revive
# FIXME: ignoring unused assigns to ctx for now; too many hits in libnetwork/xxx functions that set up traces
- text: "assigned to ctx, but never used afterwards"
linters:
- wastedassign
- text: "ineffectual assignment to ctx"
source: "ctx[, ].*=.*\\(ctx[,)]"
linters:
- ineffassign
- text: "SA4006: this value of ctx is never used"
source: "ctx[, ].*=.*\\(ctx[,)]"
linters:
- staticcheck
# Ignore "nested context in function literal (fatcontext)" as we intentionally set up tracing on a base-context for tests.
# FIXME(thaJeztah): see if there's a more idiomatic way to do this.
- text: 'nested context in function literal'
path: '((main|check)_(linux_|)test\.go)|testutil/helpers\.go'
linters:
- fatcontext
- text: '^shadow: declaration of "(ctx|err|ok)" shadows declaration'
linters:
- govet
- text: '^shadow: declaration of "(out)" shadows declaration'
path: _test\.go
linters:
- govet
- text: 'use of `regexp.MustCompile` forbidden'
path: _test\.go
linters:
- forbidigo
- text: 'use of `regexp.MustCompile` forbidden'
path: "daemon/internal/lazyregexp"
linters:
- forbidigo
- text: 'use of `regexp.MustCompile` forbidden'
path: "internal/testutils"
linters:
- forbidigo
- text: 'use of `regexp.MustCompile` forbidden'
path: "libnetwork/cmd/networkdb-test/dbclient"
linters:
- forbidigo
- text: 'use of `regexp.MustCompile` forbidden'
path: "registry/"
linters:
- forbidigo
# Log a warning if an exclusion rule is unused.
# Default: false
warn-unused: true
main:
deny:
- pkg: io/ioutil
desc: The io/ioutil package has been deprecated, see https://go.dev/doc/go1.16#ioutil
- pkg: "github.com/stretchr/testify/assert"
desc: Use "gotest.tools/v3/assert" instead
- pkg: "github.com/stretchr/testify/require"
desc: Use "gotest.tools/v3/assert" instead
- pkg: "github.com/stretchr/testify/suite"
desc: Do not use
revive:
rules:
# FIXME make sure all packages have a description. Currently, there are many packages without one.
- name: package-comments
disabled: true
- name: redefines-builtin-id
issues:
# The default exclusion rules are a bit too permissive, so copying the relevant ones below
exclude-use-default: false
exclude-rules:
# We prefer to use an "exclude-list" so that new "default" exclusions are not
# automatically inherited. We can decide whether or not to follow upstream
# defaults when updating golangci-lint versions.
# Unfortunately, this means we have to copy the whole exclusion pattern, as
# (unlike the "include" option), the "exclude" option does not take exclusion
# IDs.
#
# These exclusion patterns are copied from the default excludes at:
# https://github.com/golangci/golangci-lint/blob/v1.46.2/pkg/config/issues.go#L10-L104
# EXC0001
- text: "Error return value of .((os\\.)?std(out|err)\\..*|.*Close|.*Flush|os\\.Remove(All)?|.*print(f|ln)?|os\\.(Un)?Setenv). is not checked"
linters:
- errcheck
# EXC0006
- text: "Use of unsafe calls should be audited"
linters:
- gosec
# EXC0007
- text: "Subprocess launch(ed with variable|ing should be audited)"
linters:
- gosec
# EXC0008
# TODO: evaluate these and fix where needed: G307: Deferring unsafe method "*os.File" on type "Close" (gosec)
- text: "(G104|G307)"
linters:
- gosec
# EXC0009
- text: "(Expect directory permissions to be 0750 or less|Expect file permissions to be 0600 or less)"
linters:
- gosec
# EXC0010
- text: "Potential file inclusion via variable"
linters:
- gosec
# Looks like the match in "EXC0007" above doesn't catch this one
# TODO: consider upstreaming this to golangci-lint's default exclusion rules
- text: "G204: Subprocess launched with a potential tainted input or cmd arguments"
linters:
- gosec
# Looks like the match in "EXC0009" above doesn't catch this one
# TODO: consider upstreaming this to golangci-lint's default exclusion rules
- text: "G306: Expect WriteFile permissions to be 0600 or less"
linters:
- gosec
# Exclude some linters from running on tests files.
- path: _test\.go
linters:
- errcheck
- gosec
# Suppress golint complaining about generated types in api/types/
- text: "type name will be used as (container|volume)\\.(Container|Volume).* by other packages, and that stutters; consider calling this"
path: "api/types/(volume|container)/"
linters:
- revive
# FIXME temporarily suppress these (see https://github.com/gotestyourself/gotest.tools/issues/272)
- text: "SA1019: (assert|cmp|is)\\.ErrorType is deprecated"
linters:
- staticcheck
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-issues-per-linter: 0
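
As an illustration (a minimal sketch using only the standard library, not code from the repository): the `forbidigo` rule above on `sync/atomic` steers code away from the function-style helpers such as `atomic.AddInt64` and towards the typed atomics added in Go 1.19, which cannot be read or written non-atomically by accident.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	// Flagged by the forbidigo rule (if used): function-style atomics on a raw integer.
	//   var legacy int64
	//   atomic.AddInt64(&legacy, 1)

	// Preferred: a Go 1.19 typed atomic. Every access goes through its methods.
	var counter atomic.Int64

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter.Add(1)
		}()
	}
	wg.Wait()

	fmt.Println(counter.Load()) // always prints 10
}
```

The same substitution applies to the other typed atomics (`atomic.Bool`, `atomic.Uint64`, `atomic.Pointer[T]`, and so on).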

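Similarly, a minimal sketch of the span handling that the `spancheck` settings above describe. It assumes the `go.opentelemetry.io/otel` API module is available; the tracer name and the `doWork`/`step` functions are made up for illustration.

```go
package main

import (
	"context"
	"errors"
	"fmt"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/codes"
)

// doWork demonstrates the three spancheck checks: the span is always ended,
// an error is recorded on the span, and the span status reflects the error.
func doWork(ctx context.Context) error {
	ctx, span := otel.Tracer("example").Start(ctx, "doWork")
	defer span.End() // "end" check

	if err := step(ctx); err != nil {
		span.RecordError(err)                    // "record-error" check
		span.SetStatus(codes.Error, err.Error()) // "set-status" check
		return err
	}
	return nil
}

// step is a stand-in for real work; it fails so the error path is exercised.
func step(context.Context) error {
	return errors.New("boom")
}

func main() {
	fmt.Println(doWork(context.Background()))
}
```

Without a configured SDK the tracer is a no-op, so the example still runs and simply prints the error.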

@@ -7,7 +7,6 @@
#
# For an explanation of this file format, consult gitmailmap(5).
Aaron Yoshitake <airandfingers@gmail.com>
Aaron L. Xu <liker.xu@foxmail.com>
Aaron L. Xu <liker.xu@foxmail.com> <likexu@harmonycloud.cn>
Aaron Lehmann <alehmann@netflix.com>
@@ -31,11 +30,9 @@ Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.akihiro@lab.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.kyoto@gmail.com>
Akshay Moghe <akshay.moghe@gmail.com>
Alano Terblanche <alano.terblanche@docker.com>
Alano Terblanche <alano.terblanche@docker.com> <18033717+Benehiko@users.noreply.github.com>
Albin Kerouanton <albinker@gmail.com>
Albin Kerouanton <albinker@gmail.com> <557933+akerouanton@users.noreply.github.com>
Albin Kerouanton <albinker@gmail.com> <albin@akerouanton.name>
Albin Kerouanton <albinker@gmail.com> <557933+akerouanton@users.noreply.github.com>
Aleksa Sarai <asarai@suse.de>
Aleksa Sarai <asarai@suse.de> <asarai@suse.com>
Aleksa Sarai <asarai@suse.de> <cyphar@cyphar.com>
@@ -62,8 +59,6 @@ Allen Sun <allensun.shl@alibaba-inc.com> <allen.sun@daocloud.io>
Allen Sun <allensun.shl@alibaba-inc.com> <shlallen1990@gmail.com>
Anca Iordache <anca.iordache@docker.com>
Andrea Denisse Gómez <crypto.andrea@protonmail.ch>
Andrew Baxter <423qpsxzhh8k3h@s.rendaw.me>
Andrew Baxter <423qpsxzhh8k3h@s.rendaw.me> andrew <>
Andrew Kim <taeyeonkim90@gmail.com>
Andrew Kim <taeyeonkim90@gmail.com> <akim01@fortinet.com>
Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@microsoft.com>
@@ -94,10 +89,6 @@ Arnaud Rebillout <arnaud.rebillout@collabora.com>
Arnaud Rebillout <arnaud.rebillout@collabora.com> <elboulangero@gmail.com>
Arthur Gautier <baloo@gandi.net> <superbaloo+registrations.github@superbaloo.net>
Artur Meyster <arthurfbi@yahoo.com>
Austin Vazquez <austin.vazquez.dev@gmail.com>
Austin Vazquez <austin.vazquez.dev@gmail.com> <55906459+austinvazquez@users.noreply.github.com>
Austin Vazquez <austin.vazquez.dev@gmail.com> <macedonv@amazon.com>
Austin Vazquez <austin.vazquez.dev@gmail.com> <austin.vazquez@docker.com>
Avi Miller <avi.miller@oracle.com> <avi.miller@gmail.com>
Ben Bonnefoy <frenchben@docker.com>
Ben Golub <ben.golub@dotcloud.com>
@@ -128,7 +119,6 @@ Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.home>
Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.local>
Brian Goff <cpuguy83@gmail.com> <brian.goff@microsoft.com>
Brian Goff <cpuguy83@gmail.com> <cpuguy@hey.com>
Calvin Liu <flycalvin@qq.com>
Cameron Sparr <gh@sparr.email>
Carlos de Paula <me@carlosedp.com>
Chander Govindarajan <chandergovind@gmail.com>
@@ -140,8 +130,6 @@ Chen Mingjie <chenmingjie0828@163.com>
Chen Qiu <cheney-90@hotmail.com>
Chen Qiu <cheney-90@hotmail.com> <21321229@zju.edu.cn>
Chengfei Shang <cfshang@alauda.io>
Chengyu Zhu <hudson@cyzhu.com>
Chentianze <cmoman@126.com>
Chris Dias <cdias@microsoft.com>
Chris McKinnel <chris.mckinnel@tangentlabs.co.uk>
Chris Price <cprice@mirantis.com>
@@ -150,8 +138,6 @@ Chris Telfer <ctelfer@docker.com>
Chris Telfer <ctelfer@docker.com> <ctelfer@users.noreply.github.com>
Christopher Biscardi <biscarch@sketcht.com>
Christopher Latham <sudosurootdev@gmail.com>
Christopher Petito <chrisjpetito@gmail.com>
Christopher Petito <chrisjpetito@gmail.com> <47751006+krissetto@users.noreply.github.com>
Christy Norman <christy@linux.vnet.ibm.com>
Chun Chen <ramichen@tencent.com> <chenchun.feed@gmail.com>
Corbin Coleman <corbin.coleman@docker.com>
@@ -187,8 +173,6 @@ Dattatraya Kumbhar <dattatraya.kumbhar@gslab.com>
Dave Goodchild <buddhamagnet@gmail.com>
Dave Henderson <dhenderson@gmail.com> <Dave.Henderson@ca.ibm.com>
Dave Tucker <dt@docker.com> <dave@dtucker.co.uk>
David Dooling <dooling@gmail.com>
David Dooling <dooling@gmail.com> <david.dooling@docker.com>
David M. Karr <davidmichaelkarr@gmail.com>
David Sheets <dsheets@docker.com> <sheets@alum.mit.edu>
David Sissitka <me@dsissitka.com>
@@ -235,8 +219,6 @@ Felix Hupfeld <felix@quobyte.com> <quofelix@users.noreply.github.com>
Felix Ruess <felix.ruess@gmail.com> <felix.ruess@roboception.de>
Feng Yan <fy2462@gmail.com>
Fengtu Wang <wangfengtu@huawei.com> <wangfengtu@huawei.com>
Filipe Pina <hzlu1ot0@duck.com>
Filipe Pina <hzlu1ot0@duck.com> <636320+fopina@users.noreply.github.com>
Francisco Carriedo <fcarriedo@gmail.com>
Frank Rosquin <frank.rosquin+github@gmail.com> <frank.rosquin@gmail.com>
Frank Yang <yyb196@gmail.com>
@@ -288,7 +270,6 @@ Hollie Teal <hollie@docker.com> <hollie.teal@docker.com>
Hollie Teal <hollie@docker.com> <hollietealok@users.noreply.github.com>
hsinko <21551195@zju.edu.cn> <hsinko@users.noreply.github.com>
Hu Keping <hukeping@huawei.com>
Huajin Tong <fliterdashen@gmail.com>
Hui Kang <hkang.sunysb@gmail.com>
Hui Kang <hkang.sunysb@gmail.com> <kangh@us.ibm.com>
Huu Nguyen <huu@prismskylabs.com> <whoshuu@gmail.com>
@@ -355,8 +336,6 @@ John Howard <github@lowenna.com> <john.howard@microsoft.com>
John Howard <github@lowenna.com> <john@lowenna.com>
John Stephens <johnstep@docker.com> <johnstep@users.noreply.github.com>
Jon Surrell <jon.surrell@gmail.com> <jon.surrell@automattic.com>
Jonathan A. Sternberg <jonathansternberg@gmail.com>
Jonathan A. Sternberg <jonathansternberg@gmail.com> <jonathan.sternberg@docker.com>
Jonathan Choy <jonathan.j.choy@gmail.com>
Jonathan Choy <jonathan.j.choy@gmail.com> <oni@tetsujinlabs.com>
Jordan Arentsen <blissdev@gmail.com>
@@ -499,20 +478,19 @@ Mikael Davranche <mikael.davranche@corp.ovh.com> <mikael.davranche@corp.ovh.net>
Mike Casas <mkcsas0@gmail.com> <mikecasas@users.noreply.github.com>
Mike Goelzer <mike.goelzer@docker.com> <mgoelzer@docker.com>
Milas Bowman <devnull@milas.dev>
Milas Bowman <devnull@milas.dev> <milas.bowman@docker.com>
Milas Bowman <devnull@milas.dev> <milasb@gmail.com>
Milas Bowman <devnull@milas.dev> <milas.bowman@docker.com>
Milind Chawre <milindchawre@gmail.com>
Misty Stanley-Jones <misty@docker.com> <misty@apache.org>
Mohammad Banikazemi <MBanikazemi@gmail.com>
Mohammad Banikazemi <MBanikazemi@gmail.com> <mb@us.ibm.com>
Mohd Sadiq <mohdsadiq058@gmail.com> <42430865+msadiq058@users.noreply.github.com>
Mohd Sadiq <mohdsadiq058@gmail.com> <mohdsadiq058@gmail.com>
Mohd Sadiq <mohdsadiq058@gmail.com> <42430865+msadiq058@users.noreply.github.com>
Mohit Soni <mosoni@ebay.com> <mohitsoni1989@gmail.com>
Moorthy RS <rsmoorthy@gmail.com> <rsmoorthy@users.noreply.github.com>
Moysés Borges <moysesb@gmail.com>
Moysés Borges <moysesb@gmail.com> <moyses.furtado@wplex.com.br>
mrfly <mr.wrfly@gmail.com> <wrfly@users.noreply.github.com>
Myeongjoon Kim <kimmj8409@gmail.com>
Nace Oroz <orkica@gmail.com>
Natasha Jarus <linuxmercedes@gmail.com>
Nathan LeClaire <nathan.leclaire@docker.com> <nathan.leclaire@gmail.com>
@@ -532,11 +510,8 @@ Olli Janatuinen <olli.janatuinen@gmail.com> <olljanat@users.noreply.github.com>
Onur Filiz <onur.filiz@microsoft.com>
Onur Filiz <onur.filiz@microsoft.com> <ofiliz@users.noreply.github.com>
Ouyang Liduo <oyld0210@163.com>
Patrick St. laurent <patrick@saint-laurent.us>
Patrick Stapleton <github@gdi2290.com>
Paul Liljenberg <liljenberg.paul@gmail.com> <letters@paulnotcom.se>
Paweł Gronowski <pawel.gronowski@docker.com>
Paweł Gronowski <pawel.gronowski@docker.com> <me@woland.xyz>
Pavel Tikhomirov <ptikhomirov@virtuozzo.com> <ptikhomirov@parallels.com>
Pawel Konczalski <mail@konczalski.de>
Peter Choi <phkchoi89@gmail.com> <reikani@Peters-MacBook-Pro.local>
@@ -558,21 +533,16 @@ Qin TianHuan <tianhuan@bingotree.cn>
Ray Tsang <rayt@google.com> <saturnism@users.noreply.github.com>
Renaud Gaubert <rgaubert@nvidia.com> <renaud.gaubert@gmail.com>
Richard Scothern <richard.scothern@gmail.com>
Rob Murray <rob.murray@docker.com>
Rob Murray <rob.murray@docker.com> <148866618+robmry@users.noreply.github.com>
Robert Terhaar <rterhaar@atlanticdynamic.com> <robbyt@users.noreply.github.com>
Roberto G. Hashioka <roberto.hashioka@docker.com> <roberto_hashioka@hotmail.com>
Roberto Muñoz Fernández <robertomf@gmail.com> <roberto.munoz.fernandez.contractor@bbva.com>
Robin Thoni <robin@rthoni.com>
Rodrigo Campos <rodrigoca@microsoft.com>
Rodrigo Campos <rodrigoca@microsoft.com> <rodrigo@kinvolk.io>
Roman Dudin <katrmr@gmail.com> <decadent@users.noreply.github.com>
Rong Zhang <rongzhang@alauda.io>
Rongxiang Song <tinysong1226@gmail.com>
Rony Weng <ronyweng@synology.com>
Ross Boucher <rboucher@gmail.com>
Rui Cao <ruicao@alauda.io>
Rui JingAn <quiterace@gmail.com>
Runshen Zhu <runshen.zhu@gmail.com>
Ryan Stelly <ryan.stelly@live.com>
Ryoga Saito <contact@proelbtn.com>
@@ -593,7 +563,6 @@ Sebastiaan van Stijn <github@gone.nl> <sebastiaan@ws-key-sebas3.dpi1.dpi>
Sebastiaan van Stijn <github@gone.nl> <thaJeztah@users.noreply.github.com>
Sebastian Thomschke <sebthom@users.noreply.github.com>
Seongyeol Lim <seongyeol37@gmail.com>
Serhii Nakon <serhii.n@thescimus.com>
Shaun Kaasten <shaunk@gmail.com>
Shawn Landden <shawn@churchofgit.com> <shawnlandden@gmail.com>
Shengbo Song <thomassong@tencent.com>

AUTHORS

@@ -2,10 +2,7 @@
# This file lists all contributors to the repository.
# See hack/generate-authors.sh to make modifications.
17neverends <ionianrise@gmail.com>
7sunarni <710720732@qq.com>
Aanand Prasad <aanand.prasad@gmail.com>
Aarni Koskela <akx@iki.fi>
Aaron Davidson <aaron@databricks.com>
Aaron Feng <aaron.feng@gmail.com>
Aaron Hnatiw <aaron@griddio.com>
@@ -13,8 +10,6 @@ Aaron Huslage <huslage@gmail.com>
Aaron L. Xu <liker.xu@foxmail.com>
Aaron Lehmann <alehmann@netflix.com>
Aaron Welch <welch@packet.net>
Aaron Yoshitake <airandfingers@gmail.com>
Abdur Rehman <abdur_rehman@mentor.com>
Abel Muiño <amuino@gmail.com>
Abhijeet Kasurde <akasurde@redhat.com>
Abhinandan Prativadi <aprativadi@gmail.com>
@@ -23,17 +18,14 @@ Abhishek Chanda <abhishek.becs@gmail.com>
Abhishek Sharma <abhishek@asharma.me>
Abin Shahab <ashahab@altiscale.com>
Abirdcfly <fp544037857@gmail.com>
Abubacarr Ceesay <abubacarr671@gmail.com>
Ada Mancini <ada@docker.com>
Adam Avilla <aavilla@yp.com>
Adam Dobrawy <naczelnik@jawnosc.tk>
Adam Eijdenberg <adam.eijdenberg@gmail.com>
Adam Kunk <adam.kunk@tiaa-cref.org>
Adam Lamers <adam.lamers@wmsdev.pl>
Adam Miller <admiller@redhat.com>
Adam Mills <adam@armills.info>
Adam Pointer <adam.pointer@skybettingandgaming.com>
Adam Simon <adamsimon85100@gmail.com>
Adam Singer <financeCoding@gmail.com>
Adam Thornton <adam.thornton@maryville.com>
Adam Walz <adam@adamwalz.net>
@@ -50,7 +42,6 @@ Adrian Mouat <adrian.mouat@gmail.com>
Adrian Oprea <adrian@codesi.nz>
Adrien Folie <folie.adrien@gmail.com>
Adrien Gallouët <adrien@gallouet.fr>
Adrien Pompée <adrien.pompee@atmosphere.aero>
Ahmed Kamal <email.ahmedkamal@googlemail.com>
Ahmet Alp Balkan <ahmetb@microsoft.com>
Aidan Feldman <aidan.feldman@gmail.com>
@@ -71,7 +62,6 @@ alambike <alambike@gmail.com>
Alan Hoyle <alan@alanhoyle.com>
Alan Scherger <flyinprogrammer@gmail.com>
Alan Thompson <cloojure@gmail.com>
Alano Terblanche <alano.terblanche@docker.com>
Albert Callarisa <shark234@gmail.com>
Albert Zhang <zhgwenming@gmail.com>
Albin Kerouanton <albinker@gmail.com>
@@ -83,7 +73,6 @@ Aleksandrs Fadins <aleks@s-ko.net>
Alena Prokharchyk <alena@rancher.com>
Alessandro Boch <aboch@tetrationanalytics.com>
Alessio Biancalana <dottorblaster@gmail.com>
Alessio Perugini <alessio@perugini.xyz>
Alex Chan <alex@alexwlchan.net>
Alex Chen <alexchenunix@gmail.com>
Alex Coventry <alx@empirical.com>
@@ -128,7 +117,6 @@ amangoel <amangoel@gmail.com>
Amen Belayneh <amenbelayneh@gmail.com>
Ameya Gawde <agawde@mirantis.com>
Amir Goldstein <amir73il@aquasec.com>
AmirBuddy <badinlu.amirhossein@gmail.com>
Amit Bakshi <ambakshi@gmail.com>
Amit Krishnan <amit.krishnan@oracle.com>
Amit Shukla <amit.shukla@docker.com>
@@ -153,7 +141,6 @@ Andreas Tiefenthaler <at@an-ti.eu>
Andrei Gherzan <andrei@resin.io>
Andrei Ushakov <aushakov@netflix.com>
Andrei Vagin <avagin@gmail.com>
Andrew Baxter <423qpsxzhh8k3h@s.rendaw.me>
Andrew C. Bodine <acbodine@us.ibm.com>
Andrew Clay Shafer <andrewcshafer@gmail.com>
Andrew Duckworth <grillopress@gmail.com>
@@ -174,12 +161,10 @@ Andrew Po <absourd.noise@gmail.com>
Andrew Weiss <andrew.weiss@docker.com>
Andrew Williams <williams.andrew@gmail.com>
Andrews Medina <andrewsmedina@gmail.com>
Andrey Epifanov <aepifanov@mirantis.com>
Andrey Kolomentsev <andrey.kolomentsev@docker.com>
Andrey Petrov <andrey.petrov@shazow.net>
Andrey Stolbovsky <andrey.stolbovsky@gmail.com>
André Martins <aanm90@gmail.com>
Andrés Maldonado <maldonado@codelutin.com>
Andy Chambers <anchambers@paypal.com>
andy diller <dillera@gmail.com>
Andy Goldstein <agoldste@redhat.com>
@@ -194,7 +179,6 @@ Anes Hasicic <anes.hasicic@gmail.com>
Angel Velazquez <angelcar@amazon.com>
Anil Belur <askb23@gmail.com>
Anil Madhavapeddy <anil@recoil.org>
Anirudh Aithal <aithal@amazon.com>
Ankit Jain <ajatkj@yahoo.co.in>
Ankush Agarwal <ankushagarwal11@gmail.com>
Anonmily <michelle@michelleliu.io>
@@ -204,13 +188,11 @@ Anthon van der Neut <anthon@mnt.org>
Anthony Baire <Anthony.Baire@irisa.fr>
Anthony Bishopric <git@anthonybishopric.com>
Anthony Dahanne <anthony.dahanne@gmail.com>
Anthony Nandaa <profnandaa@gmail.com>
Anthony Sottile <asottile@umich.edu>
Anton Löfgren <anton.lofgren@gmail.com>
Anton Nikitin <anton.k.nikitin@gmail.com>
Anton Polonskiy <anton.polonskiy@gmail.com>
Anton Tiurin <noxiouz@yandex.ru>
Antonio Aguilar <antonio@zoftko.com>
Antonio Murdaca <antonio.murdaca@gmail.com>
Antonis Kalipetis <akalipetis@gmail.com>
Antony Messerli <amesserl@rackspace.com>
@@ -233,13 +215,13 @@ Artur Meyster <arthurfbi@yahoo.com>
Arun Gupta <arun.gupta@gmail.com>
Asad Saeeduddin <masaeedu@gmail.com>
Asbjørn Enge <asbjorn@hanafjedle.net>
Ashly Mathew <ashly.mathew@sap.com>
Austin Vazquez <austin.vazquez.dev@gmail.com>
Austin Vazquez <macedonv@amazon.com>
averagehuman <averagehuman@users.noreply.github.com>
Avi Das <andas222@gmail.com>
Avi Kivity <avi@scylladb.com>
Avi Miller <avi.miller@oracle.com>
Avi Vaid <avaid1996@gmail.com>
ayoshitake <airandfingers@gmail.com>
Azat Khuyiyakhmetov <shadow_uz@mail.ru>
Bao Yonglei <baoyonglei@huawei.com>
Bardia Keyoumarsi <bkeyouma@ucsc.edu>
@@ -300,7 +282,6 @@ Brandon Liu <bdon@bdon.org>
Brandon Philips <brandon.philips@coreos.com>
Brandon Rhodes <brandon@rhodesmill.org>
Brendan Dixon <brendand@microsoft.com>
Brendon Smith <bws@bws.bio>
Brennan Kinney <5098581+polarathene@users.noreply.github.com>
Brent Salisbury <brent.salisbury@docker.com>
Brett Higgins <brhiggins@arbor.net>
@@ -335,7 +316,6 @@ Burke Libbey <burke@libbey.me>
Byung Kang <byung.kang.ctr@amrdec.army.mil>
Caleb Spare <cespare@gmail.com>
Calen Pennington <cale@edx.org>
Calvin Liu <flycalvin@qq.com>
Cameron Boehmer <cameron.boehmer@gmail.com>
Cameron Sparr <gh@sparr.email>
Cameron Spear <cameronspear@gmail.com>
@@ -350,20 +330,17 @@ Carlos Alexandro Becker <caarlos0@gmail.com>
Carlos de Paula <me@carlosedp.com>
Carlos Sanchez <carlos@apache.org>
Carol Fager-Higgins <carol.fager-higgins@docker.com>
carsontham <carsontham@outlook.com>
Cary <caryhartline@users.noreply.github.com>
Casey Bisson <casey.bisson@joyent.com>
Catalin Pirvu <pirvu.catalin94@gmail.com>
Ce Gao <ce.gao@outlook.com>
Cedric Davies <cedricda@microsoft.com>
Cesar Talledo <cesar.talledo@docker.com>
Cezar Sa Espinola <cezarsa@gmail.com>
Chad Swenson <chadswen@gmail.com>
Chance Zibolski <chance.zibolski@gmail.com>
Chander Govindarajan <chandergovind@gmail.com>
Chanhun Jeong <keyolk@gmail.com>
Chao Wang <wangchao.fnst@cn.fujitsu.com>
Charity Kathure <ckathure@microsoft.com>
Charles Chan <charleswhchan@users.noreply.github.com>
Charles Hooper <charles.hooper@dotcloud.com>
Charles Law <claw@conduce.com>
@@ -385,8 +362,6 @@ Chen Qiu <cheney-90@hotmail.com>
Cheng-mean Liu <soccerl@microsoft.com>
Chengfei Shang <cfshang@alauda.io>
Chengguang Xu <cgxu519@gmx.com>
Chengyu Zhu <hudson@cyzhu.com>
Chentianze <cmoman@126.com>
Chenyang Yan <memory.yancy@gmail.com>
chenyuzhu <chenyuzhi@oschina.cn>
Chetan Birajdar <birajdar.chetan@gmail.com>
@@ -434,7 +409,6 @@ Christopher Crone <christopher.crone@docker.com>
Christopher Currie <codemonkey+github@gmail.com>
Christopher Jones <tophj@linux.vnet.ibm.com>
Christopher Latham <sudosurootdev@gmail.com>
Christopher Petito <chrisjpetito@gmail.com>
Christopher Rigor <crigor@gmail.com>
Christy Norman <christy@linux.vnet.ibm.com>
Chun Chen <ramichen@tencent.com>
@@ -500,7 +474,6 @@ Daniel Farrell <dfarrell@redhat.com>
Daniel Garcia <daniel@danielgarcia.info>
Daniel Gasienica <daniel@gasienica.ch>
Daniel Grunwell <mwgrunny@gmail.com>
Daniel Guns <danbguns@gmail.com>
Daniel Helfand <helfand.4@gmail.com>
Daniel Hiltgen <daniel.hiltgen@docker.com>
Daniel J Walsh <dwalsh@redhat.com>
@@ -696,7 +669,6 @@ Erik Hollensbe <github@hollensbe.org>
Erik Inge Bolsø <knan@redpill-linpro.com>
Erik Kristensen <erik@erikkristensen.com>
Erik Sipsma <erik@sipsma.dev>
Erik Sjölund <erik.sjolund@gmail.com>
Erik St. Martin <alakriti@gmail.com>
Erik Weathers <erikdw@gmail.com>
Erno Hopearuoho <erno.hopearuoho@gmail.com>
@@ -759,7 +731,6 @@ Feroz Salam <feroz.salam@sourcegraph.com>
Ferran Rodenas <frodenas@gmail.com>
Filipe Brandenburger <filbranden@google.com>
Filipe Oliveira <contato@fmoliveira.com.br>
Filipe Pina <hzlu1ot0@duck.com>
Flavio Castelli <fcastelli@suse.com>
Flavio Crisciani <flavio.crisciani@docker.com>
Florian <FWirtz@users.noreply.github.com>
@@ -784,7 +755,6 @@ Frank Macreery <frank@macreery.com>
Frank Rosquin <frank.rosquin+github@gmail.com>
Frank Villaro-Dixon <frank.villarodixon@merkle.com>
Frank Yang <yyb196@gmail.com>
François Scala <github@arcenik.net>
Fred Lifton <fred.lifton@docker.com>
Frederick F. Kautz IV <fkautz@redhat.com>
Frederico F. de Oliveira <FreddieOliveira@users.noreply.github.com>
@@ -805,7 +775,6 @@ Gabriel L. Somlo <gsomlo@gmail.com>
Gabriel Linder <linder.gabriel@gmail.com>
Gabriel Monroy <gabriel@opdemand.com>
Gabriel Nicolas Avellaneda <avellaneda.gabriel@gmail.com>
Gabriel Tomitsuka <gabriel@tomitsuka.com>
Gaetan de Villele <gdevillele@gmail.com>
Galen Sampson <galen.sampson@gmail.com>
Gang Qiao <qiaohai8866@gmail.com>
@@ -820,9 +789,7 @@ GennadySpb <lipenkov@gmail.com>
Geoff Levand <geoff@infradead.org>
Geoffrey Bachelet <grosfrais@gmail.com>
Geon Kim <geon0250@gmail.com>
George Adams <georgeadams1995@gmail.com>
George Kontridze <george@bugsnag.com>
George Ma <mayangang@outlook.com>
George MacRorie <gmacr31@gmail.com>
George Xie <georgexsh@gmail.com>
Georgi Hristozov <georgi@forkbomb.nl>
@@ -849,7 +816,6 @@ Gopikannan Venugopalsamy <gopikannan.venugopalsamy@gmail.com>
Gosuke Miyashita <gosukenator@gmail.com>
Gou Rao <gou@portworx.com>
Govinda Fichtner <govinda.fichtner@googlemail.com>
Grace Choi <grace.54109@gmail.com>
Grant Millar <rid@cylo.io>
Grant Reaber <grant.reaber@gmail.com>
Graydon Hoare <graydon@pobox.com>
@@ -877,7 +843,6 @@ haining.cao <haining.cao@daocloud.io>
Hakan Özler <hakan.ozler@kodcu.com>
Hamish Hutchings <moredhel@aoeu.me>
Hannes Ljungberg <hannes@5monkeys.se>
Hannes Ortmeier <ortmeier.hannes@gmail.com>
Hans Kristian Flaatten <hans@starefossen.com>
Hans Rødtang <hansrodtang@gmail.com>
Hao Shu Wei <haoshuwei24@gmail.com>
@@ -897,7 +862,6 @@ heartlock <21521209@zju.edu.cn>
Hector Castro <hectcastro@gmail.com>
Helen Xie <chenjg@harmonycloud.cn>
Henning Sprang <henning.sprang@gmail.com>
Henry Wang <henwang@amazon.com>
Hiroshi Hatake <hatake@clear-code.com>
Hiroyuki Sasagawa <hs19870702@gmail.com>
Hobofan <goisser94@gmail.com>
@@ -911,8 +875,6 @@ Hsing-Yu (David) Chen <davidhsingyuchen@gmail.com>
hsinko <21551195@zju.edu.cn>
Hu Keping <hukeping@huawei.com>
Hu Tao <hutao@cn.fujitsu.com>
Huajin Tong <fliterdashen@gmail.com>
huang-jl <1046678590@qq.com>
HuanHuan Ye <logindaveye@gmail.com>
Huanzhong Zhang <zhanghuanzhong90@gmail.com>
Huayi Zhang <irachex@gmail.com>
@@ -947,7 +909,6 @@ Illo Abdulrahim <abdulrahim.illo@nokia.com>
Ilya Dmitrichenko <errordeveloper@gmail.com>
Ilya Gusev <mail@igusev.ru>
Ilya Khlopotov <ilya.khlopotov@gmail.com>
imalasong <2879499479@qq.com>
imre Fitos <imre.fitos+github@gmail.com>
inglesp <peter.inglesby@gmail.com>
Ingo Gottwald <in.gottwald@gmail.com>
@@ -965,7 +926,6 @@ J Bruni <joaohbruni@yahoo.com.br>
J. Nunn <jbnunn@gmail.com>
Jack Danger Canty <jackdanger@squareup.com>
Jack Laxson <jackjrabbit@gmail.com>
Jack Walker <90711509+j2walker@users.noreply.github.com>
Jacob Atzen <jacob@jacobatzen.dk>
Jacob Edelman <edelman.jd@gmail.com>
Jacob Tomlinson <jacob@tom.linson.uk>
@@ -992,7 +952,6 @@ James Nugent <james@jen20.com>
James Sanders <james3sanders@gmail.com>
James Turnbull <james@lovedthanlost.net>
James Watkins-Harvey <jwatkins@progi-media.com>
Jameson Hyde <jameson.hyde@docker.com>
Jamie Hannaford <jamie@limetree.org>
Jamshid Afshar <jafshar@yahoo.com>
Jan Breig <git@pygos.space>
@@ -1010,7 +969,6 @@ Jannick Fahlbusch <git@jf-projects.de>
Januar Wayong <januar@gmail.com>
Jared Biel <jared.biel@bolderthinking.com>
Jared Hocutt <jaredh@netapp.com>
Jaroslav Jindrak <dzejrou@gmail.com>
Jaroslaw Zabiello <hipertracker@gmail.com>
Jasmine Hegman <jasmine@jhegman.com>
Jason A. Donenfeld <Jason@zx2c4.com>
@@ -1026,7 +984,6 @@ Jason Shepherd <jason@jasonshepherd.net>
Jason Smith <jasonrichardsmith@gmail.com>
Jason Sommer <jsdirv@gmail.com>
Jason Stangroome <jason@codeassassin.com>
Jasper Siepkes <siepkes@serviceplanet.nl>
Javier Bassi <javierbassi@gmail.com>
jaxgeller <jacksongeller@gmail.com>
Jay <teguhwpurwanto@gmail.com>
@@ -1055,7 +1012,6 @@ Jeffrey Bolle <jeffreybolle@gmail.com>
Jeffrey Morgan <jmorganca@gmail.com>
Jeffrey van Gogh <jvg@google.com>
Jenny Gebske <jennifer@gebske.de>
Jeongseok Kang <piono623@naver.com>
Jeremy Chambers <jeremy@thehipbot.com>
Jeremy Grosser <jeremy@synack.me>
Jeremy Huntwork <jhuntwork@lightcubesolutions.com>
@@ -1073,7 +1029,6 @@ Jezeniel Zapanta <jpzapanta22@gmail.com>
Jhon Honce <jhonce@redhat.com>
Ji.Zhilong <zhilongji@gmail.com>
Jian Liao <jliao@alauda.io>
Jian Zeng <anonymousknight96@gmail.com>
Jian Zhang <zhangjian.fnst@cn.fujitsu.com>
Jiang Jinyang <jjyruby@gmail.com>
Jianyong Wu <jianyong.wu@arm.com>
@@ -1091,17 +1046,13 @@ Jim Perrin <jperrin@centos.org>
Jimmy Cuadra <jimmy@jimmycuadra.com>
Jimmy Puckett <jimmy.puckett@spinen.com>
Jimmy Song <rootsongjc@gmail.com>
jinjiadu <jinjiadu@aliyun.com>
Jinsoo Park <cellpjs@gmail.com>
Jintao Zhang <zhangjintao9020@gmail.com>
Jiri Appl <jiria@microsoft.com>
Jiri Popelka <jpopelka@redhat.com>
Jiuyue Ma <majiuyue@huawei.com>
Jiří Moravčík <jiri.moravcik@gmail.com>
Jiří Župka <jzupka@redhat.com>
jjimbo137 <115816493+jjimbo137@users.noreply.github.com>
Joakim Roubert <joakim.roubert@axis.com>
Joan Grau <grautxo.dev@proton.me>
Joao Fernandes <joao.fernandes@docker.com>
Joao Trindade <trindade.joao@gmail.com>
Joe Beda <joe.github@bedafamily.com>
@@ -1142,7 +1093,6 @@ Jon Johnson <jonjohnson@google.com>
Jon Surrell <jon.surrell@gmail.com>
Jon Wedaman <jweede@gmail.com>
Jonas Dohse <jonas@dohse.ch>
Jonas Geiler <git@jonasgeiler.com>
Jonas Heinrich <Jonas@JonasHeinrich.com>
Jonas Pfenniger <jonas@pfenniger.name>
Jonathan A. Schweder <jonathanschweder@gmail.com>
@@ -1186,7 +1136,6 @@ Josiah Kiehl <jkiehl@riotgames.com>
José Tomás Albornoz <jojo@eljojo.net>
Joyce Jang <mail@joycejang.com>
JP <jpellerin@leapfrogonline.com>
JSchltggr <jschltggr@gmail.com>
Julian Taylor <jtaylor.debian@googlemail.com>
Julien Barbier <write0@gmail.com>
Julien Bisconti <veggiemonk@users.noreply.github.com>
@@ -1221,7 +1170,6 @@ K. Heller <pestophagous@gmail.com>
Kai Blin <kai@samba.org>
Kai Qiang Wu (Kennan) <wkq5325@gmail.com>
Kaijie Chen <chen@kaijie.org>
Kaita Nakamura <kaita.nakamura0830@gmail.com>
Kamil Domański <kamil@domanski.co>
Kamjar Gerami <kami.gerami@gmail.com>
Kanstantsin Shautsou <kanstantsin.sha@gmail.com>
@@ -1296,7 +1244,6 @@ Krasi Georgiev <krasi@vip-consult.solutions>
Krasimir Georgiev <support@vip-consult.co.uk>
Kris-Mikael Krister <krismikael@protonmail.com>
Kristian Haugene <kristian.haugene@capgemini.com>
Kristian Heljas <kristian@kristian.ee>
Kristina Zabunova <triara.xiii@gmail.com>
Krystian Wojcicki <kwojcicki@sympatico.ca>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp>
@@ -1313,7 +1260,6 @@ Lakshan Perera <lakshan@laktek.com>
Lalatendu Mohanty <lmohanty@redhat.com>
Lance Chen <cyen0312@gmail.com>
Lance Kinley <lkinley@loyaltymethods.com>
Lars Andringa <l.s.andringa@rug.nl>
Lars Butler <Lars.Butler@gmail.com>
Lars Kellogg-Stedman <lars@redhat.com>
Lars R. Damerow <lars@pixar.com>
@@ -1323,13 +1269,11 @@ Laura Brehm <laurabrehm@hey.com>
Laura Frank <ljfrank@gmail.com>
Laurent Bernaille <laurent.bernaille@datadoghq.com>
Laurent Erignoux <lerignoux@gmail.com>
Laurent Goderre <laurent.goderre@docker.com>
Laurie Voss <github@seldo.com>
Leandro Motta Barros <lmb@stackedboxes.org>
Leandro Siqueira <leandro.siqueira@gmail.com>
Lee Calcote <leecalcote@gmail.com>
Lee Chao <932819864@qq.com>
Lee Gaines <leetgaines@gmail.com>
Lee, Meng-Han <sunrisedm4@gmail.com>
Lei Gong <lgong@alauda.io>
Lei Jitang <leijitang@huawei.com>
@@ -1405,7 +1349,6 @@ Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com>
Madhav Puri <madhav.puri@gmail.com>
Madhu Venugopal <mavenugo@gmail.com>
Mageee <fangpuyi@foxmail.com>
maggie44 <64841595+maggie44@users.noreply.github.com>
Mahesh Tiyyagura <tmahesh@gmail.com>
malnick <malnick@gmail..com>
Malte Janduda <mail@janduda.net>
@@ -1417,7 +1360,6 @@ Manuel Meurer <manuel@krautcomputing.com>
Manuel Rüger <manuel@rueg.eu>
Manuel Woelker <github@manuel.woelker.org>
mapk0y <mapk0y@gmail.com>
Marat Abrarov <abrarov@gmail.com>
Marat Radchenko <marat@slonopotamus.org>
Marc Abramowitz <marc@marc-abramowitz.com>
Marc Kuo <kuomarc2@gmail.com>
@@ -1432,7 +1374,6 @@ Marcus Linke <marcus.linke@gmx.de>
Marcus Martins <marcus@docker.com>
Marcus Ramberg <marcus@nordaaker.com>
Marek Goldmann <marek.goldmann@gmail.com>
Maria Glushenok <glushenokm@gmail.com>
Marian Marinov <mm@yuhu.biz>
Marianna Tessel <mtesselh@gmail.com>
Mario Loriedo <mario.loriedo@gmail.com>
@@ -1501,7 +1442,6 @@ Matthias Kühnle <git.nivoc@neverbox.com>
Matthias Rampke <mr@soundcloud.com>
Matthieu Fronton <m@tthieu.fr>
Matthieu Hauglustaine <matt.hauglustaine@gmail.com>
Matthieu MOREL <matthieu.morel35@gmail.com>
Mattias Jernberg <nostrad@gmail.com>
Mauricio Garavaglia <mauricio@medallia.com>
mauriyouth <mauriyouth@gmail.com>
@@ -1516,7 +1456,6 @@ Maxime Petazzoni <max@signalfuse.com>
Maximiliano Maccanti <maccanti@amazon.com>
Maxwell <csuhp007@gmail.com>
Meaglith Ma <genedna@gmail.com>
Medhy DOHOU <52136144+PowerPixel@users.noreply.github.com>
meejah <meejah@meejah.ca>
Megan Kostick <mkostick@us.ibm.com>
Mehul Kar <mehul.kar@gmail.com>
@@ -1617,11 +1556,9 @@ Moysés Borges <moysesb@gmail.com>
mrfly <mr.wrfly@gmail.com>
Mrunal Patel <mrunalp@gmail.com>
Muayyad Alsadi <alsadi@gmail.com>
Muhammad Daffa Dinaya <muhammaddaffadinaya@gmail.com>
Muhammad Zohaib Aslam <zohaibse011@gmail.com>
Mustafa Akın <mustafa91@gmail.com>
Muthukumar R <muthur@gmail.com>
Myeongjoon Kim <kimmj8409@gmail.com>
Máximo Cuadros <mcuadros@gmail.com>
Médi-Rémi Hashim <medimatrix@users.noreply.github.com>
Nace Oroz <orkica@gmail.com>
@@ -1636,7 +1573,6 @@ Natasha Jarus <linuxmercedes@gmail.com>
Nate Brennand <nate.brennand@clever.com>
Nate Eagleson <nate@nateeag.com>
Nate Jones <nate@endot.org>
Nathan Baulch <nathan.baulch@gmail.com>
Nathan Carlson <carl4403@umn.edu>
Nathan Herald <me@nathanherald.com>
Nathan Hsieh <hsieh.nathan@gmail.com>
@@ -1699,7 +1635,6 @@ Nuutti Kotivuori <naked@iki.fi>
nzwsch <hi@nzwsch.com>
O.S. Tezer <ostezer@gmail.com>
objectified <objectified@gmail.com>
Octol1ttle <l1ttleofficial@outlook.com>
Odin Ugedal <odin@ugedal.com>
Oguz Bilgic <fisyonet@gmail.com>
Oh Jinkyun <tintypemolly@gmail.com>
@@ -1731,10 +1666,8 @@ Patrick Böänziger <patrick.baenziger@bsi-software.com>
Patrick Devine <patrick.devine@docker.com>
Patrick Haas <patrickhaas@google.com>
Patrick Hemmer <patrick.hemmer@gmail.com>
Patrick St. laurent <patrick@saint-laurent.us>
Patrick Stapleton <github@gdi2290.com>
Patrik Cyvoct <patrik@ptrk.io>
Patrik Leifert <patrikleifert@hotmail.com>
pattichen <craftsbear@gmail.com>
Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
Paul <paul9869@gmail.com>
@@ -1809,7 +1742,6 @@ Pierre Carrier <pierre@meteor.com>
Pierre Dal-Pra <dalpra.pierre@gmail.com>
Pierre Wacrenier <pierre.wacrenier@gmail.com>
Pierre-Alain RIVIERE <pariviere@ippon.fr>
pinglanlu <pinglanlu@outlook.com>
Piotr Bogdan <ppbogdan@gmail.com>
Piotr Karbowski <piotr.karbowski@protonmail.ch>
Porjo <porjo38@yahoo.com.au>
@@ -1837,7 +1769,6 @@ Quentin Tayssier <qtayssier@gmail.com>
r0n22 <cameron.regan@gmail.com>
Rachit Sharma <rachitsharma613@gmail.com>
Radostin Stoyanov <rstoyanov1@gmail.com>
Rafael Fernández López <ereslibre@ereslibre.es>
Rafal Jeczalik <rjeczalik@gmail.com>
Rafe Colton <rafael.colton@gmail.com>
Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
@@ -1893,7 +1824,6 @@ Robert Obryk <robryk@gmail.com>
Robert Schneider <mail@shakeme.info>
Robert Shade <robert.shade@gmail.com>
Robert Stern <lexandro2000@gmail.com>
Robert Sturla <robertsturla@outlook.com>
Robert Terhaar <rterhaar@atlanticdynamic.com>
Robert Wallis <smilingrob@gmail.com>
Robert Wang <robert@arctic.tw>
@@ -1905,7 +1835,7 @@ Robin Speekenbrink <robin@kingsquare.nl>
Robin Thoni <robin@rthoni.com>
robpc <rpcann@gmail.com>
Rodolfo Carvalho <rhcarvalho@gmail.com>
Rodrigo Campos <rodrigoca@microsoft.com>
Rodrigo Campos <rodrigo@kinvolk.io>
Rodrigo Vaz <rodrigo.vaz@gmail.com>
Roel Van Nyen <roel.vannyen@gmail.com>
Roger Peppe <rogpeppe@gmail.com>
@@ -1941,7 +1871,6 @@ Royce Remer <royceremer@gmail.com>
Rozhnov Alexandr <nox73@ya.ru>
Rudolph Gottesheim <r.gottesheim@loot.at>
Rui Cao <ruicao@alauda.io>
Rui JingAn <quiterace@gmail.com>
Rui Lopes <rgl@ruilopes.com>
Ruilin Li <liruilin4@huawei.com>
Runshen Zhu <runshen.zhu@gmail.com>
@@ -2037,16 +1966,12 @@ Sergey Alekseev <sergey.alekseev.minsk@gmail.com>
Sergey Evstifeev <sergey.evstifeev@gmail.com>
Sergii Kabashniuk <skabashnyuk@codenvy.com>
Sergio Lopez <slp@redhat.com>
Serhan Tutar <randomnoise@users.noreply.github.com>
Serhat Gülçiçek <serhat25@gmail.com>
Serhii Nakon <serhii.n@thescimus.com>
SeungUkLee <lsy931106@gmail.com>
Sevki Hasirci <s@sevki.org>
Shane Canon <scanon@lbl.gov>
Shane da Silva <shane@dasilva.io>
Shang Mu <smu@princeton.edu>
Shaun Kaasten <shaunk@gmail.com>
Shaun Thompson <shaun.thompson@docker.com>
shaunol <shaunol@gmail.com>
Shawn Landden <shawn@churchofgit.com>
Shawn Siefkas <shawn.siefkas@meredith.com>
@@ -2065,7 +1990,6 @@ Shijun Qin <qinshijun16@mails.ucas.ac.cn>
Shishir Mahajan <shishir.mahajan@redhat.com>
Shoubhik Bose <sbose78@gmail.com>
Shourya Sarcar <shourya.sarcar@gmail.com>
Shreenidhi Shedi <shreenidhi.shedi@broadcom.com>
Shu-Wai Chow <shu-wai.chow@seattlechildrens.org>
shuai-z <zs.broccoli@gmail.com>
Shukui Yang <yangshukui@huawei.com>
@@ -2136,7 +2060,6 @@ Stéphane Este-Gracias <sestegra@gmail.com>
Stig Larsson <stig@larsson.dev>
Su Wang <su.wang@docker.com>
Subhajit Ghosh <isubuz.g@gmail.com>
Sudheendra Gopinath <sudheendra.gopinath@amd.com>
Sujith Haridasan <sujith.h@gmail.com>
Sun Gengze <690388648@qq.com>
Sun Jianbo <wonderflow.sun@gmail.com>
@@ -2154,7 +2077,6 @@ Sébastien Stormacq <sebsto@users.noreply.github.com>
Sören Tempel <soeren+git@soeren-tempel.net>
Tabakhase <mail@tabakhase.com>
Tadej Janež <tadej.j@nez.si>
Tadeusz Dudkiewicz <tadeusz.dudkiewicz@rtbhouse.com>
Takuto Sato <tockn.jp@gmail.com>
tang0th <tang0th@gmx.com>
Tangi Colin <tangicolin@gmail.com>
@@ -2162,7 +2084,6 @@ Tatsuki Sugiura <sugi@nemui.org>
Tatsushi Inagaki <e29253@jp.ibm.com>
Taylan Isikdemir <taylani@google.com>
Taylor Jones <monitorjbl@gmail.com>
tcpdumppy <847462026@qq.com>
Ted M. Young <tedyoung@gmail.com>
Tehmasp Chaudhri <tehmasp@gmail.com>
Tejaswini Duggaraju <naduggar@microsoft.com>
@@ -2193,7 +2114,6 @@ Thomas Tanaka <thomas.tanaka@oracle.com>
Thomas Texier <sharkone@en-mousse.org>
Ti Zhou <tizhou1986@gmail.com>
Tiago Seabra <tlgs@users.noreply.github.com>
Tiago Teixeira <tiago.teixeira@ecorobotix.com>
Tianon Gravi <admwiggin@gmail.com>
Tianyi Wang <capkurmagati@gmail.com>
Tibor Vass <teabee89@gmail.com>
@@ -2256,7 +2176,6 @@ Tomek Mańko <tomek.manko@railgun-solutions.com>
Tommaso Visconti <tommaso.visconti@gmail.com>
Tomoya Tabuchi <t@tomoyat1.com>
Tomáš Hrčka <thrcka@redhat.com>
Tomáš Virtus <nechtom@gmail.com>
tonic <tonicbupt@gmail.com>
Tonny Xu <tonny.xu@gmail.com>
Tony Abboud <tdabboud@hotmail.com>
@@ -2294,7 +2213,6 @@ Valentin Kulesh <valentin.kulesh@virtuozzo.com>
vanderliang <lansheng@meili-inc.com>
Velko Ivanov <vivanov@deeperplane.com>
Veres Lajos <vlajos@gmail.com>
Viacheslav Gagara <viacheslavg@gmail.com>
Victor Algaze <valgaze@gmail.com>
Victor Coisne <victor.coisne@dotcloud.com>
Victor Costan <costan@gmail.com>
@@ -2302,7 +2220,6 @@ Victor I. Wood <viw@t2am.com>
Victor Lyuboslavsky <victor@victoreda.com>
Victor Marmol <vmarmol@google.com>
Victor Palma <palma.victor@gmail.com>
Victor Toni <victor.toni@gmail.com>
Victor Vieux <victor.vieux@docker.com>
Victoria Bialas <victoria.bialas@docker.com>
Vijaya Kumar K <vijayak@caviumnetworks.com>
@@ -2336,7 +2253,6 @@ VladimirAus <v_roudakov@yahoo.com>
Vladislav Kolesnikov <vkolesnikov@beget.ru>
Vlastimil Zeman <vlastimil.zeman@diffblue.com>
Vojtech Vitek (V-Teq) <vvitek@redhat.com>
voloder <110066198+voloder@users.noreply.github.com>
Walter Leibbrandt <github@wrl.co.za>
Walter Stanish <walter@pratyeka.org>
Wang Chao <chao.wang@ucloud.cn>
@@ -2354,7 +2270,6 @@ Wassim Dhif <wassimdhif@gmail.com>
Wataru Ishida <ishida.wataru@lab.ntt.co.jp>
Wayne Chang <wayne@neverfear.org>
Wayne Song <wsong@docker.com>
weebney <weebney@gmail.com>
Weerasak Chongnguluam <singpor@gmail.com>
Wei Fu <fuweid89@gmail.com>
Wei Wu <wuwei4455@gmail.com>
@@ -2449,7 +2364,6 @@ You-Sheng Yang (楊有勝) <vicamo@gmail.com>
youcai <omegacoleman@gmail.com>
Youcef YEKHLEF <yyekhlef@gmail.com>
Youfu Zhang <zhangyoufu@gmail.com>
YR Chen <stevapple@icloud.com>
Yu Changchun <yuchangchun1@huawei.com>
Yu Chengxia <yuchengxia@huawei.com>
Yu Peng <yu.peng36@zte.com.cn>
@@ -2511,6 +2425,5 @@ Zunayed Ali <zunayed@gmail.com>
徐俊杰 <paco.xu@daocloud.io>
慕陶 <jihui.xjh@alibaba-inc.com>
搏通 <yufeng.pyf@alibaba-inc.com>
纯真 <38834411+chunzhennn@users.noreply.github.com>
黄艳红00139573 <huang.yanhong@zte.com.cn>
정재영 <jjy600901@gmail.com>


@@ -83,39 +83,6 @@ contributions, see [the advanced contribution
section](https://docs.docker.com/opensource/workflow/advanced-contributing/) in
the contributors guide.
### Where to put your changes
You can make changes to any Go package within Moby outside of the vendor directory. There are no
restrictions on packages, but there are a few guidelines to follow when deciding where to make these changes.
When adding new packages, first consider putting them in an internal directory to prevent
unintended importing from other modules. Code changes should go under the `api`, `client`,
or `daemon` modules, or under one of the integration test directories.
Try to put a new package under the appropriate directory. The root directory is reserved for
configuration and build files; no source files will be accepted in the root.
- `api` - All types shared by client and daemon along with swagger definitions.
- `client` - All Go files for the docker client
- `contrib` - Files, configurations, and packages related to external tools or libraries
- `daemon` - All Go files and packages for building the daemon
- `docs` - All Moby technical documentation using markdown
- `hack` - All scripts used for testing, development, and CI
- `integration` - Testing the integration of the API, client, and daemon
- `integration-cli` - Deprecated integration tests of the docker cli with the daemon; no new tests allowed
- `pkg` - Legacy Go packages used externally; no new packages should be added here
- `project` - All files related to Moby project governance
- `vendor` - Autogenerated vendor files from the `make vendor` command; do not manually edit files here
The daemon module has many subpackages. Consider putting new packages under one of these
directories.
- `daemon/cmd` - All Go main packages and the packages used only for that main package
- `daemon/internal` - All utility packages used by daemon and not intended for external use
- `daemon/man` - All Moby reference manuals used for the `man` command
- `daemon/plugins` - All included daemon plugins which are intended to be registered via init
- `daemon/pkg` - All libraries used by daemon and for integration testing
- `daemon/version` - Version package with the current daemon version
### Connect with other Moby Project contributors
<table class="tg">
@@ -134,7 +101,7 @@ directories.
<td>
<p>
Register for the Docker Community Slack at
<a href="https://dockr.ly/comm-slack" target="_blank">https://dockr.ly/comm-slack</a>.
<a href="https://dockr.ly/slack" target="_blank">https://dockr.ly/slack</a>.
We use the #moby-project channel for general discussion, and there are separate channels for other Moby projects such as #containerd.
</p>
</td>


@@ -1,30 +1,19 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.25.4
ARG GO_VERSION=1.23.9
ARG BASE_DEBIAN_DISTRO="bookworm"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
ARG XX_VERSION=1.6.1
# XX_VERSION specifies the version of the xx utility to use.
# It must be a valid tag in the docker.io/tonistiigi/xx image repository.
ARG XX_VERSION=1.7.0
ARG VPNKIT_VERSION=0.5.0
# VPNKIT_VERSION is the version of the vpnkit binary which is used as a fallback
# network driver for rootless.
ARG VPNKIT_VERSION=0.6.0
# DOCKERCLI_VERSION is the version of the CLI to install in the dev-container.
ARG DOCKERCLI_VERSION=v28.5.0
ARG DOCKERCLI_REPOSITORY="https://github.com/docker/cli.git"
ARG DOCKERCLI_VERSION=v25.0.2
# cli version used for integration-cli tests
ARG DOCKERCLI_INTEGRATION_REPOSITORY="https://github.com/docker/cli.git"
ARG DOCKERCLI_INTEGRATION_VERSION=v25.0.5
# BUILDX_VERSION is the version of buildx to install in the dev container.
ARG BUILDX_VERSION=0.29.1
# COMPOSE_VERSION is the version of compose to install in the dev container.
ARG COMPOSE_VERSION=v2.40.0
ARG DOCKERCLI_INTEGRATION_VERSION=v17.06.2-ce
ARG BUILDX_VERSION=0.12.1
ARG COMPOSE_VERSION=v2.24.5
ARG SYSTEMD="false"
ARG FIREWALLD="false"
@@ -34,18 +23,7 @@ ARG DOCKER_STATIC=1
# https://hub.docker.com/r/distribution/distribution. This version of
# the registry is used to test schema 2 manifests. Generally, the version
# specified here should match a current release.
ARG REGISTRY_VERSION=3.0.0
# delve is currently only supported on linux/amd64 and linux/arm64;
# https://github.com/go-delve/delve/blob/v1.25.0/pkg/proc/native/support_sentinel.go#L1
# https://github.com/go-delve/delve/blob/v1.25.0/pkg/proc/native/support_sentinel_linux.go#L1
#
# ppc64le support was added in v1.21.1, but is still experimental, and requires
# the "-tags exp.linuxppc64le" build-tag to be set:
# https://github.com/go-delve/delve/commit/71f12207175a1cc09668f856340d8a543c87dcca
ARG DELVE_SUPPORTED=${TARGETPLATFORM#linux/amd64} DELVE_SUPPORTED=${DELVE_SUPPORTED#linux/arm64} DELVE_SUPPORTED=${DELVE_SUPPORTED#linux/ppc64le}
ARG DELVE_SUPPORTED=${DELVE_SUPPORTED:+"unsupported"}
ARG DELVE_SUPPORTED=${DELVE_SUPPORTED:-"supported"}
ARG REGISTRY_VERSION=2.8.3
# cross compilation helper
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
@@ -60,13 +38,9 @@ COPY --from=build-dummy /build /build
# base
FROM --platform=$BUILDPLATFORM ${GOLANG_IMAGE} AS base
COPY --from=xx / /
# Disable collecting local telemetry, as collected by Go and Delve;
#
# - https://github.com/go-delve/delve/blob/v1.24.1/CHANGELOG.md#1231-2024-09-23
# - https://go.dev/doc/telemetry#background
RUN go telemetry off && [ "$(go telemetry)" = "off" ] || { echo "Failed to disable Go telemetry"; exit 1; }
RUN echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN apt-get update && apt-get install --no-install-recommends -y file
ENV GO111MODULE=off
ENV GOTOOLCHAIN=local
FROM base AS criu
@@ -80,22 +54,62 @@ RUN --mount=type=cache,sharing=locked,id=moby-criu-aptlib,target=/var/lib/apt \
&& /build/criu --version
# registry
FROM distribution/distribution:$REGISTRY_VERSION AS registry
RUN mkdir /build && mv /bin/registry /build/registry
FROM base AS registry-src
WORKDIR /usr/src/registry
RUN git init . && git remote add origin "https://github.com/distribution/distribution.git"
FROM base AS registry
WORKDIR /go/src/github.com/docker/distribution
# REGISTRY_VERSION_SCHEMA1 specifies the version of the registry to build and
# install from the https://github.com/docker/distribution repository. This is
# an older (pre v2.3.0) version of the registry that only supports schema1
# manifests. This version of the registry is not working on arm64, so installation
# is skipped on that architecture.
ARG REGISTRY_VERSION_SCHEMA1=v2.1.0
ARG TARGETPLATFORM
RUN --mount=from=registry-src,src=/usr/src/registry,rw \
--mount=type=cache,target=/root/.cache/go-build,id=registry-build-$TARGETPLATFORM \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=tmpfs,target=/go/src <<EOT
set -ex
export GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"
# Make the /build directory no matter what so that it doesn't fail on arm64 or
# any other platform where we don't build this registry
mkdir /build
case $TARGETPLATFORM in
linux/amd64|linux/arm/v7|linux/ppc64le|linux/s390x)
git fetch -q --depth 1 origin "${REGISTRY_VERSION_SCHEMA1}" +refs/tags/*:refs/tags/*
git checkout -q FETCH_HEAD
CGO_ENABLED=0 xx-go build -o /build/registry-v2-schema1 -v ./cmd/registry
xx-verify /build/registry-v2-schema1
;;
esac
EOT
FROM distribution/distribution:$REGISTRY_VERSION AS registry-v2
RUN mkdir /build && mv /bin/registry /build/registry-v2
# go-swagger
FROM base AS swagger-src
WORKDIR /usr/src/swagger
# Currently uses a fork from https://github.com/kolyshkin/go-swagger/tree/golang-1.13-fix
# TODO: move to under moby/ or fix upstream go-swagger to work for us.
RUN git init . && git remote add origin "https://github.com/kolyshkin/go-swagger.git"
# GO_SWAGGER_COMMIT specifies the version of the go-swagger binary to build and
# install. Go-swagger is used in CI for validating swagger.yaml in hack/validate/swagger-gen
ARG GO_SWAGGER_COMMIT=c56166c036004ba7a3a321e5951ba472b9ae298c
RUN git fetch -q --depth 1 origin "${GO_SWAGGER_COMMIT}" && git checkout -q FETCH_HEAD
FROM base AS swagger
WORKDIR /go/src/github.com/go-swagger/go-swagger
ARG TARGETPLATFORM
# GO_SWAGGER_VERSION specifies the version of the go-swagger binary to install.
# Go-swagger is used in CI for generating types from swagger.yaml in
# hack/validate/swagger-gen
ARG GO_SWAGGER_VERSION=v0.33.1
RUN --mount=type=cache,target=/root/.cache/go-build,id=swagger-build-$TARGETPLATFORM \
RUN --mount=from=swagger-src,src=/usr/src/swagger,rw \
--mount=type=cache,target=/root/.cache/go-build,id=swagger-build-$TARGETPLATFORM \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=tmpfs,target=/go/src/ <<EOT
set -e
GOBIN=/build CGO_ENABLED=0 xx-go install "github.com/go-swagger/go-swagger/cmd/swagger@${GO_SWAGGER_VERSION}"
xx-go build -o /build/swagger ./cmd/swagger
xx-verify /build/swagger
EOT
@@ -116,11 +130,9 @@ ARG TARGETVARIANT
RUN /download-frozen-image-v2.sh /build \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \
busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \
debian:trixie-slim@sha256:c85a2732e97694ea77237c61304b3bb410e0e961dd6ee945997a06c788c545bb \
debian:bookworm-slim@sha256:2bc5c236e9b262645a323e9088dfa3bb1ecb16cc75811daf40a23a824d665be9 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \
arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1 \
hello-world:amd64@sha256:90659bf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cbc042 \
hello-world:arm64@sha256:963612c5503f3f1674f315c67089dee577d8cc6afc18565e0b4183ae355fb343
arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1
# delve
FROM base AS delve-src
@@ -130,29 +142,50 @@ RUN git init . && git remote add origin "https://github.com/go-delve/delve.git"
# from the https://github.com/go-delve/delve repository.
# It can be used to run Docker with a possibility of
# attaching debugger to it.
ARG DELVE_VERSION=v1.25.2
ARG DELVE_VERSION=v1.23.0
RUN git fetch -q --depth 1 origin "${DELVE_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS delve-supported
FROM base AS delve-build
WORKDIR /usr/src/delve
ARG TARGETPLATFORM
RUN --mount=from=delve-src,src=/usr/src/delve,rw \
--mount=type=cache,target=/root/.cache/go-build,id=delve-build-$TARGETPLATFORM \
--mount=type=cache,target=/go/pkg/mod <<EOT
set -e
xx-go build -o /build/dlv ./cmd/dlv
GO111MODULE=on xx-go build -o /build/dlv ./cmd/dlv
xx-verify /build/dlv
EOT
FROM binary-dummy AS delve-unsupported
FROM delve-${DELVE_SUPPORTED} AS delve
# delve is currently only supported on linux/amd64 and linux/arm64;
# https://github.com/go-delve/delve/blob/v1.8.1/pkg/proc/native/support_sentinel.go#L1-L6
FROM binary-dummy AS delve-windows
FROM binary-dummy AS delve-linux-arm
FROM binary-dummy AS delve-linux-ppc64le
FROM binary-dummy AS delve-linux-s390x
FROM delve-build AS delve-linux-amd64
FROM delve-build AS delve-linux-arm64
FROM delve-linux-${TARGETARCH} AS delve-linux
FROM delve-${TARGETOS} AS delve
FROM base AS tomll
# GOTOML_VERSION specifies the version of the tomll binary to build and install
# from the https://github.com/pelletier/go-toml repository. This binary is used
# in CI in the hack/validate/toml script.
#
# When updating this version, consider updating the github.com/pelletier/go-toml
# dependency in vendor.mod accordingly.
ARG GOTOML_VERSION=v1.8.1
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "github.com/pelletier/go-toml/cmd/tomll@${GOTOML_VERSION}" \
&& /build/tomll --help
FROM base AS gowinres
# GOWINRES_VERSION defines go-winres tool version
ARG GOWINRES_VERSION=v0.3.1
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build CGO_ENABLED=0 go install "github.com/tc-hib/go-winres@${GOWINRES_VERSION}" \
GOBIN=/build/ GO111MODULE=on go install "github.com/tc-hib/go-winres@${GOWINRES_VERSION}" \
&& /build/go-winres --help
# containerd
@@ -162,8 +195,11 @@ RUN git init . && git remote add origin "https://github.com/containerd/container
# CONTAINERD_VERSION is used to build containerd binaries, and used for the
# integration tests. The distributed docker .deb and .rpm packages depend on a
# separate (containerd.io) package, which may be a different version as is
# specified here.
ARG CONTAINERD_VERSION=v2.1.5
# specified here. The containerd golang package is also pinned in vendor.mod.
# When updating the binary version you may also need to update the vendor
# version to pick up bug fixes or new APIs, however, usually the Go packages
# are built from a commit from the master branch.
ARG CONTAINERD_VERSION=v1.7.27
RUN git fetch -q --depth 1 origin "${CONTAINERD_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS containerd-build
@@ -173,6 +209,8 @@ RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/
--mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \
apt-get update && xx-apt-get install -y --no-install-recommends \
gcc \
libbtrfs-dev \
libsecret-1-dev \
pkg-config
ARG DOCKER_STATIC
RUN --mount=from=containerd-src,src=/usr/src/containerd,rw \
@@ -194,33 +232,26 @@ FROM binary-dummy AS containerd-windows
FROM containerd-${TARGETOS} AS containerd
FROM base AS golangci_lint
ARG GOLANGCI_LINT_VERSION=v2.1.5
ARG GOLANGCI_LINT_VERSION=v1.60.2
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build CGO_ENABLED=0 go install "github.com/golangci/golangci-lint/v2/cmd/golangci-lint@${GOLANGCI_LINT_VERSION}" \
GOBIN=/build/ GO111MODULE=on go install "github.com/golangci/golangci-lint/cmd/golangci-lint@${GOLANGCI_LINT_VERSION}" \
&& /build/golangci-lint --version
FROM base AS gotestsum
# GOTESTSUM_VERSION is the version of gotest.tools/gotestsum to install.
ARG GOTESTSUM_VERSION=v1.13.0
ARG GOTESTSUM_VERSION=v1.8.2
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build CGO_ENABLED=0 go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}" \
GOBIN=/build/ GO111MODULE=on go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}" \
&& /build/gotestsum --version
FROM base AS shfmt
ARG SHFMT_VERSION=v3.8.0
ARG SHFMT_VERSION=v3.6.0
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build CGO_ENABLED=0 go install "mvdan.cc/sh/v3/cmd/shfmt@${SHFMT_VERSION}" \
GOBIN=/build/ GO111MODULE=on go install "mvdan.cc/sh/v3/cmd/shfmt@${SHFMT_VERSION}" \
&& /build/shfmt --version
FROM base AS gopls
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build CGO_ENABLED=0 go install "golang.org/x/tools/gopls@latest" \
&& /build/gopls version
FROM base AS dockercli
WORKDIR /go/src/github.com/docker/cli
ARG DOCKERCLI_REPOSITORY
@@ -231,8 +262,7 @@ RUN --mount=source=hack/dockerfile/cli.sh,target=/download-or-build-cli.sh \
--mount=type=cache,target=/root/.cache/go-build,id=dockercli-build-$TARGETPLATFORM \
rm -f ./.git/*.lock \
&& /download-or-build-cli.sh ${DOCKERCLI_VERSION} ${DOCKERCLI_REPOSITORY} /build \
&& /build/docker --version \
&& /build/docker completion bash >/completion.bash
&& /build/docker --version
FROM base AS dockercli-integration
WORKDIR /go/src/github.com/docker/cli
@@ -250,11 +280,11 @@ RUN --mount=source=hack/dockerfile/cli.sh,target=/download-or-build-cli.sh \
FROM base AS runc-src
WORKDIR /usr/src/runc
RUN git init . && git remote add origin "https://github.com/opencontainers/runc.git"
# RUNC_VERSION sets the version of runc to install in the dev-container.
# This version should usually match the version that is used by the containerd version
# RUNC_VERSION should match the version that is used by the containerd version
# that is used. If you need to update runc, open a pull request in the containerd
# project first, and update both after that is merged.
ARG RUNC_VERSION=v1.3.3
# project first, and update both after that is merged. When updating RUNC_VERSION,
# consider updating runc in vendor.mod accordingly.
ARG RUNC_VERSION=v1.2.5
RUN git fetch -q --depth 1 origin "${RUNC_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS runc-build
@@ -263,6 +293,7 @@ ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-runc-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-runc-aptcache,target=/var/cache/apt \
apt-get update && xx-apt-get install -y --no-install-recommends \
dpkg-dev \
gcc \
libc6-dev \
libseccomp-dev \
@@ -321,8 +352,8 @@ FROM tini-${TARGETOS} AS tini
FROM base AS rootlesskit-src
WORKDIR /usr/src/rootlesskit
RUN git init . && git remote add origin "https://github.com/rootless-containers/rootlesskit.git"
# When updating, also update go.mod and hack/dockerfile/install/rootlesskit.installer accordingly.
ARG ROOTLESSKIT_VERSION=v2.3.5
# When updating, also update vendor.mod and hack/dockerfile/install/rootlesskit.installer accordingly.
ARG ROOTLESSKIT_VERSION=v2.0.2
RUN git fetch -q --depth 1 origin "${ROOTLESSKIT_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS rootlesskit-build
@@ -334,6 +365,7 @@ RUN --mount=type=cache,sharing=locked,id=moby-rootlesskit-aptlib,target=/var/lib
gcc \
libc6-dev \
pkg-config
ENV GO111MODULE=on
ARG DOCKER_STATIC
RUN --mount=from=rootlesskit-src,src=/usr/src/rootlesskit,rw \
--mount=type=cache,target=/go/pkg/mod \
@@ -342,6 +374,8 @@ RUN --mount=from=rootlesskit-src,src=/usr/src/rootlesskit,rw \
export CGO_ENABLED=$([ "$DOCKER_STATIC" = "1" ] && echo "0" || echo "1")
xx-go build -o /build/rootlesskit -ldflags="$([ "$DOCKER_STATIC" != "1" ] && echo "-linkmode=external")" ./cmd/rootlesskit
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /build/rootlesskit
xx-go build -o /build/rootlesskit-docker-proxy -ldflags="$([ "$DOCKER_STATIC" != "1" ] && echo "-linkmode=external")" ./cmd/rootlesskit-docker-proxy
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /build/rootlesskit-docker-proxy
EOT
COPY --link ./contrib/dockerd-rootless.sh /build/
COPY --link ./contrib/dockerd-rootless-setuptool.sh /build/
@@ -351,8 +385,7 @@ FROM binary-dummy AS rootlesskit-windows
FROM rootlesskit-${TARGETOS} AS rootlesskit
FROM base AS crun
# CRUN_VERSION is the version of crun to install in the dev-container.
ARG CRUN_VERSION=1.21
ARG CRUN_VERSION=1.12
RUN --mount=type=cache,sharing=locked,id=moby-crun-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-crun-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
@@ -364,6 +397,7 @@ RUN --mount=type=cache,sharing=locked,id=moby-crun-aptlib,target=/var/lib/apt \
libseccomp-dev \
libsystemd-dev \
libtool \
libudev-dev \
libyajl-dev \
python3 \
;
@@ -383,8 +417,8 @@ FROM scratch AS vpnkit-linux-arm
FROM scratch AS vpnkit-linux-ppc64le
FROM scratch AS vpnkit-linux-riscv64
FROM scratch AS vpnkit-linux-s390x
FROM moby/vpnkit-bin:${VPNKIT_VERSION} AS vpnkit-linux-amd64
FROM moby/vpnkit-bin:${VPNKIT_VERSION} AS vpnkit-linux-arm64
FROM djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-linux-amd64
FROM djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-linux-arm64
FROM vpnkit-linux-${TARGETARCH} AS vpnkit-linux
FROM vpnkit-${TARGETOS} AS vpnkit
@@ -423,9 +457,11 @@ FROM base AS dev-systemd-false
COPY --link --from=frozen-images /build/ /docker-frozen-images
COPY --link --from=swagger /build/ /usr/local/bin/
COPY --link --from=delve /build/ /usr/local/bin/
COPY --link --from=tomll /build/ /usr/local/bin/
COPY --link --from=gowinres /build/ /usr/local/bin/
COPY --link --from=tini /build/ /usr/local/bin/
COPY --link --from=registry /build/ /usr/local/bin/
COPY --link --from=registry-v2 /build/ /usr/local/bin/
# Skip the CRIU stage for now, as the opensuse package repository is sometimes
# unstable, and we're currently not using it in CI.
@@ -472,6 +508,7 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
firewalld
RUN sed -i 's/FirewallBackend=nftables/FirewallBackend=iptables/' /etc/firewalld/firewalld.conf
FROM dev-firewalld-${FIREWALLD} AS dev-base
RUN groupadd -r docker
@@ -480,8 +517,9 @@ RUN useradd --create-home --gid docker unprivilegeduser \
&& chown -R unprivilegeduser /home/unprivilegeduser
# Let us use a .bashrc file
RUN ln -sfv /go/src/github.com/docker/docker/.bashrc ~/.bashrc
# Activate bash completion
# Activate bash completion and include Docker's completion if mounted with DOCKER_BASH_COMPLETION_PATH
RUN echo "source /usr/share/bash-completion/bash_completion" >> /etc/bash.bashrc
RUN ln -s /usr/local/completion/bash/docker /etc/bash_completion.d/docker
RUN ldconfig
# Set dev environment as safe git directory to prevent "dubious ownership" errors
# when bind-mounting the source into the dev-container. See https://github.com/moby/moby/pull/44930
@@ -494,21 +532,16 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
apparmor \
bash-completion \
bzip2 \
fuse-overlayfs \
inetutils-ping \
iproute2 \
iptables \
nftables \
jq \
libcap2-bin \
libnet1 \
libnftables-dev \
libnl-3-200 \
libprotobuf-c1 \
libyajl2 \
nano \
net-tools \
netcat-openbsd \
patch \
pigz \
sudo \
@@ -521,38 +554,49 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
xz-utils \
zip \
zstd
# Switch to use iptables instead of nftables (to match the CI hosts)
# TODO use some kind of runtime auto-detection instead if/when nftables is supported (https://github.com/moby/moby/issues/26824)
RUN update-alternatives --set iptables /usr/sbin/iptables-legacy || true \
&& update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy || true \
&& update-alternatives --set arptables /usr/sbin/arptables-legacy || true
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install --no-install-recommends -y \
gcc \
pkg-config \
dpkg-dev \
libapparmor-dev \
libseccomp-dev \
libsecret-1-dev \
libsystemd-dev \
libudev-dev \
yamllint
COPY --link --from=dockercli /build/ /usr/local/cli
COPY --link --from=dockercli /completion.bash /etc/bash_completion.d/docker
COPY --link --from=dockercli-integration /build/ /usr/local/cli-integration
FROM base AS build
COPY --from=gowinres /build/ /usr/local/bin/
WORKDIR /go/src/github.com/docker/docker
ENV GO111MODULE=off
ENV CGO_ENABLED=1
RUN --mount=type=cache,sharing=locked,id=moby-build-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-build-aptcache,target=/var/cache/apt \
apt-get update && apt-get install --no-install-recommends -y \
clang \
lld \
llvm \
icoutils
llvm
ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-build-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-build-aptcache,target=/var/cache/apt \
xx-apt-get install --no-install-recommends -y \
dpkg-dev \
gcc \
libapparmor-dev \
libc6-dev \
libnftables-dev \
libseccomp-dev \
libsecret-1-dev \
libsystemd-dev \
libudev-dev \
pkg-config
ARG DOCKER_BUILDTAGS
ARG DOCKER_DEBUG
@@ -574,13 +618,15 @@ RUN <<EOT
fi
EOT
RUN --mount=type=bind,target=.,rw \
--mount=type=tmpfs,target=cli/winresources/dockerd \
--mount=type=tmpfs,target=cli/winresources/docker-proxy \
--mount=type=cache,target=/root/.cache/go-build,id=moby-build-$TARGETPLATFORM <<EOT
set -e
target=$([ "$DOCKER_STATIC" = "1" ] && echo "binary" || echo "dynbinary")
xx-go --wrap
PKG_CONFIG=$(xx-go env PKG_CONFIG) ./hack/make.sh $target
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /tmp/bundles/${target}-daemon/dockerd$([ "$(xx-info os)" = "windows" ] && echo ".exe")
[ "$(xx-info os)" != "linux" ] || xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /tmp/bundles/${target}-daemon/docker-proxy
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /tmp/bundles/${target}-daemon/docker-proxy$([ "$(xx-info os)" = "windows" ] && echo ".exe")
mkdir /build
mv /tmp/bundles/${target}-daemon/* /build/
EOT
@@ -619,20 +665,6 @@ RUN <<EOT
docker-proxy --version
EOT
# devcontainer is a stage used by .devcontainer/devcontainer.json
FROM dev-base AS devcontainer
COPY --link . .
COPY --link --from=gopls /build/ /usr/local/bin/
# usage:
# > docker buildx bake dind
# > docker run -d --restart always --privileged --name devdind -p 12375:2375 docker-dind --debug --host=tcp://0.0.0.0:2375 --tlsverify=false
FROM docker:dind AS dind
COPY --link --from=dockercli /build/docker /usr/local/bin/
COPY --link --from=buildx /buildx /usr/local/libexec/docker/cli-plugins/docker-buildx
COPY --link --from=compose /docker-compose /usr/local/libexec/docker/cli-plugins/docker-compose
COPY --link --from=all / /usr/local/bin/
# usage:
# > make shell
# > SYSTEMD=true make shell


@@ -5,22 +5,24 @@
# This represents the bare minimum required to build and test Docker.
ARG GO_VERSION=1.25.4
ARG GO_VERSION=1.23.9
ARG BASE_DEBIAN_DISTRO="bookworm"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
FROM ${GOLANG_IMAGE}
ENV GO111MODULE=off
ENV GOTOOLCHAIN=local
# Compile and runtime deps
# https://github.com/moby/moby/blob/master/project/PACKAGERS.md#build-dependencies
# https://github.com/moby/moby/blob/master/project/PACKAGERS.md#runtime-dependencies
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#build-dependencies
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
curl \
cmake \
git \
libapparmor-dev \
libseccomp-dev \
ca-certificates \
e2fsprogs \
@@ -34,10 +36,10 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
vim-common \
&& rm -rf /var/lib/apt/lists/*
# Install runc, containerd, and tini
# Install runc, containerd, tini and docker-proxy
# Please edit hack/dockerfile/install/<name>.installer to update them.
COPY hack/dockerfile/install hack/dockerfile/install
RUN set -e; for i in runc containerd tini dockercli; \
RUN for i in runc containerd tini proxy dockercli; \
do hack/dockerfile/install/install.sh $i; \
done
ENV PATH=/usr/local/cli:$PATH


@@ -161,17 +161,10 @@ FROM ${WINDOWS_BASE_IMAGE}:${WINDOWS_BASE_IMAGE_TAG}
# Use PowerShell as the default shell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ARG GO_VERSION=1.25.4
# GOTESTSUM_VERSION is the version of gotest.tools/gotestsum to install.
ARG GOTESTSUM_VERSION=v1.13.0
# GOWINRES_VERSION is the version of go-winres to install.
ARG GOWINRES_VERSION=v0.3.3
# TODO: Update containerd version to match Linux version once
# https://github.com/microsoft/hcsshim/issues/2488 is resolved.
ARG CONTAINERD_VERSION=v2.0.7
ARG GOTESTSUM_VERSION=v1.8.2
ARG GO_VERSION=1.23.9
ARG GOWINRES_VERSION=v0.3.1
ARG CONTAINERD_VERSION=v1.7.27
# Environment variable notes:
# - GO_VERSION must be consistent with 'Dockerfile' used by Linux.
@@ -181,6 +174,7 @@ ENV GO_VERSION=${GO_VERSION} `
CONTAINERD_VERSION=${CONTAINERD_VERSION} `
GIT_VERSION=2.11.1 `
GOPATH=C:\gopath `
GO111MODULE=off `
GOTOOLCHAIN=local `
FROM_DOCKERFILE=1 `
GOTESTSUM_VERSION=${GOTESTSUM_VERSION} `
@@ -277,11 +271,13 @@ RUN `
RUN `
Function Install-GoTestSum() { `
$Env:GO111MODULE = 'on'; `
$tmpGobin = "${Env:GOBIN_TMP}"; `
$Env:GOBIN = """${Env:GOPATH}`\bin"""; `
Write-Host "INFO: Installing gotestsum version $Env:GOTESTSUM_VERSION in $Env:GOBIN"; `
&go install "gotest.tools/gotestsum@${Env:GOTESTSUM_VERSION}"; `
$Env:GOBIN = "${tmpGobin}"; `
$Env:GO111MODULE = 'off'; `
if ($LASTEXITCODE -ne 0) { `
Throw '"gotestsum install failed..."'; `
} `
@@ -291,11 +287,13 @@ RUN `
RUN `
Function Install-GoWinres() { `
$Env:GO111MODULE = 'on'; `
$tmpGobin = "${Env:GOBIN_TMP}"; `
$Env:GOBIN = """${Env:GOPATH}`\bin"""; `
Write-Host "INFO: Installing go-winres version $Env:GOWINRES_VERSION in $Env:GOBIN"; `
&go install "github.com/tc-hib/go-winres@${Env:GOWINRES_VERSION}"; `
$Env:GOBIN = "${tmpGobin}"; `
$Env:GO111MODULE = 'off'; `
if ($LASTEXITCODE -ne 0) { `
Throw '"go-winres install failed..."'; `
} `


@@ -1,33 +1,585 @@
# Moby maintainers file
#
# See project/GOVERNANCE.md for committer versus reviewer roles
# This file describes the maintainer groups within the moby/moby project.
# More detail on Moby project governance is available in the
# project/GOVERNANCE.md file found in this repository.
#
# COMMITTERS
# GitHub ID, Name, Email address, GPG fingerprint
"akerouanton","Albin Kerouanton","albinker@gmail.com"
"AkihiroSuda","Akihiro Suda","akihiro.suda.cz@hco.ntt.co.jp"
"austinvazquez","Austin Vazquez","austin.vazquez.dev@gmail.com"
"corhere","Cory Snider","csnider@mirantis.com"
"cpuguy83","Brian Goff","cpuguy83@gmail.com"
"robmry","Rob Murray","rob.murray@docker.com"
"thaJeztah","Sebastiaan van Stijn","github@gone.nl"
"tianon","Tianon Gravi","admwiggin@gmail.com"
"tonistiigi","Tõnis Tiigi","tonis@docker.com"
"vvoland","Paweł Gronowski","pawel.gronowski@docker.com"
# It is structured to be consumable by both humans and programs.
# To extract its contents programmatically, use any TOML-compliant
# parser.
#
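As a side note, not part of the MAINTAINERS file itself: a hypothetical sketch of that programmatic extraction, using the github.com/pelletier/go-toml library this repository already vendors for the `tomll` tool. The struct and field names below are assumptions modelled on the `[people.<id>]` entries further down.
```go
// Hypothetical sketch: decode the [people] tables of the TOML-format
// MAINTAINERS file. Not part of the repository; for illustration only.
package main

import (
	"fmt"
	"os"

	"github.com/pelletier/go-toml"
)

// person mirrors the Name/Email/GitHub keys used in each [people.<id>] table.
type person struct {
	Name   string
	Email  string
	GitHub string
}

type maintainersFile struct {
	People map[string]person `toml:"people"`
}

func main() {
	data, err := os.ReadFile("MAINTAINERS")
	if err != nil {
		panic(err)
	}
	var m maintainersFile
	if err := toml.Unmarshal(data, &m); err != nil {
		panic(err)
	}
	for id, p := range m.People {
		fmt.Printf("%s: %s <%s> (github.com/%s)\n", id, p.Name, p.Email, p.GitHub)
	}
}
```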
# REVIEWERS
# GitHub ID, Name, Email address, GPG fingerprint
"coolljt0725","Lei Jitang","leijitang@huawei.com"
"crazy-max","Kevin Alvarez","contact@crazymax.dev"
"dmcgowan","Derek McGowan","derek@mcgstyle.net"
"estesp","Phil Estes","estesp@linux.vnet.ibm.com"
"justincormack","Justin Cormack","justin.cormack@docker.com"
"kolyshkin","Kir Kolyshkin","kolyshkin@gmail.com"
"laurazard","Laura Brehm","laurabrehm@hey.com"
"neersighted","Bjorn Neergaard","bjorn@neersighted.com"
"rumpl","Djordje Lukic","djordje.lukic@docker.com"
"samuelkarp","Samuel Karp","me@samuelkarp.com"
"stevvooe","Stephen Day","stephen.day@docker.com"
"thompson-shaun","Shaun Thompson","shaun.thompson@docker.com"
"tiborvass","Tibor Vass","tibor@docker.com"
"unclejack","Cristian Staretu","cristian.staretu@gmail.com"
# TODO(estesp): This file should not necessarily depend on docker/opensource
# This file is compiled into the MAINTAINERS file in docker/opensource.
#
[Org]
[Org."Core maintainers"]
# The Core maintainers are the ghostbusters of the project: when there's a problem others
# can't solve, they show up and fix it with bizarre devices and weaponry.
# They have final say on technical implementation and coding style.
# They are ultimately responsible for quality in all its forms: usability polish,
# bugfixes, performance, stability, etc. When ownership can cleanly be passed to
# a subsystem, they are responsible for doing so and holding the
# subsystem maintainers accountable. If ownership is unclear, they are the de facto owners.
people = [
"akerouanton",
"akihirosuda",
"anusha",
"coolljt0725",
"corhere",
"cpuguy83",
"crazy-max",
"estesp",
"johnstep",
"justincormack",
"kolyshkin",
"laurazard",
"mhbauer",
"neersighted",
"rumpl",
"runcom",
"samuelkarp",
"stevvooe",
"thajeztah",
"tianon",
"tibor",
"tonistiigi",
"unclejack",
"vdemeester",
"vieux",
"vvoland",
"yongtang"
]
[Org.Curators]
# The curators help ensure that incoming issues and pull requests are properly triaged and
# that our various contribution and reviewing processes are respected. With their knowledge of
# the repository activity, they can also guide contributors to relevant material or
# discussions.
#
# They are neither code nor docs reviewers, so they are never expected to merge. They can
# however:
# - close an issue or pull request when it's an exact duplicate
# - close an issue or pull request when it's inappropriate or off-topic
people = [
"alexellis",
"andrewhsu",
"bsousaa",
"dmcgowan",
"fntlnz",
"gianarb",
"olljanat",
"programmerq",
"ripcurld",
"robmry",
"sam-thibault",
"samwhited",
"thajeztah"
]
[Org.Alumni]
# This list contains maintainers that are no longer active on the project.
# It is thanks to these people that the project has become what it is today.
# Thank you!
people = [
# Aaron Lehmann was a maintainer for swarmkit, the registry, and the engine,
# and contributed many improvements, features, and bugfixes in those areas,
# among which "automated service rollbacks", templated secrets and configs,
# and resumable image layer downloads.
"aaronlehmann",
# Harald Albers is the mastermind behind the bash completion scripts for the
# Docker CLI. The completion scripts moved to the Docker CLI repository, so
# you can now find him perform his magic in the https://github.com/docker/cli repository.
"albers",
# Andrea Luzzardi started contributing to the Docker codebase in the "dotCloud"
# era, even before it was called "Docker". He is one of the architects of both
# Swarm and SwarmKit, and its integration into the Docker engine.
"aluzzardi",
# David Calavera contributed many features to Docker, such as an improved
# event system, dynamic configuration reloading, volume plugins, fancy
# new templating options, and an external client credential store. As a
# maintainer, David was release captain for Docker 1.8, and competing
# with Jess Frazelle to be "top dream killer".
# David is now doing amazing stuff as CTO for https://www.netlify.com,
# and tweets as @calavera.
"calavera",
# Michael Crosby was "chief maintainer" of the Docker project.
# During his time as a maintainer, Michael contributed to many
# milestones of the project; he was release captain of Docker v1.0.0,
# started the development of "libcontainer" (what later became runc)
# and containerd, as well as demoing cool hacks such as live migrating
# a game server container with checkpoint/restore.
#
# Michael is currently a maintainer of containerd, but you may see
# him around in other projects on GitHub.
"crosbymichael",
# Before becoming a maintainer, Daniel Nephin was a core contributor
# to "Fig" (now known as Docker Compose). As a maintainer for both the
# Engine and Docker CLI, Daniel contributed many features, among which
# the `docker stack` commands, allowing users to deploy their Docker
# Compose projects as a Swarm service.
"dnephin",
# Doug Davis contributed many features and fixes for the classic builder,
# such as "wildcard" copy, the dockerignore file, custom paths/names
# for the Dockerfile, as well as enhancements to the API and documentation.
# Follow Doug on Twitter, where he tweets as @duginabox.
"duglin",
# As a maintainer, Erik was responsible for the "builder", and
# started the first designs for the new networking model in
# Docker. Erik is now working on all kinds of plugins for Docker
# (https://github.com/contiv) and various open source projects
# in his own repository https://github.com/erikh. You may
# still stumble into him in our issue tracker, or on IRC.
"erikh",
# Evan Hazlett is the creator of the Shipyard and Interlock open source projects,
# and the author of "Orca", which became the foundation of Docker Universal Control
# Plane (UCP). As a maintainer, Evan helped integrating SwarmKit (secrets, tasks)
# into the Docker engine.
"ehazlett",
# Arnaud Porterie (AKA "icecrime") was in charge of maintaining the maintainers.
# As a maintainer, he made life easier for contributors to the Docker open-source
# projects, bringing order in the chaos by designing a triage- and review workflow
# using labels (see https://icecrime.net/technology/a-structured-approach-to-labeling/),
# and automating the hell out of things with his buddies GordonTheTurtle and Poule
# (a chicken!).
#
# A lesser-known fact is that he created the first commit in the libnetwork repository
# even though he didn't know anything about it. Some say, he's now selling stuff on
# the internet ;-)
"icecrime",
# After a false start with his first PR being rejected, James Turnbull became a frequent
# contributor to the documentation, and became a docs maintainer on December 5, 2013. As
# a maintainer, James lifted the docs to a higher standard, and introduced the community
# guidelines ("three strikes"). James is currently changing the world as CTO of https://www.empatico.org,
# meanwhile authoring various books that are worth checking out. You can find him on Twitter,
# rambling as @kartar, and although no longer active as a maintainer, he's always "game" to
# help out reviewing docs PRs, so you may still see him around in the repository.
"jamtur01",
# Jessica Frazelle, also known as the "Keyser Söze of containers",
# runs *everything* in containers. She started contributing to
# Docker with a (fun fun) change involving both iptables and regular
# expressions (coz, YOLO!) on July 10, 2014
# https://github.com/docker/docker/pull/6950/commits/f3a68ffa390fb851115c77783fa4031f1d3b2995.
# Jess was Release Captain for Docker 1.4, 1.6 and 1.7, and contributed
# many features and improvement, among which "seccomp profiles" (making
# containers a lot more secure). Besides being a maintainer, she
# set up the CI infrastructure for the project, giving everyone
# something to shout at if a PR failed ("noooo Janky!").
# Be sure you don't miss her talks at a conference near you (a must-see),
# read her blog at https://blog.jessfraz.com (a must-read), and
# check out her open source projects on GitHub https://github.com/jessfraz (a must-try).
"jessfraz",
# As a maintainer, John Howard managed to make the impossible possible;
# to run Docker on Windows. After facing many challenges, teaching
# fellow-maintainers that 'Windows is not Linux', and many changes in
# Windows Server to facilitate containers, native Windows containers
# saw the light of day in 2015.
#
# John is now enjoying life without containers: playing piano, painting,
# and walking his dogs, but you may occasionally see him drop by on GitHub.
"lowenna",
# Alexander Morozov contributed many features to Docker, worked on the premise of
# what later became containerd (and worked on that too), and made a "stupid" Go
# vendor tool specifically for docker/docker needs: vndr (https://github.com/LK4D4/vndr).
# Not many know that Alexander is a master negotiator, being able to change course
# of action with a single "Nope, we're not gonna do that".
"lk4d4",
# Madhu Venugopal was part of the SocketPlane team that joined Docker.
# As a maintainer, he was working with Jana for the Container Network
# Model (CNM) implemented through libnetwork, and the "routing mesh" powering
# Swarm mode networking.
"mavenugo",
# As a maintainer, Kenfe-Mickaël Laventure worked on the container runtime,
# integrating containerd 1.0 with the daemon, and adding support for custom
# OCI runtimes, as well as implementing the `docker prune` subcommands,
# which was a welcome feature to be added. You can keep up with Mickaél on
# Twitter (@kmlaventure).
"mlaventure",
# As a docs maintainer, Mary Anthony contributed greatly to the Docker
# docs. She wrote the Docker Contributor Guide and Getting Started
# Guides. She helped create a doc build system independent of
# docker/docker project, and implemented a new docs.docker.com theme and
# nav for 2015 Dockercon. Fun fact: the most inherited layer in DockerHub
# public repositories was originally referenced in
# maryatdocker/docker-whale back in May 2015.
"moxiegirl",
# Jana Radhakrishnan was part of the SocketPlane team that joined Docker.
# As a maintainer, he was the lead architect for the Container Network
# Model (CNM) implemented through libnetwork, and the "routing mesh" powering
# Swarm mode networking.
#
# Jana started new adventures in networking, but you can find him tweeting as @mrjana,
# coding on GitHub https://github.com/mrjana, and he may be hiding on the Docker Community
# slack channel :-)
"mrjana",
# Sven Dowideit became a well known person in the Docker ecosphere, building
# boot2docker, and became a regular contributor to the project, starting as
# early as October 2013 (https://github.com/docker/docker/pull/2119), to become
# a maintainer less than two months later (https://github.com/docker/docker/pull/3061).
#
# As a maintainer, Sven took on the task to convert the documentation from
# ReStructuredText to Markdown, migrate to Hugo for generating the docs, and
# writing tooling for building, testing, and publishing them.
#
# If you're not in the occasion to visit "the Australian office", you
# can keep up with Sven on Twitter (@SvenDowideit), his blog http://fosiki.com,
# and of course on GitHub.
"sven",
# Vincent "vbatts!" Batts made his first contribution to the project
# in November 2013, to become a maintainer a few months later, on
# May 10, 2014 (https://github.com/docker/docker/commit/d6e666a87a01a5634c250358a94c814bf26cb778).
# As a maintainer, Vincent made important contributions to core elements
# of Docker, such as "distribution" (tarsum) and graphdrivers (btrfs, devicemapper).
# He also contributed the "tar-split" library, an important element
# for the content-addressable store.
# Vincent is currently a member of the Open Containers Initiative
# Technical Oversight Board (TOB), besides his work at Red Hat and
# Project Atomic. You can still find him regularly hanging out in
# our repository and the #docker-dev and #docker-maintainers IRC channels
# for a chat, as he's always a lot of fun.
"vbatts",
# Vishnu became a maintainer to help out on the daemon codebase and
# libcontainer integration. He's currently involved in the
# Open Containers Initiative, working on the specifications,
# besides his work on cAdvisor and Kubernetes for Google.
"vishh"
]
[people]
# A reference list of all people associated with the project.
# All other sections should refer to people by their canonical key
# in the people section.
# ADD YOURSELF HERE IN ALPHABETICAL ORDER
[people.aaronlehmann]
Name = "Aaron Lehmann"
Email = "aaron.lehmann@docker.com"
GitHub = "aaronlehmann"
[people.akerouanton]
Name = "Albin Kerouanton"
Email = "albinker@gmail.com"
GitHub = "akerouanton"
[people.alexellis]
Name = "Alex Ellis"
Email = "alexellis2@gmail.com"
GitHub = "alexellis"
[people.akihirosuda]
Name = "Akihiro Suda"
Email = "akihiro.suda.cz@hco.ntt.co.jp"
GitHub = "AkihiroSuda"
[people.aluzzardi]
Name = "Andrea Luzzardi"
Email = "al@docker.com"
GitHub = "aluzzardi"
[people.albers]
Name = "Harald Albers"
Email = "github@albersweb.de"
GitHub = "albers"
[people.andrewhsu]
Name = "Andrew Hsu"
Email = "andrewhsu@docker.com"
GitHub = "andrewhsu"
[people.anusha]
Name = "Anusha Ragunathan"
Email = "anusha@docker.com"
GitHub = "anusha-ragunathan"
[people.bsousaa]
Name = "Bruno de Sousa"
Email = "bruno.sousa@docker.com"
GitHub = "bsousaa"
[people.calavera]
Name = "David Calavera"
Email = "david.calavera@gmail.com"
GitHub = "calavera"
[people.coolljt0725]
Name = "Lei Jitang"
Email = "leijitang@huawei.com"
GitHub = "coolljt0725"
[people.corhere]
Name = "Cory Snider"
Email = "csnider@mirantis.com"
GitHub = "corhere"
[people.cpuguy83]
Name = "Brian Goff"
Email = "cpuguy83@gmail.com"
GitHub = "cpuguy83"
[people.crazy-max]
Name = "Kevin Alvarez"
Email = "contact@crazymax.dev"
GitHub = "crazy-max"
[people.crosbymichael]
Name = "Michael Crosby"
Email = "crosbymichael@gmail.com"
GitHub = "crosbymichael"
[people.dnephin]
Name = "Daniel Nephin"
Email = "dnephin@gmail.com"
GitHub = "dnephin"
[people.dmcgowan]
Name = "Derek McGowan"
Email = "derek@mcgstyle.net"
GitHub = "dmcgowan"
[people.duglin]
Name = "Doug Davis"
Email = "dug@us.ibm.com"
GitHub = "duglin"
[people.ehazlett]
Name = "Evan Hazlett"
Email = "ejhazlett@gmail.com"
GitHub = "ehazlett"
[people.erikh]
Name = "Erik Hollensbe"
Email = "erik@docker.com"
GitHub = "erikh"
[people.estesp]
Name = "Phil Estes"
Email = "estesp@linux.vnet.ibm.com"
GitHub = "estesp"
[people.fntlnz]
Name = "Lorenzo Fontana"
Email = "fontanalorenz@gmail.com"
GitHub = "fntlnz"
[people.gianarb]
Name = "Gianluca Arbezzano"
Email = "ga@thumpflow.com"
GitHub = "gianarb"
[people.icecrime]
Name = "Arnaud Porterie"
Email = "icecrime@gmail.com"
GitHub = "icecrime"
[people.jamtur01]
Name = "James Turnbull"
Email = "james@lovedthanlost.net"
GitHub = "jamtur01"
[people.jessfraz]
Name = "Jessie Frazelle"
Email = "jess@linux.com"
GitHub = "jessfraz"
[people.johnstep]
Name = "John Stephens"
Email = "johnstep@docker.com"
GitHub = "johnstep"
[people.justincormack]
Name = "Justin Cormack"
Email = "justin.cormack@docker.com"
GitHub = "justincormack"
[people.kolyshkin]
Name = "Kir Kolyshkin"
Email = "kolyshkin@gmail.com"
GitHub = "kolyshkin"
[people.laurazard]
Name = "Laura Brehm"
Email = "laura.brehm@docker.com"
GitHub = "laurazard"
[people.lk4d4]
Name = "Alexander Morozov"
Email = "lk4d4@docker.com"
GitHub = "lk4d4"
[people.lowenna]
Name = "John Howard"
Email = "github@lowenna.com"
GitHub = "lowenna"
[people.mavenugo]
Name = "Madhu Venugopal"
Email = "madhu@docker.com"
GitHub = "mavenugo"
[people.mhbauer]
Name = "Morgan Bauer"
Email = "mbauer@us.ibm.com"
GitHub = "mhbauer"
[people.mlaventure]
Name = "Kenfe-Mickaël Laventure"
Email = "mickael.laventure@gmail.com"
GitHub = "mlaventure"
[people.moxiegirl]
Name = "Mary Anthony"
Email = "mary.anthony@docker.com"
GitHub = "moxiegirl"
[people.mrjana]
Name = "Jana Radhakrishnan"
Email = "mrjana@docker.com"
GitHub = "mrjana"
[people.neersighted]
Name = "Bjorn Neergaard"
Email = "bjorn@neersighted.com"
GitHub = "neersighted"
[people.olljanat]
Name = "Olli Janatuinen"
Email = "olli.janatuinen@gmail.com"
GitHub = "olljanat"
[people.programmerq]
Name = "Jeff Anderson"
Email = "jeff@docker.com"
GitHub = "programmerq"
[people.robmry]
Name = "Rob Murray"
Email = "rob.murray@docker.com"
GitHub = "robmry"
[people.ripcurld]
Name = "Boaz Shuster"
Email = "ripcurld.github@gmail.com"
GitHub = "ripcurld"
[people.rumpl]
Name = "Djordje Lukic"
Email = "djordje.lukic@docker.com"
GitHub = "rumpl"
[people.runcom]
Name = "Antonio Murdaca"
Email = "runcom@redhat.com"
GitHub = "runcom"
[people.sam-thibault]
Name = "Sam Thibault"
Email = "sam.thibault@docker.com"
GitHub = "sam-thibault"
[people.samuelkarp]
Name = "Samuel Karp"
Email = "me@samuelkarp.com"
GitHub = "samuelkarp"
[people.samwhited]
Name = "Sam Whited"
Email = "sam@samwhited.com"
GitHub = "samwhited"
[people.shykes]
Name = "Solomon Hykes"
Email = "solomon@docker.com"
GitHub = "shykes"
[people.stevvooe]
Name = "Stephen Day"
Email = "stephen.day@docker.com"
GitHub = "stevvooe"
[people.sven]
Name = "Sven Dowideit"
Email = "SvenDowideit@home.org.au"
GitHub = "SvenDowideit"
[people.thajeztah]
Name = "Sebastiaan van Stijn"
Email = "github@gone.nl"
GitHub = "thaJeztah"
[people.tianon]
Name = "Tianon Gravi"
Email = "admwiggin@gmail.com"
GitHub = "tianon"
[people.tibor]
Name = "Tibor Vass"
Email = "tibor@docker.com"
GitHub = "tiborvass"
[people.tonistiigi]
Name = "Tõnis Tiigi"
Email = "tonis@docker.com"
GitHub = "tonistiigi"
[people.unclejack]
Name = "Cristian Staretu"
Email = "cristian.staretu@gmail.com"
GitHub = "unclejack"
[people.vbatts]
Name = "Vincent Batts"
Email = "vbatts@redhat.com"
GitHub = "vbatts"
[people.vdemeester]
Name = "Vincent Demeester"
Email = "vincent@sbr.pm"
GitHub = "vdemeester"
[people.vieux]
Name = "Victor Vieux"
Email = "vieux@docker.com"
GitHub = "vieux"
[people.vishh]
Name = "Vishnu Kannan"
Email = "vishnuk@google.com"
GitHub = "vishh"
[people.vvoland]
Name = "Paweł Gronowski"
Email = "pawel.gronowski@docker.com"
GitHub = "vvoland"
[people.yongtang]
Name = "Yong Tang"
Email = "yong.tang.github@outlook.com"
GitHub = "yongtang"


@@ -1,6 +1,12 @@
.PHONY: all binary dynbinary build cross help install manpages run shell test test-docker-py test-integration test-unit validate validate-% win
DOCKER ?= docker
BUILDX ?= $(DOCKER) buildx
# set the graph driver as the current graphdriver if not set
DOCKER_GRAPHDRIVER := $(if $(DOCKER_GRAPHDRIVER),$(DOCKER_GRAPHDRIVER),$(shell docker info -f '{{ .Driver }}' 2>&1))
export DOCKER_GRAPHDRIVER
DOCKER_GITCOMMIT := $(shell git rev-parse HEAD)
export DOCKER_GITCOMMIT
@@ -31,6 +37,7 @@ DOCKER_ENVS := \
-e DOCKER_BUILD_OPTS \
-e DOCKER_BUILD_PKGS \
-e DOCKER_BUILDKIT \
-e DOCKER_BASH_COMPLETION_PATH \
-e DOCKER_CLI_PATH \
-e DOCKERCLI_VERSION \
-e DOCKERCLI_REPOSITORY \
@@ -38,10 +45,8 @@ DOCKER_ENVS := \
-e DOCKERCLI_INTEGRATION_REPOSITORY \
-e DOCKER_DEBUG \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_FIREWALL_BACKEND \
-e DOCKER_GITCOMMIT \
-e DOCKER_GRAPHDRIVER \
-e DOCKER_IGNORE_BR_NETFILTER_ERROR \
-e DOCKER_LDFLAGS \
-e DOCKER_PORT \
-e DOCKER_REMAP_ROOT \
@@ -55,7 +60,7 @@ DOCKER_ENVS := \
-e GITHUB_ACTIONS \
-e TEST_FORCE_VALIDATE \
-e TEST_INTEGRATION_DIR \
-e TEST_INTEGRATION_USE_GRAPHDRIVER \
-e TEST_INTEGRATION_USE_SNAPSHOTTER \
-e TEST_INTEGRATION_FAIL_FAST \
-e TEST_SKIP_INTEGRATION \
-e TEST_SKIP_INTEGRATION_CLI \
@@ -85,11 +90,11 @@ DOCKER_ENVS := \
# to allow `make BIND_DIR=. shell` or `make BIND_DIR= test`
# (default to no bind mount if DOCKER_HOST is set)
# note: BINDDIR is supported for backwards-compatibility here
BIND_DIR := $(if $(BINDDIR),$(BINDDIR),$(if $(DOCKER_HOST),,.))
BIND_DIR := $(if $(BINDDIR),$(BINDDIR),$(if $(DOCKER_HOST),,bundles))
# DOCKER_MOUNT can be overridden, but use at your own risk!
# DOCKER_MOUNT can be overriden, but use at your own risk!
ifndef DOCKER_MOUNT
DOCKER_MOUNT := $(if $(BIND_DIR),-v "$(BIND_DIR):/go/src/github.com/docker/docker/$(BIND_DIR)")
DOCKER_MOUNT := $(if $(BIND_DIR),-v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/docker/docker/$(BIND_DIR)")
DOCKER_MOUNT := $(if $(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT):$(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT))
# This allows the test suite to be able to run without worrying about the underlying fs used by the container running the daemon (e.g. aufs-on-aufs), so long as the host running the container is running a supported fs.
@@ -99,14 +104,8 @@ DOCKER_MOUNT := $(if $(DOCKER_MOUNT),$(DOCKER_MOUNT),-v /go/src/github.com/docke
DOCKER_MOUNT_CACHE := -v docker-dev-cache:/root/.cache -v docker-mod-cache:/go/pkg/mod/
DOCKER_MOUNT_CLI := $(if $(DOCKER_CLI_PATH),-v $(shell dirname $(DOCKER_CLI_PATH)):/usr/local/cli,)
ifdef BIND_GIT
# Gets the common .git directory (even from inside a git worktree)
GITDIR := $(shell realpath $(shell git rev-parse --git-common-dir))
MOUNT_GITDIR := $(if $(GITDIR),-v "$(GITDIR):$(GITDIR)")
endif
DOCKER_MOUNT := $(DOCKER_MOUNT) $(DOCKER_MOUNT_CACHE) $(DOCKER_MOUNT_CLI) $(DOCKER_MOUNT_BASH_COMPLETION) $(MOUNT_GITDIR)
DOCKER_MOUNT_BASH_COMPLETION := $(if $(DOCKER_BASH_COMPLETION_PATH),-v $(shell dirname $(DOCKER_BASH_COMPLETION_PATH)):/usr/local/completion/bash,)
DOCKER_MOUNT := $(DOCKER_MOUNT) $(DOCKER_MOUNT_CACHE) $(DOCKER_MOUNT_CLI) $(DOCKER_MOUNT_BASH_COMPLETION)
endif # ifndef DOCKER_MOUNT
# This allows to set the docker-dev container name
@@ -161,19 +160,15 @@ BAKE_CMD := $(BUILDX) bake
default: binary
.PHONY: all
all: build ## validate all checks, build linux binaries, run all tests,\ncross build non-linux binaries, and generate archives
$(DOCKER_RUN_DOCKER) bash -c 'hack/validate/default && hack/make.sh'
.PHONY: binary
binary: bundles ## build statically linked linux binaries
$(BAKE_CMD) binary
.PHONY: dynbinary
dynbinary: bundles ## build dynamically linked linux binaries
$(BAKE_CMD) dynbinary
.PHONY: cross
cross: bundles ## cross build the binaries
$(BAKE_CMD) binary-cross
@@ -187,15 +182,12 @@ clean: clean-cache
clean-cache: ## remove the docker volumes that are used for caching in the dev-container
docker volume rm -f docker-dev-cache docker-mod-cache
.PHONY: help
help: ## this help
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z0-9_-]+:.*?## / {gsub("\\\\n",sprintf("\n%22c",""), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
.PHONY: install
install: ## install the linux binaries
KEEPBUNDLE=1 hack/make.sh install-binary
.PHONY: run
run: build ## run the docker daemon in a container
$(DOCKER_RUN_DOCKER) sh -c "KEEPBUNDLE=1 hack/make.sh install-binary run"
@@ -205,25 +197,20 @@ build: shell_target := --target=dev-base
else
build: shell_target := --target=dev
endif
build: validate-bind-dir bundles
build: bundles
$(BUILD_CMD) $(BUILD_OPTS) $(shell_target) --load -t "$(DOCKER_IMAGE)" .
.PHONY: shell
shell: build ## start a shell inside the build env
$(DOCKER_RUN_DOCKER) bash
.PHONY: test
test: build test-unit ## run the unit, integration and docker-py tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-integration test-docker-py
.PHONY: test-docker-py
test-docker-py: build ## run the docker-py tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-docker-py
.PHONY: test-integration-cli
test-integration-cli: test-integration ## (DEPRECATED) use test-integration
.PHONY: test-integration
ifneq ($(and $(TEST_SKIP_INTEGRATION),$(TEST_SKIP_INTEGRATION_CLI)),)
test-integration:
@echo Both integrations suites skipped per environment variables
@@ -232,29 +219,23 @@ test-integration: build ## run the integration tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-integration
endif
.PHONY: test-integration-flaky
test-integration-flaky: build ## run the stress test for all new integration tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-integration-flaky
.PHONY: test-unit
test-unit: build ## run the unit tests
$(DOCKER_RUN_DOCKER) hack/test/unit
.PHONY: validate
validate: build ## validate DCO, Seccomp profile generation, gofmt,\n./pkg/ isolation, golint, tests, tomls, go vet and vendor
$(DOCKER_RUN_DOCKER) hack/validate/all
.PHONY: validate-generate-files
validate-generate-files:
$(BUILD_CMD) --target "validate" \
--output "type=cacheonly" \
--file "./hack/dockerfiles/generate-files.Dockerfile" .
.PHONY: validate-%
validate-%: build ## validate specific check
$(DOCKER_RUN_DOCKER) hack/validate/$*
.PHONY: win
win: bundles ## cross build the binary for windows
$(BAKE_CMD) --set *.platform=windows/amd64 binary
@@ -286,10 +267,3 @@ generate-files:
--file "./hack/dockerfiles/generate-files.Dockerfile" .
cp -R "$($@_TMP_OUT)"/. .
rm -rf "$($@_TMP_OUT)"/*
.PHONY: validate-bind-dir
validate-bind-dir:
@case "$(BIND_DIR)" in \
".."*|"/"*) echo "Make needs to be run from the project-root directory, with BIND_DIR set to \".\" or a subdir"; \
exit 1 ;; \
esac


@@ -1,13 +1,6 @@
The Moby Project
================
[![PkgGoDev](https://pkg.go.dev/badge/github.com/moby/moby/v2)](https://pkg.go.dev/github.com/moby/moby/v2)
![GitHub License](https://img.shields.io/github/license/moby/moby)
[![Go Report Card](https://goreportcard.com/badge/github.com/moby/moby/v2)](https://goreportcard.com/report/github.com/moby/moby/v2)
[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/moby/moby/badge)](https://scorecard.dev/viewer/?uri=github.com/moby/moby)
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/10989/badge)](https://www.bestpractices.dev/projects/10989)
![Moby Project logo](docs/static_files/moby-project-logo.png "The Moby Project")
Moby is an open-source project created by Docker to enable and accelerate software containerization.
@@ -39,7 +32,7 @@ New projects can be added if they fit with the community goals. Docker is commit
However, other projects are also encouraged to use Moby as an upstream, and to reuse the components in diverse ways, and all these uses will be treated in the same way. External maintainers and contributors are welcomed.
The Moby project is not intended as a location for support or feature requests for Docker products, but as a place for contributors to work on open source code, fix bugs, and make the code more useful.
The releases are supported by the maintainers, community and users, on a best efforts basis only. For customers who want enterprise or commercial support, [Docker Desktop](https://www.docker.com/products/docker-desktop/) and [Mirantis Container Runtime](https://www.mirantis.com/software/mirantis-container-runtime/) are the appropriate products for these use cases.
The releases are supported by the maintainers, community and users, on a best efforts basis only, and are not intended for customers who want enterprise or commercial support; Docker EE is the appropriate product for these use cases.
-----


@@ -113,5 +113,5 @@ We see gRPC as the natural communication layer between decoupled components.
In addition to pushing out large components into other projects, much of the
internal code structure, and in particular the
["Daemon"](https://pkg.go.dev/github.com/moby/moby/v2/daemon#Daemon) object,
["Daemon"](https://godoc.org/github.com/docker/docker/daemon#Daemon) object,
should be split into smaller, more manageable, and more testable components.


@@ -1,42 +1,9 @@
# Security Policy
# Reporting security issues
The maintainers of the Moby project take security seriously. If you discover
a security issue, please bring it to their attention right away!
The Moby maintainers take security seriously. If you discover a security issue, please bring it to their attention right away!
## Reporting a Vulnerability
### Reporting a Vulnerability
Please **DO NOT** file a public issue, instead send your report privately
to [security@docker.com](mailto:security@docker.com).
Please **DO NOT** file a public issue, instead send your report privately to security@docker.com.
Reporter(s) can expect a response within 72 hours, acknowledging the issue was
received.
## Review Process
After receiving the report, an initial triage and technical analysis is
performed to confirm the report and determine its scope. We may request
additional information in this stage of the process.
Once a reviewer has confirmed the relevance of the report, a draft security
advisory will be created on GitHub. The draft advisory will be used to discuss
the issue with maintainers, the reporter(s), and where applicable, other
affected parties under embargo.
If the vulnerability is accepted, a timeline for developing a patch, public
disclosure, and patch release will be determined. If there is an embargo period
on public disclosure before the patch release, the reporter(s) are expected to
participate in the discussion of the timeline and abide by agreed upon dates
for public disclosure.
## Accreditation
Security reports are greatly appreciated and we will publicly thank you,
although we will keep your name confidential if you request it. We also like to
send gifts - if you're into swag, make sure to let us know. We do not
currently offer a paid security bounty program.
## Supported Versions
This project uses long-lived branches to maintain releases. Refer to
[BRANCHES-AND-TAGS.md](project/BRANCHES-AND-TAGS.md) in the default branch to
learn about the current maintenance status of each branch.
Security reports are greatly appreciated and we will publicly thank you for it, although we keep your name confidential if you request it. We also like to send gifts—if you're into schwag, make sure to let us know. We currently do not offer a paid security bounty program, but are not ruling it out in the future.


@@ -8,11 +8,11 @@ questions you may have as an aspiring Moby contributor.
Moby has two test suites (and one legacy test suite):
* Unit tests - use standard `go test` and
[gotest.tools/v3/assert](https://pkg.go.dev/gotest.tools/v3/assert) assertions. They are located in
[gotest.tools/assert](https://godoc.org/gotest.tools/assert) assertions. They are located in
the package they test. Unit tests should be fast and test only their own
package.
* API integration tests - use standard `go test` and
[gotest.tools/v3/assert](https://pkg.go.dev/gotest.tools/v3/assert) assertions. They are located in
[gotest.tools/assert](https://godoc.org/gotest.tools/assert) assertions. They are located in
`./integration/<component>` directories, where `component` is: container,
image, volume, etc. These tests perform HTTP requests to an API endpoint and
check the HTTP response and daemon state after the call.
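For illustration only (this example is not part of TESTING.md): a minimal unit-style test showing the gotest.tools/v3/assert style referred to above. The `add` function is a hypothetical stand-in for real package code.
```go
package example

import (
	"testing"

	"gotest.tools/v3/assert"
	is "gotest.tools/v3/assert/cmp"
)

func add(a, b int) int { return a + b }

func TestAdd(t *testing.T) {
	// assert.Equal fails the test immediately on mismatch.
	assert.Equal(t, add(1, 2), 3)
	// assert.Check records a failure but lets the test continue.
	assert.Check(t, is.DeepEqual([]int{add(0, 1), add(1, 1)}, []int{1, 2}))
}
```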
@@ -57,28 +57,17 @@ Instead, implement new tests under `integration/`.
### Integration tests environment considerations
When adding new tests or modifying existing tests under `integration/`, the testing
environment should be properly considered. [`skip.If`](https://pkg.go.dev/gotest.tools/v3/skip#If) from
[gotest.tools/v3/skip](https://pkg.go.dev/gotest.tools/v3/skip) can be used to make the
environment should be properly considered. `skip.If` from
[gotest.tools/skip](https://godoc.org/gotest.tools/skip) can be used to make the
test run conditionally. Full testing environment conditions can be found at
[environment.go](https://github.com/moby/moby/blob/311b2c87e125c6d4198014369e313135cf928a8a/testutil/environment/environment.go)
[environment.go](https://github.com/moby/moby/blob/6b6eeed03b963a27085ea670f40cd5ff8a61f32e/testutil/environment/environment.go)
Here is a quick example. If the test needs to interact with a docker daemon on
the same host, the following condition should be checked within the test code
```go
package example

import (
	"testing"

	"gotest.tools/v3/skip"
)

func TestSomething(t *testing.T) {
	skip.If(t, testEnv.IsRemoteDaemon(), "test requires a local daemon")

	// your integration test code
}
skip.If(t, testEnv.IsRemoteDaemon())
// your integration test code
```
If a remote daemon is detected, the test will be skipped.
@@ -89,11 +78,11 @@ If a remote daemon is detected, the test will be skipped.
To run the unit test suite:
```bash
```
make test-unit
```
or `hack/test/unit` from inside a `make shell` container or properly
or `hack/test/unit` from inside a `BINDDIR=. make shell` container or properly
configured environment.
The following environment variables may be used to run a subset of tests:
@@ -106,7 +95,7 @@ The following environment variables may be used to run a subset of tests:
To run the integration test suite:
```bash
```
make test-integration
```
@@ -132,6 +121,6 @@ automatically set the other above mentioned environment variables accordingly.
You can change the version of Go used to build the code under test
by setting the `GO_VERSION` variable, for example:
```bash
make GO_VERSION=1.24.8 test
```
make GO_VERSION=1.12.8 test
```

VENDORING.md (new file, 46 lines)

@@ -0,0 +1,46 @@
# Vendoring policies
This document outlines recommended vendoring policies for Docker repositories.
(For example, libnetwork is a Docker repository and logrus is not.)
## Vendoring using tags
Commit-ID-based vendoring provides little or no information about the updates
being vendored. To fix this, vendors will now require that repositories use annotated
tags along with commit IDs to snapshot commits. Annotated tags by themselves
are not sufficient, since the same tag can be force-updated to reference
different commits.
Each tag should:
- Follow Semantic Versioning rules (refer to section on "Semantic Versioning")
- Have a corresponding entry in the change tracking document.
Each repo should:
- Have a change tracking document between tags/releases. Ex: CHANGELOG.md,
github releases file.
The goal here is for consuming repos to be able to use the tag version and
changelog updates to determine whether the vendoring will cause any breaking or
backward-incompatible changes. This also means that repos can declare a
dependency on a package of a specific version or greater, up to the next major
release, without encountering breaking changes.
## Semantic Versioning
Annotated version tags should follow [Semantic Versioning](http://semver.org) policies:
"Given a version number MAJOR.MINOR.PATCH, increment the:
1. MAJOR version when you make incompatible API changes,
2. MINOR version when you add functionality in a backwards-compatible manner, and
3. PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions
to the MAJOR.MINOR.PATCH format."
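As a rough, non-normative illustration of how those rules can be checked mechanically, here is a hedged sketch using the golang.org/x/mod/semver package (an assumption; nothing in this policy requires that particular library):
```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	current, proposed := "v1.2.3", "v2.0.0"

	// Under SemVer, a MAJOR bump signals incompatible API changes, so the
	// vendoring update needs a closer look at the changelog.
	if semver.Major(proposed) != semver.Major(current) {
		fmt.Println("major version bump: review for breaking changes")
	}

	// Compare reports -1, 0, or +1 according to semantic version precedence.
	fmt.Println(semver.Compare(current, proposed)) // -1
}
```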
## Vendoring cadence
To avoid huge vendoring changes, it is recommended to have a regular
cadence for vendoring updates (e.g. monthly).
## Pre-merge vendoring tests
All related repos will be vendored into docker/docker.
CI on docker/docker should catch any breaking changes involving multiple repos.


@@ -1,18 +1,12 @@
# Engine API
[![PkgGoDev](https://pkg.go.dev/badge/github.com/moby/moby/api)](https://pkg.go.dev/github.com/moby/moby/api)
![GitHub License](https://img.shields.io/github/license/moby/moby)
[![Go Report Card](https://goreportcard.com/badge/github.com/moby/moby/api)](https://goreportcard.com/report/github.com/moby/moby/api)
[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/moby/moby/badge)](https://scorecard.dev/viewer/?uri=github.com/moby/moby)
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/10989/badge)](https://www.bestpractices.dev/projects/10989)
# Working on the Engine API
The Engine API is an HTTP API used by the command-line client to communicate with the daemon. It can also be used by third-party software to control the daemon.
It consists of various components in this repository:
- `api/swagger.yaml` A Swagger definition of the API.
- `api/types/` Types shared by both the client and server, representing various objects, options, responses, etc. Most are written manually, but some are automatically generated from the Swagger definition. See [#27919](https://github.com/moby/moby/issues/27919) for progress on this.
- `api/types/` Types shared by both the client and server, representing various objects, options, responses, etc. Most are written manually, but some are automatically generated from the Swagger definition. See [#27919](https://github.com/docker/docker/issues/27919) for progress on this.
- `cli/` The command-line client.
- `client/` The Go client used by the command-line client. It can also be used by third-party Go programs.
- `daemon/` The daemon, which serves the API.
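For orientation only, and not part of the README: a minimal hedged sketch of how third-party Go software can use the `client/` package listed above to control the daemon over the HTTP API.
```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	// Connect using the standard environment variables (DOCKER_HOST, ...)
	// and negotiate an API version supported by the daemon.
	apiClient, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer apiClient.Close()

	ping, err := apiClient.Ping(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println("daemon API version:", ping.APIVersion)
}
```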
@@ -27,7 +21,6 @@ The API is defined by the [Swagger](http://swagger.io/specification/) definition
## Updating the API documentation
The API documentation is generated entirely from `api/swagger.yaml`. If you make updates to the API, edit this file to represent the change in the documentation.
Documentation for each API version can be found in the [docs directory](docs/README.md), which also provides a [CHANGELOG.md](docs/CHANGELOG.md).
The file is split into two main sections:
@@ -36,7 +29,7 @@ The file is split into two main sections:
To make an edit, first look for the endpoint you want to edit under `paths`, then make the required edits. Endpoints may reference reusable objects with `$ref`, which can be found in the `definitions` section.
There is hopefully enough example material in the file for you to copy a similar pattern from elsewhere in the file (e.g. adding new fields or endpoints), but for the full reference, see the [Swagger specification](https://github.com/moby/moby/issues/27919).
There is hopefully enough example material in the file for you to copy a similar pattern from elsewhere in the file (e.g. adding new fields or endpoints), but for the full reference, see the [Swagger specification](https://github.com/docker/docker/issues/27919).
`swagger.yaml` is validated by `hack/validate/swagger` to ensure it is a valid Swagger definition. This is useful when making edits to ensure you are doing the right thing.
@@ -46,4 +39,4 @@ When you make edits to `swagger.yaml`, you may want to check the generated API d
Run `make swagger-docs` and a preview will be running at `http://localhost:9000`. Some of the styling may be incorrect, but you'll be able to ensure that it is generating the correct documentation.
The production documentation is generated by vendoring `swagger.yaml` into [docker/docs](https://github.com/docker/docs).
The production documentation is generated by vendoring `swagger.yaml` into [docker/docker.github.io](https://github.com/docker/docker.github.io).

11
api/common.go Normal file
View File

@@ -0,0 +1,11 @@
package api // import "github.com/docker/docker/api"
// Common constants for daemon and client.
const (
// DefaultVersion of Current REST API
DefaultVersion = "1.44"
// NoBaseImageSpecifier is the symbol used by the FROM
// command to specify that no base image is to be used.
NoBaseImageSpecifier = "scratch"
)

File diff suppressed because it is too large Load Diff

View File

@@ -1,26 +0,0 @@
# API Documentation
This directory contains versioned documents for each version of the API
specification supported by this module. While this module provides support
for older API versions, support should be considered "best-effort", especially
for very old versions. Users are recommended to use the latest API versions,
and only rely on older API versions for compatibility with older clients.
Newer API versions tend to be backward-compatible with older versions,
with some exceptions where features were deprecated. For an overview
of changes for each version, refer to [CHANGELOG.md](CHANGELOG.md).
The latest version of the API specification can be found [at the root directory
of this module](../swagger.yaml) which may contain unreleased changes.
For API version v1.24, documentation is only available in markdown
format; for later versions, [Swagger (OpenAPI) v2.0](https://swagger.io/specification/v2/)
specifications can be found in this directory. The Moby project itself
primarily uses these swagger files to produce the API documentation;
while we attempt to make these files match the actual implementation,
the OpenAPI 2.0 specification has limitations that prevent us from
expressing all options provided. There may be discrepancies (for which
we welcome contributions). If you find bugs, or discrepancies, please
open a ticket (or pull request).

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -1,14 +0,0 @@
module github.com/moby/moby/api
go 1.24.0
require (
github.com/docker/go-units v0.5.0
github.com/moby/docker-image-spec v1.3.1
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.1.1
gotest.tools/v3 v3.5.2
pgregory.net/rapid v1.2.0
)
require github.com/google/go-cmp v0.7.0 // indirect

View File

@@ -1,14 +0,0 @@
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=

View File

@@ -1,92 +0,0 @@
package authconfig
import (
"bytes"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"github.com/moby/moby/api/types/registry"
)
// Encode serializes the auth configuration as a base64url encoded
// ([RFC4648, section 5]) JSON string for sending through the X-Registry-Auth header.
//
// [RFC4648, section 5]: https://tools.ietf.org/html/rfc4648#section-5
func Encode(authConfig registry.AuthConfig) (string, error) {
// Older daemons (or registries) may not handle an empty string,
// which resulted in an "io.EOF" when unmarshaling or decoding.
//
// FIXME(thaJeztah): find exactly what code-paths are impacted by this.
// if authConfig == (AuthConfig{}) { return "", nil }
buf, err := json.Marshal(authConfig)
if err != nil {
return "", errInvalidParameter{err}
}
return base64.URLEncoding.EncodeToString(buf), nil
}
// Decode decodes base64url encoded ([RFC4648, section 5]) JSON
// authentication information as sent through the X-Registry-Auth header.
//
// This function always returns an [AuthConfig], even if an error occurs. It is up
// to the caller to decide if authentication is required, and if the error can
// be ignored.
//
// [RFC4648, section 5]: https://tools.ietf.org/html/rfc4648#section-5
func Decode(authEncoded string) (*registry.AuthConfig, error) {
if authEncoded == "" {
return &registry.AuthConfig{}, nil
}
decoded, err := base64.URLEncoding.DecodeString(authEncoded)
if err != nil {
var e base64.CorruptInputError
if errors.As(err, &e) {
return &registry.AuthConfig{}, invalid(errors.New("must be a valid base64url-encoded string"))
}
return &registry.AuthConfig{}, invalid(err)
}
if bytes.Equal(decoded, []byte("{}")) {
return &registry.AuthConfig{}, nil
}
return decode(bytes.NewReader(decoded))
}
// DecodeRequestBody decodes authentication information as sent as JSON in the
// body of a request. This function is to provide backward compatibility with old
// clients and API versions. Current clients and API versions expect authentication
// to be provided through the X-Registry-Auth header.
//
// Like [Decode], this function always returns an [AuthConfig], even if an
// error occurs. It is up to the caller to decide if authentication is required,
// and if the error can be ignored.
func DecodeRequestBody(r io.ReadCloser) (*registry.AuthConfig, error) {
return decode(r)
}
func decode(r io.Reader) (*registry.AuthConfig, error) {
authConfig := &registry.AuthConfig{}
if err := json.NewDecoder(r).Decode(authConfig); err != nil {
// always return an (empty) AuthConfig to increase compatibility with
// the existing API.
return &registry.AuthConfig{}, invalid(fmt.Errorf("invalid JSON: %w", err))
}
return authConfig, nil
}
func invalid(err error) error {
return errInvalidParameter{fmt.Errorf("invalid X-Registry-Auth header: %w", err)}
}
type errInvalidParameter struct{ error }
func (errInvalidParameter) InvalidParameter() {}
func (e errInvalidParameter) Cause() error { return e.error }
func (e errInvalidParameter) Unwrap() error { return e.error }
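A minimal round-trip sketch for the Encode/Decode helpers above; the authconfig import path below is hypothetical (only the registry types path is shown in this diff), and the field names match the test fixtures that follow:
package main
import (
    "fmt"
    "github.com/moby/moby/api/types/authconfig" // hypothetical path for the authconfig package above
    "github.com/moby/moby/api/types/registry"
)
func main() {
    // Encode an AuthConfig for the X-Registry-Auth header.
    header, err := authconfig.Encode(registry.AuthConfig{
        Username:      "testuser",
        Password:      "testpassword",
        ServerAddress: "example.com",
    })
    if err != nil {
        panic(err)
    }
    // Decode it back; Decode always returns a non-nil *registry.AuthConfig.
    cfg, err := authconfig.Decode(header)
    if err != nil {
        panic(err)
    }
    fmt.Println(cfg.Username, cfg.ServerAddress) // testuser example.com
}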

View File

@@ -1,191 +0,0 @@
package authconfig
import (
"encoding/base64"
"strings"
"testing"
"github.com/moby/moby/api/types/registry"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
func TestDecodeAuthConfig(t *testing.T) {
tests := []struct {
doc string
input string
inputBase64 string
expected registry.AuthConfig
expectedErr string
}{
{
doc: "empty",
input: ``,
inputBase64: ``,
expected: registry.AuthConfig{},
},
{
doc: "empty JSON",
input: `{}`,
inputBase64: `e30=`,
expected: registry.AuthConfig{},
},
{
doc: "malformed JSON",
input: `{`,
inputBase64: `ew==`,
expected: registry.AuthConfig{},
expectedErr: `invalid X-Registry-Auth header: invalid JSON: unexpected EOF`,
},
{
doc: "test authConfig",
input: `{"username":"testuser","password":"testpassword","serveraddress":"example.com"}`,
inputBase64: `eyJ1c2VybmFtZSI6InRlc3R1c2VyIiwicGFzc3dvcmQiOiJ0ZXN0cGFzc3dvcmQiLCJzZXJ2ZXJhZGRyZXNzIjoiZXhhbXBsZS5jb20ifQ==`,
expected: registry.AuthConfig{
Username: "testuser",
Password: "testpassword",
ServerAddress: "example.com",
},
},
{
// FIXME(thaJeztah): we should not accept multiple JSON documents.
doc: "multiple authConfig",
input: `{"username":"testuser","password":"testpassword","serveraddress":"example.com"}{"username":"testuser2","password":"testpassword2","serveraddress":"example.org"}`,
inputBase64: `eyJ1c2VybmFtZSI6InRlc3R1c2VyIiwicGFzc3dvcmQiOiJ0ZXN0cGFzc3dvcmQiLCJzZXJ2ZXJhZGRyZXNzIjoiZXhhbXBsZS5jb20ifXsidXNlcm5hbWUiOiJ0ZXN0dXNlcjIiLCJwYXNzd29yZCI6InRlc3RwYXNzd29yZDIiLCJzZXJ2ZXJhZGRyZXNzIjoiZXhhbXBsZS5vcmcifQ==`,
expected: registry.AuthConfig{
Username: "testuser",
Password: "testpassword",
ServerAddress: "example.com",
},
},
// We currently only support base64url encoding with padding, so
// un-padded should produce an error.
//
// RFC4648, section 5: https://tools.ietf.org/html/rfc4648#section-5
// RFC4648, section 3.2: https://tools.ietf.org/html/rfc4648#section-3.2
{
doc: "empty JSON no padding",
input: `{}`,
inputBase64: `e30`,
expected: registry.AuthConfig{},
expectedErr: `invalid X-Registry-Auth header: must be a valid base64url-encoded string`,
},
{
doc: "test authConfig",
input: `{"username":"testuser","password":"testpassword","serveraddress":"example.com"}`,
inputBase64: `eyJ1c2VybmFtZSI6InRlc3R1c2VyIiwicGFzc3dvcmQiOiJ0ZXN0cGFzc3dvcmQiLCJzZXJ2ZXJhZGRyZXNzIjoiZXhhbXBsZS5jb20ifQ`,
expected: registry.AuthConfig{},
expectedErr: `invalid X-Registry-Auth header: must be a valid base64url-encoded string`,
},
}
for _, tc := range tests {
t.Run(tc.doc, func(t *testing.T) {
if tc.inputBase64 != "" {
// Sanity check to make sure our fixtures are correct.
b64 := base64.URLEncoding.EncodeToString([]byte(tc.input))
if !strings.HasSuffix(tc.inputBase64, "=") {
b64 = strings.TrimRight(b64, "=")
}
assert.Check(t, is.Equal(b64, tc.inputBase64))
}
out, err := Decode(tc.inputBase64)
if tc.expectedErr != "" {
assert.Check(t, is.ErrorType(err, errInvalidParameter{}))
assert.Check(t, is.Error(err, tc.expectedErr))
} else {
assert.NilError(t, err)
assert.Equal(t, *out, tc.expected)
}
})
}
}
func TestEncodeAuthConfig(t *testing.T) {
tests := []struct {
doc string
input registry.AuthConfig
outBase64 string
outPlain string
}{
{
// Older daemons (or registries) may not handle an empty string,
// which resulted in an "io.EOF" when unmarshaling or decoding.
//
// FIXME(thaJeztah): find exactly what code-paths are impacted by this.
doc: "empty",
input: registry.AuthConfig{},
outBase64: `e30=`,
outPlain: `{}`,
},
{
doc: "test authConfig",
input: registry.AuthConfig{
Username: "testuser",
Password: "testpassword",
ServerAddress: "example.com",
},
outBase64: `eyJ1c2VybmFtZSI6InRlc3R1c2VyIiwicGFzc3dvcmQiOiJ0ZXN0cGFzc3dvcmQiLCJzZXJ2ZXJhZGRyZXNzIjoiZXhhbXBsZS5jb20ifQ==`,
outPlain: `{"username":"testuser","password":"testpassword","serveraddress":"example.com"}`,
},
}
for _, tc := range tests {
// Sanity check to make sure our fixtures are correct.
b64 := base64.URLEncoding.EncodeToString([]byte(tc.outPlain))
assert.Check(t, is.Equal(b64, tc.outBase64))
t.Run(tc.doc, func(t *testing.T) {
out, err := Encode(tc.input)
assert.NilError(t, err)
assert.Equal(t, out, tc.outBase64)
authJSON, err := base64.URLEncoding.DecodeString(out)
assert.NilError(t, err)
assert.Equal(t, string(authJSON), tc.outPlain)
})
}
}
func BenchmarkDecodeAuthConfig(b *testing.B) {
cases := []struct {
doc string
inputBase64 string
invalid bool
}{
{
doc: "empty",
inputBase64: ``,
},
{
doc: "empty JSON",
inputBase64: `e30=`,
},
{
doc: "valid",
inputBase64: base64.URLEncoding.EncodeToString([]byte(`{"username":"testuser","password":"testpassword","serveraddress":"example.com"}`)),
},
{
doc: "invalid base64",
inputBase64: "not-base64",
invalid: true,
},
{
doc: "malformed JSON",
inputBase64: `ew==`,
invalid: true,
},
}
for _, tc := range cases {
b.Run(tc.doc, func(b *testing.B) {
b.ReportAllocs()
for i := 0; i < b.N; i++ {
_, err := Decode(tc.inputBase64)
if !tc.invalid && err != nil {
b.Fatal(err)
}
}
})
}
}

View File

@@ -1,146 +0,0 @@
package stdcopy
import (
"encoding/binary"
"errors"
"fmt"
"io"
)
// StdType is the type of standard stream
// a writer can multiplex to.
type StdType byte
const (
Stdin StdType = 0 // Stdin represents standard input stream. It is present for completeness and should NOT be used. When reading the stream with [StdCopy] it is output on [Stdout].
Stdout StdType = 1 // Stdout represents standard output stream.
Stderr StdType = 2 // Stderr represents standard error stream.
Systemerr StdType = 3 // Systemerr represents errors originating from the system. When reading the stream with [StdCopy] it is returned as an error.
)
const (
stdWriterPrefixLen = 8
stdWriterFdIndex = 0
stdWriterSizeIndex = 4
startingBufLen = 32*1024 + stdWriterPrefixLen + 1
)
// StdCopy is a modified version of [io.Copy] to de-multiplex messages
// from "multiplexedSource" and copy them to destination streams
// "destOut" and "destErr".
//
// StdCopy demultiplexes "multiplexedSource", assuming that it contains
// two streams, previously multiplexed using a writer created with
// [NewStdWriter].
//
// As it reads from "multiplexedSource", StdCopy writes [Stdout] messages
// to "destOut", and [Stderr] messages to "destErr". For backward-compatibility,
// [Stdin] messages are output to "destOut". The [Systemerr] stream provides
// errors produced by the daemon. It is returned as an error, and terminates
// processing the stream.
//
// StdCopy reads until it hits [io.EOF] on "multiplexedSource", after
// which it returns a nil error. In other words: any error returned indicates
// a real underlying error, which may be when an unknown [StdType] stream
// is received.
//
// The "written" return holds the total number of bytes written to "destOut"
// and "destErr" combined.
func StdCopy(destOut, destErr io.Writer, multiplexedSource io.Reader) (written int64, _ error) {
var (
buf = make([]byte, startingBufLen)
bufLen = len(buf)
nr, nw int
err error
out io.Writer
frameSize int
)
for {
// Make sure we have at least a full header
for nr < stdWriterPrefixLen {
var nr2 int
nr2, err = multiplexedSource.Read(buf[nr:])
nr += nr2
if errors.Is(err, io.EOF) {
if nr < stdWriterPrefixLen {
return written, nil
}
break
}
if err != nil {
return 0, err
}
}
// Check the first byte to know where to write
stream := StdType(buf[stdWriterFdIndex])
switch stream {
case Stdin:
fallthrough
case Stdout:
// Write on stdout
out = destOut
case Stderr:
// Write on stderr
out = destErr
case Systemerr:
// If we're on Systemerr, we won't write anywhere.
// NB: if this code changes later, make sure you don't try to write
// to outstream if Systemerr is the stream
out = nil
default:
return 0, fmt.Errorf("unrecognized stream: %d", stream)
}
// Retrieve the size of the frame
frameSize = int(binary.BigEndian.Uint32(buf[stdWriterSizeIndex : stdWriterSizeIndex+4]))
// Check if the buffer is big enough to read the frame.
// Extend it if necessary.
if frameSize+stdWriterPrefixLen > bufLen {
buf = append(buf, make([]byte, frameSize+stdWriterPrefixLen-bufLen+1)...)
bufLen = len(buf)
}
// While the amount of bytes read is less than the size of the frame + header, we keep reading
for nr < frameSize+stdWriterPrefixLen {
var nr2 int
nr2, err = multiplexedSource.Read(buf[nr:])
nr += nr2
if errors.Is(err, io.EOF) {
if nr < frameSize+stdWriterPrefixLen {
return written, nil
}
break
}
if err != nil {
return 0, err
}
}
// we might have an error from the source mixed up in our multiplexed
// stream. if we do, return it.
if stream == Systemerr {
return written, fmt.Errorf("error from daemon in stream: %s", string(buf[stdWriterPrefixLen:frameSize+stdWriterPrefixLen]))
}
// Write the retrieved frame (without header)
nw, err = out.Write(buf[stdWriterPrefixLen : frameSize+stdWriterPrefixLen])
if err != nil {
return 0, err
}
// If the frame has not been fully written: error
if nw != frameSize {
return 0, io.ErrShortWrite
}
written += int64(nw)
// Move the rest of the buffer to the beginning
copy(buf, buf[frameSize+stdWriterPrefixLen:])
// Move the index
nr -= frameSize + stdWriterPrefixLen
}
}
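A small multiplex/demultiplex sketch for StdCopy, assuming the package's NewStdWriter constructor referenced in the doc comment (used elsewhere in this diff, not shown in this file) and a hypothetical import path under the api module:
package main
import (
    "bytes"
    "fmt"
    "github.com/moby/moby/api/pkg/stdcopy" // hypothetical import path for the package above
)
func main() {
    // Multiplex stdout and stderr frames into one buffer, then split them
    // back out with StdCopy into two destination buffers.
    var muxed bytes.Buffer
    stdout := stdcopy.NewStdWriter(&muxed, stdcopy.Stdout)
    stderr := stdcopy.NewStdWriter(&muxed, stdcopy.Stderr)
    fmt.Fprintln(stdout, "to stdout")
    fmt.Fprintln(stderr, "to stderr")
    var out, errOut bytes.Buffer
    written, err := stdcopy.StdCopy(&out, &errOut, &muxed)
    if err != nil {
        panic(err)
    }
    fmt.Println(written, out.String(), errOut.String())
}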

View File

@@ -1,17 +0,0 @@
# commit to be tagged for new release
commit = "HEAD"
project_name = "moby"
github_repo = "moby/moby"
sub_path = "api"
ignore_deps = [ "github.com/moby/moby" ]
# previous release
previous = "v28.2.2"
pre_release = true
preface = """\
The first dedicated release for the Moby API. This release continues the 1.x
line of API compatibility with the 52nd minor release of the 1.x API.
"""

View File

@@ -0,0 +1,130 @@
package build // import "github.com/docker/docker/api/server/backend/build"
import (
"context"
"fmt"
"strconv"
"github.com/distribution/reference"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/builder"
buildkit "github.com/docker/docker/builder/builder-next"
daemonevents "github.com/docker/docker/daemon/events"
"github.com/docker/docker/image"
"github.com/docker/docker/pkg/stringid"
"github.com/pkg/errors"
"google.golang.org/grpc"
)
// ImageComponent provides an interface for working with images
type ImageComponent interface {
SquashImage(from string, to string) (string, error)
TagImage(context.Context, image.ID, reference.Named) error
}
// Builder defines interface for running a build
type Builder interface {
Build(context.Context, backend.BuildConfig) (*builder.Result, error)
}
// Backend provides build functionality to the API router
type Backend struct {
builder Builder
imageComponent ImageComponent
buildkit *buildkit.Builder
eventsService *daemonevents.Events
}
// NewBackend creates a new build backend from components
func NewBackend(components ImageComponent, builder Builder, buildkit *buildkit.Builder, es *daemonevents.Events) (*Backend, error) {
return &Backend{imageComponent: components, builder: builder, buildkit: buildkit, eventsService: es}, nil
}
// RegisterGRPC registers buildkit controller to the grpc server.
func (b *Backend) RegisterGRPC(s *grpc.Server) {
if b.buildkit != nil {
b.buildkit.RegisterGRPC(s)
}
}
// Build builds an image from a Source
func (b *Backend) Build(ctx context.Context, config backend.BuildConfig) (string, error) {
options := config.Options
useBuildKit := options.Version == types.BuilderBuildKit
tags, err := sanitizeRepoAndTags(options.Tags)
if err != nil {
return "", err
}
var build *builder.Result
if useBuildKit {
build, err = b.buildkit.Build(ctx, config)
if err != nil {
return "", err
}
} else {
build, err = b.builder.Build(ctx, config)
if err != nil {
return "", err
}
}
if build == nil {
return "", nil
}
imageID := build.ImageID
if options.Squash {
if imageID, err = squashBuild(build, b.imageComponent); err != nil {
return "", err
}
if config.ProgressWriter.AuxFormatter != nil {
if err = config.ProgressWriter.AuxFormatter.Emit("moby.image.id", types.BuildResult{ID: imageID}); err != nil {
return "", err
}
}
}
if !useBuildKit {
stdout := config.ProgressWriter.StdoutFormatter
fmt.Fprintf(stdout, "Successfully built %s\n", stringid.TruncateID(imageID))
}
if imageID != "" && !useBuildKit {
err = tagImages(ctx, b.imageComponent, config.ProgressWriter.StdoutFormatter, image.ID(imageID), tags)
}
return imageID, err
}
// PruneCache removes all cached build sources
func (b *Backend) PruneCache(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error) {
buildCacheSize, cacheIDs, err := b.buildkit.Prune(ctx, opts)
if err != nil {
return nil, errors.Wrap(err, "failed to prune build cache")
}
b.eventsService.Log(events.ActionPrune, events.BuilderEventType, events.Actor{
Attributes: map[string]string{
"reclaimed": strconv.FormatInt(buildCacheSize, 10),
},
})
return &types.BuildCachePruneReport{SpaceReclaimed: uint64(buildCacheSize), CachesDeleted: cacheIDs}, nil
}
// Cancel cancels the build by ID
func (b *Backend) Cancel(ctx context.Context, id string) error {
return b.buildkit.Cancel(ctx, id)
}
func squashBuild(build *builder.Result, imageComponent ImageComponent) (string, error) {
var fromID string
if build.FromImage != nil {
fromID = build.FromImage.ImageID()
}
imageID, err := imageComponent.SquashImage(build.ImageID, fromID)
if err != nil {
return "", errors.Wrap(err, "error squashing image")
}
return imageID, nil
}

View File

@@ -1,4 +1,4 @@
package build
package build // import "github.com/docker/docker/api/server/backend/build"
import (
"context"
@@ -6,7 +6,7 @@ import (
"io"
"github.com/distribution/reference"
"github.com/moby/moby/v2/daemon/internal/image"
"github.com/docker/docker/image"
"github.com/pkg/errors"
)
@@ -24,7 +24,7 @@ func tagImages(ctx context.Context, ic ImageComponent, stdout io.Writer, imageID
// sanitizeRepoAndTags parses the raw "t" parameter received from the client
// to a slice of repoAndTag. It removes duplicates, and validates each name
// to not contain a digest.
func sanitizeRepoAndTags(names []string) (repoAndTags []reference.Named, _ error) {
func sanitizeRepoAndTags(names []string) (repoAndTags []reference.Named, err error) {
uniqNames := map[string]struct{}{}
for _, repo := range names {
if repo == "" {

View File

@@ -0,0 +1,34 @@
package server
import (
"net/http"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/gorilla/mux"
"google.golang.org/grpc/status"
)
// makeErrorHandler makes an HTTP handler that decodes a Docker error and
// returns it in the response.
func makeErrorHandler(err error) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
statusCode := httpstatus.FromError(err)
vars := mux.Vars(r)
if apiVersionSupportsJSONErrors(vars["version"]) {
response := &types.ErrorResponse{
Message: err.Error(),
}
_ = httputils.WriteJSON(w, statusCode, response)
} else {
http.Error(w, status.Convert(err).Message(), statusCode)
}
}
}
func apiVersionSupportsJSONErrors(version string) bool {
const firstAPIVersionWithJSONErrors = "1.23"
return version == "" || versions.GreaterThan(version, firstAPIVersionWithJSONErrors)
}
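For illustration only, a standalone sketch that restates the unexported version gate above using the same versions helpers, so the cut-off is easy to check by hand:
package main
import (
    "fmt"
    "github.com/docker/docker/api/types/versions"
)
// supportsJSONErrors mirrors the gate above: API versions greater than 1.23
// (and the "latest" empty version) receive JSON error bodies, older versions
// receive a plain-text error body.
func supportsJSONErrors(version string) bool {
    return version == "" || versions.GreaterThan(version, "1.23")
}
func main() {
    fmt.Println(supportsJSONErrors("1.22")) // false: plain-text error body
    fmt.Println(supportsJSONErrors("1.44")) // true: JSON ErrorResponse body
}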

View File

@@ -0,0 +1,152 @@
package httpstatus // import "github.com/docker/docker/api/server/httpstatus"
import (
"context"
"fmt"
"net/http"
cerrdefs "github.com/containerd/containerd/errdefs"
"github.com/containerd/log"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/docker/errdefs"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
type causer interface {
Cause() error
}
// FromError retrieves status code from error message.
func FromError(err error) int {
if err == nil {
log.G(context.TODO()).WithError(err).Error("unexpected HTTP error handling")
return http.StatusInternalServerError
}
var statusCode int
// Stop right there
// Are you sure you should be adding a new error class here? Does one of the existing ones work?
// Note that the below functions are already checking the error causal chain for matches.
switch {
case errdefs.IsNotFound(err):
statusCode = http.StatusNotFound
case errdefs.IsInvalidParameter(err):
statusCode = http.StatusBadRequest
case errdefs.IsConflict(err):
statusCode = http.StatusConflict
case errdefs.IsUnauthorized(err):
statusCode = http.StatusUnauthorized
case errdefs.IsUnavailable(err):
statusCode = http.StatusServiceUnavailable
case errdefs.IsForbidden(err):
statusCode = http.StatusForbidden
case errdefs.IsNotModified(err):
statusCode = http.StatusNotModified
case errdefs.IsNotImplemented(err):
statusCode = http.StatusNotImplemented
case errdefs.IsSystem(err) || errdefs.IsUnknown(err) || errdefs.IsDataLoss(err) || errdefs.IsDeadline(err) || errdefs.IsCancelled(err):
statusCode = http.StatusInternalServerError
default:
statusCode = statusCodeFromGRPCError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
statusCode = statusCodeFromContainerdError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
statusCode = statusCodeFromDistributionError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
if e, ok := err.(causer); ok {
return FromError(e.Cause())
}
log.G(context.TODO()).WithFields(log.Fields{
"module": "api",
"error": err,
"error_type": fmt.Sprintf("%T", err),
}).Debug("FIXME: Got an API for which error does not match any expected type!!!")
}
if statusCode == 0 {
statusCode = http.StatusInternalServerError
}
return statusCode
}
// statusCodeFromGRPCError returns status code according to gRPC error
func statusCodeFromGRPCError(err error) int {
switch status.Code(err) {
case codes.InvalidArgument: // code 3
return http.StatusBadRequest
case codes.NotFound: // code 5
return http.StatusNotFound
case codes.AlreadyExists: // code 6
return http.StatusConflict
case codes.PermissionDenied: // code 7
return http.StatusForbidden
case codes.FailedPrecondition: // code 9
return http.StatusBadRequest
case codes.Unauthenticated: // code 16
return http.StatusUnauthorized
case codes.OutOfRange: // code 11
return http.StatusBadRequest
case codes.Unimplemented: // code 12
return http.StatusNotImplemented
case codes.Unavailable: // code 14
return http.StatusServiceUnavailable
default:
// codes.Canceled(1)
// codes.Unknown(2)
// codes.DeadlineExceeded(4)
// codes.ResourceExhausted(8)
// codes.Aborted(10)
// codes.Internal(13)
// codes.DataLoss(15)
return http.StatusInternalServerError
}
}
// statusCodeFromDistributionError returns status code according to registry errcode
// code is loosely based on errcode.ServeJSON() in docker/distribution
func statusCodeFromDistributionError(err error) int {
switch errs := err.(type) {
case errcode.Errors:
if len(errs) < 1 {
return http.StatusInternalServerError
}
if _, ok := errs[0].(errcode.ErrorCoder); ok {
return statusCodeFromDistributionError(errs[0])
}
case errcode.ErrorCoder:
return errs.ErrorCode().Descriptor().HTTPStatusCode
}
return http.StatusInternalServerError
}
// statusCodeFromContainerdError returns status code for containerd errors when
// consumed directly (not through gRPC)
func statusCodeFromContainerdError(err error) int {
switch {
case cerrdefs.IsInvalidArgument(err):
return http.StatusBadRequest
case cerrdefs.IsNotFound(err):
return http.StatusNotFound
case cerrdefs.IsAlreadyExists(err):
return http.StatusConflict
case cerrdefs.IsFailedPrecondition(err):
return http.StatusPreconditionFailed
case cerrdefs.IsUnavailable(err):
return http.StatusServiceUnavailable
case cerrdefs.IsNotImplemented(err):
return http.StatusNotImplemented
default:
return http.StatusInternalServerError
}
}
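A minimal sketch of mapping classified daemon errors to HTTP status codes with FromError, using the errdefs constructors from the github.com/docker/docker/errdefs package imported above:
package main
import (
    "errors"
    "fmt"
    "github.com/docker/docker/api/server/httpstatus"
    "github.com/docker/docker/errdefs"
)
func main() {
    // Classified errors map to the matching HTTP status...
    notFound := errdefs.NotFound(errors.New("no such container: abc123"))
    fmt.Println(httpstatus.FromError(notFound)) // 404
    invalid := errdefs.InvalidParameter(errors.New("invalid filter"))
    fmt.Println(httpstatus.FromError(invalid)) // 400
    // ...while unclassified errors fall through to 500.
    fmt.Println(httpstatus.FromError(errors.New("boom"))) // 500
}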

View File

@@ -0,0 +1,16 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"io"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/network"
)
// ContainerDecoder specifies how
// to translate an io.Reader into
// container configuration.
type ContainerDecoder interface {
DecodeConfig(src io.Reader) (*container.Config, *container.HostConfig, *network.NetworkingConfig, error)
DecodeHostConfig(src io.Reader) (*container.HostConfig, error)
}

View File

@@ -0,0 +1,111 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"fmt"
"net/http"
"strconv"
"strings"
"github.com/distribution/reference"
)
// BoolValue transforms a form value in different formats into a boolean type.
func BoolValue(r *http.Request, k string) bool {
s := strings.ToLower(strings.TrimSpace(r.FormValue(k)))
return !(s == "" || s == "0" || s == "no" || s == "false" || s == "none")
}
// BoolValueOrDefault returns the default bool passed if the query param is
// missing, otherwise it's just a proxy to BoolValue above.
func BoolValueOrDefault(r *http.Request, k string, d bool) bool {
if _, ok := r.Form[k]; !ok {
return d
}
return BoolValue(r, k)
}
// Int64ValueOrZero parses a form value into an int64 type.
// It returns 0 if the parsing fails.
func Int64ValueOrZero(r *http.Request, k string) int64 {
val, err := Int64ValueOrDefault(r, k, 0)
if err != nil {
return 0
}
return val
}
// Int64ValueOrDefault parses a form value into an int64 type. If there is an
// error, returns the error. If there is no value returns the default value.
func Int64ValueOrDefault(r *http.Request, field string, def int64) (int64, error) {
if r.Form.Get(field) != "" {
value, err := strconv.ParseInt(r.Form.Get(field), 10, 64)
return value, err
}
return def, nil
}
// RepoTagReference parses form values "repo" and "tag" and returns a valid
// reference with repository and tag.
// If repo is empty, then a nil reference is returned.
// If no tag is given, then the default "latest" tag is set.
func RepoTagReference(repo, tag string) (reference.NamedTagged, error) {
if repo == "" {
return nil, nil
}
ref, err := reference.ParseNormalizedNamed(repo)
if err != nil {
return nil, err
}
if _, isDigested := ref.(reference.Digested); isDigested {
return nil, fmt.Errorf("cannot import digest reference")
}
if tag != "" {
return reference.WithTag(ref, tag)
}
withDefaultTag := reference.TagNameOnly(ref)
namedTagged, ok := withDefaultTag.(reference.NamedTagged)
if !ok {
return nil, fmt.Errorf("unexpected reference: %q", ref.String())
}
return namedTagged, nil
}
// ArchiveOptions stores archive information for different operations.
type ArchiveOptions struct {
Name string
Path string
}
type badParameterError struct {
param string
}
func (e badParameterError) Error() string {
return "bad parameter: " + e.param + " cannot be empty"
}
func (e badParameterError) InvalidParameter() {}
// ArchiveFormValues parses form values and turns them into ArchiveOptions.
// It fails if the archive name and path are not in the request.
func ArchiveFormValues(r *http.Request, vars map[string]string) (ArchiveOptions, error) {
if err := ParseForm(r); err != nil {
return ArchiveOptions{}, err
}
name := vars["name"]
if name == "" {
return ArchiveOptions{}, badParameterError{"name"}
}
path := r.Form.Get("path")
if path == "" {
return ArchiveOptions{}, badParameterError{"path"}
}
return ArchiveOptions{name, path}, nil
}
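A short usage sketch for RepoTagReference above, showing the default-tag behaviour outside of an HTTP handler:
package main
import (
    "fmt"
    "github.com/docker/docker/api/server/httputils"
)
func main() {
    // No tag given: the default "latest" tag is applied.
    ref, err := httputils.RepoTagReference("busybox", "")
    if err != nil {
        panic(err)
    }
    fmt.Println(ref.String()) // docker.io/library/busybox:latest
    // An explicit tag is preserved.
    ref, err = httputils.RepoTagReference("example.com/app", "v1")
    if err != nil {
        panic(err)
    }
    fmt.Println(ref.String()) // example.com/app:v1
}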

View File

@@ -0,0 +1,105 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"net/http"
"net/url"
"testing"
)
func TestBoolValue(t *testing.T) {
cases := map[string]bool{
"": false,
"0": false,
"no": false,
"false": false,
"none": false,
"1": true,
"yes": true,
"true": true,
"one": true,
"100": true,
}
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r.Form = v
a := BoolValue(r, "test")
if a != e {
t.Fatalf("Value: %s, expected: %v, actual: %v", c, e, a)
}
}
}
func TestBoolValueOrDefault(t *testing.T) {
r, _ := http.NewRequest(http.MethodGet, "", nil)
if !BoolValueOrDefault(r, "queryparam", true) {
t.Fatal("Expected to get true default value, got false")
}
v := url.Values{}
v.Set("param", "")
r, _ = http.NewRequest(http.MethodGet, "", nil)
r.Form = v
if BoolValueOrDefault(r, "param", true) {
t.Fatal("Expected not to get true")
}
}
func TestInt64ValueOrZero(t *testing.T) {
cases := map[string]int64{
"": 0,
"asdf": 0,
"0": 0,
"1": 1,
}
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r.Form = v
a := Int64ValueOrZero(r, "test")
if a != e {
t.Fatalf("Value: %s, expected: %v, actual: %v", c, e, a)
}
}
}
func TestInt64ValueOrDefault(t *testing.T) {
cases := map[string]int64{
"": -1,
"-1": -1,
"42": 42,
}
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r.Form = v
a, err := Int64ValueOrDefault(r, "test", -1)
if a != e {
t.Fatalf("Value: %s, expected: %v, actual: %v", c, e, a)
}
if err != nil {
t.Fatalf("Error should be nil, but received: %s", err)
}
}
}
func TestInt64ValueOrDefaultWithError(t *testing.T) {
v := url.Values{}
v.Set("test", "invalid")
r, _ := http.NewRequest(http.MethodPost, "", nil)
r.Form = v
_, err := Int64ValueOrDefault(r, "test", -1)
if err == nil {
t.Fatal("Expected an error.")
}
}

View File

@@ -1,4 +1,4 @@
package httputils
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"context"
@@ -8,7 +8,7 @@ import (
"net/http"
"strings"
"github.com/moby/moby/v2/errdefs"
"github.com/docker/docker/errdefs"
"github.com/pkg/errors"
)
@@ -32,7 +32,7 @@ func HijackConnection(w http.ResponseWriter) (io.ReadCloser, io.Writer, error) {
}
// CloseStreams ensures that a list of http streams is properly closed.
func CloseStreams(streams ...any) {
func CloseStreams(streams ...interface{}) {
for _, stream := range streams {
if tcpc, ok := stream.(interface {
CloseWrite() error
@@ -59,7 +59,7 @@ func CheckForJSON(r *http.Request) error {
// ReadJSON validates the request to have the correct content-type, and decodes
// the request's Body into out.
func ReadJSON(r *http.Request, out any) error {
func ReadJSON(r *http.Request, out interface{}) error {
err := CheckForJSON(r)
if err != nil {
return err
@@ -74,7 +74,7 @@ func ReadJSON(r *http.Request, out any) error {
err = dec.Decode(out)
defer r.Body.Close()
if err != nil {
if errors.Is(err, io.EOF) {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("invalid JSON: got EOF while reading request body"))
}
return errdefs.InvalidParameter(errors.Wrap(err, "invalid JSON"))
@@ -87,7 +87,7 @@ func ReadJSON(r *http.Request, out any) error {
}
// WriteJSON writes the value v to the http response stream as json with standard json encoding.
func WriteJSON(w http.ResponseWriter, code int, v any) error {
func WriteJSON(w http.ResponseWriter, code int, v interface{}) error {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
enc := json.NewEncoder(w)

View File

@@ -1,4 +1,4 @@
package httputils
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"net/http"
@@ -33,7 +33,7 @@ func TestJsonContentType(t *testing.T) {
func TestReadJSON(t *testing.T) {
t.Run("nil body", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPost, "https://example.com/some/path", http.NoBody)
req, err := http.NewRequest(http.MethodPost, "https://example.com/some/path", nil)
if err != nil {
t.Error(err)
}

View File

@@ -0,0 +1,84 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"context"
"fmt"
"io"
"net/url"
"sort"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/jsonmessage"
"github.com/docker/docker/pkg/stdcopy"
)
// WriteLogStream writes an encoded byte stream of log messages from the
// messages channel, multiplexing them with a stdcopy.Writer if mux is true
func WriteLogStream(_ context.Context, w io.Writer, msgs <-chan *backend.LogMessage, config *container.LogsOptions, mux bool) {
wf := ioutils.NewWriteFlusher(w)
defer wf.Close()
wf.Flush()
outStream := io.Writer(wf)
errStream := outStream
sysErrStream := errStream
if mux {
sysErrStream = stdcopy.NewStdWriter(outStream, stdcopy.Systemerr)
errStream = stdcopy.NewStdWriter(outStream, stdcopy.Stderr)
outStream = stdcopy.NewStdWriter(outStream, stdcopy.Stdout)
}
for {
msg, ok := <-msgs
if !ok {
return
}
// check if the message contains an error. if so, write that error
// and exit
if msg.Err != nil {
fmt.Fprintf(sysErrStream, "Error grabbing logs: %v\n", msg.Err)
continue
}
logLine := msg.Line
if config.Details {
logLine = append(attrsByteSlice(msg.Attrs), ' ')
logLine = append(logLine, msg.Line...)
}
if config.Timestamps {
logLine = append([]byte(msg.Timestamp.Format(jsonmessage.RFC3339NanoFixed)+" "), logLine...)
}
if msg.Source == "stdout" && config.ShowStdout {
_, _ = outStream.Write(logLine)
}
if msg.Source == "stderr" && config.ShowStderr {
_, _ = errStream.Write(logLine)
}
}
}
type byKey []backend.LogAttr
func (b byKey) Len() int { return len(b) }
func (b byKey) Less(i, j int) bool { return b[i].Key < b[j].Key }
func (b byKey) Swap(i, j int) { b[i], b[j] = b[j], b[i] }
func attrsByteSlice(a []backend.LogAttr) []byte {
// Note this sorts "a" in-place. That is fine here - nothing else is
// going to use Attrs or care about the order.
sort.Sort(byKey(a))
var ret []byte
for i, pair := range a {
k, v := url.QueryEscape(pair.Key), url.QueryEscape(pair.Value)
ret = append(ret, []byte(k)...)
ret = append(ret, '=')
ret = append(ret, []byte(v)...)
if i != len(a)-1 {
ret = append(ret, ',')
}
}
return ret
}
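A minimal sketch of feeding WriteLogStream from a channel, assuming only the backend.LogMessage and container.LogsOptions fields already used above:
package main
import (
    "bytes"
    "context"
    "fmt"
    "github.com/docker/docker/api/server/httputils"
    "github.com/docker/docker/api/types/backend"
    "github.com/docker/docker/api/types/container"
)
func main() {
    msgs := make(chan *backend.LogMessage, 1)
    msgs <- &backend.LogMessage{Source: "stdout", Line: []byte("hello from the container\n")}
    close(msgs) // WriteLogStream returns once the channel is drained and closed
    var buf bytes.Buffer
    httputils.WriteLogStream(context.TODO(), &buf, msgs,
        &container.LogsOptions{ShowStdout: true}, false /* mux */)
    fmt.Print(buf.String())
}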

24
api/server/middleware.go Normal file
View File

@@ -0,0 +1,24 @@
package server // import "github.com/docker/docker/api/server"
import (
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/middleware"
)
// handlerWithGlobalMiddlewares wraps the handler function for a request with
// the server's global middlewares. The order of the middlewares is backwards,
// meaning that the first in the list will be evaluated last.
func (s *Server) handlerWithGlobalMiddlewares(handler httputils.APIFunc) httputils.APIFunc {
next := handler
for _, m := range s.middlewares {
next = m.WrapHandler(next)
}
if log.GetLevel() == log.DebugLevel {
next = middleware.DebugRequestMiddleware(next)
}
return next
}

View File

@@ -0,0 +1,38 @@
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"context"
"net/http"
"github.com/containerd/log"
"github.com/docker/docker/api/types/registry"
)
// CORSMiddleware injects CORS headers to each request
// when it's configured.
type CORSMiddleware struct {
defaultHeaders string
}
// NewCORSMiddleware creates a new CORSMiddleware with default headers.
func NewCORSMiddleware(d string) CORSMiddleware {
return CORSMiddleware{defaultHeaders: d}
}
// WrapHandler returns a new handler function wrapping the previous one in the request chain.
func (c CORSMiddleware) WrapHandler(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// If "api-cors-header" is not given, but "api-enable-cors" is true, we set cors to "*"
// otherwise, all header values will be passed to the HTTP handler
corsHeaders := c.defaultHeaders
if corsHeaders == "" {
corsHeaders = "*"
}
log.G(ctx).Debugf("CORS header is enabled and set to: %s", corsHeaders)
w.Header().Add("Access-Control-Allow-Origin", corsHeaders)
w.Header().Add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept, "+registry.AuthHeader)
w.Header().Add("Access-Control-Allow-Methods", "HEAD, GET, POST, DELETE, PUT, OPTIONS")
return handler(ctx, w, r, vars)
}
}
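A sketch of wrapping a handler with CORSMiddleware, using the handler signature shared by these middlewares; an empty default falls back to "*" as described in the comment above:
package main
import (
    "context"
    "fmt"
    "net/http"
    "net/http/httptest"
    "github.com/docker/docker/api/server/middleware"
)
func main() {
    // A trivial handler with the middleware chain's signature.
    handler := func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
        w.WriteHeader(http.StatusOK)
        return nil
    }
    // Empty default headers fall back to "*".
    wrapped := middleware.NewCORSMiddleware("").WrapHandler(handler)
    rec := httptest.NewRecorder()
    req := httptest.NewRequest(http.MethodGet, "/_ping", nil)
    _ = wrapped(context.Background(), rec, req, nil)
    fmt.Println(rec.Header().Get("Access-Control-Allow-Origin")) // *
}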

View File

@@ -0,0 +1,90 @@
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"bufio"
"context"
"encoding/json"
"io"
"net/http"
"strings"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/pkg/ioutils"
)
// DebugRequestMiddleware dumps the request to logger
func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
log.G(ctx).Debugf("Calling %s %s", r.Method, r.RequestURI)
if r.Method != http.MethodPost {
return handler(ctx, w, r, vars)
}
if err := httputils.CheckForJSON(r); err != nil {
return handler(ctx, w, r, vars)
}
maxBodySize := 4096 // 4KB
if r.ContentLength > int64(maxBodySize) {
return handler(ctx, w, r, vars)
}
body := r.Body
bufReader := bufio.NewReaderSize(body, maxBodySize)
r.Body = ioutils.NewReadCloserWrapper(bufReader, func() error { return body.Close() })
b, err := bufReader.Peek(maxBodySize)
if err != io.EOF {
// either there was an error reading, or the buffer is full (in which case the request is too large)
return handler(ctx, w, r, vars)
}
var postForm map[string]interface{}
if err := json.Unmarshal(b, &postForm); err == nil {
maskSecretKeys(postForm)
formStr, errMarshal := json.Marshal(postForm)
if errMarshal == nil {
log.G(ctx).Debugf("form data: %s", string(formStr))
} else {
log.G(ctx).Debugf("form data: %q", postForm)
}
}
return handler(ctx, w, r, vars)
}
}
func maskSecretKeys(inp interface{}) {
if arr, ok := inp.([]interface{}); ok {
for _, f := range arr {
maskSecretKeys(f)
}
return
}
if form, ok := inp.(map[string]interface{}); ok {
scrub := []string{
// Note: The Data field contains the base64-encoded secret in 'secret'
// and 'config' create and update requests. Currently, no other POST
// API endpoints use a data field, so we scrub this field unconditionally.
// Change this handling to be conditional if a new endpoint is added
// in future where this field should not be scrubbed.
"data",
"jointoken",
"password",
"secret",
"signingcakey",
"unlockkey",
}
loop0:
for k, v := range form {
for _, m := range scrub {
if strings.EqualFold(m, k) {
form[k] = "*****"
continue loop0
}
}
maskSecretKeys(v)
}
}
}

View File

@@ -0,0 +1,75 @@
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"testing"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
func TestMaskSecretKeys(t *testing.T) {
tests := []struct {
doc string
input map[string]interface{}
expected map[string]interface{}
}{
{
doc: "secret/config create and update requests",
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
},
{
doc: "masking other fields (recursively)",
input: map[string]interface{}{
"password": "pass",
"secret": "secret",
"jointoken": "jointoken",
"unlockkey": "unlockkey",
"signingcakey": "signingcakey",
"other": map[string]interface{}{
"password": "pass",
"secret": "secret",
"jointoken": "jointoken",
"unlockkey": "unlockkey",
"signingcakey": "signingcakey",
},
},
expected: map[string]interface{}{
"password": "*****",
"secret": "*****",
"jointoken": "*****",
"unlockkey": "*****",
"signingcakey": "*****",
"other": map[string]interface{}{
"password": "*****",
"secret": "*****",
"jointoken": "*****",
"unlockkey": "*****",
"signingcakey": "*****",
},
},
},
{
doc: "case insensitive field matching",
input: map[string]interface{}{
"PASSWORD": "pass",
"other": map[string]interface{}{
"PASSWORD": "pass",
},
},
expected: map[string]interface{}{
"PASSWORD": "*****",
"other": map[string]interface{}{
"PASSWORD": "*****",
},
},
},
}
for _, testcase := range tests {
t.Run(testcase.doc, func(t *testing.T) {
maskSecretKeys(testcase.input)
assert.Check(t, is.DeepEqual(testcase.expected, testcase.input))
})
}
}

View File

@@ -0,0 +1,28 @@
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"context"
"net/http"
)
// ExperimentalMiddleware is the middleware in charge of adding the
// 'Docker-Experimental' header to every outgoing response
type ExperimentalMiddleware struct {
experimental string
}
// NewExperimentalMiddleware creates a new ExperimentalMiddleware
func NewExperimentalMiddleware(experimentalEnabled bool) ExperimentalMiddleware {
if experimentalEnabled {
return ExperimentalMiddleware{"true"}
}
return ExperimentalMiddleware{"false"}
}
// WrapHandler returns a new handler function wrapping the previous one in the request chain.
func (e ExperimentalMiddleware) WrapHandler(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
w.Header().Set("Docker-Experimental", e.experimental)
return handler(ctx, w, r, vars)
}
}

View File

@@ -0,0 +1,12 @@
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"context"
"net/http"
)
// Middleware is an interface to allow the use of ordinary functions as Docker API filters.
// Any struct that has the appropriate signature can be registered as a middleware.
type Middleware interface {
WrapHandler(func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error
}

View File

@@ -0,0 +1,64 @@
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"context"
"fmt"
"net/http"
"runtime"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/versions"
)
// VersionMiddleware is a middleware that
// validates the client and server versions.
type VersionMiddleware struct {
serverVersion string
defaultVersion string
minVersion string
}
// NewVersionMiddleware creates a new VersionMiddleware
// with the default versions.
func NewVersionMiddleware(s, d, m string) VersionMiddleware {
return VersionMiddleware{
serverVersion: s,
defaultVersion: d,
minVersion: m,
}
}
type versionUnsupportedError struct {
version, minVersion, maxVersion string
}
func (e versionUnsupportedError) Error() string {
if e.minVersion != "" {
return fmt.Sprintf("client version %s is too old. Minimum supported API version is %s, please upgrade your client to a newer version", e.version, e.minVersion)
}
return fmt.Sprintf("client version %s is too new. Maximum supported API version is %s", e.version, e.maxVersion)
}
func (e versionUnsupportedError) InvalidParameter() {}
// WrapHandler returns a new handler function wrapping the previous one in the request chain.
func (v VersionMiddleware) WrapHandler(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
w.Header().Set("Server", fmt.Sprintf("Docker/%s (%s)", v.serverVersion, runtime.GOOS))
w.Header().Set("API-Version", v.defaultVersion)
w.Header().Set("OSType", runtime.GOOS)
apiVersion := vars["version"]
if apiVersion == "" {
apiVersion = v.defaultVersion
}
if versions.LessThan(apiVersion, v.minVersion) {
return versionUnsupportedError{version: apiVersion, minVersion: v.minVersion}
}
if versions.GreaterThan(apiVersion, v.defaultVersion) {
return versionUnsupportedError{version: apiVersion, maxVersion: v.defaultVersion}
}
ctx = context.WithValue(ctx, httputils.APIVersionKey{}, apiVersion)
return handler(ctx, w, r, vars)
}
}

View File

@@ -0,0 +1,92 @@
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"context"
"net/http"
"net/http/httptest"
"runtime"
"testing"
"github.com/docker/docker/api/server/httputils"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
func TestVersionMiddlewareVersion(t *testing.T) {
defaultVersion := "1.10.0"
minVersion := "1.2.0"
expectedVersion := defaultVersion
handler := func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v := httputils.VersionFromContext(ctx)
assert.Check(t, is.Equal(expectedVersion, v))
return nil
}
m := NewVersionMiddleware(defaultVersion, defaultVersion, minVersion)
h := m.WrapHandler(handler)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
resp := httptest.NewRecorder()
ctx := context.Background()
tests := []struct {
reqVersion string
expectedVersion string
errString string
}{
{
expectedVersion: "1.10.0",
},
{
reqVersion: "1.9.0",
expectedVersion: "1.9.0",
},
{
reqVersion: "0.1",
errString: "client version 0.1 is too old. Minimum supported API version is 1.2.0, please upgrade your client to a newer version",
},
{
reqVersion: "9999.9999",
errString: "client version 9999.9999 is too new. Maximum supported API version is 1.10.0",
},
}
for _, test := range tests {
expectedVersion = test.expectedVersion
err := h(ctx, resp, req, map[string]string{"version": test.reqVersion})
if test.errString != "" {
assert.Check(t, is.Error(err, test.errString))
} else {
assert.Check(t, err)
}
}
}
func TestVersionMiddlewareWithErrorsReturnsHeaders(t *testing.T) {
handler := func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v := httputils.VersionFromContext(ctx)
assert.Check(t, len(v) != 0)
return nil
}
defaultVersion := "1.10.0"
minVersion := "1.2.0"
m := NewVersionMiddleware(defaultVersion, defaultVersion, minVersion)
h := m.WrapHandler(handler)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
resp := httptest.NewRecorder()
ctx := context.Background()
vars := map[string]string{"version": "0.1"}
err := h(ctx, resp, req, vars)
assert.Check(t, is.ErrorContains(err, ""))
hdr := resp.Result().Header
assert.Check(t, is.Contains(hdr.Get("Server"), "Docker/"+defaultVersion))
assert.Check(t, is.Contains(hdr.Get("Server"), runtime.GOOS))
assert.Check(t, is.Equal(hdr.Get("API-Version"), defaultVersion))
assert.Check(t, is.Equal(hdr.Get("OSType"), runtime.GOOS))
}

View File

@@ -0,0 +1,23 @@
package build // import "github.com/docker/docker/api/server/router/build"
import (
"context"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
)
// Backend abstracts an image builder whose only purpose is to build an image referenced by an imageID.
type Backend interface {
// Build a Docker image returning the id of the image
// TODO: make this return a reference instead of string
Build(context.Context, backend.BuildConfig) (string, error)
// Prune build cache
PruneCache(context.Context, types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error)
Cancel(context.Context, string) error
}
type experimentalProvider interface {
HasExperimental() bool
}

View File

@@ -0,0 +1,60 @@
package build // import "github.com/docker/docker/api/server/router/build"
import (
"runtime"
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/api/types"
)
// buildRouter is a router to talk with the build controller
type buildRouter struct {
backend Backend
daemon experimentalProvider
routes []router.Route
}
// NewRouter initializes a new build router
func NewRouter(b Backend, d experimentalProvider) router.Router {
r := &buildRouter{
backend: b,
daemon: d,
}
r.initRoutes()
return r
}
// Routes returns the available routers to the build controller
func (br *buildRouter) Routes() []router.Route {
return br.routes
}
func (br *buildRouter) initRoutes() {
br.routes = []router.Route{
router.NewPostRoute("/build", br.postBuild),
router.NewPostRoute("/build/prune", br.postPrune),
router.NewPostRoute("/build/cancel", br.postCancel),
}
}
// BuilderVersion derives the default docker builder version from the config.
//
// The default on Linux is version "2" (BuildKit), but the daemon can be
// configured to recommend version "1" (classic Builder). Windows does not
// yet support BuildKit for native Windows images, and uses "1" (classic builder)
// as a default.
//
// This value is only a recommendation as advertised by the daemon, and it is
// up to the client to choose which builder to use.
func BuilderVersion(features map[string]bool) types.BuilderVersion {
// TODO(thaJeztah) move the default to daemon/config
if runtime.GOOS == "windows" {
return types.BuilderV1
}
bv := types.BuilderBuildKit
if v, ok := features["buildkit"]; ok && !v {
bv = types.BuilderV1
}
return bv
}
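A short sketch of the feature-flag behaviour of BuilderVersion on Linux, where "2" is BuildKit and "1" is the classic builder:
package main
import (
    "fmt"
    "github.com/docker/docker/api/server/router/build"
)
func main() {
    // On Linux the default recommendation is BuildKit ("2")...
    fmt.Println(build.BuilderVersion(nil)) // 2
    // ...unless the daemon was configured with the "buildkit" feature disabled.
    fmt.Println(build.BuilderVersion(map[string]bool{"buildkit": false})) // 1
}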

View File

@@ -1,4 +1,4 @@
package build
package build // import "github.com/docker/docker/api/server/router/build"
import (
"bufio"
@@ -13,19 +13,19 @@ import (
"strconv"
"strings"
"sync"
"syscall"
"github.com/containerd/log"
"github.com/moby/moby/api/types/build"
"github.com/moby/moby/api/types/container"
"github.com/moby/moby/api/types/registry"
"github.com/moby/moby/v2/daemon/internal/filters"
"github.com/moby/moby/v2/daemon/internal/progress"
"github.com/moby/moby/v2/daemon/internal/streamformatter"
"github.com/moby/moby/v2/daemon/internal/versions"
"github.com/moby/moby/v2/daemon/server/buildbackend"
"github.com/moby/moby/v2/daemon/server/httputils"
"github.com/moby/moby/v2/pkg/ioutils"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/progress"
"github.com/docker/docker/pkg/streamformatter"
units "github.com/docker/go-units"
"github.com/pkg/errors"
)
@@ -35,14 +35,13 @@ type invalidParam struct {
func (e invalidParam) InvalidParameter() {}
func newImageBuildOptions(ctx context.Context, r *http.Request) (*buildbackend.BuildOptions, error) {
options := &buildbackend.BuildOptions{
Version: build.BuilderV1, // Builder V1 is the default, but can be overridden
func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBuildOptions, error) {
options := &types.ImageBuildOptions{
Version: types.BuilderV1, // Builder V1 is the default, but can be overridden
Dockerfile: r.FormValue("dockerfile"),
SuppressOutput: httputils.BoolValue(r, "q"),
NoCache: httputils.BoolValue(r, "nocache"),
ForceRemove: httputils.BoolValue(r, "forcerm"),
PullParent: httputils.BoolValue(r, "pull"),
MemorySwap: httputils.Int64ValueOrZero(r, "memswap"),
Memory: httputils.Int64ValueOrZero(r, "memory"),
CPUShares: httputils.Int64ValueOrZero(r, "cpushares"),
@@ -67,21 +66,24 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*buildbackend.B
return nil, invalidParam{errors.New("security options are not supported on " + runtime.GOOS)}
}
if httputils.BoolValue(r, "forcerm") {
version := httputils.VersionFromContext(ctx)
if httputils.BoolValue(r, "forcerm") && versions.GreaterThanOrEqualTo(version, "1.12") {
options.Remove = true
} else if r.FormValue("rm") == "" {
} else if r.FormValue("rm") == "" && versions.GreaterThanOrEqualTo(version, "1.12") {
options.Remove = true
} else {
options.Remove = httputils.BoolValue(r, "rm")
}
version := httputils.VersionFromContext(ctx)
if httputils.BoolValue(r, "pull") && versions.GreaterThanOrEqualTo(version, "1.16") {
options.PullParent = true
}
if versions.GreaterThanOrEqualTo(version, "1.32") {
options.Platform = r.FormValue("platform")
}
if versions.GreaterThanOrEqualTo(version, "1.40") {
outputsJSON := r.FormValue("outputs")
if outputsJSON != "" {
var outputs []buildbackend.BuildOutput
var outputs []types.ImageBuildOutput
if err := json.Unmarshal([]byte(outputsJSON), &outputs); err != nil {
return nil, invalidParam{errors.Wrap(err, "invalid outputs specified")}
}
@@ -105,7 +107,7 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*buildbackend.B
}
if ulimitsJSON := r.FormValue("ulimits"); ulimitsJSON != "" {
buildUlimits := []*container.Ulimit{}
buildUlimits := []*units.Ulimit{}
if err := json.Unmarshal([]byte(ulimitsJSON), &buildUlimits); err != nil {
return nil, invalidParam{errors.Wrap(err, "error reading ulimit settings")}
}
@@ -159,12 +161,12 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*buildbackend.B
return options, nil
}
func parseVersion(s string) (build.BuilderVersion, error) {
switch build.BuilderVersion(s) {
case build.BuilderV1:
return build.BuilderV1, nil
case build.BuilderBuildKit:
return build.BuilderBuildKit, nil
func parseVersion(s string) (types.BuilderVersion, error) {
switch types.BuilderVersion(s) {
case types.BuilderV1:
return types.BuilderV1, nil
case types.BuilderBuildKit:
return types.BuilderBuildKit, nil
default:
return "", invalidParam{errors.Errorf("invalid version %q", s)}
}
@@ -178,56 +180,19 @@ func (br *buildRouter) postPrune(ctx context.Context, w http.ResponseWriter, r *
if err != nil {
return err
}
opts := buildbackend.CachePruneOptions{
All: httputils.BoolValue(r, "all"),
Filters: fltrs,
ksfv := r.FormValue("keep-storage")
if ksfv == "" {
ksfv = "0"
}
ks, err := strconv.Atoi(ksfv)
if err != nil {
return invalidParam{errors.Wrapf(err, "keep-storage is in bytes and expects an integer, got %v", ksfv)}
}
parseBytesFromFormValue := func(name string) (int64, error) {
if fv := r.FormValue(name); fv != "" {
bs, err := strconv.Atoi(fv)
if err != nil {
return 0, invalidParam{errors.Wrapf(err, "%s is in bytes and expects an integer, got %v", name, fv)}
}
return int64(bs), nil
}
return 0, nil
}
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.48") {
if bs, err := parseBytesFromFormValue("reserved-space"); err != nil {
return err
} else {
if bs == 0 {
// Deprecated parameter. Only checked if reserved-space is not used.
bs, err = parseBytesFromFormValue("keep-storage")
if err != nil {
return err
}
}
opts.ReservedSpace = bs
}
if bs, err := parseBytesFromFormValue("max-used-space"); err != nil {
return err
} else {
opts.MaxUsedSpace = bs
}
if bs, err := parseBytesFromFormValue("min-free-space"); err != nil {
return err
} else {
opts.MinFreeSpace = bs
}
} else {
// Only keep-storage was valid in pre-1.48 versions.
if bs, err := parseBytesFromFormValue("keep-storage"); err != nil {
return err
} else {
opts.ReservedSpace = bs
}
opts := types.BuildCachePruneOptions{
All: httputils.BoolValue(r, "all"),
Filters: fltrs,
KeepStorage: int64(ks),
}
report, err := br.backend.PruneCache(ctx, opts)
@@ -282,9 +247,8 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
return err
}
_, err = output.Write(streamformatter.FormatError(err))
// don't log broken pipe errors as this is the normal case when a client aborts.
if err != nil && !errors.Is(err, syscall.EPIPE) {
log.G(ctx).WithError(err).Warn("could not write error response")
if err != nil {
log.G(ctx).Warnf("could not write error response: %v", err)
}
return nil
}
@@ -313,15 +277,12 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
wantAux := versions.GreaterThanOrEqualTo(version, "1.30")
imgID, err := br.backend.Build(ctx, buildbackend.BuildConfig{
imgID, err := br.backend.Build(ctx, backend.BuildConfig{
Source: body,
Options: buildOptions,
ProgressWriter: buildProgressWriter(out, wantAux, createProgressReader),
})
if err != nil {
if errors.Is(err, context.Canceled) {
log.G(ctx).Debug("build canceled")
}
return errf(err)
}
@@ -353,14 +314,14 @@ type syncWriter struct {
mu sync.Mutex
}
func (s *syncWriter) Write(b []byte) (int, error) {
func (s *syncWriter) Write(b []byte) (count int, err error) {
s.mu.Lock()
defer s.mu.Unlock()
return s.w.Write(b)
count, err = s.w.Write(b)
s.mu.Unlock()
return
}
func buildProgressWriter(out io.Writer, wantAux bool, createProgressReader func(io.ReadCloser) io.ReadCloser) buildbackend.ProgressWriter {
// see https://github.com/moby/moby/pull/21406
func buildProgressWriter(out io.Writer, wantAux bool, createProgressReader func(io.ReadCloser) io.ReadCloser) backend.ProgressWriter {
out = &syncWriter{w: out}
var aux *streamformatter.AuxFormatter
@@ -368,7 +329,7 @@ func buildProgressWriter(out io.Writer, wantAux bool, createProgressReader func(
aux = &streamformatter.AuxFormatter{Writer: out}
}
return buildbackend.ProgressWriter{
return backend.ProgressWriter{
Output: out,
StdoutFormatter: streamformatter.NewStdoutWriter(out),
StderrFormatter: streamformatter.NewStderrWriter(out),
@@ -381,12 +342,8 @@ type flusher interface {
Flush()
}
type nopFlusher struct{}
func (f *nopFlusher) Flush() {}
func wrapOutputBufferedUntilRequestRead(rc io.ReadCloser, out io.Writer) (io.ReadCloser, io.Writer) {
var fl flusher = &nopFlusher{}
var fl flusher = &ioutils.NopFlusher{}
if f, ok := out.(flusher); ok {
fl = f
}
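The hunk above also rewrites syncWriter, the mutex-guarded io.Writer that keeps concurrent progress writes from interleaving on the shared output stream. A minimal, self-contained sketch of that pattern (illustrative only; the package name and demo output are assumptions, not the daemon's code):

package main

import (
	"io"
	"os"
	"sync"
)

// syncWriter serialises concurrent Write calls on a shared io.Writer,
// mirroring the wrapper used by buildProgressWriter in the diff above.
type syncWriter struct {
	w  io.Writer
	mu sync.Mutex
}

func (s *syncWriter) Write(b []byte) (int, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.w.Write(b)
}

func main() {
	out := &syncWriter{w: os.Stdout}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Each line is written atomically with respect to the other goroutines.
			io.WriteString(out, "progress line\n")
		}()
	}
	wg.Wait()
}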

@@ -0,0 +1,10 @@
package checkpoint // import "github.com/docker/docker/api/server/router/checkpoint"
import "github.com/docker/docker/api/types/checkpoint"
// Backend for Checkpoint
type Backend interface {
CheckpointCreate(container string, config checkpoint.CreateOptions) error
CheckpointDelete(container string, config checkpoint.DeleteOptions) error
CheckpointList(container string, config checkpoint.ListOptions) ([]checkpoint.Summary, error)
}
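For illustration, a hypothetical no-op implementation that satisfies this Backend interface could look like the sketch below; the real daemon wires these methods to its container store and CRIU integration. The package name is an assumption.

package checkpointstub

import "github.com/docker/docker/api/types/checkpoint"

// noopBackend is a hypothetical stub satisfying the checkpoint Backend
// interface above; every call succeeds without doing any work.
type noopBackend struct{}

func (noopBackend) CheckpointCreate(container string, config checkpoint.CreateOptions) error {
	return nil
}

func (noopBackend) CheckpointDelete(container string, config checkpoint.DeleteOptions) error {
	return nil
}

func (noopBackend) CheckpointList(container string, config checkpoint.ListOptions) ([]checkpoint.Summary, error) {
	return []checkpoint.Summary{}, nil
}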

@@ -0,0 +1,36 @@
package checkpoint // import "github.com/docker/docker/api/server/router/checkpoint"
import (
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/router"
)
// checkpointRouter is a router to talk with the checkpoint controller
type checkpointRouter struct {
backend Backend
decoder httputils.ContainerDecoder
routes []router.Route
}
// NewRouter initializes a new checkpoint router
func NewRouter(b Backend, decoder httputils.ContainerDecoder) router.Router {
r := &checkpointRouter{
backend: b,
decoder: decoder,
}
r.initRoutes()
return r
}
// Routes returns the available routes to the checkpoint controller
func (cr *checkpointRouter) Routes() []router.Route {
return cr.routes
}
func (cr *checkpointRouter) initRoutes() {
cr.routes = []router.Route{
router.NewGetRoute("/containers/{name:.*}/checkpoints", cr.getContainerCheckpoints, router.Experimental),
router.NewPostRoute("/containers/{name:.*}/checkpoints", cr.postContainerCheckpoint, router.Experimental),
router.NewDeleteRoute("/containers/{name}/checkpoints/{checkpoint}", cr.deleteContainerCheckpoint, router.Experimental),
}
}

@@ -0,0 +1,60 @@
package checkpoint // import "github.com/docker/docker/api/server/router/checkpoint"
import (
"context"
"net/http"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/checkpoint"
)
func (cr *checkpointRouter) postContainerCheckpoint(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
var options checkpoint.CreateOptions
if err := httputils.ReadJSON(r, &options); err != nil {
return err
}
err := cr.backend.CheckpointCreate(vars["name"], options)
if err != nil {
return err
}
w.WriteHeader(http.StatusCreated)
return nil
}
func (cr *checkpointRouter) getContainerCheckpoints(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
checkpoints, err := cr.backend.CheckpointList(vars["name"], checkpoint.ListOptions{
CheckpointDir: r.Form.Get("dir"),
})
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, checkpoints)
}
func (cr *checkpointRouter) deleteContainerCheckpoint(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
err := cr.backend.CheckpointDelete(vars["name"], checkpoint.DeleteOptions{
CheckpointDir: r.Form.Get("dir"),
CheckpointID: vars["checkpoint"],
})
if err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
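These handlers sit behind the experimental routes registered earlier. As a rough illustration only (assuming a daemon with experimental features enabled, the default unix socket at /var/run/docker.sock, and a hypothetical container name and checkpoint directory), a client could list checkpoints with plain net/http:

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Talk to the daemon over its unix socket; the host part of the URL is ignored.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return net.Dial("unix", "/var/run/docker.sock")
			},
		},
	}

	// GET /containers/{name}/checkpoints?dir=... maps to getContainerCheckpoints above.
	resp, err := client.Get("http://localhost/containers/mycontainer/checkpoints?dir=/tmp/checkpoints")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}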

@@ -0,0 +1,82 @@
package container // import "github.com/docker/docker/api/server/router/container"
import (
"context"
"io"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
containerpkg "github.com/docker/docker/container"
"github.com/docker/docker/pkg/archive"
)
// execBackend includes functions to implement to provide exec functionality.
type execBackend interface {
ContainerExecCreate(name string, config *types.ExecConfig) (string, error)
ContainerExecInspect(id string) (*backend.ExecInspect, error)
ContainerExecResize(name string, height, width int) error
ContainerExecStart(ctx context.Context, name string, options container.ExecStartOptions) error
ExecExists(name string) (bool, error)
}
// copyBackend includes functions to implement to provide container copy functionality.
type copyBackend interface {
ContainerArchivePath(name string, path string) (content io.ReadCloser, stat *types.ContainerPathStat, err error)
ContainerCopy(name string, res string) (io.ReadCloser, error)
ContainerExport(ctx context.Context, name string, out io.Writer) error
ContainerExtractToDir(name, path string, copyUIDGID, noOverwriteDirNonDir bool, content io.Reader) error
ContainerStatPath(name string, path string) (stat *types.ContainerPathStat, err error)
}
// stateBackend includes functions to implement to provide container state lifecycle functionality.
type stateBackend interface {
ContainerCreate(ctx context.Context, config backend.ContainerCreateConfig) (container.CreateResponse, error)
ContainerKill(name string, signal string) error
ContainerPause(name string) error
ContainerRename(oldName, newName string) error
ContainerResize(name string, height, width int) error
ContainerRestart(ctx context.Context, name string, options container.StopOptions) error
ContainerRm(name string, config *backend.ContainerRmConfig) error
ContainerStart(ctx context.Context, name string, hostConfig *container.HostConfig, checkpoint string, checkpointDir string) error
ContainerStop(ctx context.Context, name string, options container.StopOptions) error
ContainerUnpause(name string) error
ContainerUpdate(name string, hostConfig *container.HostConfig) (container.ContainerUpdateOKBody, error)
ContainerWait(ctx context.Context, name string, condition containerpkg.WaitCondition) (<-chan containerpkg.StateStatus, error)
}
// monitorBackend includes functions to implement to provide container monitoring functionality.
type monitorBackend interface {
ContainerChanges(ctx context.Context, name string) ([]archive.Change, error)
ContainerInspect(ctx context.Context, name string, size bool, version string) (interface{}, error)
ContainerLogs(ctx context.Context, name string, config *container.LogsOptions) (msgs <-chan *backend.LogMessage, tty bool, err error)
ContainerStats(ctx context.Context, name string, config *backend.ContainerStatsConfig) error
ContainerTop(name string, psArgs string) (*container.ContainerTopOKBody, error)
Containers(ctx context.Context, config *container.ListOptions) ([]*types.Container, error)
}
// attachBackend includes functions to implement to provide container attaching functionality.
type attachBackend interface {
ContainerAttach(name string, c *backend.ContainerAttachConfig) error
}
// systemBackend includes functions to implement to provide system-wide container functionality.
type systemBackend interface {
ContainersPrune(ctx context.Context, pruneFilters filters.Args) (*types.ContainersPruneReport, error)
}
type commitBackend interface {
CreateImageFromContainer(ctx context.Context, name string, config *backend.CreateImageConfig) (imageID string, err error)
}
// Backend is all the methods that need to be implemented to provide container specific functionality.
type Backend interface {
commitBackend
execBackend
copyBackend
stateBackend
monitorBackend
attachBackend
systemBackend
}

@@ -0,0 +1,72 @@
package container // import "github.com/docker/docker/api/server/router/container"
import (
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/router"
)
// containerRouter is a router to talk with the container controller
type containerRouter struct {
backend Backend
decoder httputils.ContainerDecoder
routes []router.Route
cgroup2 bool
}
// NewRouter initializes a new container router
func NewRouter(b Backend, decoder httputils.ContainerDecoder, cgroup2 bool) router.Router {
r := &containerRouter{
backend: b,
decoder: decoder,
cgroup2: cgroup2,
}
r.initRoutes()
return r
}
// Routes returns the available routes to the container controller
func (r *containerRouter) Routes() []router.Route {
return r.routes
}
// initRoutes initializes the routes in container router
func (r *containerRouter) initRoutes() {
r.routes = []router.Route{
// HEAD
router.NewHeadRoute("/containers/{name:.*}/archive", r.headContainersArchive),
// GET
router.NewGetRoute("/containers/json", r.getContainersJSON),
router.NewGetRoute("/containers/{name:.*}/export", r.getContainersExport),
router.NewGetRoute("/containers/{name:.*}/changes", r.getContainersChanges),
router.NewGetRoute("/containers/{name:.*}/json", r.getContainersByName),
router.NewGetRoute("/containers/{name:.*}/top", r.getContainersTop),
router.NewGetRoute("/containers/{name:.*}/logs", r.getContainersLogs),
router.NewGetRoute("/containers/{name:.*}/stats", r.getContainersStats),
router.NewGetRoute("/containers/{name:.*}/attach/ws", r.wsContainersAttach),
router.NewGetRoute("/exec/{id:.*}/json", r.getExecByID),
router.NewGetRoute("/containers/{name:.*}/archive", r.getContainersArchive),
// POST
router.NewPostRoute("/containers/create", r.postContainersCreate),
router.NewPostRoute("/containers/{name:.*}/kill", r.postContainersKill),
router.NewPostRoute("/containers/{name:.*}/pause", r.postContainersPause),
router.NewPostRoute("/containers/{name:.*}/unpause", r.postContainersUnpause),
router.NewPostRoute("/containers/{name:.*}/restart", r.postContainersRestart),
router.NewPostRoute("/containers/{name:.*}/start", r.postContainersStart),
router.NewPostRoute("/containers/{name:.*}/stop", r.postContainersStop),
router.NewPostRoute("/containers/{name:.*}/wait", r.postContainersWait),
router.NewPostRoute("/containers/{name:.*}/resize", r.postContainersResize),
router.NewPostRoute("/containers/{name:.*}/attach", r.postContainersAttach),
router.NewPostRoute("/containers/{name:.*}/copy", r.postContainersCopy), // Deprecated since 1.8 (API v1.20), errors out since 1.12 (API v1.24)
router.NewPostRoute("/containers/{name:.*}/exec", r.postContainerExecCreate),
router.NewPostRoute("/exec/{name:.*}/start", r.postContainerExecStart),
router.NewPostRoute("/exec/{name:.*}/resize", r.postContainerExecResize),
router.NewPostRoute("/containers/{name:.*}/rename", r.postContainerRename),
router.NewPostRoute("/containers/{name:.*}/update", r.postContainerUpdate),
router.NewPostRoute("/containers/prune", r.postContainersPrune),
router.NewPostRoute("/commit", r.postCommit),
// PUT
router.NewPutRoute("/containers/{name:.*}/archive", r.putContainersArchive),
// DELETE
router.NewDeleteRoute("/containers/{name:.*}", r.deleteContainers),
}
}

@@ -0,0 +1,943 @@
package container // import "github.com/docker/docker/api/server/router/container"
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"runtime"
"strconv"
"strings"
"github.com/containerd/containerd/platforms"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/versions"
containerpkg "github.com/docker/docker/container"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/runconfig"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"golang.org/x/net/websocket"
)
func (s *containerRouter) postCommit(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
if err := httputils.CheckForJSON(r); err != nil {
return err
}
// TODO: remove pause arg, and always pause in backend
pause := httputils.BoolValue(r, "pause")
version := httputils.VersionFromContext(ctx)
if r.FormValue("pause") == "" && versions.GreaterThanOrEqualTo(version, "1.13") {
pause = true
}
config, _, _, err := s.decoder.DecodeConfig(r.Body)
if err != nil && !errors.Is(err, io.EOF) { // Do not fail if body is empty.
return err
}
ref, err := httputils.RepoTagReference(r.Form.Get("repo"), r.Form.Get("tag"))
if err != nil {
return errdefs.InvalidParameter(err)
}
imgID, err := s.backend.CreateImageFromContainer(ctx, r.Form.Get("container"), &backend.CreateImageConfig{
Pause: pause,
Tag: ref,
Author: r.Form.Get("author"),
Comment: r.Form.Get("comment"),
Config: config,
Changes: r.Form["changes"],
})
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusCreated, &types.IDResponse{ID: imgID})
}
func (s *containerRouter) getContainersJSON(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
filter, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return err
}
config := &container.ListOptions{
All: httputils.BoolValue(r, "all"),
Size: httputils.BoolValue(r, "size"),
Since: r.Form.Get("since"),
Before: r.Form.Get("before"),
Filters: filter,
}
if tmpLimit := r.Form.Get("limit"); tmpLimit != "" {
limit, err := strconv.Atoi(tmpLimit)
if err != nil {
return err
}
config.Limit = limit
}
containers, err := s.backend.Containers(ctx, config)
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, containers)
}
func (s *containerRouter) getContainersStats(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
stream := httputils.BoolValueOrDefault(r, "stream", true)
if !stream {
w.Header().Set("Content-Type", "application/json")
}
var oneShot bool
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.41") {
oneShot = httputils.BoolValueOrDefault(r, "one-shot", false)
}
config := &backend.ContainerStatsConfig{
Stream: stream,
OneShot: oneShot,
OutStream: w,
Version: httputils.VersionFromContext(ctx),
}
return s.backend.ContainerStats(ctx, vars["name"], config)
}
func (s *containerRouter) getContainersLogs(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
// Args are validated before the stream starts because when it starts we're
// sending HTTP 200 by writing an empty chunk of data to tell the client that the
// daemon is going to stream. Once this initial HTTP 200 is sent, we can't report
// any error after the stream starts (i.e. container not found, wrong parameters)
// with the appropriate status code.
stdout, stderr := httputils.BoolValue(r, "stdout"), httputils.BoolValue(r, "stderr")
if !(stdout || stderr) {
return errdefs.InvalidParameter(errors.New("Bad parameters: you must choose at least one stream"))
}
containerName := vars["name"]
logsConfig := &container.LogsOptions{
Follow: httputils.BoolValue(r, "follow"),
Timestamps: httputils.BoolValue(r, "timestamps"),
Since: r.Form.Get("since"),
Until: r.Form.Get("until"),
Tail: r.Form.Get("tail"),
ShowStdout: stdout,
ShowStderr: stderr,
Details: httputils.BoolValue(r, "details"),
}
msgs, tty, err := s.backend.ContainerLogs(ctx, containerName, logsConfig)
if err != nil {
return err
}
contentType := types.MediaTypeRawStream
if !tty && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = types.MediaTypeMultiplexedStream
}
w.Header().Set("Content-Type", contentType)
// if it has a tty, we're not muxing streams. if it doesn't, we are. simple.
// this is the point of no return for writing a response. once we call
// WriteLogStream, the response has been started and errors will be
// returned in band by WriteLogStream
httputils.WriteLogStream(ctx, w, msgs, logsConfig, !tty)
return nil
}
func (s *containerRouter) getContainersExport(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return s.backend.ContainerExport(ctx, vars["name"], w)
}
type bodyOnStartError struct{}
func (bodyOnStartError) Error() string {
return "starting container with non-empty request body was deprecated since API v1.22 and removed in v1.24"
}
func (bodyOnStartError) InvalidParameter() {}
func (s *containerRouter) postContainersStart(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// If contentLength is -1, we can assume chunked encoding
// or more technically that the length is unknown
// https://golang.org/src/pkg/net/http/request.go#L139
// net/http otherwise seems to swallow any headers related to chunked encoding
// including r.TransferEncoding
// allow a nil body for backwards compatibility
version := httputils.VersionFromContext(ctx)
var hostConfig *container.HostConfig
// A non-nil json object is at least 7 characters.
if r.ContentLength > 7 || r.ContentLength == -1 {
if versions.GreaterThanOrEqualTo(version, "1.24") {
return bodyOnStartError{}
}
if err := httputils.CheckForJSON(r); err != nil {
return err
}
c, err := s.decoder.DecodeHostConfig(r.Body)
if err != nil {
return err
}
hostConfig = c
}
if err := httputils.ParseForm(r); err != nil {
return err
}
checkpoint := r.Form.Get("checkpoint")
checkpointDir := r.Form.Get("checkpoint-dir")
if err := s.backend.ContainerStart(ctx, vars["name"], hostConfig, checkpoint, checkpointDir); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func (s *containerRouter) postContainersStop(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
var (
options container.StopOptions
version = httputils.VersionFromContext(ctx)
)
if versions.GreaterThanOrEqualTo(version, "1.42") {
options.Signal = r.Form.Get("signal")
}
if tmpSeconds := r.Form.Get("t"); tmpSeconds != "" {
valSeconds, err := strconv.Atoi(tmpSeconds)
if err != nil {
return err
}
options.Timeout = &valSeconds
}
if err := s.backend.ContainerStop(ctx, vars["name"], options); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func (s *containerRouter) postContainersKill(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
name := vars["name"]
if err := s.backend.ContainerKill(name, r.Form.Get("signal")); err != nil {
var isStopped bool
if errdefs.IsConflict(err) {
isStopped = true
}
// Return the error if it was not caused by the container being stopped.
// Return the error if the container is not running and the API is >= 1.20,
// to keep backwards compatibility.
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.20") || !isStopped {
return errors.Wrapf(err, "Cannot kill container: %s", name)
}
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func (s *containerRouter) postContainersRestart(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
var (
options container.StopOptions
version = httputils.VersionFromContext(ctx)
)
if versions.GreaterThanOrEqualTo(version, "1.42") {
options.Signal = r.Form.Get("signal")
}
if tmpSeconds := r.Form.Get("t"); tmpSeconds != "" {
valSeconds, err := strconv.Atoi(tmpSeconds)
if err != nil {
return err
}
options.Timeout = &valSeconds
}
if err := s.backend.ContainerRestart(ctx, vars["name"], options); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func (s *containerRouter) postContainersPause(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
if err := s.backend.ContainerPause(vars["name"]); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func (s *containerRouter) postContainersUnpause(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
if err := s.backend.ContainerUnpause(vars["name"]); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func (s *containerRouter) postContainersWait(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// Behavior changed in version 1.30 to handle the wait condition and to
// return headers immediately.
version := httputils.VersionFromContext(ctx)
legacyBehaviorPre130 := versions.LessThan(version, "1.30")
legacyRemovalWaitPre134 := false
// The wait condition defaults to "not-running".
waitCondition := containerpkg.WaitConditionNotRunning
if !legacyBehaviorPre130 {
if err := httputils.ParseForm(r); err != nil {
return err
}
if v := r.Form.Get("condition"); v != "" {
switch container.WaitCondition(v) {
case container.WaitConditionNotRunning:
waitCondition = containerpkg.WaitConditionNotRunning
case container.WaitConditionNextExit:
waitCondition = containerpkg.WaitConditionNextExit
case container.WaitConditionRemoved:
waitCondition = containerpkg.WaitConditionRemoved
legacyRemovalWaitPre134 = versions.LessThan(version, "1.34")
default:
return errdefs.InvalidParameter(errors.Errorf("invalid condition: %q", v))
}
}
}
waitC, err := s.backend.ContainerWait(ctx, vars["name"], waitCondition)
if err != nil {
return err
}
w.Header().Set("Content-Type", "application/json")
if !legacyBehaviorPre130 {
// Write response header immediately.
w.WriteHeader(http.StatusOK)
if flusher, ok := w.(http.Flusher); ok {
flusher.Flush()
}
}
// Block on the result of the wait operation.
status := <-waitC
// With API < 1.34, a wait on WaitConditionRemoved did not return
// if container removal failed. The only way to report an
// error back to the client is to not write anything (i.e. send
// an empty response which will be treated as an error).
if legacyRemovalWaitPre134 && status.Err() != nil {
return nil
}
var waitError *container.WaitExitError
if status.Err() != nil {
waitError = &container.WaitExitError{Message: status.Err().Error()}
}
return json.NewEncoder(w).Encode(&container.WaitResponse{
StatusCode: int64(status.ExitCode()),
Error: waitError,
})
}
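The handler above writes a container.WaitResponse as its JSON body. A hedged sketch of decoding that payload on the client side (the example bytes are illustrative, not captured daemon output):

package main

import (
	"encoding/json"
	"fmt"

	"github.com/docker/docker/api/types/container"
)

func main() {
	// Illustrative payload shaped like the one postContainersWait encodes above.
	body := []byte(`{"StatusCode":137,"Error":{"Message":"container removal failed"}}`)

	var resp container.WaitResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		panic(err)
	}
	fmt.Println("exit code:", resp.StatusCode)
	if resp.Error != nil {
		fmt.Println("wait error:", resp.Error.Message)
	}
}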
func (s *containerRouter) getContainersChanges(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
changes, err := s.backend.ContainerChanges(ctx, vars["name"])
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, changes)
}
func (s *containerRouter) getContainersTop(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
procList, err := s.backend.ContainerTop(vars["name"], r.Form.Get("ps_args"))
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, procList)
}
func (s *containerRouter) postContainerRename(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
name := vars["name"]
newName := r.Form.Get("name")
if err := s.backend.ContainerRename(name, newName); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func (s *containerRouter) postContainerUpdate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
var updateConfig container.UpdateConfig
if err := httputils.ReadJSON(r, &updateConfig); err != nil {
return err
}
if versions.LessThan(httputils.VersionFromContext(ctx), "1.40") {
updateConfig.PidsLimit = nil
}
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
// Ignore KernelMemory removed in API 1.42.
updateConfig.KernelMemory = 0
}
if updateConfig.PidsLimit != nil && *updateConfig.PidsLimit <= 0 {
// Both `0` and `-1` are accepted to set "unlimited" when updating.
// Historically, any negative value was accepted, so treat them as
// "unlimited" as well.
var unlimited int64
updateConfig.PidsLimit = &unlimited
}
hostConfig := &container.HostConfig{
Resources: updateConfig.Resources,
RestartPolicy: updateConfig.RestartPolicy,
}
name := vars["name"]
resp, err := s.backend.ContainerUpdate(name, hostConfig)
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, resp)
}
func (s *containerRouter) postContainersCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
if err := httputils.CheckForJSON(r); err != nil {
return err
}
name := r.Form.Get("name")
config, hostConfig, networkingConfig, err := s.decoder.DecodeConfig(r.Body)
if err != nil {
if errors.Is(err, io.EOF) {
return errdefs.InvalidParameter(errors.New("invalid JSON: got EOF while reading request body"))
}
return err
}
if config == nil {
return errdefs.InvalidParameter(runconfig.ErrEmptyConfig)
}
if hostConfig == nil {
hostConfig = &container.HostConfig{}
}
if hostConfig.NetworkMode == "" {
hostConfig.NetworkMode = "default"
}
if networkingConfig == nil {
networkingConfig = &network.NetworkingConfig{}
}
if networkingConfig.EndpointsConfig == nil {
networkingConfig.EndpointsConfig = make(map[string]*network.EndpointSettings)
}
version := httputils.VersionFromContext(ctx)
adjustCPUShares := versions.LessThan(version, "1.19")
// When using API 1.24 and under, the client is responsible for removing the container
if versions.LessThan(version, "1.25") {
hostConfig.AutoRemove = false
}
if versions.LessThan(version, "1.40") {
// Ignore BindOptions.NonRecursive because it was added in API 1.40.
for _, m := range hostConfig.Mounts {
if bo := m.BindOptions; bo != nil {
bo.NonRecursive = false
}
}
// Ignore KernelMemoryTCP because it was added in API 1.40.
hostConfig.KernelMemoryTCP = 0
// Older clients (API < 1.40) expect the default to be shareable; make them happy
if hostConfig.IpcMode.IsEmpty() {
hostConfig.IpcMode = container.IPCModeShareable
}
}
if versions.LessThan(version, "1.41") {
// Older clients expect the default to be "host" on cgroup v1 hosts
if !s.cgroup2 && hostConfig.CgroupnsMode.IsEmpty() {
hostConfig.CgroupnsMode = container.CgroupnsModeHost
}
}
var platform *ocispec.Platform
if versions.GreaterThanOrEqualTo(version, "1.41") {
if v := r.Form.Get("platform"); v != "" {
p, err := platforms.Parse(v)
if err != nil {
return errdefs.InvalidParameter(err)
}
platform = &p
}
}
if versions.LessThan(version, "1.42") {
for _, m := range hostConfig.Mounts {
// Ignore BindOptions.CreateMountpoint because it was added in API 1.42.
if bo := m.BindOptions; bo != nil {
bo.CreateMountpoint = false
}
// These combinations are invalid, but weren't validated in API < 1.42.
// We reset them here, so that validation doesn't produce an error.
if o := m.VolumeOptions; o != nil && m.Type != mount.TypeVolume {
m.VolumeOptions = nil
}
if o := m.TmpfsOptions; o != nil && m.Type != mount.TypeTmpfs {
m.TmpfsOptions = nil
}
if bo := m.BindOptions; bo != nil {
// Ignore BindOptions.CreateMountpoint because it was added in API 1.42.
bo.CreateMountpoint = false
}
}
if runtime.GOOS == "linux" {
// ConsoleSize is not respected by Linux daemon before API 1.42
hostConfig.ConsoleSize = [2]uint{0, 0}
}
}
if versions.GreaterThanOrEqualTo(version, "1.42") {
// Ignore KernelMemory removed in API 1.42.
hostConfig.KernelMemory = 0
for _, m := range hostConfig.Mounts {
if o := m.VolumeOptions; o != nil && m.Type != mount.TypeVolume {
return errdefs.InvalidParameter(fmt.Errorf("VolumeOptions must not be specified on mount type %q", m.Type))
}
if o := m.BindOptions; o != nil && m.Type != mount.TypeBind {
return errdefs.InvalidParameter(fmt.Errorf("BindOptions must not be specified on mount type %q", m.Type))
}
if o := m.TmpfsOptions; o != nil && m.Type != mount.TypeTmpfs {
return errdefs.InvalidParameter(fmt.Errorf("TmpfsOptions must not be specified on mount type %q", m.Type))
}
}
}
if versions.LessThan(version, "1.43") {
// Ignore Annotations because it was added in API v1.43.
hostConfig.Annotations = nil
}
defaultReadOnlyNonRecursive := false
if versions.LessThan(version, "1.44") {
if config.Healthcheck != nil {
// StartInterval was added in API 1.44
config.Healthcheck.StartInterval = 0
}
// Set ReadOnlyNonRecursive to true because it was added in API 1.44
// Before that all read-only mounts were non-recursive.
// Keep that behavior for clients on older APIs.
defaultReadOnlyNonRecursive = true
for _, m := range hostConfig.Mounts {
if m.Type == mount.TypeBind {
if m.BindOptions != nil && m.BindOptions.ReadOnlyForceRecursive {
// NOTE: technically this is a breaking change for older
// API versions, and we should ignore the new field.
// However, a client may set this option incorrectly, expecting
// that a failure to apply recursive read-only will be enforced
// as an error, so we decided to produce an error here
// instead of silently ignoring the field.
return errdefs.InvalidParameter(errors.New("BindOptions.ReadOnlyForceRecursive needs API v1.44 or newer"))
}
}
}
// Creating a container connected to several networks is not supported until v1.44.
if len(networkingConfig.EndpointsConfig) > 1 {
l := make([]string, 0, len(networkingConfig.EndpointsConfig))
for k := range networkingConfig.EndpointsConfig {
l = append(l, k)
}
return errdefs.InvalidParameter(errors.Errorf("Container cannot be created with multiple network endpoints: %s", strings.Join(l, ", ")))
}
}
var warnings []string
if warn, err := handleMACAddressBC(config, hostConfig, networkingConfig, version); err != nil {
return err
} else if warn != "" {
warnings = append(warnings, warn)
}
if hostConfig.PidsLimit != nil && *hostConfig.PidsLimit <= 0 {
// Don't set a limit if either no limit was specified, or "unlimited" was
// explicitly set.
// Both `0` and `-1` are accepted as "unlimited", and historically any
// negative value was accepted, so treat those as "unlimited" as well.
hostConfig.PidsLimit = nil
}
ccr, err := s.backend.ContainerCreate(ctx, backend.ContainerCreateConfig{
Name: name,
Config: config,
HostConfig: hostConfig,
NetworkingConfig: networkingConfig,
AdjustCPUShares: adjustCPUShares,
Platform: platform,
DefaultReadOnlyNonRecursive: defaultReadOnlyNonRecursive,
})
if err != nil {
return err
}
ccr.Warnings = append(ccr.Warnings, warnings...)
return httputils.WriteJSON(w, http.StatusCreated, ccr)
}
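For API >= 1.41 this handler parses the "platform" form value with containerd's platforms package. A small sketch of what that parsing yields (the specifier string is an arbitrary example):

package main

import (
	"fmt"

	"github.com/containerd/containerd/platforms"
)

func main() {
	// platforms.Parse normalizes a platform specifier into an OCI platform struct,
	// as used by postContainersCreate for the "platform" form value.
	p, err := platforms.Parse("linux/arm64/v8")
	if err != nil {
		panic(err)
	}
	fmt.Println(p.OS, p.Architecture, p.Variant) // linux arm64 v8
}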
// handleMACAddressBC takes care of backward-compatibility for the container-wide MAC address by mutating the
// networkingConfig to set the endpoint-specific MACAddress field introduced in API v1.44. It returns a warning message
// or an error if the container-wide field was specified for API >= v1.44.
func handleMACAddressBC(config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, version string) (string, error) {
deprecatedMacAddress := config.MacAddress //nolint:staticcheck // ignore SA1019: field is deprecated, but still used on API < v1.44.
// For older versions of the API, migrate the container-wide MAC address to EndpointsConfig.
if versions.LessThan(version, "1.44") {
if deprecatedMacAddress == "" {
// If a MAC address is supplied in EndpointsConfig, discard it because the old API
// would have ignored it.
for _, ep := range networkingConfig.EndpointsConfig {
ep.MacAddress = ""
}
return "", nil
}
if !hostConfig.NetworkMode.IsDefault() && !hostConfig.NetworkMode.IsBridge() && !hostConfig.NetworkMode.IsUserDefined() {
return "", runconfig.ErrConflictContainerNetworkAndMac
}
// There cannot be more than one entry in EndpointsConfig with API < 1.44.
// If there's no EndpointsConfig, create a place to store the configured address. It is
// safe to use NetworkMode as the network name, whether it's a name or id/short-id, as
// it will be normalised later and there is no other EndpointSettings object that might
// refer to this network/endpoint.
if len(networkingConfig.EndpointsConfig) == 0 {
nwName := hostConfig.NetworkMode.NetworkName()
networkingConfig.EndpointsConfig[nwName] = &network.EndpointSettings{}
}
// There's exactly one network in EndpointsConfig, either from the API or just-created.
// Migrate the container-wide setting to it.
// No need to check for a match between NetworkMode and the names/ids in EndpointsConfig,
// the old version of the API would have applied the address to this network anyway.
for _, ep := range networkingConfig.EndpointsConfig {
ep.MacAddress = deprecatedMacAddress
}
return "", nil
}
// The container-wide MacAddress parameter is deprecated and should now be specified in EndpointsConfig.
if deprecatedMacAddress == "" {
return "", nil
}
var warning string
if hostConfig.NetworkMode.IsDefault() || hostConfig.NetworkMode.IsBridge() || hostConfig.NetworkMode.IsUserDefined() {
nwName := hostConfig.NetworkMode.NetworkName()
// If there's no endpoint config, create a place to store the configured address.
if len(networkingConfig.EndpointsConfig) == 0 {
networkingConfig.EndpointsConfig[nwName] = &network.EndpointSettings{
MacAddress: deprecatedMacAddress,
}
} else {
// There is existing endpoint config - if it's not indexed by NetworkMode.Name(), we
// can't tell which network the container-wide setting was intended for. NetworkMode,
// the keys in EndpointsConfig and the NetworkID in EndpointsConfig may mix network
// name/id/short-id. It's not safe to create EndpointsConfig under the NetworkMode
// name to store the container-wide MAC address, because that may result in two sets
// of EndpointsConfig for the same network and one set will be discarded later. So,
// reject the request ...
ep, ok := networkingConfig.EndpointsConfig[nwName]
if !ok {
return "", errdefs.InvalidParameter(errors.New("if a container-wide MAC address is supplied, HostConfig.NetworkMode must match the identity of a network in NetworkSettings.Networks"))
}
// ep is the endpoint that needs the container-wide MAC address; migrate the address
// to it, or bail out if there's a mismatch.
if ep.MacAddress == "" {
ep.MacAddress = deprecatedMacAddress
} else if ep.MacAddress != deprecatedMacAddress {
return "", errdefs.InvalidParameter(errors.New("the container-wide MAC address must match the endpoint-specific MAC address for the main network, or be left empty"))
}
}
}
warning = "The container-wide MacAddress field is now deprecated. It should be specified in EndpointsConfig instead."
config.MacAddress = "" //nolint:staticcheck // ignore SA1019: field is deprecated, but still used on API < v1.44.
return warning, nil
}
func (s *containerRouter) deleteContainers(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
name := vars["name"]
config := &backend.ContainerRmConfig{
ForceRemove: httputils.BoolValue(r, "force"),
RemoveVolume: httputils.BoolValue(r, "v"),
RemoveLink: httputils.BoolValue(r, "link"),
}
if err := s.backend.ContainerRm(name, config); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func (s *containerRouter) postContainersResize(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
height, err := strconv.Atoi(r.Form.Get("h"))
if err != nil {
return errdefs.InvalidParameter(err)
}
width, err := strconv.Atoi(r.Form.Get("w"))
if err != nil {
return errdefs.InvalidParameter(err)
}
return s.backend.ContainerResize(vars["name"], height, width)
}
func (s *containerRouter) postContainersAttach(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
err := httputils.ParseForm(r)
if err != nil {
return err
}
containerName := vars["name"]
_, upgrade := r.Header["Upgrade"]
detachKeys := r.FormValue("detachKeys")
hijacker, ok := w.(http.Hijacker)
if !ok {
return errdefs.InvalidParameter(errors.Errorf("error attaching to container %s, hijack connection missing", containerName))
}
contentType := types.MediaTypeRawStream
setupStreams := func(multiplexed bool) (io.ReadCloser, io.Writer, io.Writer, error) {
conn, _, err := hijacker.Hijack()
if err != nil {
return nil, nil, nil, err
}
// set raw mode
conn.Write([]byte{})
if upgrade {
if multiplexed && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = types.MediaTypeMultiplexedStream
}
// FIXME(thaJeztah): we should not ignore errors here; see https://github.com/moby/moby/pull/48359#discussion_r1725562802
fmt.Fprintf(conn, "HTTP/1.1 101 UPGRADED\r\nContent-Type: %v\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n\r\n", contentType)
} else {
// FIXME(thaJeztah): we should not ignore errors here; see https://github.com/moby/moby/pull/48359#discussion_r1725562802
fmt.Fprint(conn, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n")
}
closer := func() error {
httputils.CloseStreams(conn)
return nil
}
return ioutils.NewReadCloserWrapper(conn, closer), conn, conn, nil
}
attachConfig := &backend.ContainerAttachConfig{
GetStreams: setupStreams,
UseStdin: httputils.BoolValue(r, "stdin"),
UseStdout: httputils.BoolValue(r, "stdout"),
UseStderr: httputils.BoolValue(r, "stderr"),
Logs: httputils.BoolValue(r, "logs"),
Stream: httputils.BoolValue(r, "stream"),
DetachKeys: detachKeys,
MuxStreams: true,
}
if err = s.backend.ContainerAttach(containerName, attachConfig); err != nil {
log.G(ctx).WithError(err).Errorf("Handler for %s %s returned error", r.Method, r.URL.Path)
// Remember to close the stream if an error happens
conn, _, errHijack := hijacker.Hijack()
if errHijack != nil {
log.G(ctx).WithError(err).Errorf("Handler for %s %s: unable to close stream; error when hijacking connection", r.Method, r.URL.Path)
} else {
statusCode := httpstatus.FromError(err)
statusText := http.StatusText(statusCode)
fmt.Fprintf(conn, "HTTP/1.1 %d %s\r\nContent-Type: %s\r\n\r\n%s\r\n", statusCode, statusText, contentType, err.Error())
httputils.CloseStreams(conn)
}
}
return nil
}
func (s *containerRouter) wsContainersAttach(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
containerName := vars["name"]
var err error
detachKeys := r.FormValue("detachKeys")
done := make(chan struct{})
started := make(chan struct{})
version := httputils.VersionFromContext(ctx)
setupStreams := func(multiplexed bool) (io.ReadCloser, io.Writer, io.Writer, error) {
wsChan := make(chan *websocket.Conn)
h := func(conn *websocket.Conn) {
wsChan <- conn
<-done
}
srv := websocket.Server{Handler: h, Handshake: nil}
go func() {
close(started)
srv.ServeHTTP(w, r)
}()
conn := <-wsChan
// For API version 1.28 and above, a binary frame will be sent.
// See 28176 for details.
if versions.GreaterThanOrEqualTo(version, "1.28") {
conn.PayloadType = websocket.BinaryFrame
}
return conn, conn, conn, nil
}
useStdin, useStdout, useStderr := true, true, true
if versions.GreaterThanOrEqualTo(version, "1.42") {
useStdin = httputils.BoolValue(r, "stdin")
useStdout = httputils.BoolValue(r, "stdout")
useStderr = httputils.BoolValue(r, "stderr")
}
attachConfig := &backend.ContainerAttachConfig{
GetStreams: setupStreams,
UseStdin: useStdin,
UseStdout: useStdout,
UseStderr: useStderr,
Logs: httputils.BoolValue(r, "logs"),
Stream: httputils.BoolValue(r, "stream"),
DetachKeys: detachKeys,
MuxStreams: false, // never multiplex, as we rely on websocket to manage distinct streams
}
err = s.backend.ContainerAttach(containerName, attachConfig)
close(done)
select {
case <-started:
if err != nil {
log.G(ctx).Errorf("Error attaching websocket: %s", err)
} else {
log.G(ctx).Debug("websocket connection was closed by client")
}
return nil
default:
}
return err
}
func (s *containerRouter) postContainersPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
pruneFilters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return err
}
pruneReport, err := s.backend.ContainersPrune(ctx, pruneFilters)
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, pruneReport)
}

@@ -0,0 +1,160 @@
package container
import (
"testing"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/network"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
func TestHandleMACAddressBC(t *testing.T) {
testcases := []struct {
name string
apiVersion string
ctrWideMAC string
networkMode container.NetworkMode
epConfig map[string]*network.EndpointSettings
expEpWithCtrWideMAC string
expEpWithNoMAC string
expCtrWideMAC string
expWarning string
expError string
}{
{
name: "old api ctr-wide mac mix id and name",
apiVersion: "1.43",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetId",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
expEpWithCtrWideMAC: "aNetName",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "old api clear ep mac",
apiVersion: "1.43",
networkMode: "aNetId",
epConfig: map[string]*network.EndpointSettings{"aNetName": {MacAddress: "11:22:33:44:55:66"}},
expEpWithNoMAC: "aNetName",
},
{
name: "old api no-network ctr-wide mac",
apiVersion: "1.43",
networkMode: "none",
ctrWideMAC: "11:22:33:44:55:66",
expError: "conflicting options: mac-address and the network mode",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "old api create ep",
apiVersion: "1.43",
networkMode: "aNetId",
ctrWideMAC: "11:22:33:44:55:66",
epConfig: map[string]*network.EndpointSettings{},
expEpWithCtrWideMAC: "aNetId",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "old api migrate ctr-wide mac",
apiVersion: "1.43",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
expEpWithCtrWideMAC: "aNetName",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "new api no macs",
apiVersion: "1.44",
networkMode: "aNetId",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
},
{
name: "new api ep specific mac",
apiVersion: "1.44",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{"aNetName": {MacAddress: "11:22:33:44:55:66"}},
},
{
name: "new api migrate ctr-wide mac to new ep",
apiVersion: "1.44",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{},
expEpWithCtrWideMAC: "aNetName",
expWarning: "The container-wide MacAddress field is now deprecated",
expCtrWideMAC: "",
},
{
name: "new api migrate ctr-wide mac to existing ep",
apiVersion: "1.44",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
expEpWithCtrWideMAC: "aNetName",
expWarning: "The container-wide MacAddress field is now deprecated",
expCtrWideMAC: "",
},
{
name: "new api mode vs name mismatch",
apiVersion: "1.44",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetId",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
expError: "if a container-wide MAC address is supplied, HostConfig.NetworkMode must match the identity of a network in NetworkSettings.Networks",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "new api mac mismatch",
apiVersion: "1.44",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{"aNetName": {MacAddress: "00:11:22:33:44:55"}},
expError: "the container-wide MAC address must match the endpoint-specific MAC address",
expCtrWideMAC: "11:22:33:44:55:66",
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
cfg := &container.Config{
MacAddress: tc.ctrWideMAC, //nolint:staticcheck // ignore SA1019: field is deprecated, but still used on API < v1.44.
}
hostCfg := &container.HostConfig{
NetworkMode: tc.networkMode,
}
epConfig := make(map[string]*network.EndpointSettings, len(tc.epConfig))
for k, v := range tc.epConfig {
v := v
epConfig[k] = v
}
netCfg := &network.NetworkingConfig{
EndpointsConfig: epConfig,
}
warning, err := handleMACAddressBC(cfg, hostCfg, netCfg, tc.apiVersion)
if tc.expError == "" {
assert.Check(t, err)
} else {
assert.Check(t, is.ErrorContains(err, tc.expError))
}
if tc.expWarning == "" {
assert.Check(t, is.Equal(warning, ""))
} else {
assert.Check(t, is.Contains(warning, tc.expWarning))
}
if tc.expEpWithCtrWideMAC != "" {
got := netCfg.EndpointsConfig[tc.expEpWithCtrWideMAC].MacAddress
assert.Check(t, is.Equal(got, tc.ctrWideMAC))
}
if tc.expEpWithNoMAC != "" {
got := netCfg.EndpointsConfig[tc.expEpWithNoMAC].MacAddress
assert.Check(t, is.Equal(got, ""))
}
gotCtrWideMAC := cfg.MacAddress //nolint:staticcheck // ignore SA1019: field is deprecated, but still used on API < v1.44.
assert.Check(t, is.Equal(gotCtrWideMAC, tc.expCtrWideMAC))
})
}
}

@@ -0,0 +1,138 @@
package container // import "github.com/docker/docker/api/server/router/container"
import (
"compress/flate"
"compress/gzip"
"context"
"encoding/base64"
"encoding/json"
"io"
"net/http"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
gddohttputil "github.com/golang/gddo/httputil"
)
type pathError struct{}
func (pathError) Error() string {
return "Path cannot be empty"
}
func (pathError) InvalidParameter() {}
// postContainersCopy is deprecated in favor of getContainersArchive.
//
// Deprecated since 1.8 (API v1.20), errors out since 1.12 (API v1.24)
func (s *containerRouter) postContainersCopy(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.24") {
w.WriteHeader(http.StatusNotFound)
return nil
}
cfg := types.CopyConfig{}
if err := httputils.ReadJSON(r, &cfg); err != nil {
return err
}
if cfg.Resource == "" {
return pathError{}
}
data, err := s.backend.ContainerCopy(vars["name"], cfg.Resource)
if err != nil {
return err
}
defer data.Close()
w.Header().Set("Content-Type", "application/x-tar")
_, err = io.Copy(w, data)
return err
}
// Encode the stat to JSON, base64 encode, and place in a header.
func setContainerPathStatHeader(stat *types.ContainerPathStat, header http.Header) error {
statJSON, err := json.Marshal(stat)
if err != nil {
return err
}
header.Set(
"X-Docker-Container-Path-Stat",
base64.StdEncoding.EncodeToString(statJSON),
)
return nil
}
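On the receiving end the header can be reversed the same way: base64-decode the value, then unmarshal into types.ContainerPathStat. A minimal sketch (the file name and size used in the demo are made up):

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"

	"github.com/docker/docker/api/types"
)

// decodeContainerPathStat reverses setContainerPathStatHeader above.
func decodeContainerPathStat(h http.Header) (*types.ContainerPathStat, error) {
	raw := h.Get("X-Docker-Container-Path-Stat")
	data, err := base64.StdEncoding.DecodeString(raw)
	if err != nil {
		return nil, err
	}
	var stat types.ContainerPathStat
	if err := json.Unmarshal(data, &stat); err != nil {
		return nil, err
	}
	return &stat, nil
}

func main() {
	// Round-trip demo with a hand-made header (values are illustrative).
	hdr := http.Header{}
	payload, _ := json.Marshal(types.ContainerPathStat{Name: "hosts", Size: 174})
	hdr.Set("X-Docker-Container-Path-Stat", base64.StdEncoding.EncodeToString(payload))

	stat, err := decodeContainerPathStat(hdr)
	if err != nil {
		panic(err)
	}
	fmt.Println(stat.Name, stat.Size)
}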
func (s *containerRouter) headContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v, err := httputils.ArchiveFormValues(r, vars)
if err != nil {
return err
}
stat, err := s.backend.ContainerStatPath(v.Name, v.Path)
if err != nil {
return err
}
return setContainerPathStatHeader(stat, w.Header())
}
func writeCompressedResponse(w http.ResponseWriter, r *http.Request, body io.Reader) error {
var cw io.Writer
switch gddohttputil.NegotiateContentEncoding(r, []string{"gzip", "deflate"}) {
case "gzip":
gw := gzip.NewWriter(w)
defer gw.Close()
cw = gw
w.Header().Set("Content-Encoding", "gzip")
case "deflate":
fw, err := flate.NewWriter(w, flate.DefaultCompression)
if err != nil {
return err
}
defer fw.Close()
cw = fw
w.Header().Set("Content-Encoding", "deflate")
default:
cw = w
}
_, err := io.Copy(cw, body)
return err
}
func (s *containerRouter) getContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v, err := httputils.ArchiveFormValues(r, vars)
if err != nil {
return err
}
tarArchive, stat, err := s.backend.ContainerArchivePath(v.Name, v.Path)
if err != nil {
return err
}
defer tarArchive.Close()
if err := setContainerPathStatHeader(stat, w.Header()); err != nil {
return err
}
w.Header().Set("Content-Type", "application/x-tar")
return writeCompressedResponse(w, r, tarArchive)
}
func (s *containerRouter) putContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v, err := httputils.ArchiveFormValues(r, vars)
if err != nil {
return err
}
noOverwriteDirNonDir := httputils.BoolValue(r, "noOverwriteDirNonDir")
copyUIDGID := httputils.BoolValue(r, "copyUIDGID")
return s.backend.ContainerExtractToDir(v.Name, v.Path, copyUIDGID, noOverwriteDirNonDir, r.Body)
}

@@ -0,0 +1,176 @@
package container // import "github.com/docker/docker/api/server/router/container"
import (
"context"
"fmt"
"io"
"net/http"
"strconv"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/stdcopy"
)
func (s *containerRouter) getExecByID(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
eConfig, err := s.backend.ContainerExecInspect(vars["id"])
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, eConfig)
}
type execCommandError struct{}
func (execCommandError) Error() string {
return "No exec command specified"
}
func (execCommandError) InvalidParameter() {}
func (s *containerRouter) postContainerExecCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
execConfig := &types.ExecConfig{}
if err := httputils.ReadJSON(r, execConfig); err != nil {
return err
}
if len(execConfig.Cmd) == 0 {
return execCommandError{}
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.42") {
// Not supported by API versions before 1.42
execConfig.ConsoleSize = nil
}
// Register an instance of Exec in container.
id, err := s.backend.ContainerExecCreate(vars["name"], execConfig)
if err != nil {
log.G(ctx).Errorf("Error setting up exec command in container %s: %v", vars["name"], err)
return err
}
return httputils.WriteJSON(w, http.StatusCreated, &types.IDResponse{
ID: id,
})
}
// TODO(vishh): Refactor the code to avoid having to specify stream config as part of both create and start.
func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.22") {
// API versions before 1.22 did not enforce application/json content-type.
// Allow older clients to work by patching the content-type.
if r.Header.Get("Content-Type") != "application/json" {
r.Header.Set("Content-Type", "application/json")
}
}
var (
execName = vars["name"]
stdin, inStream io.ReadCloser
stdout, stderr, outStream io.Writer
)
execStartCheck := &types.ExecStartCheck{}
if err := httputils.ReadJSON(r, execStartCheck); err != nil {
return err
}
if exists, err := s.backend.ExecExists(execName); !exists {
return err
}
if execStartCheck.ConsoleSize != nil {
// Not supported before 1.42
if versions.LessThan(version, "1.42") {
execStartCheck.ConsoleSize = nil
}
// No console without tty
if !execStartCheck.Tty {
execStartCheck.ConsoleSize = nil
}
}
if !execStartCheck.Detach {
var err error
// Setting up the streaming http interface.
inStream, outStream, err = httputils.HijackConnection(w)
if err != nil {
return err
}
defer httputils.CloseStreams(inStream, outStream)
if _, ok := r.Header["Upgrade"]; ok {
contentType := types.MediaTypeRawStream
if !execStartCheck.Tty && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = types.MediaTypeMultiplexedStream
}
fmt.Fprint(outStream, "HTTP/1.1 101 UPGRADED\r\nContent-Type: "+contentType+"\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n")
} else {
fmt.Fprint(outStream, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n")
}
// copy headers that were removed as part of hijack
if err := w.Header().WriteSubset(outStream, nil); err != nil {
return err
}
fmt.Fprint(outStream, "\r\n")
stdin = inStream
stdout = outStream
if !execStartCheck.Tty {
stderr = stdcopy.NewStdWriter(outStream, stdcopy.Stderr)
stdout = stdcopy.NewStdWriter(outStream, stdcopy.Stdout)
}
}
options := container.ExecStartOptions{
Stdin: stdin,
Stdout: stdout,
Stderr: stderr,
ConsoleSize: execStartCheck.ConsoleSize,
}
// Now run the user process in the container.
// Maybe we should pass ctx here if we're not detaching?
if err := s.backend.ContainerExecStart(context.Background(), execName, options); err != nil {
if execStartCheck.Detach {
return err
}
stdout.Write([]byte(err.Error() + "\r\n"))
log.G(ctx).Errorf("Error running exec %s in container: %v", execName, err)
}
return nil
}
func (s *containerRouter) postContainerExecResize(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
height, err := strconv.Atoi(r.Form.Get("h"))
if err != nil {
return errdefs.InvalidParameter(err)
}
width, err := strconv.Atoi(r.Form.Get("w"))
if err != nil {
return errdefs.InvalidParameter(err)
}
return s.backend.ContainerExecResize(vars["name"], height, width)
}
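When no TTY is allocated, postContainerExecStart wraps the hijacked connection with stdcopy.NewStdWriter so stdout and stderr share one multiplexed stream; a client can split them again with stdcopy.StdCopy. A rough, self-contained sketch of that round trip (the buffer stands in for the hijacked connection):

package main

import (
	"bytes"
	"fmt"

	"github.com/docker/docker/pkg/stdcopy"
)

func main() {
	// Server side (as in postContainerExecStart): both streams write to one connection.
	var muxed bytes.Buffer
	stdout := stdcopy.NewStdWriter(&muxed, stdcopy.Stdout)
	stderr := stdcopy.NewStdWriter(&muxed, stdcopy.Stderr)
	stdout.Write([]byte("hello from stdout\n"))
	stderr.Write([]byte("hello from stderr\n"))

	// Client side: demultiplex the frames back into two writers.
	var outBuf, errBuf bytes.Buffer
	if _, err := stdcopy.StdCopy(&outBuf, &errBuf, &muxed); err != nil {
		panic(err)
	}
	fmt.Print(outBuf.String())
	fmt.Print(errBuf.String())
}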

@@ -0,0 +1,21 @@
package container // import "github.com/docker/docker/api/server/router/container"
import (
"context"
"net/http"
"github.com/docker/docker/api/server/httputils"
)
// getContainersByName inspects a container's configuration and serializes it as JSON.
func (s *containerRouter) getContainersByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
displaySize := httputils.BoolValue(r, "size")
version := httputils.VersionFromContext(ctx)
json, err := s.backend.ContainerInspect(ctx, vars["name"], displaySize, version)
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, json)
}
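
As an aside, the official Go client exposes the same "size" query parameter that this handler reads via BoolValue. A minimal sketch, assuming the github.com/docker/docker/client package; the function name is illustrative:

    package inspectexample

    import (
        "context"

        "github.com/docker/docker/client"
    )

    // inspectWithSize asks the daemon to include size information, which ends
    // up as the displaySize flag in getContainersByName.
    func inspectWithSize(ctx context.Context, containerID string) error {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            return err
        }
        defer cli.Close()
        _, _, err = cli.ContainerInspectWithRaw(ctx, containerID, true)
        return err
    }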


@@ -0,0 +1,53 @@
package debug // import "github.com/docker/docker/api/server/router/debug"
import (
"context"
"expvar"
"net/http"
"net/http/pprof"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/router"
)
// NewRouter creates a new debug router
// The debug router holds endpoints for debugging the daemon, such as those for pprof.
func NewRouter() router.Router {
r := &debugRouter{}
r.initRoutes()
return r
}
type debugRouter struct {
routes []router.Route
}
func (r *debugRouter) initRoutes() {
r.routes = []router.Route{
router.NewGetRoute("/vars", frameworkAdaptHandler(expvar.Handler())),
router.NewGetRoute("/pprof/", frameworkAdaptHandlerFunc(pprof.Index)),
router.NewGetRoute("/pprof/cmdline", frameworkAdaptHandlerFunc(pprof.Cmdline)),
router.NewGetRoute("/pprof/profile", frameworkAdaptHandlerFunc(pprof.Profile)),
router.NewGetRoute("/pprof/symbol", frameworkAdaptHandlerFunc(pprof.Symbol)),
router.NewGetRoute("/pprof/trace", frameworkAdaptHandlerFunc(pprof.Trace)),
router.NewGetRoute("/pprof/{name}", handlePprof),
}
}
func (r *debugRouter) Routes() []router.Route {
return r.routes
}
func frameworkAdaptHandler(handler http.Handler) httputils.APIFunc {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
handler.ServeHTTP(w, r)
return nil
}
}
func frameworkAdaptHandlerFunc(handler http.HandlerFunc) httputils.APIFunc {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
handler(w, r)
return nil
}
}


@@ -0,0 +1,12 @@
package debug // import "github.com/docker/docker/api/server/router/debug"
import (
"context"
"net/http"
"net/http/pprof"
)
func handlePprof(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
pprof.Handler(vars["name"]).ServeHTTP(w, r)
return nil
}
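
Together, these routes expose the standard net/http/pprof handlers. A minimal sketch of pulling a goroutine dump from them, assuming the daemon listens on a TCP socket with debugging enabled and that the routes are mounted under a /debug prefix (both assumptions, not shown in this diff):

    package debugexample

    import (
        "io"
        "net/http"
        "os"
    )

    // dumpGoroutines fetches /debug/pprof/goroutine, which is served by the
    // {name} route above via pprof.Handler("goroutine").
    func dumpGoroutines(addr string) error {
        resp, err := http.Get("http://" + addr + "/debug/pprof/goroutine?debug=2")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        _, err = io.Copy(os.Stdout, resp.Body)
        return err
    }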


@@ -0,0 +1,15 @@
package distribution // import "github.com/docker/docker/api/server/router/distribution"
import (
"context"
"github.com/distribution/reference"
"github.com/docker/distribution"
"github.com/docker/docker/api/types/registry"
)
// Backend is all the methods that need to be implemented
// to provide image distribution functionality.
type Backend interface {
GetRepositories(context.Context, reference.Named, *registry.AuthConfig) ([]distribution.Repository, error)
}


@@ -1,6 +1,6 @@
package distribution
package distribution // import "github.com/docker/docker/api/server/router/distribution"
import "github.com/moby/moby/v2/daemon/server/router"
import "github.com/docker/docker/api/server/router"
// distributionRouter is a router to talk with the registry
type distributionRouter struct {
@@ -26,6 +26,6 @@ func (dr *distributionRouter) Routes() []router.Route {
func (dr *distributionRouter) initRoutes() {
dr.routes = []router.Route{
// GET
router.NewGetRoute("/distribution/{name:.*}/json", dr.getDistributionInfo, router.WithMinimumAPIVersion("1.30")),
router.NewGetRoute("/distribution/{name:.*}/json", dr.getDistributionInfo),
}
}


@@ -1,4 +1,4 @@
package distribution
package distribution // import "github.com/docker/docker/api/server/router/distribution"
import (
"context"
@@ -8,12 +8,11 @@ import (
"github.com/distribution/reference"
"github.com/docker/distribution"
"github.com/docker/distribution/manifest/manifestlist"
"github.com/docker/distribution/manifest/schema1"
"github.com/docker/distribution/manifest/schema2"
"github.com/moby/moby/api/pkg/authconfig"
"github.com/moby/moby/api/types/registry"
distributionpkg "github.com/moby/moby/v2/daemon/internal/distribution"
"github.com/moby/moby/v2/daemon/server/httputils"
"github.com/moby/moby/v2/errdefs"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/errdefs"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
@@ -25,10 +24,10 @@ func (dr *distributionRouter) getDistributionInfo(ctx context.Context, w http.Re
w.Header().Set("Content-Type", "application/json")
imgName := vars["name"]
image := vars["name"]
// TODO why is reference.ParseAnyReference() / reference.ParseNormalizedNamed() not using the reference.ErrTagInvalidFormat (and so on) errors?
ref, err := reference.ParseAnyReference(imgName)
ref, err := reference.ParseAnyReference(image)
if err != nil {
return errdefs.InvalidParameter(err)
}
@@ -38,12 +37,12 @@ func (dr *distributionRouter) getDistributionInfo(ctx context.Context, w http.Re
// full image ID
return errors.Errorf("no manifest found for full image ID")
}
return errdefs.InvalidParameter(errors.Errorf("unknown image reference format: %s", imgName))
return errdefs.InvalidParameter(errors.Errorf("unknown image reference format: %s", image))
}
// For a search it is not an error if no auth was given. Ignore invalid
// AuthConfig to increase compatibility with the existing API.
authConfig, _ := authconfig.Decode(r.Header.Get(registry.AuthHeader))
authConfig, _ := registry.DecodeAuthConfig(r.Header.Get(registry.AuthHeader))
repos, err := dr.backend.GetRepositories(ctx, namedRef, authConfig)
if err != nil {
return err
@@ -65,7 +64,7 @@ func (dr *distributionRouter) getDistributionInfo(ctx context.Context, w http.Re
// - https://github.com/moby/moby/blob/12c7411b6b7314bef130cd59f1c7384a7db06d0b/distribution/pull.go#L76-L152
var lastErr error
for _, repo := range repos {
distributionInspect, err := fetchManifest(ctx, repo, namedRef)
distributionInspect, err := dr.fetchManifest(ctx, repo, namedRef)
if err != nil {
lastErr = err
continue
@@ -75,7 +74,7 @@ func (dr *distributionRouter) getDistributionInfo(ctx context.Context, w http.Re
return lastErr
}
func fetchManifest(ctx context.Context, distrepo distribution.Repository, namedRef reference.Named) (registry.DistributionInspect, error) {
func (dr *distributionRouter) fetchManifest(ctx context.Context, distrepo distribution.Repository, namedRef reference.Named) (registry.DistributionInspect, error) {
var distributionInspect registry.DistributionInspect
if canonicalRef, ok := namedRef.(reference.Canonical); !ok {
namedRef = reference.TagNameOnly(namedRef)
@@ -108,14 +107,14 @@ func fetchManifest(ctx context.Context, distrepo distribution.Repository, namedR
}
mnfst, err := mnfstsrvc.Get(ctx, distributionInspect.Descriptor.Digest)
if err != nil {
switch {
case errors.Is(err, reference.ErrReferenceInvalidFormat),
errors.Is(err, reference.ErrTagInvalidFormat),
errors.Is(err, reference.ErrDigestInvalidFormat),
errors.Is(err, reference.ErrNameContainsUppercase),
errors.Is(err, reference.ErrNameEmpty),
errors.Is(err, reference.ErrNameTooLong),
errors.Is(err, reference.ErrNameNotCanonical):
switch err {
case reference.ErrReferenceInvalidFormat,
reference.ErrTagInvalidFormat,
reference.ErrDigestInvalidFormat,
reference.ErrNameContainsUppercase,
reference.ErrNameEmpty,
reference.ErrNameTooLong,
reference.ErrNameNotCanonical:
return registry.DistributionInspect{}, errdefs.InvalidParameter(err)
}
return registry.DistributionInspect{}, err
@@ -125,11 +124,6 @@ func fetchManifest(ctx context.Context, distrepo distribution.Repository, namedR
if err != nil {
return registry.DistributionInspect{}, err
}
switch mediaType {
case distributionpkg.MediaTypeDockerSchema1Manifest, distributionpkg.MediaTypeDockerSchema1SignedManifest:
return registry.DistributionInspect{}, distributionpkg.DeprecatedSchema1ImageError(namedRef)
}
// update MediaType because registry might return something incorrect
distributionInspect.Descriptor.MediaType = mediaType
if distributionInspect.Descriptor.Size == 0 {
@@ -158,6 +152,12 @@ func fetchManifest(ctx context.Context, distrepo distribution.Repository, namedR
distributionInspect.Platforms = append(distributionInspect.Platforms, platform)
}
}
case *schema1.SignedManifest:
platform := ocispec.Platform{
Architecture: mnfstObj.Architecture,
OS: "linux",
}
distributionInspect.Platforms = append(distributionInspect.Platforms, platform)
}
return distributionInspect, nil
}


@@ -0,0 +1,68 @@
package router // import "github.com/docker/docker/api/server/router"
import (
"context"
"net/http"
"github.com/docker/docker/api/server/httputils"
)
// ExperimentalRoute defines an experimental API route that can be enabled or disabled.
type ExperimentalRoute interface {
Route
Enable()
Disable()
}
// experimentalRoute defines an experimental API route that can be enabled or disabled.
// It implements ExperimentalRoute
type experimentalRoute struct {
local Route
handler httputils.APIFunc
}
// Enable enables this experimental route
func (r *experimentalRoute) Enable() {
r.handler = r.local.Handler()
}
// Disable disables the experimental route
func (r *experimentalRoute) Disable() {
r.handler = experimentalHandler
}
type notImplementedError struct{}
func (notImplementedError) Error() string {
return "This experimental feature is disabled by default. Start the Docker daemon in experimental mode in order to enable it."
}
func (notImplementedError) NotImplemented() {}
func experimentalHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return notImplementedError{}
}
// Handler returns the APIFunc to let the server wrap it in middlewares.
func (r *experimentalRoute) Handler() httputils.APIFunc {
return r.handler
}
// Method returns the http method that the route responds to.
func (r *experimentalRoute) Method() string {
return r.local.Method()
}
// Path returns the subpath where the route responds to.
func (r *experimentalRoute) Path() string {
return r.local.Path()
}
// Experimental will mark a route as experimental.
func Experimental(r Route) Route {
return &experimentalRoute{
local: r,
handler: experimentalHandler,
}
}
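
A minimal sketch of how a router could mark one of its routes as experimental; the path and handler here are hypothetical. Until Enable() is called on the wrapped route (presumably when the daemon runs in experimental mode), requests receive the notImplementedError above:

    package routerexample

    import (
        "github.com/docker/docker/api/server/httputils"
        "github.com/docker/docker/api/server/router"
    )

    // hypotheticalRoutes gates a single GET endpoint behind the experimental flag.
    func hypotheticalRoutes(handler httputils.APIFunc) []router.Route {
        return []router.Route{
            router.Experimental(router.NewGetRoute("/hypothetical/feature", handler)),
        }
    }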


@@ -0,0 +1,8 @@
package grpc // import "github.com/docker/docker/api/server/router/grpc"
import "google.golang.org/grpc"
// Backend abstracts a registerable GRPC service.
type Backend interface {
RegisterGRPC(*grpc.Server)
}
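
Any component that can register services on the shared server satisfies this interface. A minimal sketch of an implementation; the type and the service mentioned in the comment are hypothetical:

    package grpcexample

    import "google.golang.org/grpc"

    // hypotheticalBackend registers its gRPC services on the router's server.
    type hypotheticalBackend struct{}

    func (hypotheticalBackend) RegisterGRPC(s *grpc.Server) {
        // A real backend would call its generated registration function here,
        // e.g. somepb.RegisterFooServiceServer(s, &fooService{}).
    }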


@@ -0,0 +1,60 @@
package grpc // import "github.com/docker/docker/api/server/router/grpc"
import (
"context"
"strings"
"github.com/docker/docker/api/server/router"
grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
"github.com/moby/buildkit/util/grpcerrors"
"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
"golang.org/x/net/http2"
"google.golang.org/grpc"
)
type grpcRouter struct {
routes []router.Route
grpcServer *grpc.Server
h2Server *http2.Server
}
// NewRouter initializes a new grpc http router
func NewRouter(backends ...Backend) router.Router {
unary := grpc.UnaryInterceptor(grpc_middleware.ChainUnaryServer(unaryInterceptor(), grpcerrors.UnaryServerInterceptor))
stream := grpc.StreamInterceptor(grpc_middleware.ChainStreamServer(otelgrpc.StreamServerInterceptor(), grpcerrors.StreamServerInterceptor)) //nolint:staticcheck // TODO(thaJeztah): ignore SA1019 for deprecated options: see https://github.com/moby/moby/issues/47437
r := &grpcRouter{
h2Server: &http2.Server{},
grpcServer: grpc.NewServer(unary, stream),
}
for _, b := range backends {
b.RegisterGRPC(r.grpcServer)
}
r.initRoutes()
return r
}
// Routes returns the available routes to the session controller
func (gr *grpcRouter) Routes() []router.Route {
return gr.routes
}
func (gr *grpcRouter) initRoutes() {
gr.routes = []router.Route{
router.NewPostRoute("/grpc", gr.serveGRPC),
}
}
func unaryInterceptor() grpc.UnaryServerInterceptor {
withTrace := otelgrpc.UnaryServerInterceptor() //nolint:staticcheck // TODO(thaJeztah): ignore SA1019 for deprecated options: see https://github.com/moby/moby/issues/47437
return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp interface{}, err error) {
// This method is used by the clients to send their traces to buildkit so they can be included
// in the daemon trace and stored in the build history record. This method cannot be traced because
// it would cause an infinite loop.
if strings.HasSuffix(info.FullMethod, "opentelemetry.proto.collector.trace.v1.TraceService/Export") {
return handler(ctx, req)
}
return withTrace(ctx, req, info, handler)
}
}


@@ -1,4 +1,4 @@
package grpc
package grpc // import "github.com/docker/docker/api/server/router/grpc"
import (
"context"


@@ -0,0 +1,46 @@
package image // import "github.com/docker/docker/api/server/router/image"
import (
"context"
"io"
"github.com/distribution/reference"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/registry"
dockerimage "github.com/docker/docker/image"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
// Backend is all the methods that need to be implemented
// to provide image specific functionality.
type Backend interface {
imageBackend
importExportBackend
registryBackend
}
type imageBackend interface {
ImageDelete(ctx context.Context, imageRef string, force, prune bool) ([]image.DeleteResponse, error)
ImageHistory(ctx context.Context, imageName string) ([]*image.HistoryResponseItem, error)
Images(ctx context.Context, opts types.ImageListOptions) ([]*image.Summary, error)
GetImage(ctx context.Context, refOrID string, options image.GetImageOpts) (*dockerimage.Image, error)
TagImage(ctx context.Context, id dockerimage.ID, newRef reference.Named) error
ImagesPrune(ctx context.Context, pruneFilters filters.Args) (*types.ImagesPruneReport, error)
}
type importExportBackend interface {
LoadImage(ctx context.Context, inTar io.ReadCloser, outStream io.Writer, quiet bool) error
ImportImage(ctx context.Context, ref reference.Named, platform *ocispec.Platform, msg string, layerReader io.Reader, changes []string) (dockerimage.ID, error)
ExportImage(ctx context.Context, names []string, outStream io.Writer) error
}
type registryBackend interface {
PullImage(ctx context.Context, ref reference.Named, platform *ocispec.Platform, metaHeaders map[string][]string, authConfig *registry.AuthConfig, outStream io.Writer) error
PushImage(ctx context.Context, ref reference.Named, metaHeaders map[string][]string, authConfig *registry.AuthConfig, outStream io.Writer) error
}
type Searcher interface {
Search(ctx context.Context, searchFilters filters.Args, term string, limit int, authConfig *registry.AuthConfig, headers map[string][]string) ([]registry.SearchResult, error)
}
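
The Searcher interface is small enough to stub out. A minimal sketch of a test double of the kind a unit test for getImagesSearch might use; the type name and canned results are illustrative:

    package imageexample

    import (
        "context"

        "github.com/docker/docker/api/types/filters"
        "github.com/docker/docker/api/types/registry"
    )

    // fakeSearcher returns canned results regardless of the search term.
    type fakeSearcher struct {
        results []registry.SearchResult
    }

    func (f fakeSearcher) Search(ctx context.Context, searchFilters filters.Args, term string, limit int, authConfig *registry.AuthConfig, headers map[string][]string) ([]registry.SearchResult, error) {
        return f.results, nil
    }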


@@ -0,0 +1,57 @@
package image // import "github.com/docker/docker/api/server/router/image"
import (
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/image"
"github.com/docker/docker/layer"
"github.com/docker/docker/reference"
)
// imageRouter is a router to talk with the image controller
type imageRouter struct {
backend Backend
searcher Searcher
referenceBackend reference.Store
imageStore image.Store
layerStore layer.Store
routes []router.Route
}
// NewRouter initializes a new image router
func NewRouter(backend Backend, searcher Searcher, referenceBackend reference.Store, imageStore image.Store, layerStore layer.Store) router.Router {
ir := &imageRouter{
backend: backend,
searcher: searcher,
referenceBackend: referenceBackend,
imageStore: imageStore,
layerStore: layerStore,
}
ir.initRoutes()
return ir
}
// Routes returns the available routes to the image controller
func (ir *imageRouter) Routes() []router.Route {
return ir.routes
}
// initRoutes initializes the routes in the image router
func (ir *imageRouter) initRoutes() {
ir.routes = []router.Route{
// GET
router.NewGetRoute("/images/json", ir.getImagesJSON),
router.NewGetRoute("/images/search", ir.getImagesSearch),
router.NewGetRoute("/images/get", ir.getImagesGet),
router.NewGetRoute("/images/{name:.*}/get", ir.getImagesGet),
router.NewGetRoute("/images/{name:.*}/history", ir.getImagesHistory),
router.NewGetRoute("/images/{name:.*}/json", ir.getImagesByName),
// POST
router.NewPostRoute("/images/load", ir.postImagesLoad),
router.NewPostRoute("/images/create", ir.postImagesCreate),
router.NewPostRoute("/images/{name:.*}/push", ir.postImagesPush),
router.NewPostRoute("/images/{name:.*}/tag", ir.postImagesTag),
router.NewPostRoute("/images/prune", ir.postImagesPrune),
// DELETE
router.NewDeleteRoute("/images/{name:.*}", ir.deleteImages),
}
}
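
The {name:.*} pattern lets image references containing slashes and tags pass through as a single path variable. A minimal sketch of calling the plain GET /images/json route directly over the default Unix socket; the socket path and the unversioned URL are assumptions (real clients negotiate and prefix an API version):

    package imageexample

    import (
        "context"
        "io"
        "net"
        "net/http"
    )

    // listImagesRaw calls GET /images/json?all=1; the "all" flag is read by
    // getImagesJSON via httputils.BoolValue.
    func listImagesRaw(ctx context.Context) ([]byte, error) {
        httpc := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
                },
            },
        }
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://docker/images/json?all=1", nil)
        if err != nil {
            return nil, err
        }
        resp, err := httpc.Do(req)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }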


@@ -0,0 +1,534 @@
package image // import "github.com/docker/docker/api/server/router/image"
import (
"context"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/containerd/containerd/platforms"
"github.com/distribution/reference"
"github.com/docker/docker/api"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
opts "github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/builder/remotecontext"
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/image"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/progress"
"github.com/docker/docker/pkg/streamformatter"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
// postImagesCreate creates an image, either by pulling it from a registry
// ("fromImage") or by importing it from a URL or the request body ("fromSrc").
func (ir *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
var (
img = r.Form.Get("fromImage")
repo = r.Form.Get("repo")
tag = r.Form.Get("tag")
comment = r.Form.Get("message")
progressErr error
output = ioutils.NewWriteFlusher(w)
platform *ocispec.Platform
)
defer output.Close()
w.Header().Set("Content-Type", "application/json")
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.32") {
if p := r.FormValue("platform"); p != "" {
sp, err := platforms.Parse(p)
if err != nil {
return errdefs.InvalidParameter(err)
}
platform = &sp
}
}
if img != "" { // pull
metaHeaders := map[string][]string{}
for k, v := range r.Header {
if strings.HasPrefix(k, "X-Meta-") {
metaHeaders[k] = v
}
}
// Special case: "pull -a" may send an image name with a
// trailing :. This is ugly, but let's not break API
// compatibility.
image := strings.TrimSuffix(img, ":")
ref, err := reference.ParseNormalizedNamed(image)
if err != nil {
return errdefs.InvalidParameter(err)
}
// TODO(thaJeztah) this could use a WithTagOrDigest() utility
if tag != "" {
// The "tag" could actually be a digest.
var dgst digest.Digest
dgst, err = digest.Parse(tag)
if err == nil {
ref, err = reference.WithDigest(reference.TrimNamed(ref), dgst)
} else {
ref, err = reference.WithTag(ref, tag)
}
if err != nil {
return errdefs.InvalidParameter(err)
}
}
if err := validateRepoName(ref); err != nil {
return errdefs.Forbidden(err)
}
// For a pull it is not an error if no auth was given. Ignore invalid
// AuthConfig to increase compatibility with the existing API.
authConfig, _ := registry.DecodeAuthConfig(r.Header.Get(registry.AuthHeader))
progressErr = ir.backend.PullImage(ctx, ref, platform, metaHeaders, authConfig, output)
} else { // import
src := r.Form.Get("fromSrc")
tagRef, err := httputils.RepoTagReference(repo, tag)
if err != nil {
return errdefs.InvalidParameter(err)
}
if len(comment) == 0 {
comment = "Imported from " + src
}
var layerReader io.ReadCloser
defer r.Body.Close()
if src == "-" {
layerReader = r.Body
} else {
if len(strings.Split(src, "://")) == 1 {
src = "http://" + src
}
u, err := url.Parse(src)
if err != nil {
return errdefs.InvalidParameter(err)
}
resp, err := remotecontext.GetWithStatusError(u.String())
if err != nil {
return err
}
output.Write(streamformatter.FormatStatus("", "Downloading from %s", u))
progressOutput := streamformatter.NewJSONProgressOutput(output, true)
layerReader = progress.NewProgressReader(resp.Body, progressOutput, resp.ContentLength, "", "Importing")
defer layerReader.Close()
}
var id image.ID
id, progressErr = ir.backend.ImportImage(ctx, tagRef, platform, comment, layerReader, r.Form["changes"])
if progressErr == nil {
_, _ = output.Write(streamformatter.FormatStatus("", "%v", id.String()))
}
}
if progressErr != nil {
if !output.Flushed() {
return progressErr
}
_, _ = output.Write(streamformatter.FormatError(progressErr))
}
return nil
}
func (ir *imageRouter) postImagesPush(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
metaHeaders := map[string][]string{}
for k, v := range r.Header {
if strings.HasPrefix(k, "X-Meta-") {
metaHeaders[k] = v
}
}
if err := httputils.ParseForm(r); err != nil {
return err
}
var authConfig *registry.AuthConfig
if authEncoded := r.Header.Get(registry.AuthHeader); authEncoded != "" {
// the new format is to handle the authConfig as a header. Ignore invalid
// AuthConfig to increase compatibility with the existing API.
authConfig, _ = registry.DecodeAuthConfig(authEncoded)
} else {
// the old format is supported for compatibility if there was no authConfig header
var err error
authConfig, err = registry.DecodeAuthConfigBody(r.Body)
if err != nil {
return errors.Wrap(err, "bad parameters and missing X-Registry-Auth")
}
}
output := ioutils.NewWriteFlusher(w)
defer output.Close()
w.Header().Set("Content-Type", "application/json")
img := vars["name"]
tag := r.Form.Get("tag")
var ref reference.Named
// Tag is empty only when ImagePushOptions.All is true.
if tag != "" {
r, err := httputils.RepoTagReference(img, tag)
if err != nil {
return errdefs.InvalidParameter(err)
}
ref = r
} else {
r, err := reference.ParseNormalizedNamed(img)
if err != nil {
return errdefs.InvalidParameter(err)
}
ref = r
}
if err := ir.backend.PushImage(ctx, ref, metaHeaders, authConfig, output); err != nil {
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
}
return nil
}
func (ir *imageRouter) getImagesGet(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
w.Header().Set("Content-Type", "application/x-tar")
output := ioutils.NewWriteFlusher(w)
defer output.Close()
var names []string
if name, ok := vars["name"]; ok {
names = []string{name}
} else {
names = r.Form["names"]
}
if err := ir.backend.ExportImage(ctx, names, output); err != nil {
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
}
return nil
}
func (ir *imageRouter) postImagesLoad(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
quiet := httputils.BoolValueOrDefault(r, "quiet", true)
w.Header().Set("Content-Type", "application/json")
output := ioutils.NewWriteFlusher(w)
defer output.Close()
if err := ir.backend.LoadImage(ctx, r.Body, output, quiet); err != nil {
_, _ = output.Write(streamformatter.FormatError(err))
}
return nil
}
type missingImageError struct{}
func (missingImageError) Error() string {
return "image name cannot be blank"
}
func (missingImageError) InvalidParameter() {}
func (ir *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
name := vars["name"]
if strings.TrimSpace(name) == "" {
return missingImageError{}
}
force := httputils.BoolValue(r, "force")
prune := !httputils.BoolValue(r, "noprune")
list, err := ir.backend.ImageDelete(ctx, name, force, prune)
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, list)
}
func (ir *imageRouter) getImagesByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
img, err := ir.backend.GetImage(ctx, vars["name"], opts.GetImageOpts{Details: true})
if err != nil {
return err
}
imageInspect, err := ir.toImageInspect(img)
if err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.44") {
imageInspect.VirtualSize = imageInspect.Size //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.44.
if imageInspect.Created == "" {
// Backwards compatibility: when Created is not set, return "0001-01-01T00:00:00Z" as before.
// https://github.com/moby/moby/issues/47368
imageInspect.Created = time.Time{}.Format(time.RFC3339Nano)
}
}
return httputils.WriteJSON(w, http.StatusOK, imageInspect)
}
func (ir *imageRouter) toImageInspect(img *image.Image) (*types.ImageInspect, error) {
var repoTags, repoDigests []string
for _, ref := range img.Details.References {
switch ref.(type) {
case reference.NamedTagged:
repoTags = append(repoTags, reference.FamiliarString(ref))
case reference.Canonical:
repoDigests = append(repoDigests, reference.FamiliarString(ref))
}
}
comment := img.Comment
if len(comment) == 0 && len(img.History) > 0 {
comment = img.History[len(img.History)-1].Comment
}
// Make sure we output empty arrays instead of nil.
if repoTags == nil {
repoTags = []string{}
}
if repoDigests == nil {
repoDigests = []string{}
}
var created string
if img.Created != nil {
created = img.Created.Format(time.RFC3339Nano)
}
return &types.ImageInspect{
ID: img.ID().String(),
RepoTags: repoTags,
RepoDigests: repoDigests,
Parent: img.Parent.String(),
Comment: comment,
Created: created,
Container: img.Container, //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.45.
ContainerConfig: &img.ContainerConfig, //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.45.
DockerVersion: img.DockerVersion,
Author: img.Author,
Config: img.Config,
Architecture: img.Architecture,
Variant: img.Variant,
Os: img.OperatingSystem(),
OsVersion: img.OSVersion,
Size: img.Details.Size,
GraphDriver: types.GraphDriverData{
Name: img.Details.Driver,
Data: img.Details.Metadata,
},
RootFS: rootFSToAPIType(img.RootFS),
Metadata: opts.Metadata{
LastTagTime: img.Details.LastUpdated,
},
}, nil
}
func rootFSToAPIType(rootfs *image.RootFS) types.RootFS {
var layers []string
for _, l := range rootfs.DiffIDs {
layers = append(layers, l.String())
}
return types.RootFS{
Type: rootfs.Type,
Layers: layers,
}
}
func (ir *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
imageFilters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.41") {
// NOTE: filter is a shell glob string applied to repository names.
filterParam := r.Form.Get("filter")
if filterParam != "" {
imageFilters.Add("reference", filterParam)
}
}
var sharedSize bool
if versions.GreaterThanOrEqualTo(version, "1.42") {
// NOTE: Support for the "shared-size" parameter was added in API 1.42.
sharedSize = httputils.BoolValue(r, "shared-size")
}
images, err := ir.backend.Images(ctx, types.ImageListOptions{
All: httputils.BoolValue(r, "all"),
Filters: imageFilters,
SharedSize: sharedSize,
})
if err != nil {
return err
}
useNone := versions.LessThan(version, "1.43")
withVirtualSize := versions.LessThan(version, "1.44")
for _, img := range images {
if useNone {
if len(img.RepoTags) == 0 && len(img.RepoDigests) == 0 {
img.RepoTags = append(img.RepoTags, "<none>:<none>")
img.RepoDigests = append(img.RepoDigests, "<none>@<none>")
}
} else {
if img.RepoTags == nil {
img.RepoTags = []string{}
}
if img.RepoDigests == nil {
img.RepoDigests = []string{}
}
}
if withVirtualSize {
img.VirtualSize = img.Size //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.44.
}
}
return httputils.WriteJSON(w, http.StatusOK, images)
}
func (ir *imageRouter) getImagesHistory(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
history, err := ir.backend.ImageHistory(ctx, vars["name"])
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, history)
}
func (ir *imageRouter) postImagesTag(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
ref, err := httputils.RepoTagReference(r.Form.Get("repo"), r.Form.Get("tag"))
if ref == nil || err != nil {
return errdefs.InvalidParameter(err)
}
refName := reference.FamiliarName(ref)
if refName == string(digest.Canonical) {
return errdefs.InvalidParameter(errors.New("refusing to create an ambiguous tag using digest algorithm as name"))
}
img, err := ir.backend.GetImage(ctx, vars["name"], opts.GetImageOpts{})
if err != nil {
return errdefs.NotFound(err)
}
if err := ir.backend.TagImage(ctx, img.ID(), ref); err != nil {
return err
}
w.WriteHeader(http.StatusCreated)
return nil
}
func (ir *imageRouter) getImagesSearch(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
var limit int
if r.Form.Get("limit") != "" {
var err error
limit, err = strconv.Atoi(r.Form.Get("limit"))
if err != nil || limit < 0 {
return errdefs.InvalidParameter(errors.Wrap(err, "invalid limit specified"))
}
}
searchFilters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return err
}
// For a search it is not an error if no auth was given. Ignore invalid
// AuthConfig to increase compatibility with the existing API.
authConfig, _ := registry.DecodeAuthConfig(r.Header.Get(registry.AuthHeader))
headers := http.Header{}
for k, v := range r.Header {
k = http.CanonicalHeaderKey(k)
if strings.HasPrefix(k, "X-Meta-") {
headers[k] = v
}
}
headers.Set("User-Agent", dockerversion.DockerUserAgent(ctx))
res, err := ir.searcher.Search(ctx, searchFilters, r.Form.Get("term"), limit, authConfig, headers)
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, res)
}
func (ir *imageRouter) postImagesPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
pruneFilters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return err
}
pruneReport, err := ir.backend.ImagesPrune(ctx, pruneFilters)
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, pruneReport)
}
// validateRepoName validates the name of a repository.
func validateRepoName(name reference.Named) error {
familiarName := reference.FamiliarName(name)
if familiarName == api.NoBaseImageSpecifier {
return fmt.Errorf("'%s' is a reserved name", familiarName)
}
return nil
}
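
The tag-or-digest handling in postImagesCreate above (the TODO about a WithTagOrDigest utility) boils down to trying digest.Parse first. A minimal sketch of such a helper, using the same packages this file already imports; the function name is hypothetical:

    package imageexample

    import (
        "github.com/distribution/reference"
        "github.com/opencontainers/go-digest"
    )

    // refWithTagOrDigest pins ref by digest when tagOrDigest parses as one,
    // and otherwise treats it as a plain tag, mirroring postImagesCreate.
    func refWithTagOrDigest(ref reference.Named, tagOrDigest string) (reference.Named, error) {
        if dgst, err := digest.Parse(tagOrDigest); err == nil {
            return reference.WithDigest(reference.TrimNamed(ref), dgst)
        }
        return reference.WithTag(ref, tagOrDigest)
    }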


@@ -0,0 +1,73 @@
package router // import "github.com/docker/docker/api/server/router"
import (
"net/http"
"github.com/docker/docker/api/server/httputils"
)
// RouteWrapper wraps a route with extra functionality.
// It is passed in when creating a new route.
type RouteWrapper func(r Route) Route
// localRoute defines an individual API route to connect
// with the docker daemon. It implements Route.
type localRoute struct {
method string
path string
handler httputils.APIFunc
}
// Handler returns the APIFunc to let the server wrap it in middlewares.
func (l localRoute) Handler() httputils.APIFunc {
return l.handler
}
// Method returns the http method that the route responds to.
func (l localRoute) Method() string {
return l.method
}
// Path returns the subpath where the route responds to.
func (l localRoute) Path() string {
return l.path
}
// NewRoute initializes a new local route for the router.
func NewRoute(method, path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
var r Route = localRoute{method, path, handler}
for _, o := range opts {
r = o(r)
}
return r
}
// NewGetRoute initializes a new route with the http method GET.
func NewGetRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodGet, path, handler, opts...)
}
// NewPostRoute initializes a new route with the http method POST.
func NewPostRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodPost, path, handler, opts...)
}
// NewPutRoute initializes a new route with the http method PUT.
func NewPutRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodPut, path, handler, opts...)
}
// NewDeleteRoute initializes a new route with the http method DELETE.
func NewDeleteRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodDelete, path, handler, opts...)
}
// NewOptionsRoute initializes a new route with the http method OPTIONS.
func NewOptionsRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodOptions, path, handler, opts...)
}
// NewHeadRoute initializes a new route with the http method HEAD.
func NewHeadRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodHead, path, handler, opts...)
}
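
The variadic RouteWrapper options are the same extension point used by Experimental earlier in this diff. A minimal sketch of a custom wrapper that decorates a route's handler while leaving method and path untouched; the logging concern and names are hypothetical:

    package routerexample

    import (
        "context"
        "net/http"

        "github.com/docker/docker/api/server/httputils"
        "github.com/docker/docker/api/server/router"
    )

    type loggedRoute struct {
        router.Route
    }

    // Handler wraps the inner handler; Method and Path are promoted unchanged.
    func (l loggedRoute) Handler() httputils.APIFunc {
        inner := l.Route.Handler()
        return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
            // A real wrapper would log r.Method and r.URL here before delegating.
            return inner(ctx, w, r, vars)
        }
    }

    // Logged satisfies RouteWrapper, so it can be passed as an option:
    //   router.NewGetRoute("/ping", pingHandler, Logged)
    func Logged(r router.Route) router.Route { return loggedRoute{Route: r} }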


@@ -0,0 +1,31 @@
package network // import "github.com/docker/docker/api/server/router/network"
import (
"context"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/network"
)
// Backend is all the methods that need to be implemented
// to provide network specific functionality.
type Backend interface {
GetNetworks(filters.Args, backend.NetworkListConfig) ([]types.NetworkResource, error)
CreateNetwork(nc types.NetworkCreateRequest) (*types.NetworkCreateResponse, error)
ConnectContainerToNetwork(containerName, networkName string, endpointConfig *network.EndpointSettings) error
DisconnectContainerFromNetwork(containerName string, networkName string, force bool) error
DeleteNetwork(networkID string) error
NetworksPrune(ctx context.Context, pruneFilters filters.Args) (*types.NetworksPruneReport, error)
}
// ClusterBackend is all the methods that need to be implemented
// to provide cluster network specific functionality.
type ClusterBackend interface {
GetNetworks(filters.Args) ([]types.NetworkResource, error)
GetNetwork(name string) (types.NetworkResource, error)
GetNetworksByName(name string) ([]types.NetworkResource, error)
CreateNetwork(nc types.NetworkCreateRequest) (string, error)
RemoveNetwork(name string) error
}

Some files were not shown because too many files have changed in this diff.