Compare commits

...

201 Commits

Author SHA1 Message Date
Cory Snider
f415784c1a Merge pull request #51575 from smerkviladze/25.0-add-windows-integration-tests
[25.0 backport] integration: add Windows network driver and isolation tests
2025-11-26 18:55:24 -05:00
Sopho Merkviladze
4ef26e4c35 integration: add Windows network driver and isolation tests
Add integration tests for Windows container functionality focusing on network drivers and container isolation modes.

Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-11-26 13:24:13 +04:00
Sebastiaan van Stijn
2b409606ac Merge pull request #51344 from smerkviladze/50179-25.0-windows-gha-updates
[25.0 backport] gha: update to windows 2022 / 2025
2025-10-30 22:20:37 +01:00
Sebastiaan van Stijn
00fbff3423 integration/networking: increase context timeout for attach
The TestBridgeICCWindows test was failing on Windows due to a context timeout:

=== FAIL: github.com/docker/docker/integration/networking TestBridgeICCWindows/User_defined_nat_network (9.02s)
    bridge_test.go:243: assertion failed: error is not nil: Post "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.44/containers/62a4ed964f125e023cc298fde2d4d2f8f35415da970fd163b24e181b8c0c6654/start": context deadline exceeded
    panic.go:635: assertion failed: error is not nil: Error response from daemon: error while removing network: network mynat id 25066355c070294c1d8d596c204aa81f056cc32b3e12bf7c56ca9c5746a85b0c has active endpoints

=== FAIL: github.com/docker/docker/integration/networking TestBridgeICCWindows (17.65s)

Windows appears to be slower to start containers, so hitting these timeouts is not unexpected.
Increase the context timeout to give it a little more time.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0ea28fede0)
Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-30 21:58:10 +04:00
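A minimal sketch of the fix described above, assuming a hypothetical startContainer helper standing in for the container-start API call:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// startContainer stands in for the API call that was hitting the
// deadline; hypothetical, for illustration only.
func startContainer(ctx context.Context) error {
	select {
	case <-time.After(2 * time.Second): // simulated slow Windows start
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	// Widening the deadline gives slower platforms room to finish
	// before the context is cancelled.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	fmt.Println(startContainer(ctx))
}
```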
Sebastiaan van Stijn
92df858a5b integration-cli: TestCopyFromContainerPathIsNotDir: adjust for win 2025
It looks like the error returned by Windows changed in Windows 2025; before
Windows 2025, this produced an `ERROR_INVALID_NAME`:

    The filename, directory name, or volume label syntax is incorrect.

But Windows 2025 produces an `ERROR_DIRECTORY` ("The directory name is invalid."):

    CreateFile \\\\?\\Volume{d9f06b05-0405-418b-b3e5-4fede64f3cdc}\\windows\\system32\\drivers\\etc\\hosts\\: The directory name is invalid.

Docs: https://learn.microsoft.com/en-us/windows/win32/debug/system-error-codes--0-499-

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d3d20b9195)
Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-30 14:19:54 +04:00
Sebastiaan van Stijn
00f9f839c6 gha: run windows 2025 on PRs, 2022 scheduled
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9316396db0)
Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-30 13:37:45 +04:00
Sebastiaan van Stijn
acd2546285 gha: update to windows 2022 / 2025
The hosted Windows 2019 runners reach EOL on June 30:
https://github.com/actions/runner-images/issues/12045

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6f484d0d4c)
Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-30 13:07:40 +04:00
Sebastiaan van Stijn
d334795adb Merge pull request #51129 from smerkviladze/25.0-bump-swarmkit-to-v2.1.1
[25.0 backport] vendor: github.com/moby/swarmkit/v2 v2.1.1
2025-10-09 20:51:21 +02:00
Sopho Merkviladze
71967c3a82 vendor: github.com/moby/swarmkit/v2 v2.1.1
- Afford NetworkAllocator passing per-app state in Control API responses
- manager: restore NewServer to v2 signature
- api: afford passing params to OnGetNetwork hook
- Remove weak TLS cipher suites

full diff: https://github.com/moby/swarmkit/compare/v2.0.0...v2.1.1

Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
(cherry picked from commit ca9c5c6f7b)
Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-09 21:52:39 +04:00
Sebastiaan van Stijn
f06fd6d3c9 Merge pull request #51117 from austinvazquez/cherry-pick-fix-go-validation-checks-to-25.0
[25.0] Rework Go mod tidy/vendor checks
2025-10-07 23:09:20 +02:00
Sebastiaan van Stijn
ce61e5777b Merge pull request #51084 from smerkviladze/25.0-enable-strong-ciphers
[25.0] daemon: add support for DOCKER_DISABLE_WEAK_CIPHERS env-var to enforce strong TLS ciphers
2025-10-07 20:49:07 +02:00
Sopho Merkviladze
26d6c35b1b daemon: optionally enforce strong TLS ciphers via environment variable
Introduce the DOCKER_DISABLE_WEAK_CIPHERS environment variable to allow
disabling weak TLS ciphers. When set to true, the daemon restricts
TLS to a modern, secure subset of cipher suites, disabling known weak
ciphers such as CBC-mode ciphers.

This is intended as an edge-case option and is not exposed via a CLI flag or
config option. By default, weak ciphers remain enabled for backward compatibility.

Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-07 21:24:07 +04:00
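A minimal sketch of the opt-in behaviour described above; the cipher-suite list and helper name are illustrative, not the daemon's actual code:

```go
package main

import (
	"crypto/tls"
	"os"
	"strconv"
)

// tlsConfig returns a TLS config, restricted to AEAD-only cipher
// suites when DOCKER_DISABLE_WEAK_CIPHERS is set to true.
func tlsConfig() *tls.Config {
	cfg := &tls.Config{MinVersion: tls.VersionTLS12}
	if v, _ := strconv.ParseBool(os.Getenv("DOCKER_DISABLE_WEAK_CIPHERS")); v {
		cfg.CipherSuites = []uint16{
			// GCM suites only; CBC-mode suites are excluded.
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
		}
	}
	return cfg
}

func main() { _ = tlsConfig() }
```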
Sebastiaan van Stijn
a14b16e1f3 Merge pull request #51123 from crazy-max/25.0_ci-cache-fixes
[25.0] ci: update gha cache attributes
2025-10-07 19:19:53 +02:00
CrazyMax
3ea40f50ef ci: update gha cache attributes
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-10-07 12:42:46 +02:00
Austin Vazquez
7c47f6d831 Rework Go mod tidy/vendor checks
This change reworks the Go mod tidy/vendor checks to run for all Go modules tracked by the project and to fail on any uncommitted changes.

Signed-off-by: Austin Vazquez <austin.vazquez@docker.com>
(cherry picked from commit f6e1bf2808)
Signed-off-by: Austin Vazquez <austin.vazquez@docker.com>
2025-10-06 19:32:59 -05:00
Austin Vazquez
0847330073 Add existence check for go.mod and go.sum files
Signed-off-by: Austin Vazquez <austin.vazquez@docker.com>
(cherry picked from commit 0ad35e3ef0)
Signed-off-by: Austin Vazquez <austin.vazquez@docker.com>
2025-10-06 19:25:50 -05:00
Paweł Gronowski
b4c0ebf6d4 Merge pull request #50939 from vvoland/50936-25.0
[25.0 backport] Dockerfile.windows: remove deprecated 7Zip4Powershell
2025-09-09 19:35:21 +02:00
Paweł Gronowski
00f6814357 Dockerfile.windows: remove deprecated 7Zip4Powershell
`tar` utility is included in Windows 10 (17063+) and Windows Server
2019+ so we can use it directly.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 8c8324b37f)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2025-09-09 17:35:49 +02:00
Cory Snider
165516eb47 Merge pull request #50551 from corhere/backport-25.0/libn/all-the-overlay-fixes
[25.0] libnetwork/overlay: backport all the fixes
2025-08-11 16:52:52 -04:00
Cory Snider
f099e911bd libnetwork: handle coalesced endpoint events
The eventually-consistent nature of NetworkDB means we cannot depend on
events being received in the same order that they were sent. Nor can we
depend on receiving events for all intermediate states. It is possible
for a series of entry UPDATEs, or a DELETE followed by a CREATE with the
same key, to get coalesced into a single UPDATE event on the receiving
node. Watchers of NetworkDB tables therefore need to be prepared to
gracefully handle arbitrary UPDATEs of a key, including those where the
new value may have nothing in common with the previous value.

The libnetwork controller naively handled events for endpoint_table
assuming that an endpoint leave followed by a rejoin of the same
endpoint would always be expressed as a DELETE event followed by a
CREATE. It would handle a coalesced UPDATE as a CREATE, adding a new
service binding without removing the old one. This had various side
effects, such as the "transient state" of multiple conflicting service
bindings (more than one endpoint assigned an IP address) never
settling.

Modify the libnetwork controller to handle an UPDATE by removing the
previous service binding then adding the new one.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 4538a1de0a)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
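A minimal sketch of the delete-then-add handling described above; the event shape and binding table are illustrative, not the real libnetwork types:

```go
package main

import "fmt"

// WatchEvent follows the shape described in these commit messages:
// the operation is implied by which of Prev/Value are non-nil.
type WatchEvent struct {
	Key         string
	Prev, Value []byte
}

type bindings map[string]string

// handle treats an UPDATE as remove-old-then-add-new, so a coalesced
// DELETE+CREATE cannot leave two conflicting bindings behind.
func (b bindings) handle(ev WatchEvent) {
	if ev.Prev != nil {
		delete(b, ev.Key) // remove the previous service binding first
	}
	if ev.Value != nil {
		b[ev.Key] = string(ev.Value) // then add the new one
	}
}

func main() {
	b := bindings{}
	b.handle(WatchEvent{Key: "ep1", Value: []byte("10.0.0.2")})
	// A DELETE followed by a CREATE may arrive as a single UPDATE:
	b.handle(WatchEvent{Key: "ep1", Prev: []byte("10.0.0.2"), Value: []byte("10.0.0.9")})
	fmt.Println(b) // map[ep1:10.0.0.9]
}
```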
Cory Snider
bace1b8a3b libnetwork/d/overlay: handle coalesced peer updates
The eventually-consistent nature of NetworkDB means we cannot depend on
events being received in the same order that they were sent. Nor can we
depend on receiving events for all intermediate states. It is possible
for a series of entry UPDATEs, or a DELETE followed by a CREATE with the
same key, to get coalesced into a single UPDATE event on the receiving
node. Watchers of NetworkDB tables therefore need to be prepared to
gracefully handle arbitrary UPDATEs of a key, including those where the
new value may have nothing in common with the previous value.

The overlay driver naively handled events for overlay_peer_table
assuming that an endpoint leave followed by a rejoin of the same
endpoint would always be expressed as a DELETE event followed by a
CREATE. It would handle a coalesced UPDATE as a CREATE, inserting a new
entry into peerDB without removing the old one. This had various side
effects, such as the "transient state" of multiple peerDB entries with
the same peer IP never settling.

Update driverapi to pass both the previous and new value of a table
entry into the driver. Modify the overlay driver to handle an UPDATE by
removing the previous peer entry from peerDB then adding the new one.
Modify the Windows overlay driver to match.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit e1a586a9a7)

libn/d/overlay: don't deref nil PeerRecord on error

If unmarshaling the peer record fails, there is no need to check if it's
a record for a local peer. Attempting to do so anyway will result in a
nil-dereference panic. Don't do that.

The Windows overlay driver has a typo: prevPeer is being checked twice
for whether it was a local-peer record. Check prevPeer once and newPeer
once each, as intended.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 12c6345d3a)

Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
f9e54290b5 libn/d/win/overlay: dedupe NetworkDB definitions
Windows and Linux overlay driver instances are interoperable, working
from the same NetworkDB table for peer discovery. As both drivers
produce and consume serialized data through the table, they both need to
have a shared understanding of the shape and semantics of that data.
The Windows overlay driver contains a duplicate copy of the protobuf
definitions used for marshaling and unmarshaling the NetworkDB peer
entries for dubious reasons. It gives us the flexibility to have the
definitions diverge, which is only really useful for shooting ourselves
in the foot.

Make libnetwork/drivers/overlay the source of truth for the peer record
definitions and the name of the NetworkDB table for distributing peer
records.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 8340e109de)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
fc3df55230 libn/d/overlay: extract hashable address types
The macAddr and ipmac types are generally useful within libnetwork. Move
them to a dedicated package and overhaul the API to be more like that of
the net/netip package.

Update the overlay driver to utilize these types, adapting to the new
API.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit c7b93702b9)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
b22872af60 libnetwork/driverapi: make EventNotify optional
Overlay is the only driver which makes use of the EventNotify facility,
yet all other driver implementations are forced to provide a stub
implementation. Move the EventNotify and DecodeTableEntry methods into a
new optional TableWatcher interface and remove the stubs from all the
other drivers.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 844023f794)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
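A minimal sketch of the optional-interface pattern this commit applies; the interface and method names here are illustrative, not the exact driverapi:

```go
package main

import "fmt"

// Driver is the mandatory surface; TableWatcher is opt-in.
type Driver interface{ Type() string }

type TableWatcher interface {
	EventNotify(table, key string, prev, value []byte)
}

// plainDriver needs no EventNotify stub anymore.
type plainDriver struct{}

func (plainDriver) Type() string { return "plain" }

// notify delivers table events only to drivers that opt in.
func notify(d Driver, table, key string, prev, value []byte) {
	if w, ok := d.(TableWatcher); ok {
		w.EventNotify(table, key, prev, value)
	}
}

func main() {
	notify(plainDriver{}, "overlay_peer_table", "k", nil, []byte("v"))
	fmt.Println("drivers without TableWatcher are simply skipped")
}
```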
Cory Snider
c7e17ae65d libn/networkdb: report prev value in update events
When handling updates to existing entries, it is often necessary to know
what the previous value was. NetworkDB knows the previous and new values
when it broadcasts an update event for an entry. Include both values in
the update event so the watchers do not have to do their own parallel
bookkeeping.

Unify the event types under WatchEvent, as representing the operation
kind in the type system has proven inconvenient rather than useful. The
operation is now implied by the nilness of the Value and Prev event
fields.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 69c3c56eba)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
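A minimal sketch of how the operation kind falls out of the nilness of the fields, per the description above; the struct is illustrative:

```go
package main

import "fmt"

type WatchEvent struct {
	Table, Key  string
	Prev, Value []byte // Prev: previous value, Value: new value
}

func kind(ev WatchEvent) string {
	switch {
	case ev.Prev == nil && ev.Value != nil:
		return "create"
	case ev.Prev != nil && ev.Value != nil:
		return "update"
	case ev.Prev != nil && ev.Value == nil:
		return "delete"
	}
	return "invalid"
}

func main() {
	fmt.Println(kind(WatchEvent{Value: []byte("v")}))                    // create
	fmt.Println(kind(WatchEvent{Prev: []byte("u"), Value: []byte("v")})) // update
	fmt.Println(kind(WatchEvent{Prev: []byte("v")}))                     // delete
}
```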
Cory Snider
d60c71a9d7 libnetwork/d/overlay: fix logical race conditions
The concurrency control in the overlay driver is logically unsound.
While the use of mutexes is sufficient to prevent data races --
violations of the Go memory model -- many operations which need to be
atomic are performed with unbounded concurrency.

Overhaul the use of locks in the overlay network driver. Implement sound
locking at the network granularity: operations may proceed concurrently
iff they are being applied to distinct networks. Push the responsibility
of locking up to the code which calls methods or accesses struct fields
to avoid deadlock situations like we had previously with
d.initSandboxPeerDB() and to make the code easier to reason about.

Each overlay network has a distinct peer db. The NetworkDB watch for the
overlay peer table for the network will only start after
(*driver).CreateNetwork returns and will be stopped before libnetwork
calls (*driver).DeleteNetwork, therefore the lifetime of the peer db for
a network is constrained to the lifetime of the network itself. Yet the
peer db for a network is tracked in a dedicated map, separately from the
network objects themselves. This has resulted in a parallel set of
mutexes to manage concurrency of the peer db distinct from the mutexes
for the driver and networks. Move the peer db for a network into a field
of the network struct and guard it from concurrent access using the
per-network lock. Move the methods for manipulating the peer db into the
network struct so that the methods can only be called if the caller has
a reference to the network object.

Network creation and deletion are synchronized using the driver-scope
mutex, but some of the kernel programming is performed outside of the
critical section. It is possible for network deletion to race with
recreating the network, interleaving the kernel programming for the
network creation and deletion, resulting in inconsistent kernel state.
Parallelize network creation and deletion soundly. Use a double-checked
locking scheme to soundly handle the case of concurrent CreateNetwork
and DeleteNetwork for the same network id without blocking operations
on other networks. Synchronize operations on a network so that
operations on the network such as adding a neighbor to the peer db are
performed atomically, not interleaved with deleting the network.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 89d3419093)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
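A minimal sketch of the double-checked locking scheme described above, with illustrative names and the kernel programming elided:

```go
package main

import (
	"fmt"
	"sync"
)

type network struct {
	mu      sync.Mutex
	deleted bool
}

type driver struct {
	mu       sync.Mutex // guards the networks map only
	networks map[string]*network
}

func (d *driver) createNetwork(id string) error {
	// First check: look up or insert under the driver-scope lock.
	d.mu.Lock()
	n, ok := d.networks[id]
	if !ok {
		n = &network{}
		d.networks[id] = n
	}
	d.mu.Unlock()

	// Slow work happens under the per-network lock, so operations
	// on other networks proceed concurrently.
	n.mu.Lock()
	defer n.mu.Unlock()
	if n.deleted {
		// Second check: a concurrent DeleteNetwork won the race.
		return fmt.Errorf("network %s deleted concurrently", id)
	}
	// ... kernel programming would happen here, atomically ...
	return nil
}

func main() {
	d := &driver{networks: map[string]*network{}}
	fmt.Println(d.createNetwork("n1"))
}
```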
Cory Snider
ad54b8f9ce libn/d/overlay: fix encryption race conditions
There is a dedicated mutex for synchronizing access to the encrMap.
Separately, the main driver mutex is used for synchronizing access to
the encryption keys. Their use is sufficient to prevent data races (if
used correctly, which is not the case) but not logical race conditions.
Programming the encryption parameters for a peer can race with
encryption keys being updated, which could lead to inconsistencies
between the parameters programmed into the kernel and the desired state.

Introduce a new mutex for synchronizing encryption operations. Use that
mutex to synchronize access to both encrMap and keys. Handle encryption
key updates in a critical section so they can no longer be interleaved
with kernel programming of encryption parameters.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 843cd96725)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
8075689abd libn/d/overlay: inline secMapWalk into only caller
func (*driver) secMapWalk is a curious beast. It is named walk, yet it
also mutates the collection being iterated over. It returns an error,
but that error is always nil. It takes a callback that can break
iteration, yet the only caller makes no use of that affordance. Its
utility is limited and the abstraction hinders readability more than it
helps. Open-code the d.secMap.nodes loop into
func (*driver) updateKeys(), the only caller.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit a1d299749c)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
480dfaef06 libnetwork/d/overlay: un-embed mutexes
It is easier to find all references when they are struct fields rather
than embedded structs.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 74713e1a7d)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
e604d70e22 libnetwork/d/overlay: ref-count encryption params
The IPsec encryption parameters (Security Association Database and
Security Policy Database entries) for a particular overlay network peer
(VTEP) are shared global state as they have to be programmed into the
root network namespace. The same parameters are used when encrypting
VXLAN traffic to a particular VTEP for all overlay networks. Deleting
the entries for a VTEP will break encryption to that VTEP across all
encrypted overlay networks, therefore the decision of when to delete the
entries must take the state of all overlay networks into account.
Unfortunately this is not the case.

The overlay driver uses local per-network state to decide when to
program and delete the parameters for a VTEP. In practice, the
parameters for all VTEPs participating in an encrypted overlay network
are deleted when the network is deleted. Encryption to that VTEP over
all other active encrypted overlay networks would be broken until some
other incidental peerDB event triggered a re-programming of the
parameters for that VTEP.

Change the setupEncryption and removeEncryption functions to be
reference-counted. The removeEncryption function needs to be called the
same number of times as addEncryption before the parameters are deleted
from the kernel.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 057e35dd65)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
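A minimal sketch of the reference counting described above; the names and the printed stand-ins for kernel programming are illustrative:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type encrState struct {
	mu   sync.Mutex
	refs map[netip.Addr]int // per-VTEP reference counts
}

func (e *encrState) setup(vtep netip.Addr) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.refs[vtep]++
	if e.refs[vtep] == 1 {
		fmt.Println("program SA/SP entries for", vtep) // first user
	}
}

func (e *encrState) remove(vtep netip.Addr) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.refs[vtep]--
	if e.refs[vtep] == 0 {
		delete(e.refs, vtep)
		fmt.Println("delete SA/SP entries for", vtep) // last user gone
	}
}

func main() {
	e := &encrState{refs: map[netip.Addr]int{}}
	v := netip.MustParseAddr("192.0.2.1")
	e.setup(v)
	e.setup(v)  // a second encrypted network joins: no reprogramming
	e.remove(v) // still referenced: kernel state stays
	e.remove(v) // last reference dropped: entries deleted
}
```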
Sebastiaan van Stijn
b6b13b20af libnetwork/drivers/overlay: fix naked returns, output variables
libnetwork/drivers/overlay/encryption.go:370:2: naked return in func `programSA` with 64 lines of code (nakedret)
        return
        ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 02b4c7cc52)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
b539aea3cd libnetwork/d/overlay: properly model peer db
The overlay driver assumes that the peer table in NetworkDB will always
converge to a 1:1:1 mapping from peer endpoint IP address to MAC address
to VTEP. While this currently holds true in practice most of the time,
it is not an invariant and there are ways that users can violate this
assumption.

The driver detects whether peer entries conflict with each other by
matching up (IP, MAC) tuples. In the common case this works out fine as
the MAC address for an endpoint is generally derived from the assigned
IP address. If an IP address gets reassigned to a container on another
node the MAC address will follow, so the driver's conflict resolution
logic will behave as intended. However users may explicitly configure
the MAC address for a container's network endpoints. If an IP address
gets reassigned from a container with an auto-generated MAC address to a
container with a manually-configured MAC, or vice versa, the driver
would not detect the conflict as the (IP, MAC) tuples won't match up. It
would attempt to program the kernel's neighbor table with two
conflicting MAC addresses for one IP, which will fail. And since it
does not realize that there is a conflict, the driver won't reprogram
the kernel from the remaining entry when the other entry is deleted.

The assumption that only one IP address may resolve to a given MAC
address is violated if multiple IP addresses are assigned to an
endpoint. This rarely comes up in practice today as the overlay driver
only supports IPv4 single-stack connectivity for endpoints. If multiple
distinct peer entries exist with the same MAC address, the driver will
delete the MAC->VTEP mapping from the kernel's forwarding database when
any entry is deleted, even if other entries remain active. This
limitation is one of the biggest obstacles in the way of supporting IPv6
and dual-stack connectivity for endpoints attached to overlay networks.

Modify the peer db logic to correctly handle the cases where peer
entries have non-unique MAC or VTEP values. Treat any set of entries
with non-unique IP addresses as a conflict, irrespective of the entries'
MAC addresses. Maintain a reference count of forwarding database entries
and only delete the MAC->VTEP mapping from the kernel when there are no
longer any neighbor entries which resolve to that MAC.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 1c2b744ca2)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
e43e322a3b libnetwork/d/overlay: refactor peer db impl
The peer db implementation is more complex than it needs to be.
Notably, the peerCRUD / peerCRUDOp function split is a vestige of its
evolution from a worker goroutine receiving commands over a channel.

Refactor the peer db operations to be easier to read, understand and
modify. Factor the kernel-programming operations out into dedicated
addNeighbor and deleteNeighbor functions. Inline the rest of the
peerCRUDOp functions into their respective peerCRUD wrappers.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 59437f56f9)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
89ea2469df libnetwork/d/overlay: drop initEncryption function
The (*driver).Join function does many things to set up overlay
networking. One of the first things it does is call
(*network).joinSandbox, which in turn calls (*driver).initSandboxPeerDB.
The initSandboxPeerDB function iterates through the peer db to add
entries to the VXLAN FDB, neighbor table and IPsec security association
database in the kernel for all known peers on the overlay network.

One of the last things the (*driver).Join function does is call
(*driver).initEncryption. The initEncryption function iterates through
the peer db to add entries to the IPsec security association database in
the kernel for all known peers on the overlay network. But the preceding
initSandboxPeerDB call already did that! The initEncryption function is
redundant and can safely be removed.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit df6b405796)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
f69e64ab12 libnetwork/d/overlay: drop checkEncryption function
In addition to being three functions in a trenchcoat, the
checkEncryption function has a very subtle implementation which is
difficult to reason about. That is not a good property for security
relevant code to have.

Replace two of the three calls to checkEncryption with conditional calls
to setupEncryption and removeEncryption, lifting the conditional logic
which was hidden away in checkEncryption into the call sites to make it
easier to reason about the code. Replace the third call with a call to a
new initEncryption function.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 713f887698)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
67fbdf3c28 libnetwork/d/overlay: make setupEncryption a method
The setupEncryption and removeEncryption functions take several
parameters, but all call sites pass the same values for all the
parameters aside from remoteIP: values taken from fields of the driver
struct. Refactor these functions to be methods of the driver struct and
drop the redundant parameters.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit cb4e7b2f03)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
33a7e83e6d libnetwork/d/overlay: checkEncryption: drop isLocal param
Since it is not meaningful to add or remove encryption between the local
node and itself, the isLocal parameter is redundant. Setting up
encryption for all network peers is now invoked by calling

    checkEncryption(nid, netip.Addr{}, true)

Calling checkEncryption with isLocal=true, add=false is now more
explicitly a no-op. It always was effectively a no-op, but that was not
easy to spot by inspection. Previously, with the isLocal flag,
calls to checkEncryption where isLocal=true and add=false would have rIP
set to d.advertiseAddr. In other words, it was a request to remove
encryption parameters between the local peer and itself if peerDB had no
remote-peer entries for the network. So either the call would do
nothing, or it would remove encryption parameters that aren't used for
anything. Now the equivalent call always does nothing.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 0d893252ac)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
684b2688d2 libnetwork/d/overlay: peerdb: drop isLocal param
Drop the isLocal boolean parameters from the peerDB functions. Local
peers have vtep == netip.Addr{}.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 4b1c1236b9)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
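A minimal sketch of encoding locality in the VTEP value itself, as described above; the entry struct is illustrative:

```go
package main

import (
	"fmt"
	"net/netip"
)

type peerEntry struct {
	eip  netip.Addr
	vtep netip.Addr // zero value means "local peer"
}

// isLocal replaces the dropped boolean flag: the zero netip.Addr is
// not a valid address, so it can safely mark local peers.
func (p peerEntry) isLocal() bool { return !p.vtep.IsValid() }

func main() {
	local := peerEntry{eip: netip.MustParseAddr("10.0.0.2")}
	remote := peerEntry{
		eip:  netip.MustParseAddr("10.0.0.3"),
		vtep: netip.MustParseAddr("192.0.2.7"),
	}
	fmt.Println(local.isLocal(), remote.isLocal()) // true false
}
```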
Cory Snider
b61930cc82 libnetwork/d/overlay: elide vtep for local peers
The VTEP value for a peer in peerDB is only accurate for a remote peer.
The VTEP for a local peer would be the driver's advertise address, which
is not necessarily constant for the lifetime of the driver instance.
The VTEP values persisted in the peerDB entries for local peers could be
stale or missing if not kept in sync with the advertise address. And the
peerDB could get polluted with duplicate entries for local peers if the
advertise address was to change, as entries which differ only by VTEP
are considered distinct by SetMatrix. Persisting the advertise address
as the VTEP for local peers creates lots of problems that are not easy
to solve.

Stop persisting the VTEP for local peers in peerDB. Any code that needs
to know the VTEP for local peers can look that up from the source of
truth: the driver's advertise address. Use the lack of a VTEP in peerDB
entries to signify local peers, making the isLocal flag redundant.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 48e0b24ff7)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
1db0510301 libnetwork/d/overlay: filter local peers explicitly
The overlay driver's checkEncryption function configures the IPSec
parameters for the VXLAN tunnels to peer nodes. When called with
isLocal=true, it configures encryption for all peer nodes with at least
one peerDB entry. Since the local peers are also included in the peerDB,
it needs to filter those entries out. It does so by filtering out any
peer entries whose VTEP address is equal to the current local advertise
address. Trouble is, the local advertise address is not necessarily
constant. The driver tries to handle this case by calling
peerDBUpdateSelf() when the advertise address changes. This function
iterates through the peerDB and tries to update the VTEP address for all
local peer entries, but it does not actually do anything: it mutates a
temporary copy of the entry which is not persisted back into the peerDB.
(It used to be functional, but was broken when the peerDB was extended
to use SetMatrix.) So there may be cases where local peer entries are
not filtered out properly, resulting in spurious encryption parameters
being programmed into the kernel.

Filter out local peers when walking the peerDB by filtering on whether
the entry has the isLocal flag set. Remove the no-op code which attempts
to update local entries in the peerDB. No other code takes any interest
in the VTEP value for isLocal peer entries.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit a9e2d6d06e)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
9ff06c515c libn/d/overlay: use netip types more
The netip types are really useful for tracking state in the overlay
driver as they are hashable, unlike net.IP and friends, making them
directly useable as map keys. Converting between netip and net types is
fairly trivial, but fewer conversions is more ergonomic.

The NetworkDB entries for the overlay peer table encode the IP addresses
as strings. We need to parse them to some representation before
processing them further. Parse directly into netip types and pass those
values around to cut down on the number of conversions needed.

The peerDB needs to marshal the keys and entries to structs of hashable
values to be able to insert them into the SetMatrix. Use netip.Addr in
peerEntry so that peerEntry values can be directly inserted into the
SetMatrix without conversions. Use a hashable struct type as the
SetMatrix key to avoid having to marshal the whole struct to a string
and parse it back out.

Use netip.Addr as the map key for the driver's encryption map so the
values do not need to be converted to and from strings. Change the
encryption configuration methods to take netip types so the peerDB code
can pass netip values directly.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit d188df0039)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
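A minimal sketch of the two points made above: parse address strings once at the edge, and key maps directly by netip.Addr (which, unlike net.IP, is comparable):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// net.IP is a byte slice and cannot be a map key; netip.Addr
	// is hashable, so it can.
	encr := map[netip.Addr]bool{}

	// Table entries encode addresses as strings; parse them once
	// and pass netip values around from there.
	for _, s := range []string{"192.0.2.7", "192.0.2.9", "bogus"} {
		vtep, err := netip.ParseAddr(s)
		if err != nil {
			continue // skip malformed entries
		}
		encr[vtep] = true
	}
	fmt.Println(len(encr)) // 2
}
```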
Cory Snider
8f0a803fc6 libnetwork/internal/setmatrix: make keys generic
Make the SetMatrix key's type generic so that e.g. netip.Addr values can
be used as matrix keys.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 0317f773a6)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
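A minimal generic sketch in the spirit of this change; the API below is illustrative, not the real setmatrix package:

```go
package main

import (
	"fmt"
	"net/netip"
)

// SetMatrix maps comparable keys to sets of comparable values, so
// e.g. netip.Addr can be used as a key without string round-trips.
type SetMatrix[K, V comparable] struct {
	m map[K]map[V]struct{}
}

func New[K, V comparable]() *SetMatrix[K, V] {
	return &SetMatrix[K, V]{m: map[K]map[V]struct{}{}}
}

func (s *SetMatrix[K, V]) Insert(k K, v V) {
	if s.m[k] == nil {
		s.m[k] = map[V]struct{}{}
	}
	s.m[k][v] = struct{}{}
}

func (s *SetMatrix[K, V]) Cardinality(k K) int { return len(s.m[k]) }

func main() {
	sm := New[netip.Addr, string]()
	key := netip.MustParseAddr("10.0.0.2")
	sm.Insert(key, "peer-a")
	sm.Insert(key, "peer-a") // set semantics: no duplicate
	fmt.Println(sm.Cardinality(key)) // 1
}
```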
Cory Snider
7d8c7c21f2 libnetwork/osl: stop tracking neighbor entries
The Namespace keeps some state for each inserted neighbor-table entry
which is used to delete the entry (and any related entries) given only
the IP and MAC address of the entry to delete. This state is not
strictly required as the retained data is a pure function of the
parameters passed to AddNeighbor(), and the kernel can inform us whether
an attempt to add a neighbor entry would conflict with an existing
entry. Get rid of the neighbor state in Namespace. It's just one more
piece of state that can cause lots of grief if it falls out of sync with
ground truth. Require callers to call DeleteNeighbor() with the same
arguments as they had passed to AddNeighbor(). Push the responsibility
for detecting attempts to insert conflicting entries into the neighbor
table onto the kernel by using (*netlink.Handle).NeighAdd() instead of
NeighSet().

Modernize the error messages and logging in DeleteNeighbor() and
AddNeighbor().

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 0d6e7cd983)

libn/d/overlay: delete FDB entry from AF_BRIDGE

Starting with commit 0d6e7cd983
DeleteNeighbor() needs to be called with the same options as the
AddNeighbor() call that created the neighbor entry. The calls in peerdb
were modified incorrectly, resulting in the deletes failing and leaking
neighbor entries. Fix up the DeleteNeighbor calls so that the FDB entry
is deleted from the FDB instead of the neighbor table, and the neighbor
is deleted from the neighbor table instead of the FDB.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 7a12bbe5d3)

Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
9cd4021dae libnetwork/osl: remove superfluous locks in Namespace
The isDefault and nlHandle fields are immutable once the Namespace is
constructed.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 9866738736)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
4d6c4e44d7 libn/osl: refactor func (*Namespace) AddNeighbor
Scope local variables as narrowly as possible.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit b6d76eb572)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
e5b652add3 libn/osl: drop unused AddNeighbor force parameter
func (*Namespace) AddNeighbor is only ever called with the force
parameter set to false. Remove the parameter and eliminate dead code.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 3bdf99d127)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:25 -04:00
Cory Snider
ca41647695 libn/d/overlay: drop miss flags from peerAddOp
as all callers unconditionally set them to false.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit a8e8a4cdad)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:24 -04:00
Cory Snider
199b2496e7 libnetwork/d/overlay: drop miss flags from peerAdd
as all callers unconditionally set them to false.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 6ee58c2d29)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:24 -04:00
Cory Snider
65ec8c89a6 libn/d/overlay: drop obsolete writeToStore comment
The writeToStore() call was removed from CreateNetwork in
commit 0fa873c0fe. The comment about
undoing the write is no longer applicable.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit d90277372f)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-08-11 15:13:24 -04:00
Austin Vazquez
c447682dee Merge pull request #50693 from corhere/backport-25.0/fix-frozen
[25.0 backport] Fix download-frozen-image-v2
2025-08-11 12:11:09 -07:00
Paweł Gronowski
a749f055d9 download-frozen-image-v2: Use curl -L
Passing the Auth to the redirected location was fixed in curl 7.58:
https://curl.se/changes.html#7_58_0 so we no longer need the extra
handling and can just use `-L` to let curl handle redirects.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2025-08-11 13:40:18 -04:00
Paweł Gronowski
5a12eaf718 download-frozen-image-v2: handle 307 responses without decimal
Correctly parse HTTP response that doesn't contain an HTTP version with a decimal place:

```
< HTTP/2 307
```

The previous version would only match strings like `HTTP/2.0 307`.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2025-08-11 13:40:18 -04:00
Cory Snider
59f062b233 Merge pull request #50511 from corhere/backport-25.0/libn/all-the-networkdb-fixes
[25.0] libnetwork/networkdb: backport all the fixes
2025-08-07 11:44:08 -04:00
Paweł Gronowski
842a9c522a Merge commit from fork
[25.0] Restore INC rules on firewalld reload
2025-07-29 10:01:01 +00:00
Rob Murray
651b2feb27 Restore INC iptables rules on firewalld reload
Signed-off-by: Rob Murray <rob.murray@docker.com>
2025-07-28 12:06:58 -04:00
Cory Snider
a43c1eef18 Merge pull request #50445 from corhere/backport-25.0/fix-firewalld-reload
[25.0 backport] libnetwork/d/{bridge,overlay}: fix firewalld reload handling
2025-07-28 12:05:05 -04:00
Cory Snider
728de37428 libnetwork/networkdb: improve quality of randomness
The property test for the mRandomNodes function revealed that it may
sometimes pick out a sample of fewer than m nodes even when the number
of nodes to pick from (excluding the local node) is >= m. Rewrite it
using a random shuffle or permutation so that it always picks a
uniformly-distributed sample of the requested size whenever the
population is large enough.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit ac5f464649)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:34 -04:00
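A minimal sketch of sampling via a random permutation, as described above; names are illustrative:

```go
package main

import (
	"fmt"
	"math/rand"
)

// mRandomNodes returns a uniformly-distributed sample of up to m
// node IDs, excluding the local node. Iterating a permutation
// guarantees exactly m picks whenever enough nodes exist.
func mRandomNodes(m int, nodes []string, self string) []string {
	var out []string
	for _, i := range rand.Perm(len(nodes)) {
		if nodes[i] == self {
			continue // never gossip with ourselves
		}
		out = append(out, nodes[i])
		if len(out) == m {
			break
		}
	}
	return out
}

func main() {
	fmt.Println(mRandomNodes(2, []string{"a", "b", "c", "self"}, "self"))
}
```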
Cory Snider
5bf90ded7a libnetwork/networkdb: test quality of mRandomNodes
TestNetworkDBAlwaysConverges will occasionally find a failure where one
entry is missing on one node even after waiting a full five minutes. One
possible explanation is that the selection of nodes to gossip with is
biased in some way. Test that the mRandomNodes function picks a
uniformly distributed sample of node IDs of sufficient length.

The new test reveals that mRandomNodes may sometimes pick out a sample
of fewer than m nodes even when the number of nodes to pick from
(excluding the local node) is >= m. Put the test behind an xfail tag so
it is opt-in to run, without interfering with CI or bisecting.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 5799deb853)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:34 -04:00
Cory Snider
51d13163c5 libnetwork/networkdb: add convergence test
Add a property-based test which asserts that a cluster of NetworkDB
nodes always eventually converges to a consistent state. As this test
takes a long time to run it is build-tagged to be excluded from CI.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit d8730dc1d3)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:34 -04:00
Cory Snider
9ca52f5fb9 libnetwork/networkdb: log encryption keys to file
Add a feature to NetworkDB to log the encryption keys to a file for the
Wireshark memberlist plugin to consume, configured using an environment
variable.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit ebfafa1561)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
ec820662de libn/networkdb: stop forging tombstone entries
When a node leaves a network, all entries owned by that node are
implicitly deleted. The other NetworkDB nodes handle the leave by
setting the deleted flag on the entries owned by the left node in their
local stores. This behaviour is problematic as it results in two
conflicting entries with the same Lamport timestamp propagating
through the cluster.

Consider two NetworkDB nodes, A, and B, which are both joined to some
network. Node A in quick succession leaves the network, immediately
rejoins it, then creates an entry. If Node B processes the
entry-creation event first, it will add the entry to its local store
then set the deleted flag upon processing the network-leave. No matter
how many times B bulk-syncs with A, B will ignore the live entry for
having the same timestamp as its local tombstone entry. Once this
situation occurs, the only way to recover is for the entry to get
updated by A with a new timestamp.

There is no need for a node to store forged tombstones for another
node's entries. All nodes will purge the entries naturally when they
process the network-leave or node-leave event. Simply delete the
non-owned entries from the local store so there is no inconsistent state
to interfere with convergence when nodes rejoin a network. Have nodes
update their local store with tombstones for entries when leaving a
network so that after a rapid leave-then-rejoin the entry deletions
propagate to nodes which may have missed the leave event.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 21d9109750)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
f3f1e091a8 libnetwork/networkdb: fix broadcast queue deadlocks
NetworkDB's JoinNetwork function enqueues a message onto a
TransmitLimitedQueue while holding the NetworkDB mutex locked for
writing. The TransmitLimitedQueue has its own synchronization;
it locks its mutex when enqueueing a message. Locking order:
  1. (NetworkDB).RWMutex.Lock()
  2. (TransmitLimitedQueue).mu.Lock()

NetworkDB's gossip periodic task calls GetBroadcasts on the same
TransmitLimitedQueue to retrieve the enqueued messages. GetBroadcasts
invokes the queue's NumNodes callback while the mutex is locked. The
NumNodes callback function that NetworkDB sets locks the NetworkDB mutex
for reading to take the length of the nodes map. Locking order:
  1. (TransmitLimitedQueue).mu.Lock()
  2. (NetworkDB).RWMutex.RLock()

If one goroutine calls GetBroadcasts on the queue concurrently with
another goroutine calling JoinNetwork on the NetworkDB, the goroutines
may deadlock due to the lock inversion.

Fix the deadlock by caching the number of nodes in an atomic variable so
that the NumNodes callback can load the value without blocking or
violating Go's memory model. And fix a similar deadlock situation with
the table-event broadcast queues.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 08bde5edfa)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
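A minimal sketch of the atomic node-count cache described above; struct and method names are illustrative:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type db struct {
	mu       sync.Mutex // guards nodes
	nodes    map[string]struct{}
	numNodes atomic.Int32 // mirror of len(nodes)
}

func (d *db) addNode(id string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.nodes[id] = struct{}{}
	d.numNodes.Store(int32(len(d.nodes)))
}

// NumNodes is safe to call from the broadcast queue while the
// queue's own mutex is held: it takes no NetworkDB lock, so the
// lock-order inversion cannot occur.
func (d *db) NumNodes() int { return int(d.numNodes.Load()) }

func main() {
	d := &db{nodes: map[string]struct{}{}}
	d.addNode("a")
	d.addNode("b")
	fmt.Println(d.NumNodes()) // 2
}
```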
Cory Snider
16dc168388 libn/networkdb: make TestNetworkDBIslands not flaky
With rejoinClusterBootStrap fixed in tests, split clusters should
reliably self-heal in tests as well as production. Work around the other
source of flakiness in TestNetworkDBIslands: timing out waiting for a
failed node to transition to gracefully left. This flake happens when
one of the leaving nodes sends its NodeLeft message to the other leaving
node, and the second is shut down before it has a chance to rebroadcast
the message to the remaining nodes. The proper fix would be to leverage
memberlist's own bookkeeping instead of duplicating it poorly with user
messages, but doing so requires a change in the memberlist module.
Instead, have the test check that the sum of failed+left nodes matches
expectations, rather than waiting for all nodes to reach failed==3 &&
left==0.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit aff444df86)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
12aaf29287 libn/networkdb: prevent spurious rejoins in tests
The rejoinClusterBootStrap periodic task rejoins with the bootstrap
nodes if none of them are members of the cluster. It correlates the
cluster nodes with the bootstrap list by comparing IP addresses,
ignoring ports. In normal operation this works out fine as every node
has a unique IP address, but in unit tests every node listens on a
distinct port of 127.0.0.1. This situation causes the check to
incorrectly filter out all nodes from the list, mistaking them for the
local node.

Filter out the local node using pointer equality of the *node to avoid
any ambiguity. Correlate the remote nodes by IP:port so that the check
behaves the same in tests and in production.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 1e1be54d3e)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
ca5250dc9f libnetwork/networkdb: prioritize local broadcasts
A network node is responsible for both broadcasting table events for
entries it owns and for rebroadcasting table events from other nodes it
has received. Table events to be broadcast are added to a single queue
per network, including events for rebroadcasting. As the memberlist
TransmitLimitedQueue is (to a first approximation) LIFO, a flood of
events from other nodes could delay the broadcasting of
locally-generated events indefinitely. Prioritize broadcasting local
events by splitting up the queues and only pulling from the rebroadcast
queue if there is free space in the gossip packet after draining the
local-broadcast queue.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 6ec6e0991a)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
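A minimal sketch of the two-queue draining order described above; real memberlist packing is more involved, and all names here are illustrative:

```go
package main

import "fmt"

// getBroadcasts fills the gossip packet from the local queue first,
// then from the rebroadcast queue if space remains.
func getBroadcasts(local, rebroadcast [][]byte, limit int) [][]byte {
	var out [][]byte
	used := 0
	for _, q := range [][][]byte{local, rebroadcast} {
		for _, msg := range q {
			if used+len(msg) > limit {
				return out
			}
			out = append(out, msg)
			used += len(msg)
		}
	}
	return out
}

func main() {
	out := getBroadcasts(
		[][]byte{[]byte("local-event")},
		[][]byte{[]byte("rebroadcast-1"), []byte("rebroadcast-2")},
		24,
	)
	// The locally-generated event always goes out first.
	for _, m := range out {
		fmt.Println(string(m))
	}
}
```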
Cory Snider
c912e5278b libnetwork/networkdb: improve TestCRUDTableEntries
Log more details when assertions fail to provide a more complete picture
of what went wrong when TestCRUDTableEntries fails. Log the state of
each NetworkDB instance at various points in TestCRUDTableEntries to
provide an even more complete picture.

Increase the global logger verbosity in tests so warnings and debug logs
are printed to the test log.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit e9a7154909)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
e9ed499888 libn/networkdb: use distinct type for own networks
NetworkDB uses a multi-dimensional map of struct network to keep track of
network attachments for both remote nodes and the local node. Only a
subset of the struct fields are used for remote nodes' network
attachments. The tableBroadcasts pointer field in particular is
always initialized for network values representing local attachments
(read: nDB.networks[nDB.config.NodeID]) and always nil for remote
attachments. Consequently, unnecessary defensive nil-pointer checks are
peppered throughout the code despite the aforementioned invariant.

Enshrine the invariant that tableBroadcasts is initialized iff the
network attachment is for the local node in the type system. Pare down
struct network to only the fields needed for remote network attachments
and move the local-only fields into a new struct thisNodeNetwork. Elide
the unnecessary nil-checks.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit dbb0d88109)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
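A minimal sketch of enshrining the invariant in the type system, per the description above; the struct shapes are illustrative:

```go
package main

import "fmt"

// network carries only what remote attachments need.
type network struct {
	entriesNumber int
}

type broadcastQueue struct{ pending int }

// thisNodeNetwork is used only for the local node's attachments and
// always carries an initialized queue, so no nil-checks are needed.
type thisNodeNetwork struct {
	network
	tableBroadcasts *broadcastQueue
}

func newThisNodeNetwork() *thisNodeNetwork {
	return &thisNodeNetwork{tableBroadcasts: &broadcastQueue{}}
}

func main() {
	n := newThisNodeNetwork()
	n.tableBroadcasts.pending++ // non-nil by construction
	fmt.Println(n.tableBroadcasts.pending)
}
```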
Cory Snider
6856a17655 libnetwork/networkdb: don't clear queue on rejoin
When joining a network that was previously joined but not yet reaped,
NetworkDB replaces the network struct value with a zeroed-out one with
the entries count copied over. This is also the case when joining a
network that is currently joined! Consequently, joining a network has
the side effect of clearing the broadcast queue. If the queue is cleared
while messages are still pending broadcast, convergence may be delayed
until the next bulk sync cycle.

Make it an error to join a network twice without leaving. Retain the
existing broadcast queue when rejoining a network that has not yet been
reaped.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 51f31826ee)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
31f4c5914e libnetwork/networkdb: drop id field from network
The map key for nDB.networks is the network ID. The struct field is not
actually used anywhere in practice.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 30b27ab6ea)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
35f7b1d7c9 libn/networkdb: take most tests off flaky list
The loopback-test fixes seem to be sufficient to resolve the flakiness
of all the tests aside from TestFlakyNetworkDBIslands.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 697c17ca95)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Cory Snider
743a0df9ec libnetwork/networkdb: always shut down memberlist
Gracefully leaving the memberlist cluster is a best-effort operation.
Failing to successfully broadcast the leave message to a peer should not
prevent NetworkDB from cleaning up the memberlist instance on close. But
that was not the case in practice. Log the error returned from
(*memberlist.Memberlist).Leave instead of returning it and proceed with
shutting down irrespective of whether Leave() returns an error.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 16ed51d864)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 16:20:33 -04:00
Matthieu MOREL
bacba3726f fix redefines-builtin-id from revive
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-07-25 16:20:29 -04:00
Andrey Epifanov
f93d90cee3 overlay: Reload Ingress iptables rules in swarm mode
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit a1f68bf5a6)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:17:16 -04:00
Andrey Epifanov
00232ac981 libnetwork: split programIngress() and dependent functions on Add and Del functions
- refactor programIngressPorts to use Rule.Insert/Append/Delete for improved rule management
- split programIngress() and dependent functions on Add and Del functions

Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 8b208f1b95)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:17:16 -04:00
Andrey Epifanov
88d0ed889d libnetwork: refactor ingress chain management for improved rule handling and initialization
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 50e6f4c4cb)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:17:16 -04:00
Andrey Epifanov
32c814a85f libnetwork: add FlushChain methods for improved iptables management
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 4f0485e45f)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 15:17:16 -04:00
Andrey Epifanov
fb8e5d85f6 libnetwork: refactor rule management to use Ensure method for Append and Insert operations
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 262c32565b)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Andrey Epifanov
fb6695de75 libnetwork: refactor iptable functions to include table parameter for improved rule management
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 19a8083866)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Andrey Epifanov
089d70f3c8 libnetwork: extract plumpIngressProxy steps into a separate function
- Extract plumpIngressProxy steps into a separate function
- Don't create a new listener if there's already one in ingressProxyTbl

Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit c2e2e7fe24)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Andrey Epifanov
2710c239df libnetwork: extract programIngressPorts steps into separate functions
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 51ed289b06)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Andrey Epifanov
7982904677 libnetwork: extract creation/initiation of INGRESS-DOCKER chains into a separate function
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 752758ae77)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Rob Murray
fbffa88b76 Restore legacy links along with other iptables rules
On firewalld reload, all the iptables rules are deleted. Legacy
links use iptables.OnReloaded to restore their rules - but there's
no way to delete an OnReloaded callback. So, a firewalld reload
after the linked containers are deleted results in zombie rules
being re-created.

Legacy links are created by ProgramExternalConnectivity, but
removed in Leave (rather than RevokeExternalConnectivity).

So, restore legacy links for current endpoints, along with the
other per-network/per-port rules.

Move link-removal to RevokeExternalConnectivity, so that it
happens with the configNetwork lock held.

Signed-off-by: Rob Murray <rob.murray@docker.com>
2025-07-25 15:15:50 -04:00
Rob Murray
41f080df25 Restore iptables for current networks on firewalld reload
Using iptables.OnReloaded to restore individual per-network rules
on firewalld reload means rules for deleted networks pop back in
to existence (because there was no way to delete the callbacks on
network-delete).

So, on firewalld reload, walk over current networks and ask them
to restore their iptables rules.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit a527e5a546)

Test that firewalld reload doesn't re-create deleted iptables rules

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit c3fa7c1779)

Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Rob Murray
c64e8a8117 Add test util "WithPortMap"
Backport the WithPortMap() function through a partial cherry-pick.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 20c99e4156)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Albin Kerouanton
0316eaaa23 Install and run firewalld for CI's firewalld tests
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit 8883db20c5)
Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit adfed82ab8)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 15:15:50 -04:00
Rob Murray
270166cbe5 Fix TestPassthrough
Doesn't look like it would ever have worked, but:
- init the dbus connection to avoid a segv
- include the chain name when creating the rule
- remove the test rule if it's created

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 0ab6f07c31)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-25 15:15:50 -04:00
Rob Murray
a012739c2c Add test util "FirewalldRunning"
Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit b8cacdf324)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-07-25 14:46:24 -04:00
Matthieu MOREL
e53cf6bc02 fix(ST1016): Use consistent method receiver names
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 70139978d3)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-07-24 14:01:41 -04:00
Sebastiaan van Stijn
4c5a99d08c Merge pull request #50230 from corhere/backport-25.0/libn/fix-networkdb-dns-update-delete
[25.0 backport] libnetwork: fix flaky Swarm service DNS
2025-06-19 10:30:26 +02:00
Cory Snider
f2126bfc7f libnetwork: fix flaky Swarm service DNS
When libnetwork receives a watch event for a driver table entry from
NetworkDB it passes the event along to the interested driver. This code
contains a subtle bug: update events from NetworkDB are passed along to
the driver as Delete events! This bug was lying dormant as driver-table
entries can only be added by the driver, not updated. Now that NetworkDB
broadcasts an UpdateEvent to watchers if the entry is already known to
the local NetworkDB, irrespective of whether the event received from the
remote peer was a CREATE or UPDATE event, the bug is causing problems.
Whenever a remote node replaces an entry in the overlay_peer_table but
the intermediate delete state was not received by the local node, the
new CREATE event would be translated to an UpdateEvent by NetworkDB and
subsequently handled by the overlay driver as if the entry was deleted!

Bubble table UPDATE events up to the network driver as Update events.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit a7f01d238e)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-06-18 19:20:40 -04:00
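A minimal sketch of the fixed dispatch described above: update events now reach the driver as updates instead of falling into the delete path. Names are illustrative:

```go
package main

import "fmt"

type op int

const (
	opCreate op = iota
	opUpdate
	opDelete
)

// dispatch routes a NetworkDB table event to the driver callback.
func dispatch(o op) string {
	switch o {
	case opCreate:
		return "driver handles Create"
	case opUpdate:
		// The bug: this case used to fall through to Delete.
		return "driver handles Update"
	default:
		return "driver handles Delete"
	}
}

func main() {
	fmt.Println(dispatch(opUpdate))
}
```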
Sebastiaan van Stijn
95aff4f75c Merge pull request #50203 from corhere/backport-25.0/cycle-free-swarmkit
[25.0] vendor: github.com/moby/swarmkit/v2 v2.0.0
2025-06-17 11:30:15 +02:00
Cory Snider
4d168615cc vendor: github.com/moby/swarmkit/v2 v2.0.0
- add Unwrap error to custom error types
- removes dependency on github.com/rexray/gocsi
- fix CSI plugin load issue
- add ALPN next protos configuration to gRPC server
- fix task scheduler infinite loop

full diff: https://github.com/moby/swarmkit/compare/911c97650f2e...v2.0.0

Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-06-16 18:49:51 -04:00
Cory Snider
614ecc8201 hack: block imports of vendored testify packages
While github.com/stretchr/testify is not used directly by any of the
repository code, it is a transitive dependency via Swarmkit and
therefore still easy to use without having to revendor. Add lint rules
to ban importing testify packages to make sure nobody does.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 7ebd88d2d9)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-06-16 18:49:51 -04:00
Cory Snider
26a318189b libn/cnmallocator: migrate tests to gotest.tools/v3
Apply command gotest.tools/v3/assert/cmd/gty-migrate-from-testify to the
cnmallocator package to be consistent with the assertion library used
elsewhere in moby.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry-picked from commit 4f30a930ad)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-06-16 18:49:51 -04:00
Cory Snider
4e2e8fe181 Vendor dependency cycle-free swarmkit
Moby imports Swarmkit; Swarmkit no longer imports Moby. In order to
accomplish this feat, Swarmkit has introduced a new plugin.Getter
interface so it could stop importing our pkg/plugingetter package. This
new interface is not entirely compatible with our
plugingetter.PluginGetter interface, necessitating a thin adapter.

Swarmkit had to jettison the CNM network allocator to stop having to
import libnetwork as the cnmallocator package is deeply tied to
libnetwork. Move the CNM network allocator into libnetwork, where it
belongs. The package had a short and uninteresting Git history in the
Swarmkit repository, so no effort was made to retain history.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 7b0ab1011c)

d/cluster/convert: expose Addr() on plugins

The swarmPlugin type does not implement the Swarm plugin.AddrPlugin
interface because it embeds an interface value which does not include
that method in its method set. (You can type-assert an interface value
to another interface which the concrete type implements, but a struct
embedding an interface value is not itself an interface value.) Wrap the
plugin with a different adapter type which exposes the Addr() method if
the concrete plugin implements it.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 8b6d6b9ad5)

libnetwork/cnmallocator: fix non-constant format string in call (govet)

    libnetwork/cnmallocator/drivers_ipam.go:43:31: printf: non-constant format string in call to (*github.com/docker/docker/vendor/github.com/sirupsen/logrus.Entry).Infof (govet)
            log.G(context.TODO()).Infof("Swarm initialized global default address pool to: " + str.String())
                                        ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7b60a7047d)

Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-06-16 18:49:33 -04:00
Austin Vazquez
4f2d0e656b Merge pull request #50056 from thaJeztah/25.0_fluentd_migrate
[25.0] daemon: restore: migrate deprecated fluentd-async-connect
2025-06-12 11:02:37 -07:00
Sebastiaan van Stijn
8b44d5e80a [25.0] daemon: restore: migrate deprecated fluentd-async-connect
The "fluentd-async-connect" option was deprecated in 20.10 through
cc1f3c750e, and removed in 28.0 through
49ec488036, which added migration code
on daemon startup.

This patch ports the migration code to the 25.0 branch to prevent future
disruption when upgrading existing containers to a new version of the
daemon.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-05-23 14:54:39 +02:00
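
A minimal sketch of the migration described above, assuming the documented mapping from the removed `fluentd-async-connect` key to its `fluentd-async` replacement; the helper name and map layout are illustrative, not the daemon's actual restore code:

```go
// Sketch: rewrite deprecated fluentd log options in a container's stored
// config on daemon startup, keeping an explicit "fluentd-async" if set.
package main

import "fmt"

func migrateFluentdOpts(opts map[string]string) {
	if v, ok := opts["fluentd-async-connect"]; ok {
		if _, exists := opts["fluentd-async"]; !exists {
			opts["fluentd-async"] = v // carry the old value forward
		}
		delete(opts, "fluentd-async-connect") // drop the removed key
	}
}

func main() {
	opts := map[string]string{"fluentd-async-connect": "true"}
	migrateFluentdOpts(opts)
	fmt.Println(opts) // map[fluentd-async:true]
}
```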
Cory Snider
4749b46391 Merge pull request #50053 from aepifanov/dev/go1.23.9/25.0
[25.0] Update to go1.23.9
2025-05-22 18:05:54 -04:00
Andrey Epifanov
3acb76ef2f update to go1.23.9
https://github.com/golang/go/issues?q=milestone%3AGo1.23.9
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
(cherry picked from commit 40b0fcd12f)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/test.yml
#	Dockerfile.e2e
#	Dockerfile.windows
#	vendor.mod
2025-05-22 11:45:04 -07:00
Cory Snider
d4f1fb1db2 Merge pull request #50005 from dperny/25.0-backport-network-fixes
[25.0] Backport network fixes
2025-05-15 18:46:42 -04:00
Cory Snider
b7346c5fb5 libn/networkdb: stop table events from racing network leaves
When a node leaves a network or the cluster, or memberlist considers the
node as failed, NetworkDB atomically deletes all table entries (for the
left network) owned by the node. This maintains the invariant that table
entries owned by a node are present in the local database indices iff
that node is an active cluster member which is participating in the
network the entries pertain to.

(*NetworkDB).handleTableEvent() is written in a way which attempts to
minimize the amount of time it is in a critical section with the mutex
locked for writing. It first checks under a read-lock whether both the
local node and the node where the event originated are participating in
the network which the event pertains to. If the check passes, the mutex
is unlocked for reading and locked for writing so the local database
state is mutated in a critical section. That leaves a window of time
between the participation check the write-lock being acquired for a
network or node event to arrive and be processed. If a table event for a
node+network races a node or network event which triggers the purge of
all table entries for the same node+network, the invariant could be
violated. The table entry described by the table event may be reinserted
into the local database state after being purged by the node's leaving,
resulting in an orphaned table entry which the local node will bulk-sync
to other nodes indefinitely.

It's not completely wrong to perform a pre-flight check outside of the
critical section. It allows for an early return in the no-op case
without having to bear the cost of synchronization. But such optimistic
concurrency control is only sound if the condition is double-checked
inside the critical section. It is tricky to get right, and this
instance of optimistic concurrency control smells like a case of
premature optimization. Move the pre-flight check into the critical
section to ensure that the invariant is maintained.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 270a4d41dc)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-05-15 15:37:49 -04:00
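
A minimal sketch of the locking change described above, with illustrative types: the cheap pre-flight check may stay outside the critical section, but it must be repeated after the write lock is taken, because the condition can change in the window between the read unlock and the write lock:

```go
// Sketch of double-checked optimistic concurrency control.
package main

import "sync"

type db struct {
	mu           sync.RWMutex
	participants map[string]bool // networkID -> node participates
	entries      map[string]string
}

func (d *db) handleTableEvent(networkID, key, value string) {
	d.mu.RLock()
	ok := d.participants[networkID]
	d.mu.RUnlock()
	if !ok {
		return // cheap early return in the common no-op case
	}

	d.mu.Lock()
	defer d.mu.Unlock()
	// Double-check: a network/node leave may have raced us and purged state.
	if !d.participants[networkID] {
		return
	}
	d.entries[networkID+"/"+key] = value
}

func main() {
	d := &db{participants: map[string]bool{"net1": true}, entries: map[string]string{}}
	d.handleTableEvent("net1", "k", "v")
}
```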
Cory Snider
545e84c7ff libn/networkdb: Watch() without race conditions
NetworkDB's Watch() facility is problematic to use in practice. The
stream of events begins when the watch is started, so the watch cannot
be used to process table entries that existed beforehand. Either option
to process existing table entries is racy: walking the table before
starting the watch leaves a race window where events could be missed,
and walking the table after starting the watch leaves a race window
where created/updated entries could be processed twice.

Modify Watch() to initialize the channel with synthetic CREATE events
for all existing entries owned by remote nodes before hooking it up to
the live event stream. This way watchers observe an equivalent sequence
of events irrespective of whether the watch was started before or after
entries from remote nodes are added to the database. Remove the bespoke
and racy synthetic event replay logic for driver watches from the
libnetwork agent.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit a3aea15257)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-05-15 15:37:48 -04:00
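
A sketch (with invented types) of the Watch() semantics described above: the returned channel is primed with synthetic CREATE events for entries that already exist, and only then connected to the live stream, so a watcher observes the same sequence whether it starts before or after the entries arrive:

```go
package main

import "fmt"

type event struct {
	kind  string // "create", "update", "delete"
	key   string
	value string
}

type table struct {
	existing map[string]string
	live     chan event
}

func (t *table) Watch() <-chan event {
	out := make(chan event, len(t.existing))
	// Replay current state as synthetic CREATEs before streaming live events.
	for k, v := range t.existing {
		out <- event{kind: "create", key: k, value: v}
	}
	go func() {
		defer close(out)
		for ev := range t.live {
			out <- ev
		}
	}()
	return out
}

func main() {
	t := &table{existing: map[string]string{"ep1": "10.0.0.2"}, live: make(chan event)}
	close(t.live) // no live events in this toy example
	for ev := range t.Watch() {
		fmt.Println(ev.kind, ev.key) // create ep1
	}
}
```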
Cory Snider
bc97de45b4 libn/networkdb: record tombstones for all deletes
The gossip protocol which powers NetworkDB does not guarantee in-order
reception of events. This poses a problem with deleting entries: without
some mechanism to discard stale CREATE or UPDATE events received after a
DELETE, out-of-order reception of events could result in a deleted entry
being spuriously resurrected in the local NetworkDB state! NetworkDB
handles this situation by storing "tombstone" entries for a period of
time with the Lamport timestamps of the entries' respective DELETE
events. Out-of-order CREATE or UPDATE events will be ignored by virtue
of having older timestamps than the tombstone entry, just like how it
works for entries that have not yet been deleted.

NetworkDB was only storing a tombstone if the entry was already present
in the local database at the time of the DELETE event. If the first
event received for an entry is a DELETE, no tombstone is stored. If a
stale CREATE/UPDATE event for the entry (with an older timestamp than
the DELETE) is subsequently received, NetworkDB erroneously creates a
live entry in the local state with stale data. Modify NetworkDB to store
tombstones for DELETE events irrespective of whether the entry was known
to NetworkDB beforehand so that it correctly discards out-of-order
CREATEs and UPDATEs in all cases.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit ada8bc3695)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-05-15 15:33:32 -04:00
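
A toy model of the tombstone rule described above, with simplified types: every DELETE leaves a tombstone carrying its Lamport time, even when the entry was never seen before, so a stale CREATE or UPDATE with an older clock is discarded:

```go
package main

import "fmt"

type entry struct {
	ltime    uint64
	value    string
	deleting bool // tombstone marker
}

type store map[string]entry

func (s store) apply(kind, key, value string, ltime uint64) {
	if old, ok := s[key]; ok && ltime <= old.ltime {
		return // out-of-order event: older than what we already have
	}
	switch kind {
	case "delete":
		// Record a tombstone even if the entry was unknown (this is the fix).
		s[key] = entry{ltime: ltime, deleting: true}
	default: // create/update
		s[key] = entry{ltime: ltime, value: value}
	}
}

func main() {
	s := store{}
	s.apply("delete", "ep1", "", 5)          // DELETE arrives first
	s.apply("create", "ep1", "10.0.0.2", 3)  // stale CREATE is ignored
	fmt.Println(s["ep1"].deleting)           // true: entry stays dead
}
```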
Cory Snider
e78b2bdb84 libn/networkdb: b'cast watch events from local POV
NetworkDB gossips changes to table entries to other nodes using distinct
CREATE, UPDATE and DELETE events. It is unfortunate that the wire
protocol distinguishes CREATEs from UPDATEs as nothing useful can be
done with this information. Newer events for an entry invalidate older
ones, so there is no guarantee that a CREATE event is broadcast to any
node before an UPDATE is broadcast. And due to the nature of gossip
protocols, even if the CREATE event is broadcast from the originating
node, there is no guarantee that any particular node will receive the
CREATE before an UPDATE. Any code which handles an UPDATE event
differently from a CREATE event is therefore going to behave in
unexpected ways in less than perfect conditions.

NetworkDB table watchers also receive CREATE, UPDATE and DELETE events.
Since the watched tables are local to the node, the events could all
have well-defined meanings that are actually useful. Unfortunately
NetworkDB is just bubbling up the wire-protocol event types to the
watchers. Redefine the table-watch events such that a CREATE event is
broadcast when an entry pops into existence in the local NetworkDB, an
UPDATE event is broadcast when an entry which was already present in the
NetworkDB state is modified, and a DELETE event is broadcast when an
entry which was already present in the NetworkDB state is marked for
deletion. DELETE events are broadcast with the same value as the most
recent CREATE or UPDATE event for the entry.

The handler for endpoint table events in the libnetwork agent assumed
incorrectly that CREATE events always correspond to adding a new active
endpoint and that UPDATE events always correspond to disabling an
endpoint. Fix up the handler to handle CREATE and UPDATE events using
the same code path, checking the table entry's ServiceDisabled flag to
determine which action to take.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit c68671d908)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2025-05-15 15:32:00 -04:00
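
A sketch of the handler change described above: CREATE and UPDATE are fed through one code path and the entry's ServiceDisabled flag decides the action. The record type and add/disable helpers are stand-ins for the real libnetwork agent code:

```go
package main

import "fmt"

type EndpointRecord struct {
	Name            string
	ServiceDisabled bool
}

func addServiceBinding(r EndpointRecord)     { fmt.Println("add", r.Name) }
func disableServiceBinding(r EndpointRecord) { fmt.Println("disable", r.Name) }

func handleEpTableEvent(kind string, r EndpointRecord) {
	switch kind {
	case "delete":
		disableServiceBinding(r)
	default:
		// CREATE and UPDATE take the same path; the flag, not the event
		// type, tells us whether the endpoint is active.
		if r.ServiceDisabled {
			disableServiceBinding(r)
		} else {
			addServiceBinding(r)
		}
	}
}

func main() {
	handleEpTableEvent("create", EndpointRecord{Name: "web.1"})
	handleEpTableEvent("update", EndpointRecord{Name: "web.1", ServiceDisabled: true})
}
```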
Cory Snider
5d123a0ef8 libn/networkdb: fix data race in GetTableByNetwork
The function was accessing the index map without holding the mutex, so
it would race any mutation to the database indexes. Fetch the reference
to the tree's root while holding a read lock. Since the radix tree is
immutable, taking a reference to the root is equivalent to starting a
read-only database transaction, providing a consistent view of the data
at a snapshot in time, even as the live state is mutated concurrently.

Also optimize the WalkTable function by leveraging the immutability of
the radix tree.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit ec65f2d21b)
2025-05-15 15:17:19 -04:00
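
A sketch of the snapshot pattern described above, using the hashicorp/go-immutable-radix tree that NetworkDB is built on (the surrounding struct and field names are illustrative): grab the root pointer under the read lock, then walk it lock-free, since a committed tree is never mutated in place:

```go
package main

import (
	"fmt"
	"sync"

	iradix "github.com/hashicorp/go-immutable-radix"
)

type db struct {
	mu      sync.RWMutex
	indexes *iradix.Tree
}

func (d *db) walkTable(prefix string, f func(k []byte, v interface{})) {
	d.mu.RLock()
	root := d.indexes.Root() // consistent snapshot of the whole tree
	d.mu.RUnlock()

	// No lock needed past this point: the snapshot cannot change.
	root.WalkPrefix([]byte(prefix), func(k []byte, v interface{}) bool {
		f(k, v)
		return false // keep walking
	})
}

func main() {
	t := iradix.New()
	t, _, _ = t.Insert([]byte("table/net1/ep1"), "10.0.0.2")
	d := &db{indexes: t}
	d.walkTable("table/net1/", func(k []byte, v interface{}) {
		fmt.Println(string(k), v)
	})
}
```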
Cory Snider
9c1b0fb58f libn/networkdb: SetPrimaryKey() under a write lock
(*NetworkDB).SetPrimaryKey() acquires a read lock on the NetworkDB
instance. That seems sound on the surface as it is only reading from the
NetworkDB struct, not mutating it. However, concurrent calls to
(*memberlist.Keyring).UseKey() would get flagged by Go's race detector
due to some questionable locking in its implementation. Acquire an
exclusive lock in SetPrimaryKey so concurrent calls don't race each
other.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit c9b01e0c4c)
Signed-off-by: Drew Erny <derny@mirantis.com>
2025-05-15 13:18:05 -05:00
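
A minimal illustration of the change described above, with a stand-in keyring type: even though the method only reads NetworkDB state, the underlying keyring call is not safe to run concurrently, so an exclusive lock serializes callers:

```go
package main

import "sync"

type keyring struct{ primary []byte }

func (k *keyring) UseKey(key []byte) { k.primary = key } // not safe concurrently

type networkDB struct {
	mu      sync.RWMutex
	keyring *keyring
}

func (n *networkDB) SetPrimaryKey(key []byte) {
	n.mu.Lock() // was n.mu.RLock(): concurrent calls must not race in UseKey
	defer n.mu.Unlock()
	n.keyring.UseKey(key)
}

func main() {
	db := &networkDB{keyring: &keyring{}}
	db.SetPrimaryKey([]byte("k1"))
}
```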
Cory Snider
5c192650eb Fix possible overlapping IPs when ingressNA == nil
Logic was added to the Swarm executor in commit 0d9b0ed678
to clean up managed networks whenever the node's load-balancer IP
address is removed or changed in order to free up the address in the
case where the container fails to start entirely. Unfortunately, due to
an oversight the function returns early if the Swarm is lacking
an ingress network. Remove the early return so that load-balancer IP
addresses for all the other networks are freed as appropriate,
irrespective of whether an ingress network exists in the Swarm.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 56ad941564)
Signed-off-by: Drew Erny <derny@mirantis.com>
2025-05-15 13:17:57 -05:00
Cory Snider
00d8bed6cf libn/networkdb: don't exceed broadcast size limit
NetworkDB uses a hierarchy of queues to prioritize messages for
broadcast. Unfortunately the logic to pull from multiple queues is
flawed. The length of the messages pulled from the first queue is not
taken into account when pulling messages from the second queue. A list
of messages up to twice as long as the limit could be returned! Messages
beyond the limit will be truncated unceremoniously by memberlist.

Memberlist broadcast queues assume that all messages returned from a
GetBroadcasts call will be broadcast to other nodes in the cluster.
Messages are popped from the queue once they have hit their retransmit
limit. On a busy system messages may be broadcast fewer times than
intended, possibly even being dropped without ever being broadcast!

Subtract the length of messages pulled from the first queue from the
broadcast size limit so the limit is not exceeded when pulling from the
second queue.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit dacf445614)
Signed-off-by: Drew Erny <derny@mirantis.com>
2025-05-15 13:10:03 -05:00
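
A sketch of the queue-draining fix described above, with simplified stand-in queue types: the bytes consumed from the first queue must shrink the budget offered to the second queue, or the combined payload can exceed the limit memberlist enforces:

```go
package main

import "fmt"

type queue struct{ msgs [][]byte }

// getBroadcasts returns messages whose total size fits within limit.
func (q *queue) getBroadcasts(overhead, limit int) [][]byte {
	var out [][]byte
	used := 0
	for _, m := range q.msgs {
		if used+overhead+len(m) > limit {
			break
		}
		used += overhead + len(m)
		out = append(out, m)
	}
	return out
}

func totalLen(msgs [][]byte, overhead int) int {
	n := 0
	for _, m := range msgs {
		n += overhead + len(m)
	}
	return n
}

// GetBroadcasts drains the high-priority queue first, then offers only the
// remaining budget to the low-priority queue (this subtraction was missing).
func GetBroadcasts(high, low *queue, overhead, limit int) [][]byte {
	msgs := high.getBroadcasts(overhead, limit)
	remaining := limit - totalLen(msgs, overhead)
	return append(msgs, low.getBroadcasts(overhead, remaining)...)
}

func main() {
	h := &queue{msgs: [][]byte{make([]byte, 400)}}
	l := &queue{msgs: [][]byte{make([]byte, 400)}}
	out := GetBroadcasts(h, l, 2, 500)
	fmt.Println(len(out)) // 1: the second 400-byte message no longer fits
}
```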
Cory Snider
c31faaed8c libn/networkdb: listen only on loopback in tests
NetworkDB defaults to binding to the unspecified address for gossip
communications, with no advertise address set. In this configuration,
the memberlist instance listens on all network interfaces and picks one
of the host's public IP addresses as the advertise address.
The NetworkDB unit tests don't override this default, leaving them
vulnerable to flaking out as a result of rogue network traffic
perturbing the test, or the inferred advertise address not being useable
for loopback testing. And macOS prompts for permission to allow the test
executable to listen on public interfaces every time it is rebuilt.

Modify the NetworkDB tests to explicitly bind to, advertise, and join
ports on 127.0.0.1 to make the tests more robust to flakes in CI and
more convenient to run locally.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 90ec2c209b)
Signed-off-by: Drew Erny <derny@mirantis.com>
2025-05-15 13:10:03 -05:00
Cory Snider
e5e7d89092 libn/networkdb: advertise the configured bind port
The NetworkDB unit tests instantiate clusters which communicate over
loopback where every "node" listens on a distinct localhost port. The
tests make use of a NetworkDB configuration knob to set the port. When
the NetworkDB configuration's BindPort field is set to a nonzero value,
its memberlist instance is configured to bind to the specified port
number. However, the advertise port is left at the
memberlist.DefaultLANConfig() default value of 7946. Because of this,
nodes would be unable to contact any of the other nodes in the cluster
learned by gossip as the gossiped addresses specify the wrong ports!
The flaky tests passed as often as they did thanks to the robustness of
the memberlist module: NetworkDB gossip and memberlist node
liveness-probe pings to unreachable nodes can all be relayed through
the reachable nodes, the nodes on the bootstrap join list.

Make the NetworkDB unit tests less flaky by setting each node's
advertise port to the bind port.

The daemon is unaffected by this oversight as it unconditionally uses
the default listen port of 7946, which aligns with the advertise port.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit e3f9edd348)
Signed-off-by: Drew Erny <derny@mirantis.com>
2025-05-15 13:10:03 -05:00
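
A sketch of the test-harness fix described above using the real hashicorp/memberlist configuration fields (the wrapper function is illustrative): when a nonzero bind port is chosen, the advertise port must follow it, otherwise peers gossip an address with the default port 7946 that nobody is listening on:

```go
package main

import "github.com/hashicorp/memberlist"

func newTestConfig(bindPort int) *memberlist.Config {
	cfg := memberlist.DefaultLANConfig()
	cfg.BindAddr = "127.0.0.1" // listen on loopback only in tests
	cfg.AdvertiseAddr = "127.0.0.1"
	if bindPort != 0 {
		cfg.BindPort = bindPort
		cfg.AdvertisePort = bindPort // was left at the 7946 default
	}
	return cfg
}

func main() {
	_ = newTestConfig(10001)
}
```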
Cory Snider
f19a4f7d9e Merge pull request #49970 from aepifanov/bump-containerd
[25.0] Dockerfile: update containerd binary to v1.7.27
2025-05-13 17:40:48 -04:00
Paweł Gronowski
74d2c7fd53 libnetwork: Mark flaky tests
Mark the following tests as flaky:
- TestNetworkDBCRUDTableEntry
- TestNetworkDBCRUDTableEntries
- TestNetworkDBIslands
- TestNetworkDBNodeLeave

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 9893520c62)
2025-05-13 13:48:14 -07:00
Paweł Gronowski
cad60979a1 hack/unit: Rerun failed flaky libnetwork tests
libnetwork tests tend to be flaky (namely `TestNetworkDBIslands` and
`TestNetworkDBCRUDTableEntries`).

Move execution of tests whose names have the `TestFlaky` prefix to a
separate gotestsum pass, which allows them to be rerun up to 4 times.

On Windows, the libnetwork test execution is not split into a separate
pass.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit d0d8d5d97d)
2025-05-13 13:48:14 -07:00
Austin Vazquez
7656928264 Dockerfile: update containerd binary to v1.7.27
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
(cherry picked from commit 35766af7d2)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	Dockerfile
#	Dockerfile.windows
#	hack/dockerfile/install/containerd.installer
2025-05-13 11:56:58 -07:00
Cory Snider
c96356750f Merge pull request #49967 from aepifanov/bump-golang-1.23
[25.0]Bump golang 1.23
2025-05-13 13:18:49 -04:00
Sebastiaan van Stijn
c231772a5c update go:build tags to go1.23 to align with vendor.mod
Go maintainers started to unconditionally update the minimum go version
for golang.org/x/ dependencies to go1.23, which means that we'll no longer
be able to support any version below that when updating those dependencies;

> all: upgrade go directive to at least 1.23.0 [generated]
>
> By now Go 1.24.0 has been released, and Go 1.22 is no longer supported
> per the Go Release Policy (https://go.dev/doc/devel/release#policy).
>
> For golang/go#69095.

This updates our minimum version to go1.23, as we won't be able to maintain
compatibility with older versions because of the above.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7c52c4d92e)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	api/server/router/container/inspect.go
#	api/server/router/grpc/grpc.go
#	api/server/router/system/system.go
#	api/server/router/system/system_routes.go
#	api/types/registry/registry.go
#	api/types/registry/registry_test.go
#	builder/builder-next/adapters/containerimage/pull.go
#	container/view.go
#	daemon/container_operations.go
#	daemon/containerd/image_inspect.go
#	daemon/containerd/image_push_test.go
#	daemon/create.go
#	daemon/daemon.go
#	daemon/daemon_unix.go
#	daemon/info.go
#	daemon/inspect.go
#	daemon/logger/loggerutils/logfile.go
#	internal/gocompat/modulegenerator.go
#	internal/maputil/maputil.go
#	internal/platform/platform_linux.go
#	internal/sliceutil/sliceutil.go
#	libnetwork/config/config.go
#	libnetwork/drivers/bridge/port_mapping_linux.go
#	libnetwork/drivers/overlay/peerdb.go
#	libnetwork/endpoint.go
#	libnetwork/endpoint_store.go
#	libnetwork/internal/l2disco/unsol_arp_linux.go
#	libnetwork/internal/l2disco/unsol_na_linux.go
#	libnetwork/internal/nftables/nftables_linux.go
#	libnetwork/internal/resolvconf/resolvconf.go
#	libnetwork/internal/setmatrix/setmatrix.go
#	libnetwork/ipams/defaultipam/address_space.go
#	libnetwork/ipamutils/utils.go
#	libnetwork/iptables/iptables.go
#	libnetwork/netutils/utils_linux.go
#	libnetwork/network.go
#	libnetwork/network_store.go
#	libnetwork/networkdb/networkdb.go
#	libnetwork/options/options.go
#	libnetwork/osl/interface_linux.go
#	libnetwork/osl/route_linux.go
#	libnetwork/portallocator/portallocator.go
#	libnetwork/sandbox.go
#	libnetwork/service.go
#	oci/defaults.go
#	plugin/v2/plugin_linux.go
#	testutil/daemon/daemon.go
#	testutil/helpers.go
2025-05-13 08:50:07 -07:00
Sebastiaan van Stijn
6ac44a4973 vendor.mod: update minimum go version to go1.23
Go maintainers started to unconditionally update the minimum go version
for golang.org/x/ dependencies to go1.23, which means that we'll no longer
be able to support any version below that when updating those dependencies;

> all: upgrade go directive to at least 1.23.0 [generated]
>
> By now Go 1.24.0 has been released, and Go 1.22 is no longer supported
> per the Go Release Policy (https://go.dev/doc/devel/release#policy).
>
> For golang/go#69095.

This updates our minimum version to go1.23, as we won't be able to maintain
compatibility with older versions because of the above.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6e8eb8a90f)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	hack/with-go-mod.sh
#	vendor.mod
2025-05-13 08:50:07 -07:00
Sebastiaan van Stijn
da1d6a4cac update to go1.23.8 (fix CVE-2025-22871)
full diff: https://github.com/golang/go/compare/go1.23.7...go1.23.8
release notes: https://go.dev/doc/devel/release#go1.24.2

go1.23.8 (released 2025-04-01) includes security fixes to the net/http package,
as well as bug fixes to the runtime and the go command. See the Go 1.23.8
milestone on our issue tracker for details;

https://github.com/golang/go/issues?q=milestone%3AGo1.23.8+label%3ACherryPickApproved

From the mailing list:

Hello gophers,

We have just released Go versions 1.24.2 and 1.23.8, minor point releases.
These minor releases include 1 security fix following the security policy:

- net/http: request smuggling through invalid chunked data
  The net/http package accepted data in the chunked transfer encoding
  containing an invalid chunk-size line terminated by a bare LF.
  When used in conjunction with a server or proxy which incorrectly
  interprets a bare LF in a chunk extension as part of the extension,
  this could permit request smuggling.
  The net/http package now rejects chunk-size lines containing a bare LF.
  Thanks to Jeppe Bonde Weikop for reporting this issue.
  This is CVE-2025-22871 and Go issue https://go.dev/issue/71988.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 74b71c41ac)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.test.yml
#	.github/workflows/.windows.yml
#	.github/workflows/arm64.yml
#	.github/workflows/buildkit.yml
#	.github/workflows/codeql.yml
#	.github/workflows/test.yml
#	.golangci.yml
#	Dockerfile
#	Dockerfile.simple
#	Dockerfile.windows
#	hack/dockerfiles/generate-files.Dockerfile
#	hack/dockerfiles/govulncheck.Dockerfile
2025-05-13 08:49:03 -07:00
Cory Snider
f91f92463d Merge pull request #49804 from aepifanov/backport-25.0/ubuntu-22.04-gha
[25.0] Update remaining Ubuntu 20.04 GHA uses to 22.04 and 24.04 #49775
2025-05-07 13:10:03 -04:00
Sebastiaan van Stijn
b1bcc6be66 gha: update buildkit to fix integration tests
full diff: 0bfcd83e6d...d77361423c

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c42b304f62)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/buildkit.yml
2025-05-05 13:54:03 -07:00
Sebastiaan van Stijn
341bb5120f Dockerfile: fix linting warnings
The 'as' keyword should match the case of the 'from' keyword
    FromAsCasing: 'as' and 'FROM' keywords' casing do not match
    More info: https://docs.docker.com/go/dockerfile/rule/from-as-casing/

    Setting platform to predefined $TARGETPLATFORM in FROM is redundant as this is the default behavior
    RedundantTargetPlatform: Setting platform to predefined $TARGETPLATFORM in FROM is redundant as this is the default behavior
    More info: https://docs.docker.com/go/dockerfile/rule/redundant-target-platform/

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b2b55903d0)
2025-05-05 13:44:27 -07:00
Andrey Epifanov
87b232ecbc gha: integration: replace Ubuntu 20.04 with 24.04
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-05-05 13:44:27 -07:00
Derek McGowan
b88ec510b5 Run CLI tests with cgroups v2
Signed-off-by: Derek McGowan <derek@mcg.dev>
(cherry picked from commit cd89a35ea0)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	integration-cli/docker_cli_update_unix_test.go
2025-05-05 05:17:39 -07:00
Derek McGowan
c1992d0de0 Update remaining Ubuntu 20.04 uses to 22.04 and 24.04
Signed-off-by: Derek McGowan <derek@mcg.dev>
(cherry picked from commit 45f9d679f8)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.test.yml
2025-04-24 09:48:47 -07:00
Sebastiaan van Stijn
06ef6f04fd gha: test-prepare: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f87ae7c914)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:46 -07:00
Sebastiaan van Stijn
9dfebd37c2 gha: build, cross: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c41ed7c98c)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:45 -07:00
Sebastiaan van Stijn
14697bedd9 gha: integration-cli-prepare: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d29038d1cb)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:43 -07:00
Sebastiaan van Stijn
881e809b0a gha: integration-cli-report: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a23058e0d7)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:42 -07:00
Sebastiaan van Stijn
897614add1 gha: integration-report: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit de69b552ff)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:40 -07:00
Sebastiaan van Stijn
af633391ca gha: arm64: update Ubuntu 22.04 -> 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 651fb91c4d)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:39 -07:00
Sebastiaan van Stijn
6b3afa4c2d gha: arm64: test-integration-report: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f6a9ed5f0a)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:37 -07:00
Sebastiaan van Stijn
17c33f93ce gha: arm64: test-unit-report: update to ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 13e1ef6277)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:36 -07:00
Sebastiaan van Stijn
8de9fb4d4b gha: validate, build-dev: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 27404044a6)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:34 -07:00
Sebastiaan van Stijn
b67e668c0f gha: smoke: update to Ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3571982458)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:33 -07:00
Sebastiaan van Stijn
56885c08ba gha: docker-py: update to ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ee73f2e5da)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:31 -07:00
Sebastiaan van Stijn
3ea3d5c759 gha: unit: update to ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b9ca3d198e)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.test.yml
2025-04-24 09:48:30 -07:00
Sebastiaan van Stijn
5e024a24a0 gha: bin-image: update to ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1a0afb0f9e)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:29 -07:00
Sebastiaan van Stijn
fa10b16add gha: buildkit: update to ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4919bf9f41)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:28 -07:00
Sebastiaan van Stijn
b3b6bc9770 gha: validate-pr: update to ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7b1fd61864)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/validate-pr.yml
2025-04-24 09:48:27 -07:00
Sebastiaan van Stijn
b8551186f7 gha: dco: update to ubuntu 24.04
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit eeffc099ef)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:26 -07:00
Sebastiaan van Stijn
3bd7323b96 gha: docker-py: set TEST_SKIP_INTEGRATION_CLI=1
These tests don't actually run the integration-cli suite, but
the global hack/xxx script errors because it's not set;

    ---> Making bundle: test-docker-py (in bundles/test-docker-py)
    ---> Making bundle: .integration-daemon-start (in bundles/test-docker-py)
    Using test binary /usr/local/cli-integration/docker
    # DOCKER_EXPERIMENTAL is set: starting daemon with experimental features enabled!
    # cgroup v2 requires TEST_SKIP_INTEGRATION_CLI to be set
    make: *** [Makefile:220: test-docker-py] Error 1
    Error: Process completed with exit code 2.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 06b87d80ee)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-04-24 09:48:24 -07:00
Sebastiaan van Stijn
ddc689206b Merge pull request #49790 from pendo324/update-jwt-v4.5.2
[25.0] vendor: github.com/golang-jwt/jwt/v4 v4.5.2
2025-04-10 21:53:32 +02:00
Justin Alvarez
ebbb4cad63 vendor: github.com/golang-jwt/jwt/v4 v4.5.2
Signed-off-by: Justin Alvarez <alvajus@amazon.com>
2025-04-10 17:51:07 +00:00
Sebastiaan van Stijn
a926bec8fc Merge pull request #49488 from austinvazquez/cherry-pick-838ae09a2337e6561b40d13be6ddf43005a92a9e-to-25.0
[25.0 backport] Dockerfile: update runc binary to v1.2.5
2025-02-18 18:07:24 +01:00
Sebastiaan van Stijn
89a48b65fc Dockerfile: update runc binary to v1.2.5
This is the fifth patch release in the 1.2.z series of runc. It primarily fixes
an issue caused by an upstream systemd bug.

* There was a regression in systemd v230 which made the way we define
  device rule restrictions require a systemctl daemon-reload for our
  transient units. This caused issues for workloads using NVIDIA GPUs.
  Work around the upstream regression by re-arranging how the unit properties
  are defined.
* Dependency github.com/cyphar/filepath-securejoin is updated to v0.4.1,
  to allow projects that vendor runc to bump it as well.
* CI: fixed criu-dev compilation.
* Dependency golang.org/x/net is updated to 0.33.0.

full diff: https://github.com/opencontainers/runc/compare/v1.2.4...v1.2.5
release notes: https://github.com/opencontainers/runc/releases/tag/v1.2.5

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 838ae09a23)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2025-02-18 15:22:24 +00:00
Sebastiaan van Stijn
74360f99d7 Merge pull request #49400 from vvoland/49394-25.0
[25.0 backport] update to go1.22.12
2025-02-06 13:07:51 +01:00
Paweł Gronowski
aae4029600 update to go1.22.12
This minor release includes 1 security fix following the security policy:

- crypto/elliptic: timing sidechannel for P-256 on ppc64le

  Due to the usage of a variable time instruction in the assembly implementation
  of an internal function, a small number of bits of secret scalars are leaked on
  the ppc64le architecture. Due to the way this function is used, we do not
  believe this leakage is enough to allow recovery of the private key when P-256
  is used in any well known protocols.

This is CVE-2025-22866 and Go issue https://go.dev/issue/71383.

View the release notes for more information:
https://go.dev/doc/devel/release#go1.22.12

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit a584f0b227)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2025-02-05 12:35:42 +01:00
Sebastiaan van Stijn
822b2b6a1d Merge pull request #49345 from austinvazquez/cherry-pick-c83862c5419508bcdfafb07165b1a21ecb73c1e2-to-25.0
[25.0 backport] update to go1.22.11 (fix CVE-2024-45341, CVE-2024-45336)
2025-01-29 12:07:42 +01:00
Sebastiaan van Stijn
a2802d0746 update to go1.22.11 (fix CVE-2024-45341, CVE-2024-45336)
go1.22.11 (released 2025-01-16) includes security fixes to the crypto/x509 and
net/http packages, as well as bug fixes to the runtime. See the Go 1.22.11
milestone on our issue tracker for details.

- https://github.com/golang/go/issues?q=milestone%3AGo1.22.11+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.22.10...go1.22.11

Hello gophers,

We have just released Go versions 1.23.5 and 1.22.11, minor point releases.

These minor releases include 2 security fixes following the security policy:

- crypto/x509: usage of IPv6 zone IDs can bypass URI name constraints

  A certificate with a URI which has a IPv6 address with a zone ID may
  incorrectly satisfy a URI name constraint that applies to the certificate
  chain.

  Certificates containing URIs are not permitted in the web PKI, so this
  only affects users of private PKIs which make use of URIs.

  Thanks to Juho Forsén of Mattermost for reporting this issue.

  This is CVE-2024-45341 and Go issue https://go.dev/issue/71156.

- net/http: sensitive headers incorrectly sent after cross-domain redirect

  The HTTP client drops sensitive headers after following a cross-domain redirect.
  For example, a request to a.com/ containing an Authorization header which is
  redirected to b.com/ will not send that header to b.com.

  In the event that the client received a subsequent same-domain redirect, however,
  the sensitive headers would be restored. For example, a chain of redirects from
  a.com/, to b.com/1, and finally to b.com/2 would incorrectly send the Authorization
  header to b.com/2.

  Thanks to Kyle Seely for reporting this issue.

  This is CVE-2024-45336 and Go issue https://go.dev/issue/70530.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c83862c541)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2025-01-28 16:11:51 +00:00
Austin Vazquez
9281aea6ce ci: update base container to alpine20 for buildkit workflow
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2025-01-28 16:10:45 +00:00
Austin Vazquez
0e655eaff2 Merge pull request #49321 from thaJeztah/25.0_backport_backport_gha_arm64
[25.0 backport] ci: switch from jenkins to gha for arm64 build and tests (and set correct go version for branch)
2025-01-28 10:08:23 -06:00
Sebastiaan van Stijn
b1d6fd957d gha: set arm64 GO_VERSION to 1.22.10
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6c832d05c4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-01-28 13:43:02 +01:00
CrazyMax
7540f88434 ci: switch from jenkins to gha for arm64 build and tests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 8c236de735)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-01-28 13:43:02 +01:00
Sebastiaan van Stijn
19dd685407 Merge pull request #49346 from austinvazquez/cherry-pick-f8a973ba4e7d4e5b90d5a89bb4a8633ceae26985-to-25.0
[25.0 backport] ci: update bake-action to v6
2025-01-28 13:42:11 +01:00
CrazyMax
f8d9617c43 ci(bin-image): fix bake build
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit d86920b9b3)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2025-01-28 04:50:37 +00:00
CrazyMax
bec5e8eed1 ci: update bake-action to v6
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit f8a973ba4e)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2025-01-28 04:44:57 +00:00
Sebastiaan van Stijn
71907ca48e Merge pull request #49269 from austinvazquez/cherry-pick-update-runc-1.2.4-to-25.0
[25.0 backport] Dockerfile: update runc binary to v1.2.4
2025-01-14 12:58:14 +01:00
Sebastiaan van Stijn
72f6828fd3 Merge pull request #49268 from austinvazquez/cherry-pick-update-containerd-1.7.25-to-25.0
[25.0 backport] Dockerfile: update containerd to v1.7.25
2025-01-13 19:43:04 +01:00
Sebastiaan van Stijn
fcb50183e4 Dockerfile: update runc binary to v1.2.4
This is the fourth patch release of the 1.2.z release branch of runc. It
includes a fix for a regression introduced in 1.2.0 related to the
default device list.

- Re-add tun/tap devices to built-in allowed devices lists.

 In runc 1.2.0 we removed these devices from the default allow-list
 (which were added seemingly by accident early in Docker's history) as
 a precaution in order to try to reduce the attack surface of device
 inodes available to most containers. At the time we thought
 that the vast majority of users using tun/tap would already be
 specifying what devices they need (such as by using --device with
 Docker/Podman) as opposed to doing the mknod manually, and thus
 there would've been no user-visible change.

 Unfortunately, it seems that this regressed a noticeable number of
 users (and not all higher-level tools provide easy ways to specify
 devices to allow) and so this change needed to be reverted. Users
 that do not need these devices are recommended to explicitly disable
 them by adding deny rules in their container configuration.

full diff: https://github.com/opencontainers/runc/compare/v1.2.3...v1.2.4
release notes: https://github.com/opencontainers/runc/releases/tag/v1.2.4

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit aad7bcedd2)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2025-01-13 10:24:10 -07:00
Sebastiaan van Stijn
20af9f77a6 Dockerfile: update containerd to v1.7.25
release notes: https://github.com/containerd/containerd/releases/tag/v1.7.25
full diff: https://github.com/containerd/containerd/compare/v1.7.24...v1.7.25

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c12bfda3cd)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2025-01-13 10:14:14 -07:00
Austin Vazquez
eee2f6d0de Merge pull request #49173 from austinvazquez/cherry-pick-ec5c9e06e39a4e6d29700f4ca5376773fae57fa0-to-25.0
[25.0 backport] Dockerfile: update runc binary to v1.2.3
2024-12-31 12:32:51 -06:00
Sebastiaan van Stijn
7d20eee4fd Dockerfile: update runc binary to v1.2.3
This is the third patch release of the 1.2.z release branch of runc. It
primarily fixes some minor regressions introduced in 1.2.0.

- Fixed a regression in use of securejoin.MkdirAll, where multiple
  runc processes racing to create the same mountpoint in a shared rootfs
  would result in spurious EEXIST errors. In particular, this regression
  caused issues with BuildKit.
- Fixed a regression in eBPF support for pre-5.6 kernels after upgrading
  Cilium's eBPF library version to 0.16 in runc.

full diff: https://github.com/opencontainers/runc/compare/v1.2.2...v1.2.3
release notes: https://github.com/opencontainers/runc/releases/tag/v1.2.3

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ec5c9e06e3)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-12-28 22:11:11 -07:00
Albin Kerouanton
d86f1d1cde Merge pull request #49112 from thaJeztah/25.0_backport_fix_setupIPChains_defer
[25.0 backport] libnetwork/drivers/bridge: setupIPChains: fix defer checking wrong err
2024-12-16 21:26:06 +01:00
Sebastiaan van Stijn
eacc3610f9 libnetwork/drivers/bridge: setupIPChains: fix defer checking wrong err
The output variable was renamed in 0503cf2510,
but that commit failed to change this defer, which was now checking the
wrong error.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 01a55860c6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-16 16:54:55 +01:00
Akihiro Suda
5bd40c3b0a Merge pull request #49082 from thaJeztah/25.0_backport_bump_xx
[25.0 backport] update xx to v1.6.1 for compatibility with alpine 3.21
2024-12-16 13:52:25 +09:00
Sebastiaan van Stijn
842024e721 update xx to v1.6.1 for compatibility with alpine 3.21
This fixes compatibility with alpine 3.21

- Fix additional possible `xx-cc`/`xx-cargo` compatibility issue with Alpine 3.21
- Support for Alpine 3.21
- Fix `xx-verify` with `file` 5.46+
- Fix possible error taking lock in `xx-apk` in latest Alpine without `coreutils`

full diff: https://github.com/tonistiigi/xx/compare/v1.5.0...v1.6.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 89899b71a0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-13 00:44:42 +01:00
Sebastiaan van Stijn
96b8a34d2b Dockerfile: update xx to v1.5.0
full diff: https://github.com/tonistiigi/xx/compare/v1.4.0...v1.5.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c4ba1f4718)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-13 00:44:42 +01:00
Sebastiaan van Stijn
5ed63409a2 Dockerfile: update xx to v1.4.0
full diff: https://github.com/tonistiigi/xx/compare/v1.2.1...v1.4.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4f46c44725)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-13 00:44:41 +01:00
Sebastiaan van Stijn
81ac5dace5 Merge pull request #49048 from austinvazquez/cherry-pick-0e34b3956b6e95324d67517305a3376d36896490-to-25.0
[25.0] update to go1.22.10
2024-12-07 23:38:31 +01:00
Sebastiaan van Stijn
03885ae2c0 update to go1.22.10
go1.22.10 (released 2024-12-03) includes fixes to the runtime and the syscall
package. See the Go 1.22.10 milestone on our issue tracker for details.

- https://github.com/golang/go/issues?q=milestone%3AGo1.22.10+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.22.9...go1.22.10

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0e34b3956b)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-12-07 12:32:08 -07:00
Sebastiaan van Stijn
a39b701d10 Merge pull request #49029 from thaJeztah/25.0_backport_cdi-rootless
[25.0 backport] Dockerd rootless: make {/etc,/var/run}/cdi available
2024-12-04 15:17:59 +01:00
Rafael Fernández López
ddc8a15eb5 Dockerd rootless: make {/etc,/var/run}/cdi available
When dockerd is executed with the `dockerd-rootless.sh` script, make
/etc/cdi and /var/run/cdi available to the daemon if they exist.

This makes it possible to enable the CDI integration in rootless mode.

Fixes: #47676

Signed-off-by: Rafael Fernández López <ereslibre@ereslibre.es>
(cherry picked from commit 4e30acb63f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-04 12:04:53 +01:00
Sebastiaan van Stijn
be15fac5cd Merge pull request #49011 from vvoland/49009-25.0
[25.0 backport] c8d/tag: Don't log a warning if the source image is not dangling
2024-12-02 13:32:21 +01:00
Paweł Gronowski
6648f3a10e c8d/tag: Don't log a warning if the source image is not dangling
After the image is tagged, the engine attempts to delete a dangling
image of the source image, so the image is no longer dangling.

When the source image is not dangling, the removal errors out (as
expected), but a warning is logged to the daemon log:

```
time="2024-12-02T10:44:25.386957553Z" level=warning msg="unexpected error when deleting dangling image" error="NotFound: image \"moby-dangling@sha256:54d8c2251c811295690b53af7767ecaf246f1186c36e4f2b2a63e0bfa42df045\": not found" imageID="sha256:54d8c2251c811295690b53af7767ecaf246f1186c36e4f2b2a63e0bfa42df045" spanID=bd10a21a07830d72 tag="docker.io/library/test:latest" traceID=4cf61671c2dc6da3dc7a09c0c6ac4e16
```

Remove that log as it causes unnecessary confusion, as the failure is
expected.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit a93f6c61db)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-12-02 11:52:01 +01:00
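
A sketch of the logging change described above: the post-tag cleanup treats "not found" as the expected outcome when the source image was not dangling, so only unexpected errors are worth surfacing. The delete helper is a stand-in, and the `errdefs` import path is an assumption (containerd's error helpers):

```go
package main

import (
	"fmt"

	"github.com/containerd/errdefs"
)

// deleteDanglingImage is a hypothetical stand-in that fails with NotFound
// when the source image was never dangling in the first place.
func deleteDanglingImage(id string) error {
	return fmt.Errorf("image %q: %w", id, errdefs.ErrNotFound)
}

func main() {
	if err := deleteDanglingImage("sha256:54d8c2"); err != nil && !errdefs.IsNotFound(err) {
		// Only unexpected failures are logged; NotFound is the normal case.
		fmt.Println("unexpected error when deleting dangling image:", err)
	}
}
```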
Sebastiaan van Stijn
5a7a2099b2 Merge pull request #48921 from austinvazquez/cherry-pick-runtime-updates-to-25.0
[25.0 backport] Dockerfile: update containerd v1.7.24, runc v1.2.2
2024-12-01 10:59:03 +01:00
Sebastiaan van Stijn
6f497b2d51 Dockerfile: update to runc v1.2.2
- 1.2.2 release notes: https://github.com/opencontainers/runc/releases/tag/v1.2.2
- 1.2.1 release notes: https://github.com/opencontainers/runc/releases/tag/v1.2.1
- 1.2.0 release notes: https://github.com/opencontainers/runc/releases/tag/v1.2.0

Breaking changes and deprecations are included below;

Breaking changes:

Several aspects of how mount options work have been adjusted in a way that
could theoretically break users that have very strange mount option strings.
This was necessary to fix glaring issues in how mount options were being
treated. The key changes are:

- Mount options on bind-mounts that clear a mount flag are now always
  applied. Previously, if a user requested a bind-mount with only clearing
  options (such as rw,exec,dev) the options would be ignored and the
  original bind-mount options would be set. Unfortunately this also means
  that container configurations which specified only clearing mount options
  will now actually get what they asked for, which could break existing
  containers (though it seems unlikely that a user who requested a specific
  mount option would consider it "broken" to get the mount options they
  asked for). This also allows us to
  silently add locked mount flags the user did not explicitly request to be
  cleared in rootless mode, allowing for easier use of bind-mounts for
  rootless containers.
- Container configurations using bind-mounts with superblock mount flags
  (i.e. filesystem-specific mount flags, referred to as "data" in
  mount(2), as opposed to VFS generic mount flags like MS_NODEV) will
  now return an error. This is because superblock mount flags will also
  affect the host mount (as the superblock is shared when bind-mounting),
  which is obviously not acceptable. Previously, these flags were silently
  ignored so this change simply tells users that runc cannot fulfil their
  request rather than just ignoring it.

Deprecated

- runc option --criu is now ignored (with a warning), and the option will
  be removed entirely in a future release. Users who need a non-standard
  criu binary should rely on the standard way of looking up binaries in
  $PATH.
- runc kill option -a is now deprecated. Previously, it had to be specified
  to kill a container (with SIGKILL) which does not have its own private PID
  namespace (so that runc would send SIGKILL to all processes). Now, this is
  done automatically.
- github.com/opencontainers/runc/libcontainer/user is now deprecated, please
  use github.com/moby/sys/user instead. It will be removed in a future
  release.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e257856116)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 09:23:18 -07:00
Austin Vazquez
01c163d4ee Dockerfile: update containerd to v1.7.24
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
(cherry picked from commit 8cecf3a71c)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 09:23:15 -07:00
Sebastiaan van Stijn
7812180193 Merge pull request #49001 from austinvazquez/cherry-pick-fb6e650ab9dec7f9e8a67b278104881f03f63d08-to-25.0
[25.0 backport] integration: add wait
2024-11-30 09:59:18 +01:00
Sebastiaan van Stijn
cd20907cc5 Merge pull request #49003 from austinvazquez/cherry-pick-ci-updates-to-25.0
[25.0 backport] gha: more limits, update alpine version, and some minor improvements
2024-11-30 09:51:10 +01:00
Sebastiaan van Stijn
708c8dc304 gha: shorter time limits for smoke, validate
- validate-prepare and smoke-prepare took 10 seconds; limiting to 10 minutes
- smoke tests took less than 3 minutes; limiting to 10 minutes
- validate: most took under a minute, but "deprecate-integration-cli" took
  14 minutes; limiting to 30 minutes

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a051aba82e)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 05:59:38 +00:00
Sebastiaan van Stijn
f6bcbab7a1 gha: use "ubuntu-24.04" instead of "ubuntu-latest"
To be more explicit on what we're using.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 91c448bfb5)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 05:59:27 +00:00
Sebastiaan van Stijn
2de8143fa6 gha: dco: small tweaks to running the container
- add `--quiet` to suppress pull progress output
- use `./` instead of `$(pwd)` now that relative paths are supported
- set the working directory on the container, so that we don't have to `cd`

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9a14299540)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 05:59:18 +00:00
Sebastiaan van Stijn
e0857ef530 gha: dco: update ALPINE_VERSION to 3.20
Alpine 3.16 has been EOL for some time. Update to the latest version.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3cb98d759d)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 05:59:03 +00:00
Sebastiaan van Stijn
1b7b596513 gha: build (binary), build (dynbinary): limit to 20 minutes
Regular runs are under 5 minutes.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit cfe0d2a131)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 05:58:53 +00:00
Sebastiaan van Stijn
2e43cd5450 gha: dco: limit to 10 minutes
Regular runs are under a minute.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e75f7aca2f)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 05:58:43 +00:00
Akihiro Suda
bdb21cd779 integration: add wait
Cherry-picked several WIP commits from
b0a592798f/

Originally-authored-by: Rodrigo Campos <rodrigoca@microsoft.com>
Co-Authored-by: Kir Kolyshkin <kolyshkin@gmail.com>
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fb6e650ab9)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-30 05:23:41 +00:00
Austin Vazquez
cf1608cf12 Merge pull request #48997 from thaJeztah/25.0_backport_modprobe_br_netfilter
[25.0 backport] Jenkinsfile: modprobe br_netfilter
2024-11-29 19:44:49 -08:00
Sebastiaan van Stijn
911478fb28 Jenkinsfile: modprobe br_netfilter
Make sure the module is loaded, as we're not able to load it from within
the dev-container;

    time="2024-11-29T20:40:42Z" level=error msg="Running modprobe br_netfilter failed with message: modprobe: WARNING: Module br_netfilter not found in directory /lib/modules/5.15.0-1072-aws\n" error="exit status 1"

Also moving these steps _before_ the "print info" step, so that docker info
doesn't show warnings that bridge-nf-call-iptables and bridge-nf-call-ip6tables
are not loaded.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit cce5dfe1e7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-11-29 23:28:49 +01:00
Austin Vazquez
1bb77b9532 Merge pull request #48928 from thaJeztah/25.0_backport_own-cgroup-path
[25.0 backport] daemon: use OwnCgroupPath in withCgroups
2024-11-22 08:24:21 -08:00
Kir Kolyshkin
2278d180a7 daemon: use OwnCgroupPath in withCgroups
Note: this usage comes from commit 56f77d5ade (part of PR 23430).

cgroups.InitCgroupPath is removed from runc (see [1]), and it is
suggested that users use OwnCgroupPath instead, because using init's is
problematic when in host PID namespace (see [2]) and is generally not
the right thing to do (see [3]).

[1]: https://github.com/opencontainers/runc/commit/fd5debf3
[2]: https://github.com/opencontainers/runc/commit/2b28b3c2
[3]: https://github.com/opencontainers/runc/commit/54e20217

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 6be2074aef)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-11-22 10:44:11 +01:00
Sebastiaan van Stijn
2440ce0527 Merge pull request #48920 from austinvazquez/cherry-pick-1eccc326deec9e39916c227b2684329b7f010bfd-to-25.0
[25.0 backport] vendor: github.com/golang-jwt/jwt/v4@v4.5.1
2024-11-22 10:18:06 +01:00
Austin Vazquez
a6d1d0693f vendor: github.com/golang-jwt/jwt/v4@v4.5.1
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
(cherry picked from commit 1eccc326de)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-21 18:37:27 +00:00
Akihiro Suda
5b6e0e970e Merge pull request #48876 from austinvazquez/cherry-pick-0e4ab47f232391954a4deb8b781cc8cb25d88469-to-25.0
[25.0 backport] update to go1.22.9
2024-11-14 22:30:49 -07:00
Paweł Gronowski
0ed4861f9c update to go1.22.9
- https://github.com/golang/go/issues?q=milestone%3AGo1.22.9+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.22.8...go1.22.9

go1.22.9 (released 2024-11-06) includes fixes to the linker. See the
[Go 1.22.9 milestone](https://github.com/golang/go/issues?q=milestone%3AGo1.22.9+label%3ACherryPickApproved)
milestone for details.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 0e4ab47f23)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-11-14 14:27:27 -07:00
Sebastiaan van Stijn
1c354d1f7a Merge pull request #48803 from austinvazquez/cherry-pick-runc-updates-to-25.0
[25.0 backport] Dockerfile: update runc to v1.1.14
2024-10-31 08:56:37 +01:00
Sebastiaan van Stijn
2df019330c update runc binary to 1.1.14
Update the runc binary that's used in CI and for the static packages.

diff: https://github.com/opencontainers/runc/compare/v1.1.13...v1.1.14

Release Notes:

- Fix CVE-2024-45310, a low-severity attack that allowed maliciously configured containers to create empty files and directories on the host.
- Add support for Go 1.23.
- Revert "allow overriding VERSION value in Makefile" and add EXTRA_VERSION.
- rootfs: consolidate mountpoint creation logic.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2189aa2426)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-30 20:31:09 -07:00
Sebastiaan van Stijn
e6de0b8f3b update runc binary to v1.1.13
Update the runc binary that's used in CI and for the static packages.

full diff: https://github.com/opencontainers/runc/compare/v1.1.12...v1.1.13

Release notes:

* If building with Go 1.22.x, make sure to use 1.22.4 or a later version.

* Support go 1.22.4+.
* runc list: fix race with runc delete.
* Fix set nofile rlimit error.
* libct/cg/fs: fix setting rt_period vs rt_runtime.
* Fix a debug msg for user ns in nsexec.
* script/*: fix gpg usage wrt keyboxd.
* CI fixes and misc backports.
* Fix codespell warnings.

* Silence security false positives from golang/net.
* libcontainer: allow containers to make apps think fips is enabled/disabled for testing.
* allow overriding VERSION value in Makefile.
* Vagrantfile.fedora: bump Fedora to 39.
* ci/cirrus: rm centos stream 8.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9101392309)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-30 20:30:44 -07:00
Sebastiaan van Stijn
b7902b3391 Merge pull request #48787 from austinvazquez/cherry-pick-10d57fde4497fb1e141f2020697528acece38425-to-25.0
[25.0 backport] volume/mounts: fix anonymous volume not being labeled
2024-10-28 22:41:12 +01:00
Sebastiaan van Stijn
cb56070132 volume: VolumesService.Create: fix log-level for debug logs
These log-entries were added in 10d57fde44,
but it looks like I accidentally left them as Error-logs following some
debugging (whoops!).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 352b4ff2f1)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-28 15:22:15 +00:00
Sebastiaan van Stijn
480b01a532 volume/mounts: fix anonymous volume not being labeled
`Parser.ParseMountRaw()` labels anonymous volumes with an `AnonymousLabel`
(`com.docker.volume.anonymous`) label based on whether a volume has a name
(named volume) or no name (anonymous) (see [1]).

However both `VolumesService.Create()` (see [1]) and `Parser.ParseMountRaw()`
(see [2], [3]) were generating a random name for anonymous volumes. The latter
is called before `VolumesService.Create()` is called, resulting in such volumes
not being labeled as anonymous.

Generating the name was originally done in Create (fc7b904dce),
but duplicated in b3b7eb2723 with the introduction
of the new Mounts field in HostConfig. Duplicating this effort didn't have a
real effect until (`Create` would just skip generating the name), until
618f26ccbc introduced the `AnonymousLabel` in
(v24.0.0, backported to v23.0.0).

Parsing generally should not fill in defaults or generate names, so this patch:

- Removes generating volume names from `Parser.ParseMountRaw()`
- Adds a debug-log entry to `VolumesService.Create()`
- Touches up some logs to use structured logging for easier correlation

With this patch applied:

    docker run --rm --mount=type=volume,target=/toto hello-world

    DEBU[2024-10-24T22:50:36.359990376Z] creating anonymous volume                     volume-name=0cfd63d4df363571e7b3e9c04e37c74054cc16ff1d00d9a005232d83e92eda02
    DEBU[2024-10-24T22:50:36.360069209Z] probing all drivers for volume                volume-name=0cfd63d4df363571e7b3e9c04e37c74054cc16ff1d00d9a005232d83e92eda02
    DEBU[2024-10-24T22:50:36.360341209Z] Registering new volume reference              driver=local volume-name=0cfd63d4df363571e7b3e9c04e37c74054cc16ff1d00d9a005232d83e92eda02

[1]: 032721ff75/volume/service/service.go (L72-L83)
[2]: 032721ff75/volume/mounts/linux_parser.go (L330-L336)
[3]: 032721ff75/volume/mounts/windows_parser.go (L394-L400)
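
As an illustration of the flow after this patch, here is a minimal,
self-contained Go sketch in which only the create path names and labels
anonymous volumes; `createVolume` and its helpers are hypothetical stand-ins,
not the actual moby internals (only the label key comes from the commit above):

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
    )

    // AnonymousLabel marks volumes created without a user-supplied name;
    // the key mirrors the label used by the daemon.
    const AnonymousLabel = "com.docker.volume.anonymous"

    // createVolume is an illustrative stand-in for VolumesService.Create():
    // parsing leaves anonymous volumes unnamed, and only the create path
    // generates a name and applies the anonymous label.
    func createVolume(name string, labels map[string]string) (string, map[string]string) {
        if labels == nil {
            labels = map[string]string{}
        }
        if name == "" {
            buf := make([]byte, 32)
            _, _ = rand.Read(buf)
            name = hex.EncodeToString(buf) // 64-char ID, like the daemon's anonymous names
            labels[AnonymousLabel] = ""
            fmt.Printf("creating anonymous volume volume-name=%s\n", name)
        }
        return name, labels
    }

    func main() {
        name, labels := createVolume("", nil)
        fmt.Println(name, labels)
    }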

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 10d57fde44)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-28 15:21:40 +00:00
Sebastiaan van Stijn
f7b7ec14b8 volume/service: change some logs to use structured logs
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4e840b9e29)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-28 15:21:27 +00:00
390 changed files with 45,981 additions and 4,530 deletions

View File

@@ -16,12 +16,12 @@ on:
workflow_call:
env:
ALPINE_VERSION: 3.16
ALPINE_VERSION: "3.20"
jobs:
run:
runs-on: ubuntu-20.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
steps:
-
name: Checkout
@@ -49,10 +49,12 @@ jobs:
name: Validate
run: |
docker run --rm \
-v "$(pwd):/workspace" \
--quiet \
-v ./:/workspace \
-w /workspace \
-e VALIDATE_REPO \
-e VALIDATE_BRANCH \
alpine:${{ env.ALPINE_VERSION }} sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && cd /workspace && hack/validate/dco'
alpine:${{ env.ALPINE_VERSION }} sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && hack/validate/dco'
env:
VALIDATE_REPO: ${{ github.server_url }}/${{ github.repository }}.git
VALIDATE_BRANCH: ${{ steps.base-ref.outputs.result }}

View File

@@ -21,7 +21,7 @@ on:
jobs:
run:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
outputs:
matrix: ${{ steps.set.outputs.matrix }}

View File

@@ -21,7 +21,7 @@ on:
default: "graphdriver"
env:
GO_VERSION: "1.22.8"
GO_VERSION: "1.23.9"
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.25
ITG_CLI_MATRIX_SIZE: 6
@@ -31,7 +31,7 @@ env:
jobs:
unit:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
steps:
@@ -41,12 +41,31 @@ jobs:
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Prepare
run: |
CACHE_DEV_SCOPE=dev
if [[ "${{ matrix.mode }}" == *"rootless"* ]]; then
# In rootless mode, tests will run in the host's namespace, not the rootlesskit
# namespace. So, probably no different to non-rootless unit tests and can be
# removed from the test matrix.
echo "DOCKER_ROOTLESS=1" >> $GITHUB_ENV
fi
if [[ "${{ matrix.mode }}" == *"firewalld"* ]]; then
echo "FIREWALLD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}firewalld"
fi
if [[ "${{ matrix.mode }}" == *"systemd"* ]]; then
echo "SYSTEMD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}systemd"
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
targets: dev
set: |
@@ -82,7 +101,7 @@ jobs:
retention-days: 1
unit-report:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always()
@@ -110,7 +129,7 @@ jobs:
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
docker-py:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
steps:
@@ -128,7 +147,7 @@ jobs:
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
targets: dev
set: |
@@ -136,7 +155,7 @@ jobs:
-
name: Test
run: |
make -o build test-docker-py
make TEST_SKIP_INTEGRATION_CLI=1 -o build test-docker-py
-
name: Prepare reports
if: always()
@@ -163,7 +182,7 @@ jobs:
retention-days: 1
integration-flaky:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
steps:
@@ -178,7 +197,7 @@ jobs:
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
targets: dev
set: |
@@ -198,13 +217,17 @@ jobs:
fail-fast: false
matrix:
os:
- ubuntu-20.04
- ubuntu-22.04
- ubuntu-24.04
mode:
- ""
- rootless
- systemd
- firewalld
#- rootless-systemd FIXME: https://github.com/moby/moby/issues/44084
exclude:
- os: ubuntu-24.04 # FIXME: https://github.com/moby/moby/pull/49579#issuecomment-2698622223
mode: rootless
steps:
-
name: Checkout
@@ -226,13 +249,17 @@ jobs:
echo "SYSTEMD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}systemd"
fi
if [[ "${{ matrix.mode }}" == *"firewalld"* ]]; then
echo "FIREWALLD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}firewalld"
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
targets: dev
set: |
@@ -285,7 +312,7 @@ jobs:
retention-days: 1
integration-report:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always()
@@ -314,7 +341,7 @@ jobs:
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
integration-cli-prepare:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
outputs:
@@ -350,7 +377,7 @@ jobs:
echo ${{ steps.tests.outputs.matrix }}
integration-cli:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
needs:
@@ -369,12 +396,21 @@ jobs:
-
name: Set up tracing
uses: ./.github/actions/setup-tracing
-
name: Prepare
run: |
CACHE_DEV_SCOPE=dev
if [[ "${{ matrix.mode }}" == *"firewalld"* ]]; then
echo "FIREWALLD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}firewalld"
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
targets: dev
set: |
@@ -426,7 +462,7 @@ jobs:
retention-days: 1
integration-cli-report:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always()

View File

@@ -28,12 +28,12 @@ on:
default: false
env:
GO_VERSION: "1.22.8"
GO_VERSION: "1.23.9"
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.25
WINDOWS_BASE_IMAGE: mcr.microsoft.com/windows/servercore
WINDOWS_BASE_TAG_2019: ltsc2019
WINDOWS_BASE_TAG_2022: ltsc2022
WINDOWS_BASE_TAG_2025: ltsc2025
TEST_IMAGE_NAME: moby:test
TEST_CTN_NAME: moby
DOCKER_BUILDKIT: 0
@@ -65,8 +65,8 @@ jobs:
run: |
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go-build"
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go\pkg\mod"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
If ("${{ inputs.os }}" -eq "windows-2025") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2025 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
@@ -145,8 +145,8 @@ jobs:
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go-build"
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go\pkg\mod"
New-Item -ItemType "directory" -Path "bundles"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
If ("${{ inputs.os }}" -eq "windows-2025") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2025 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
@@ -203,7 +203,7 @@ jobs:
retention-days: 1
unit-test-report:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
if: always()
needs:
@@ -230,7 +230,7 @@ jobs:
find /tmp/artifacts -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
integration-test-prepare:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
outputs:
matrix: ${{ steps.tests.outputs.matrix }}
@@ -319,8 +319,8 @@ jobs:
name: Init
run: |
New-Item -ItemType "directory" -Path "bundles"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
If ("${{ inputs.os }}" -eq "windows-2025") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2025 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
@@ -524,7 +524,7 @@ jobs:
retention-days: 1
integration-test-report:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ inputs.storage == 'snapshotter' && github.event_name != 'pull_request' }}
if: always()

.github/workflows/arm64.yml (new file, 275 lines)
View File

@@ -0,0 +1,275 @@
name: arm64
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
pull_request:
env:
GO_VERSION: "1.23.9"
TESTSTAT_VERSION: v0.1.25
DESTDIR: ./build
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
DOCKER_EXPERIMENTAL: 1
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-24.04-arm
timeout-minutes: 20 # guardrails timeout for the whole job
needs:
- validate-dco
strategy:
fail-fast: false
matrix:
target:
- binary
- dynbinary
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v6
with:
targets: ${{ matrix.target }}
-
name: List artifacts
run: |
tree -nh ${{ env.DESTDIR }}
-
name: Check artifacts
run: |
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
build-dev:
runs-on: ubuntu-24.04-arm
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- validate-dco
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
*.cache-from=type=gha,scope=dev-arm64
*.cache-to=type=gha,scope=dev-arm64
*.output=type=cacheonly
test-unit:
runs-on: ubuntu-24.04-arm
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev-arm64
-
name: Test
run: |
make -o build test-unit
-
name: Prepare reports
if: always()
run: |
mkdir -p bundles /tmp/reports
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C /tmp/reports
sudo chown -R $(id -u):$(id -g) /tmp/reports
tree -nh /tmp/reports
-
name: Send to Codecov
uses: codecov/codecov-action@v4
with:
directory: ./bundles
env_vars: RUNNER_OS
flags: unit
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-unit-arm64-graphdriver
path: /tmp/reports/*
retention-days: 1
test-unit-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always()
needs:
- test-unit
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache-dependency-path: vendor.sum
-
name: Download reports
uses: actions/download-artifact@v4
with:
pattern: test-reports-unit-arm64-*
path: /tmp/reports
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
test-integration:
runs-on: ubuntu-24.04-arm
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up tracing
uses: ./.github/actions/setup-tracing
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev-arm64
-
name: Test
run: |
make -o build test-integration
env:
TEST_SKIP_INTEGRATION_CLI: 1
TESTCOVERAGE: 1
-
name: Prepare reports
if: always()
run: |
reportsPath="/tmp/reports/arm64-graphdriver"
mkdir -p bundles $reportsPath
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C $reportsPath
sudo chown -R $(id -u):$(id -g) $reportsPath
tree -nh $reportsPath
curl -sSLf localhost:16686/api/traces?service=integration-test-client > $reportsPath/jaeger-trace.json
-
name: Send to Codecov
uses: codecov/codecov-action@v4
with:
directory: ./bundles/test-integration
env_vars: RUNNER_OS
flags: integration
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-integration/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-integration-arm64-graphdriver
path: /tmp/reports/*
retention-days: 1
test-integration-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always()
needs:
- test-integration
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache-dependency-path: vendor.sum
-
name: Download reports
uses: actions/download-artifact@v4
with:
path: /tmp/reports
pattern: test-reports-integration-arm64-*
merge-multiple: true
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY

View File

@@ -37,7 +37,7 @@ jobs:
uses: ./.github/workflows/.dco.yml
prepare:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
outputs:
platforms: ${{ steps.platforms.outputs.matrix }}
@@ -90,7 +90,7 @@ jobs:
echo "matrix=$(docker buildx bake bin-image-cross --print | jq -cr '.target."bin-image-cross".platforms')" >>${GITHUB_OUTPUT}
build:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- validate-dco
@@ -101,16 +101,16 @@ jobs:
matrix:
platform: ${{ fromJson(needs.prepare.outputs.platforms) }}
steps:
-
name: Prepare
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Prepare
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Download meta bake definition
uses: actions/download-artifact@v4
@@ -133,8 +133,9 @@ jobs:
-
name: Build
id: bake
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
source: .
files: |
./docker-bake.hcl
/tmp/bake-meta.json
@@ -161,7 +162,7 @@ jobs:
retention-days: 1
merge:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- build

View File

@@ -22,8 +22,8 @@ on:
pull_request:
env:
GO_VERSION: "1.22.8"
ALPINE_VERSION: "3.19"
GO_VERSION: "1.23.9"
ALPINE_VERSION: "3.20"
DESTDIR: ./build
jobs:
@@ -31,20 +31,17 @@ jobs:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- validate-dco
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
targets: binary
-
@@ -57,7 +54,7 @@ jobs:
retention-days: 1
test:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- build
@@ -103,7 +100,10 @@ jobs:
-
name: BuildKit ref
run: |
echo "$(./hack/buildkit-ref)" >> $GITHUB_ENV
# FIXME(aepifanov) temporarily overriding version to use for tests; remove with the next release of buildkit
# echo "BUILDKIT_REF=$(./hack/buildkit-ref)" >> $GITHUB_ENV
echo "BUILDKIT_REPO=moby/buildkit" >> $GITHUB_ENV
echo "BUILDKIT_REF=b10aeed77fd8a370f6aec7ae4b212ab291914e08" >> $GITHUB_ENV
working-directory: moby
-
name: Checkout BuildKit ${{ env.BUILDKIT_REF }}

View File

@@ -29,8 +29,8 @@ jobs:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-20.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
needs:
- validate-dco
strategy:
@@ -40,17 +40,12 @@ jobs:
- binary
- dynbinary
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
targets: ${{ matrix.target }}
-
@@ -63,7 +58,7 @@ jobs:
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
prepare-cross:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
needs:
- validate-dco
@@ -85,7 +80,7 @@ jobs:
echo ${{ steps.platforms.outputs.matrix }}
cross:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
needs:
- validate-dco
@@ -95,11 +90,6 @@ jobs:
matrix:
platform: ${{ fromJson(needs.prepare-cross.outputs.matrix) }}
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Prepare
run: |
@@ -110,7 +100,7 @@ jobs:
uses: docker/setup-buildx-action@v3
-
name: Build
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
targets: all
set: |

View File

@@ -22,7 +22,7 @@ on:
pull_request:
env:
GO_VERSION: "1.22.8"
GO_VERSION: "1.23.9"
GIT_PAGER: "cat"
PAGER: "cat"
@@ -31,7 +31,7 @@ jobs:
uses: ./.github/workflows/.dco.yml
build-dev:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- validate-dco
@@ -41,6 +41,7 @@ jobs:
mode:
- ""
- systemd
- firewalld
steps:
-
name: Prepare
@@ -48,20 +49,17 @@ jobs:
if [ "${{ matrix.mode }}" = "systemd" ]; then
echo "SYSTEMD=true" >> $GITHUB_ENV
fi
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
targets: dev
set: |
*.cache-from=type=gha,scope=dev${{ matrix.mode }}
*.cache-to=type=gha,scope=dev${{ matrix.mode }},mode=max
*.cache-to=type=gha,scope=dev${{ matrix.mode }}
*.output=type=cacheonly
test:
@@ -80,8 +78,8 @@ jobs:
storage: ${{ matrix.storage }}
validate-prepare:
runs-on: ubuntu-20.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
needs:
- validate-dco
outputs:
@@ -102,8 +100,8 @@ jobs:
echo ${{ steps.scripts.outputs.matrix }}
validate:
runs-on: ubuntu-20.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-24.04
timeout-minutes: 30 # guardrails timeout for the whole job
needs:
- validate-prepare
- build-dev
@@ -125,7 +123,7 @@ jobs:
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
targets: dev
set: |
@@ -136,8 +134,8 @@ jobs:
make -o build validate-${{ matrix.script }}
smoke-prepare:
runs-on: ubuntu-20.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
needs:
- validate-dco
outputs:
@@ -158,8 +156,8 @@ jobs:
echo ${{ steps.platforms.outputs.matrix }}
smoke:
runs-on: ubuntu-20.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
needs:
- smoke-prepare
strategy:
@@ -167,9 +165,6 @@ jobs:
matrix:
platform: ${{ fromJson(needs.smoke-prepare.outputs.matrix) }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Prepare
run: |
@@ -183,7 +178,7 @@ jobs:
uses: docker/setup-buildx-action@v3
-
name: Test
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
targets: binary-smoketest
set: |

View File

@@ -15,7 +15,7 @@ on:
jobs:
check-area-label:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
steps:
- name: Missing `area/` label
@@ -28,7 +28,7 @@ jobs:
check-changelog:
if: contains(join(github.event.pull_request.labels.*.name, ','), 'impact/')
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
env:
PR_BODY: |
@@ -57,7 +57,7 @@ jobs:
echo "$desc"
check-pr-branch:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
env:
PR_TITLE: ${{ github.event.pull_request.title }}

View File

@@ -14,12 +14,9 @@ concurrency:
cancel-in-progress: true
on:
schedule:
- cron: '0 10 * * *'
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
pull_request:
jobs:
validate-dco:

View File

@@ -1,4 +1,4 @@
name: windows-2019
name: windows-2025
# Default to 'contents: read', which grants actions to read commits.
#
@@ -14,9 +14,13 @@ concurrency:
cancel-in-progress: true
on:
schedule:
- cron: '0 10 * * *'
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
jobs:
validate-dco:
@@ -37,6 +41,6 @@ jobs:
matrix:
storage: ${{ fromJson(needs.test-prepare.outputs.matrix) }}
with:
os: windows-2019
os: windows-2025
storage: ${{ matrix.storage }}
send_coverage: false

View File

@@ -50,11 +50,18 @@ linters-settings:
deny:
- pkg: io/ioutil
desc: The io/ioutil package has been deprecated, see https://go.dev/doc/go1.16#ioutil
- pkg: "github.com/stretchr/testify/assert"
desc: Use "gotest.tools/v3/assert" instead
- pkg: "github.com/stretchr/testify/require"
desc: Use "gotest.tools/v3/assert" instead
- pkg: "github.com/stretchr/testify/suite"
desc: Do not use
revive:
rules:
# FIXME make sure all packages have a description. Currently, there are many packages without one.
- name: package-comments
disabled: true
- name: redefines-builtin-id
issues:
# The default exclusion rules are a bit too permissive, so copying the relevant ones below
exclude-use-default: false
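
For context on the deny rules above: this repository standardizes on
gotest.tools/v3 assertions instead of testify. A minimal sketch of the
preferred style (the test itself is illustrative, not from the tree):

    package example_test

    import (
        "testing"

        "gotest.tools/v3/assert"
        is "gotest.tools/v3/assert/cmp"
    )

    func TestPreferredAssertions(t *testing.T) {
        got := []string{"a", "b"}

        // gotest.tools style, as enforced by the deny rules above; imports of
        // testify's assert/require/suite packages are rejected by the linter.
        assert.Check(t, is.DeepEqual([]string{"a", "b"}, got))
        assert.Equal(t, len(got), 2)
    }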

View File

@@ -1,9 +1,9 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.22.8
ARG GO_VERSION=1.23.9
ARG BASE_DEBIAN_DISTRO="bookworm"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
ARG XX_VERSION=1.2.1
ARG XX_VERSION=1.6.1
ARG VPNKIT_VERSION=0.5.0
@@ -16,6 +16,7 @@ ARG BUILDX_VERSION=0.12.1
ARG COMPOSE_VERSION=v2.24.5
ARG SYSTEMD="false"
ARG FIREWALLD="false"
ARG DOCKER_STATIC=1
# REGISTRY_VERSION specifies the version of the registry to download from
@@ -198,7 +199,7 @@ RUN git init . && git remote add origin "https://github.com/containerd/container
# When updating the binary version you may also need to update the vendor
# version to pick up bug fixes or new APIs, however, usually the Go packages
# are built from a commit from the master branch.
ARG CONTAINERD_VERSION=v1.7.22
ARG CONTAINERD_VERSION=v1.7.27
RUN git fetch -q --depth 1 origin "${CONTAINERD_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS containerd-build
@@ -283,7 +284,7 @@ RUN git init . && git remote add origin "https://github.com/opencontainers/runc.
# that is used. If you need to update runc, open a pull request in the containerd
# project first, and update both after that is merged. When updating RUNC_VERSION,
# consider updating runc in vendor.mod accordingly.
ARG RUNC_VERSION=v1.1.12
ARG RUNC_VERSION=v1.2.5
RUN git fetch -q --depth 1 origin "${RUNC_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS runc-build
@@ -449,8 +450,8 @@ FROM binary-dummy AS containerutil-linux
FROM containerutil-build AS containerutil-windows-amd64
FROM containerutil-windows-${TARGETARCH} AS containerutil-windows
FROM containerutil-${TARGETOS} AS containerutil
FROM docker/buildx-bin:${BUILDX_VERSION} as buildx
FROM docker/compose-bin:${COMPOSE_VERSION} as compose
FROM docker/buildx-bin:${BUILDX_VERSION} AS buildx
FROM docker/compose-bin:${COMPOSE_VERSION} AS compose
FROM base AS dev-systemd-false
COPY --link --from=frozen-images /build/ /docker-frozen-images
@@ -500,7 +501,16 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
systemd-sysv
ENTRYPOINT ["hack/dind-systemd"]
FROM dev-systemd-${SYSTEMD} AS dev-base
FROM dev-systemd-${SYSTEMD} AS dev-firewalld-false
FROM dev-systemd-true AS dev-firewalld-true
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
firewalld
RUN sed -i 's/FirewallBackend=nftables/FirewallBackend=iptables/' /etc/firewalld/firewalld.conf
FROM dev-firewalld-${FIREWALLD} AS dev-base
RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser \
&& mkdir -p /home/unprivilegeduser/.local/share/docker \
@@ -644,7 +654,7 @@ COPY --link --from=build /build /
# smoke tests
# usage:
# > docker buildx bake binary-smoketest
FROM --platform=$TARGETPLATFORM base AS smoketest
FROM base AS smoketest
WORKDIR /usr/local/bin
COPY --from=build /build .
RUN <<EOT

View File

@@ -5,7 +5,7 @@
# This represents the bare minimum required to build and test Docker.
ARG GO_VERSION=1.22.8
ARG GO_VERSION=1.23.9
ARG BASE_DEBIAN_DISTRO="bookworm"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"

View File

@@ -161,10 +161,10 @@ FROM ${WINDOWS_BASE_IMAGE}:${WINDOWS_BASE_IMAGE_TAG}
# Use PowerShell as the default shell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ARG GO_VERSION=1.22.8
ARG GOTESTSUM_VERSION=v1.8.2
ARG GO_VERSION=1.23.9
ARG GOWINRES_VERSION=v0.3.1
ARG CONTAINERD_VERSION=v1.7.22
ARG CONTAINERD_VERSION=v1.7.27
# Environment variable notes:
# - GO_VERSION must be consistent with 'Dockerfile' used by Linux.
@@ -255,14 +255,11 @@ RUN `
Remove-Item C:\gitsetup.zip; `
`
Write-Host INFO: Downloading containerd; `
Install-Package -Force 7Zip4PowerShell; `
$location='https://github.com/containerd/containerd/releases/download/'+$Env:CONTAINERD_VERSION+'/containerd-'+$Env:CONTAINERD_VERSION.TrimStart('v')+'-windows-amd64.tar.gz'; `
Download-File $location C:\containerd.tar.gz; `
New-Item -Path C:\containerd -ItemType Directory; `
Expand-7Zip C:\containerd.tar.gz C:\; `
Expand-7Zip C:\containerd.tar C:\containerd; `
tar -xzf C:\containerd.tar.gz -C C:\containerd; `
Remove-Item C:\containerd.tar.gz; `
Remove-Item C:\containerd.tar; `
`
# Ensure all directories exist that we will require below....
$srcDir = """$Env:GOPATH`\src\github.com\docker\docker\bundles"""; `

Jenkinsfile (deleted, 165 lines)
View File

@@ -1,165 +0,0 @@
#!groovy
pipeline {
agent none
options {
buildDiscarder(logRotator(daysToKeepStr: '30'))
timeout(time: 2, unit: 'HOURS')
timestamps()
}
parameters {
booleanParam(name: 'arm64', defaultValue: true, description: 'ARM (arm64) Build/Test')
booleanParam(name: 'dco', defaultValue: true, description: 'Run the DCO check')
}
environment {
DOCKER_BUILDKIT = '1'
DOCKER_EXPERIMENTAL = '1'
DOCKER_GRAPHDRIVER = 'overlay2'
CHECK_CONFIG_COMMIT = '33a3680e08d1007e72c3b3f1454f823d8e9948ee'
TESTDEBUG = '0'
TIMEOUT = '120m'
}
stages {
stage('pr-hack') {
when { changeRequest() }
steps {
script {
echo "Workaround for PR auto-cancel feature. Borrowed from https://issues.jenkins-ci.org/browse/JENKINS-43353"
def buildNumber = env.BUILD_NUMBER as int
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
}
}
}
stage('DCO-check') {
when {
beforeAgent true
expression { params.dco }
}
agent { label 'arm64 && ubuntu-2004' }
steps {
sh '''
docker run --rm \
-v "$WORKSPACE:/workspace" \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
alpine sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && cd /workspace && hack/validate/dco'
'''
}
}
stage('Build') {
parallel {
stage('arm64') {
when {
beforeAgent true
expression { params.arm64 }
}
agent { label 'arm64 && ubuntu-2004' }
environment {
TEST_SKIP_INTEGRATION_CLI = '1'
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh 'docker build --force-rm -t docker:${GIT_COMMIT} .'
}
}
stage("Unit tests") {
steps {
sh '''
sudo modprobe ip6table_filter
'''
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/test/unit
'''
}
post {
always {
junit testResults: 'bundles/junit-report*.xml', allowEmptyResults: true
}
}
}
stage("Integration tests") {
environment { TEST_SKIP_INTEGRATION_CLI = '1' }
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TESTDEBUG \
-e TEST_INTEGRATION_USE_SNAPSHOTTER \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=arm64-integration
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
}
}
}
}

View File

@@ -56,6 +56,7 @@ DOCKER_ENVS := \
-e DOCKER_USERLANDPROXY \
-e DOCKERD_ARGS \
-e DELVE_PORT \
-e FIREWALLD \
-e GITHUB_ACTIONS \
-e TEST_FORCE_VALIDATE \
-e TEST_INTEGRATION_DIR \
@@ -63,7 +64,6 @@ DOCKER_ENVS := \
-e TEST_INTEGRATION_FAIL_FAST \
-e TEST_SKIP_INTEGRATION \
-e TEST_SKIP_INTEGRATION_CLI \
-e TEST_IGNORE_CGROUP_CHECK \
-e TESTCOVERAGE \
-e TESTDEBUG \
-e TESTDIRS \
@@ -150,6 +150,9 @@ DOCKER_BUILD_ARGS += --build-arg=DOCKERCLI_INTEGRATION_REPOSITORY
ifdef DOCKER_SYSTEMD
DOCKER_BUILD_ARGS += --build-arg=SYSTEMD=true
endif
ifdef FIREWALLD
DOCKER_BUILD_ARGS += --build-arg=FIREWALLD=true
endif
BUILD_OPTS := ${DOCKER_BUILD_ARGS} ${DOCKER_BUILD_OPTS}
BUILD_CMD := $(BUILDX) build

View File

@@ -25,15 +25,15 @@ func NewRouter(b Backend, d experimentalProvider) router.Router {
}
// Routes returns the available routers to the build controller
func (r *buildRouter) Routes() []router.Route {
return r.routes
func (br *buildRouter) Routes() []router.Route {
return br.routes
}
func (r *buildRouter) initRoutes() {
r.routes = []router.Route{
router.NewPostRoute("/build", r.postBuild),
router.NewPostRoute("/build/prune", r.postPrune),
router.NewPostRoute("/build/cancel", r.postCancel),
func (br *buildRouter) initRoutes() {
br.routes = []router.Route{
router.NewPostRoute("/build", br.postBuild),
router.NewPostRoute("/build/prune", br.postPrune),
router.NewPostRoute("/build/cancel", br.postCancel),
}
}

View File

@@ -23,14 +23,14 @@ func NewRouter(b Backend, decoder httputils.ContainerDecoder) router.Router {
}
// Routes returns the available routers to the checkpoint controller
func (r *checkpointRouter) Routes() []router.Route {
return r.routes
func (cr *checkpointRouter) Routes() []router.Route {
return cr.routes
}
func (r *checkpointRouter) initRoutes() {
r.routes = []router.Route{
router.NewGetRoute("/containers/{name:.*}/checkpoints", r.getContainerCheckpoints, router.Experimental),
router.NewPostRoute("/containers/{name:.*}/checkpoints", r.postContainerCheckpoint, router.Experimental),
router.NewDeleteRoute("/containers/{name}/checkpoints/{checkpoint}", r.deleteContainerCheckpoint, router.Experimental),
func (cr *checkpointRouter) initRoutes() {
cr.routes = []router.Route{
router.NewGetRoute("/containers/{name:.*}/checkpoints", cr.getContainerCheckpoints, router.Experimental),
router.NewPostRoute("/containers/{name:.*}/checkpoints", cr.postContainerCheckpoint, router.Experimental),
router.NewDeleteRoute("/containers/{name}/checkpoints/{checkpoint}", cr.deleteContainerCheckpoint, router.Experimental),
}
}

View File

@@ -8,7 +8,7 @@ import (
"github.com/docker/docker/api/types/checkpoint"
)
func (s *checkpointRouter) postContainerCheckpoint(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (cr *checkpointRouter) postContainerCheckpoint(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -18,7 +18,7 @@ func (s *checkpointRouter) postContainerCheckpoint(ctx context.Context, w http.R
return err
}
err := s.backend.CheckpointCreate(vars["name"], options)
err := cr.backend.CheckpointCreate(vars["name"], options)
if err != nil {
return err
}
@@ -27,12 +27,12 @@ func (s *checkpointRouter) postContainerCheckpoint(ctx context.Context, w http.R
return nil
}
func (s *checkpointRouter) getContainerCheckpoints(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (cr *checkpointRouter) getContainerCheckpoints(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
checkpoints, err := s.backend.CheckpointList(vars["name"], checkpoint.ListOptions{
checkpoints, err := cr.backend.CheckpointList(vars["name"], checkpoint.ListOptions{
CheckpointDir: r.Form.Get("dir"),
})
if err != nil {
@@ -42,12 +42,12 @@ func (s *checkpointRouter) getContainerCheckpoints(ctx context.Context, w http.R
return httputils.WriteJSON(w, http.StatusOK, checkpoints)
}
func (s *checkpointRouter) deleteContainerCheckpoint(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (cr *checkpointRouter) deleteContainerCheckpoint(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
err := s.backend.CheckpointDelete(vars["name"], checkpoint.DeleteOptions{
err := cr.backend.CheckpointDelete(vars["name"], checkpoint.DeleteOptions{
CheckpointDir: r.Form.Get("dir"),
CheckpointID: vars["checkpoint"],
})

View File

@@ -18,14 +18,14 @@ func NewRouter(backend Backend) router.Router {
}
// Routes returns the available routes
func (r *distributionRouter) Routes() []router.Route {
return r.routes
func (dr *distributionRouter) Routes() []router.Route {
return dr.routes
}
// initRoutes initializes the routes in the distribution router
func (r *distributionRouter) initRoutes() {
r.routes = []router.Route{
func (dr *distributionRouter) initRoutes() {
dr.routes = []router.Route{
// GET
router.NewGetRoute("/distribution/{name:.*}/json", r.getDistributionInfo),
router.NewGetRoute("/distribution/{name:.*}/json", dr.getDistributionInfo),
}
}

View File

@@ -17,7 +17,7 @@ import (
"github.com/pkg/errors"
)
func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (dr *distributionRouter) getDistributionInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -43,7 +43,7 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
// For a search it is not an error if no auth was given. Ignore invalid
// AuthConfig to increase compatibility with the existing API.
authConfig, _ := registry.DecodeAuthConfig(r.Header.Get(registry.AuthHeader))
repos, err := s.backend.GetRepositories(ctx, namedRef, authConfig)
repos, err := dr.backend.GetRepositories(ctx, namedRef, authConfig)
if err != nil {
return err
}
@@ -64,7 +64,7 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
// - https://github.com/moby/moby/blob/12c7411b6b7314bef130cd59f1c7384a7db06d0b/distribution/pull.go#L76-L152
var lastErr error
for _, repo := range repos {
distributionInspect, err := s.fetchManifest(ctx, repo, namedRef)
distributionInspect, err := dr.fetchManifest(ctx, repo, namedRef)
if err != nil {
lastErr = err
continue
@@ -74,7 +74,7 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
return lastErr
}
func (s *distributionRouter) fetchManifest(ctx context.Context, distrepo distribution.Repository, namedRef reference.Named) (registry.DistributionInspect, error) {
func (dr *distributionRouter) fetchManifest(ctx context.Context, distrepo distribution.Repository, namedRef reference.Named) (registry.DistributionInspect, error) {
var distributionInspect registry.DistributionInspect
if canonicalRef, ok := namedRef.(reference.Canonical); !ok {
namedRef = reference.TagNameOnly(namedRef)

View File

@@ -22,22 +22,22 @@ func NewRouter(b Backend, c ClusterBackend) router.Router {
}
// Routes returns the available routes to the network controller
func (r *networkRouter) Routes() []router.Route {
return r.routes
func (n *networkRouter) Routes() []router.Route {
return n.routes
}
func (r *networkRouter) initRoutes() {
r.routes = []router.Route{
func (n *networkRouter) initRoutes() {
n.routes = []router.Route{
// GET
router.NewGetRoute("/networks", r.getNetworksList),
router.NewGetRoute("/networks/", r.getNetworksList),
router.NewGetRoute("/networks/{id:.+}", r.getNetwork),
router.NewGetRoute("/networks", n.getNetworksList),
router.NewGetRoute("/networks/", n.getNetworksList),
router.NewGetRoute("/networks/{id:.+}", n.getNetwork),
// POST
router.NewPostRoute("/networks/create", r.postNetworkCreate),
router.NewPostRoute("/networks/{id:.*}/connect", r.postNetworkConnect),
router.NewPostRoute("/networks/{id:.*}/disconnect", r.postNetworkDisconnect),
router.NewPostRoute("/networks/prune", r.postNetworksPrune),
router.NewPostRoute("/networks/create", n.postNetworkCreate),
router.NewPostRoute("/networks/{id:.*}/connect", n.postNetworkConnect),
router.NewPostRoute("/networks/{id:.*}/disconnect", n.postNetworkDisconnect),
router.NewPostRoute("/networks/prune", n.postNetworksPrune),
// DELETE
router.NewDeleteRoute("/networks/{id:.*}", r.deleteNetwork),
router.NewDeleteRoute("/networks/{id:.*}", n.deleteNetwork),
}
}

View File

@@ -18,22 +18,22 @@ func NewRouter(b Backend) router.Router {
}
// Routes returns the available routers to the plugin controller
func (r *pluginRouter) Routes() []router.Route {
return r.routes
func (pr *pluginRouter) Routes() []router.Route {
return pr.routes
}
func (r *pluginRouter) initRoutes() {
r.routes = []router.Route{
router.NewGetRoute("/plugins", r.listPlugins),
router.NewGetRoute("/plugins/{name:.*}/json", r.inspectPlugin),
router.NewGetRoute("/plugins/privileges", r.getPrivileges),
router.NewDeleteRoute("/plugins/{name:.*}", r.removePlugin),
router.NewPostRoute("/plugins/{name:.*}/enable", r.enablePlugin),
router.NewPostRoute("/plugins/{name:.*}/disable", r.disablePlugin),
router.NewPostRoute("/plugins/pull", r.pullPlugin),
router.NewPostRoute("/plugins/{name:.*}/push", r.pushPlugin),
router.NewPostRoute("/plugins/{name:.*}/upgrade", r.upgradePlugin),
router.NewPostRoute("/plugins/{name:.*}/set", r.setPlugin),
router.NewPostRoute("/plugins/create", r.createPlugin),
func (pr *pluginRouter) initRoutes() {
pr.routes = []router.Route{
router.NewGetRoute("/plugins", pr.listPlugins),
router.NewGetRoute("/plugins/{name:.*}/json", pr.inspectPlugin),
router.NewGetRoute("/plugins/privileges", pr.getPrivileges),
router.NewDeleteRoute("/plugins/{name:.*}", pr.removePlugin),
router.NewPostRoute("/plugins/{name:.*}/enable", pr.enablePlugin),
router.NewPostRoute("/plugins/{name:.*}/disable", pr.disablePlugin),
router.NewPostRoute("/plugins/pull", pr.pullPlugin),
router.NewPostRoute("/plugins/{name:.*}/push", pr.pushPlugin),
router.NewPostRoute("/plugins/{name:.*}/upgrade", pr.upgradePlugin),
router.NewPostRoute("/plugins/{name:.*}/set", pr.setPlugin),
router.NewPostRoute("/plugins/create", pr.createPlugin),
}
}

View File

@@ -18,12 +18,12 @@ func NewRouter(b Backend) router.Router {
}
// Routes returns the available routers to the session controller
func (r *sessionRouter) Routes() []router.Route {
return r.routes
func (sr *sessionRouter) Routes() []router.Route {
return sr.routes
}
func (r *sessionRouter) initRoutes() {
r.routes = []router.Route{
router.NewPostRoute("/session", r.startSession),
func (sr *sessionRouter) initRoutes() {
sr.routes = []router.Route{
router.NewPostRoute("/session", sr.startSession),
}
}

View File

@@ -1,5 +1,5 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.19
//go:build go1.23
package system // import "github.com/docker/docker/api/server/router/system"

View File

@@ -20,21 +20,21 @@ func NewRouter(b Backend, cb ClusterBackend) router.Router {
}
// Routes returns the available routes to the volumes controller
func (r *volumeRouter) Routes() []router.Route {
return r.routes
func (v *volumeRouter) Routes() []router.Route {
return v.routes
}
func (r *volumeRouter) initRoutes() {
r.routes = []router.Route{
func (v *volumeRouter) initRoutes() {
v.routes = []router.Route{
// GET
router.NewGetRoute("/volumes", r.getVolumesList),
router.NewGetRoute("/volumes/{name:.*}", r.getVolumeByName),
router.NewGetRoute("/volumes", v.getVolumesList),
router.NewGetRoute("/volumes/{name:.*}", v.getVolumeByName),
// POST
router.NewPostRoute("/volumes/create", r.postVolumesCreate),
router.NewPostRoute("/volumes/prune", r.postVolumesPrune),
router.NewPostRoute("/volumes/create", v.postVolumesCreate),
router.NewPostRoute("/volumes/prune", v.postVolumesPrune),
// PUT
router.NewPutRoute("/volumes/{name:.*}", r.putVolumesUpdate),
router.NewPutRoute("/volumes/{name:.*}", v.putVolumesUpdate),
// DELETE
router.NewDeleteRoute("/volumes/{name:.*}", r.deleteVolumes),
router.NewDeleteRoute("/volumes/{name:.*}", v.deleteVolumes),
}
}

View File

@@ -1,5 +1,5 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.19
//go:build go1.23
package containerimage

View File

@@ -278,38 +278,38 @@ func withoutHealthcheck() runConfigModifier {
}
func copyRunConfig(runConfig *container.Config, modifiers ...runConfigModifier) *container.Config {
copy := *runConfig
copy.Cmd = copyStringSlice(runConfig.Cmd)
copy.Env = copyStringSlice(runConfig.Env)
copy.Entrypoint = copyStringSlice(runConfig.Entrypoint)
copy.OnBuild = copyStringSlice(runConfig.OnBuild)
copy.Shell = copyStringSlice(runConfig.Shell)
c := *runConfig
c.Cmd = copyStringSlice(runConfig.Cmd)
c.Env = copyStringSlice(runConfig.Env)
c.Entrypoint = copyStringSlice(runConfig.Entrypoint)
c.OnBuild = copyStringSlice(runConfig.OnBuild)
c.Shell = copyStringSlice(runConfig.Shell)
if copy.Volumes != nil {
copy.Volumes = make(map[string]struct{}, len(runConfig.Volumes))
if c.Volumes != nil {
c.Volumes = make(map[string]struct{}, len(runConfig.Volumes))
for k, v := range runConfig.Volumes {
copy.Volumes[k] = v
c.Volumes[k] = v
}
}
if copy.ExposedPorts != nil {
copy.ExposedPorts = make(nat.PortSet, len(runConfig.ExposedPorts))
if c.ExposedPorts != nil {
c.ExposedPorts = make(nat.PortSet, len(runConfig.ExposedPorts))
for k, v := range runConfig.ExposedPorts {
copy.ExposedPorts[k] = v
c.ExposedPorts[k] = v
}
}
if copy.Labels != nil {
copy.Labels = make(map[string]string, len(runConfig.Labels))
if c.Labels != nil {
c.Labels = make(map[string]string, len(runConfig.Labels))
for k, v := range runConfig.Labels {
copy.Labels[k] = v
c.Labels[k] = v
}
}
for _, modifier := range modifiers {
modifier(&copy)
modifier(&c)
}
return &copy
return &c
}
func copyStringSlice(orig []string) []string {
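
The `copy` → `c` rename above lines up with the `redefines-builtin-id` revive
rule enabled earlier in this diff: shadowing a predeclared identifier silently
hides the builtin for the rest of the scope (a later hunk renames a `new`
parameter in `Health.SetStatus` for the same reason). A contrived, runnable
sketch of the failure mode:

    package main

    import "fmt"

    func main() {
        copy := make([]byte, 4) // shadows the predeclared copy() builtin in this scope
        src := []byte("data")

        // copy(copy, src) // would not compile here: copy is now a []byte, not a function
        for i := range src {
            copy[i] = src[i] // forced to copy by hand instead
        }
        fmt.Println(string(copy))
    }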

View File

@@ -166,17 +166,17 @@ func fullMutableRunConfig() *container.Config {
func TestDeepCopyRunConfig(t *testing.T) {
runConfig := fullMutableRunConfig()
copy := copyRunConfig(runConfig)
assert.Check(t, is.DeepEqual(fullMutableRunConfig(), copy))
deepCopy := copyRunConfig(runConfig)
assert.Check(t, is.DeepEqual(fullMutableRunConfig(), deepCopy))
copy.Cmd[1] = "arg2"
copy.Env[1] = "env2=new"
copy.ExposedPorts["10002"] = struct{}{}
copy.Volumes["three"] = struct{}{}
copy.Entrypoint[1] = "arg2"
copy.OnBuild[0] = "start"
copy.Labels["label3"] = "value3"
copy.Shell[0] = "sh"
deepCopy.Cmd[1] = "arg2"
deepCopy.Env[1] = "env2=new"
deepCopy.ExposedPorts["10002"] = struct{}{}
deepCopy.Volumes["three"] = struct{}{}
deepCopy.Entrypoint[1] = "arg2"
deepCopy.OnBuild[0] = "start"
deepCopy.Labels["label3"] = "value3"
deepCopy.Shell[0] = "sh"
assert.Check(t, is.DeepEqual(fullMutableRunConfig(), runConfig))
}

View File

@@ -9,7 +9,9 @@ import (
"os"
"path/filepath"
"runtime"
"slices"
"sort"
"strconv"
"strings"
"sync"
"time"
@@ -67,6 +69,14 @@ import (
"tags.cncf.io/container-device-interface/pkg/cdi"
)
// strongTLSCiphers defines a secure, modern set of TLS cipher suites for use by the daemon.
var strongTLSCiphers = []uint16{
tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
}
// DaemonCli represents the daemon CLI.
type DaemonCli struct {
*config.Config
@@ -779,6 +789,18 @@ func newAPIServerTLSConfig(config *config.Config) (*tls.Config, error) {
if err != nil {
return nil, errors.Wrap(err, "invalid TLS configuration")
}
// Optionally enforce strong TLS ciphers via the environment variable DOCKER_DISABLE_WEAK_CIPHERS.
// When set to true, weak TLS ciphers are disabled, restricting the daemon to a modern, secure
// subset of cipher suites.
if disableWeakCiphers := os.Getenv("DOCKER_DISABLE_WEAK_CIPHERS"); disableWeakCiphers != "" {
disable, err := strconv.ParseBool(disableWeakCiphers)
if err != nil {
return nil, errors.Wrap(err, "invalid value for DOCKER_DISABLE_WEAK_CIPHERS")
}
if disable {
tlsConfig.CipherSuites = slices.Clone(strongTLSCiphers)
}
}
}
return tlsConfig, nil
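
One note on the gate above: it relies on `strconv.ParseBool`, so only Go's
boolean spellings (`1`, `t`, `T`, `TRUE`, `true`, `True` and their false
counterparts) are accepted; anything else is rejected as an invalid value for
`DOCKER_DISABLE_WEAK_CIPHERS`. A tiny standalone harness demonstrating this
(not part of the daemon code):

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        // "yes"/"on" and similar spellings error out, which the daemon would
        // surface as "invalid value for DOCKER_DISABLE_WEAK_CIPHERS".
        for _, v := range []string{"1", "t", "TRUE", "0", "false", "yes"} {
            b, err := strconv.ParseBool(v)
            fmt.Printf("%-5s -> %v, err=%v\n", v, b, err)
        }
    }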

View File

@@ -46,11 +46,11 @@ func (s *Health) Status() string {
// obeying the locking semantics.
//
// Status may be set directly if another lock is used.
func (s *Health) SetStatus(new string) {
func (s *Health) SetStatus(healthStatus string) {
s.mu.Lock()
defer s.mu.Unlock()
s.Health.Status = new
s.Health.Status = healthStatus
}
// OpenMonitorChannel creates and returns a new monitor channel. If there

View File

@@ -1,3 +1,6 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.23
package container // import "github.com/docker/docker/container"
import (

View File

@@ -53,6 +53,30 @@ if ! [ -d "$HOME" ]; then
exit 1
fi
mount_directory() {
if [ -z "$_DOCKERD_ROOTLESS_CHILD" ]; then
echo "mount_directory should be called from the child context. Otherwise data loss is at risk" >&2
exit 1
fi
DIRECTORY="$1"
if [ ! -d "$DIRECTORY" ]; then
return
fi
# Bind mount directory: this makes this directory visible to
# Dockerd, even if it is originally a symlink, given Dockerd does
# not always follow symlinks. Some directories might also be
# "copied-up", meaning that they will also be writable on the child
# namespace; this will be the case only if they are provided as
# --copy-up to the rootlesskit.
DIRECTORY_REALPATH=$(realpath "$DIRECTORY")
MOUNT_OPTIONS="${2:---bind}"
rm -rf "$DIRECTORY"
mkdir -p "$DIRECTORY"
mount $MOUNT_OPTIONS "$DIRECTORY_REALPATH" "$DIRECTORY"
}
rootlesskit=""
for f in docker-rootlesskit rootlesskit; do
if command -v $f > /dev/null 2>&1; then
@@ -132,6 +156,25 @@ if [ -z "$_DOCKERD_ROOTLESS_CHILD" ]; then
"$0" "$@"
else
[ "$_DOCKERD_ROOTLESS_CHILD" = 1 ]
# The Container Device Interface (CDI) specs can be found by default
# under {/etc,/var/run}/cdi. More information at:
# https://github.com/cncf-tags/container-device-interface
#
# In order to use the Container Device Interface (CDI) integration,
# the CDI paths need to exist before the Docker daemon is started in
# order for it to read the CDI specification files. Otherwise, a
# Docker daemon restart will be required for the daemon to discover
# them.
#
# If another set of CDI paths (other than the default /etc/cdi and
# /var/run/cdi) are configured through the Docker configuration file
# (using "cdi-spec-dirs"), they need to be bind mounted in rootless
# mode; otherwise the Docker daemon won't have access to the CDI
# specification files.
mount_directory /etc/cdi
mount_directory /var/run/cdi
# remove the symlinks for the existing files in the parent namespace if any,
# so that we can create our own files in our mount namespace.
rm -f /run/docker /run/containerd /run/xtables.lock
@@ -146,10 +189,7 @@ else
if [ "$(stat -c %T -f /etc)" = "tmpfs" ] && [ -L "/etc/ssl" ]; then
# Workaround for "x509: certificate signed by unknown authority" on openSUSE Tumbleweed.
# https://github.com/rootless-containers/rootlesskit/issues/225
realpath_etc_ssl=$(realpath /etc/ssl)
rm -f /etc/ssl
mkdir /etc/ssl
mount --rbind ${realpath_etc_ssl} /etc/ssl
mount_directory /etc/ssl "--rbind"
fi
exec "$dockerd" "$@"

View File

@@ -72,30 +72,11 @@ fetch_blob() {
shift
local curlArgs=("$@")
local curlHeaders
curlHeaders="$(
curl -S "${curlArgs[@]}" \
-H "Authorization: Bearer $token" \
"$registryBase/v2/$image/blobs/$digest" \
-o "$targetFile" \
-D-
)"
curlHeaders="$(echo "$curlHeaders" | tr -d '\r')"
if grep -qE "^HTTP/[0-9].[0-9] 3" <<< "$curlHeaders"; then
rm -f "$targetFile"
local blobRedirect
blobRedirect="$(echo "$curlHeaders" | awk -F ': ' 'tolower($1) == "location" { print $2; exit }')"
if [ -z "$blobRedirect" ]; then
echo >&2 "error: failed fetching '$image' blob '$digest'"
echo "$curlHeaders" | head -1 >&2
return 1
fi
curl -fSL "${curlArgs[@]}" \
"$blobRedirect" \
-o "$targetFile"
fi
curl -L -S "${curlArgs[@]}" \
-H "Authorization: Bearer $token" \
"$registryBase/v2/$image/blobs/$digest" \
-o "$targetFile" \
-D-
}
# handle 'application/vnd.docker.distribution.manifest.v2+json' manifest

View File

@@ -0,0 +1,60 @@
package convert
import (
"github.com/docker/docker/pkg/plugingetter"
"github.com/moby/swarmkit/v2/node/plugin"
)
// SwarmPluginGetter adapts a plugingetter.PluginGetter to a Swarmkit plugin.Getter.
func SwarmPluginGetter(pg plugingetter.PluginGetter) plugin.Getter {
return pluginGetter{pg}
}
type pluginGetter struct {
pg plugingetter.PluginGetter
}
var _ plugin.Getter = (*pluginGetter)(nil)
type swarmPlugin struct {
plugingetter.CompatPlugin
}
func (p swarmPlugin) Client() plugin.Client {
return p.CompatPlugin.Client()
}
type addrPlugin struct {
plugingetter.CompatPlugin
plugingetter.PluginAddr
}
var _ plugin.AddrPlugin = (*addrPlugin)(nil)
func (p addrPlugin) Client() plugin.Client {
return p.CompatPlugin.Client()
}
func adaptPluginForSwarm(p plugingetter.CompatPlugin) plugin.Plugin {
if pa, ok := p.(plugingetter.PluginAddr); ok {
return addrPlugin{p, pa}
}
return swarmPlugin{p}
}
func (g pluginGetter) Get(name string, capability string) (plugin.Plugin, error) {
p, err := g.pg.Get(name, capability, plugingetter.Lookup)
if err != nil {
return nil, err
}
return adaptPluginForSwarm(p), nil
}
func (g pluginGetter) GetAllManagedPluginsByCap(capability string) []plugin.Plugin {
pp := g.pg.GetAllManagedPluginsByCap(capability)
ret := make([]plugin.Plugin, len(pp))
for i, p := range pp {
ret[i] = adaptPluginForSwarm(p)
}
return ret
}

View File

@@ -52,7 +52,7 @@ func NewExecutor(b executorpkg.Backend, p plugin.Backend, i executorpkg.ImageBac
pluginBackend: p,
imageBackend: i,
volumeBackend: v,
dependencies: agent.NewDependencyManager(b.PluginGetter()),
dependencies: agent.NewDependencyManager(convert.SwarmPluginGetter(b.PluginGetter())),
}
}
@@ -214,36 +214,35 @@ func (e *executor) Configure(ctx context.Context, node *api.Node) error {
if ingressNA == nil {
e.backend.ReleaseIngress()
return e.backend.GetAttachmentStore().ResetAttachments(attachments)
}
options := types.NetworkCreate{
Driver: ingressNA.Network.DriverState.Name,
IPAM: &network.IPAM{
Driver: ingressNA.Network.IPAM.Driver.Name,
},
Options: ingressNA.Network.DriverState.Options,
Ingress: true,
}
for _, ic := range ingressNA.Network.IPAM.Configs {
c := network.IPAMConfig{
Subnet: ic.Subnet,
IPRange: ic.Range,
Gateway: ic.Gateway,
} else {
options := types.NetworkCreate{
Driver: ingressNA.Network.DriverState.Name,
IPAM: &network.IPAM{
Driver: ingressNA.Network.IPAM.Driver.Name,
},
Options: ingressNA.Network.DriverState.Options,
Ingress: true,
}
options.IPAM.Config = append(options.IPAM.Config, c)
}
_, err := e.backend.SetupIngress(clustertypes.NetworkCreateRequest{
ID: ingressNA.Network.ID,
NetworkCreateRequest: types.NetworkCreateRequest{
Name: ingressNA.Network.Spec.Annotations.Name,
NetworkCreate: options,
},
}, ingressNA.Addresses[0])
if err != nil {
return err
for _, ic := range ingressNA.Network.IPAM.Configs {
c := network.IPAMConfig{
Subnet: ic.Subnet,
IPRange: ic.Range,
Gateway: ic.Gateway,
}
options.IPAM.Config = append(options.IPAM.Config, c)
}
_, err := e.backend.SetupIngress(clustertypes.NetworkCreateRequest{
ID: ingressNA.Network.ID,
NetworkCreateRequest: types.NetworkCreateRequest{
Name: ingressNA.Network.Spec.Annotations.Name,
NetworkCreate: options,
},
}, ingressNA.Addresses[0])
if err != nil {
return err
}
}
var (
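The restructuring above is easier to follow once flattened: the ingress-create code now lives in an else branch, so releasing/resetting on a nil attachment and creating the ingress network are mutually exclusive paths. A runnable toy model of the resulting control flow (a sketch, not the verbatim source):

package main

import "fmt"

// configure models the if/else shape of Configure above; ingress == nil
// stands in for ingressNA == nil.
func configure(ingress *string) error {
	if ingress == nil {
		fmt.Println("release ingress, reset attachment store")
		return nil
	} else {
		// Build the NetworkCreate options (driver, IPAM configs, Ingress: true),
		// then set up the ingress network.
		fmt.Println("SetupIngress for", *ingress)
		return nil
	}
}

func main() {
	_ = configure(nil)
	name := "ingress"
	_ = configure(&name)
}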

View File

@@ -10,10 +10,12 @@ import (
"github.com/containerd/log"
types "github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/daemon/cluster/convert"
"github.com/docker/docker/daemon/cluster/executor/container"
lncluster "github.com/docker/docker/libnetwork/cluster"
"github.com/docker/docker/libnetwork/cnmallocator"
swarmapi "github.com/moby/swarmkit/v2/api"
swarmallocator "github.com/moby/swarmkit/v2/manager/allocator/cnmallocator"
"github.com/moby/swarmkit/v2/manager/allocator/networkallocator"
swarmnode "github.com/moby/swarmkit/v2/node"
"github.com/pkg/errors"
"google.golang.org/grpc"
@@ -123,7 +125,7 @@ func (n *nodeRunner) start(conf nodeStartConfig) error {
ListenControlAPI: control,
ListenRemoteAPI: conf.ListenAddr,
AdvertiseRemoteAPI: conf.AdvertiseAddr,
NetworkConfig: &swarmallocator.NetworkConfig{
NetworkConfig: &networkallocator.Config{
DefaultAddrPool: conf.DefaultAddressPool,
SubnetSize: conf.SubnetSize,
VXLANUDPPort: conf.DataPathPort,
@@ -144,7 +146,8 @@ func (n *nodeRunner) start(conf nodeStartConfig) error {
ElectionTick: n.cluster.config.RaftElectionTick,
UnlockKey: conf.lockKey,
AutoLockManagers: conf.autolock,
PluginGetter: n.cluster.config.Backend.PluginGetter(),
PluginGetter: convert.SwarmPluginGetter(n.cluster.config.Backend.PluginGetter()),
NetworkProvider: cnmallocator.NewProvider(n.cluster.config.Backend.PluginGetter()),
}
if conf.availability != "" {
avail, ok := swarmapi.NodeSpec_Availability_value[strings.ToUpper(string(conf.availability))]

View File

@@ -1,5 +1,5 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.19
//go:build go1.23
package daemon // import "github.com/docker/docker/daemon"

View File

@@ -70,7 +70,9 @@ func (i *ImageService) TagImage(ctx context.Context, imageID image.ID, newTag re
// Delete the source dangling image, as it's no longer dangling.
if err := is.Delete(compatcontext.WithoutCancel(ctx), danglingImageName(targetImage.Target.Digest)); err != nil {
logger.WithError(err).Warn("unexpected error when deleting dangling image")
if !cerrdefs.IsNotFound(err) {
logger.WithError(err).Warn("unexpected error when deleting dangling image")
}
}
return nil
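The added guard mirrors a common containerd-errdefs idiom: a delete that races with another delete is not worth a warning, so "not found" is filtered out before logging. A standalone sketch of the pattern, assuming containerd's errdefs package:

package main

import (
	"fmt"

	cerrdefs "github.com/containerd/containerd/errdefs"
)

func main() {
	err := cerrdefs.ErrNotFound // stand-in for the result of is.Delete(...)
	if err != nil && !cerrdefs.IsNotFound(err) {
		fmt.Println("unexpected error when deleting dangling image:", err)
	} else {
		fmt.Println("image already gone; nothing worth warning about")
	}
}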

View File

@@ -1,5 +1,5 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.19
//go:build go1.23
// Package daemon exposes the functions that occur on the host server
// that the Docker daemon is running.
@@ -373,6 +373,16 @@ func (daemon *Daemon) restore(cfg *configStore) error {
Type: local.Name,
}
}
// The logger option 'fluentd-async-connect' has been
// deprecated in v20.10 in favor of 'fluentd-async', and
// removed in v28.0.
if v, ok := c.HostConfig.LogConfig.Config["fluentd-async-connect"]; ok {
if _, ok := c.HostConfig.LogConfig.Config["fluentd-async"]; !ok {
c.HostConfig.LogConfig.Config["fluentd-async"] = v
}
delete(c.HostConfig.LogConfig.Config, "fluentd-async-connect")
}
}
if err := daemon.checkpointAndSave(c); err != nil {
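The migration logic is small but easy to get wrong: the new key must win when both are present, and the deprecated key must always be dropped. Its shape, pulled out into a standalone helper purely for illustration (migrateLogOpt is hypothetical, not a daemon function):

package main

import "fmt"

// migrateLogOpt copies a deprecated option to its replacement, unless the
// replacement is already set, then removes the deprecated key. This mirrors
// the fluentd-async-connect handling above.
func migrateLogOpt(cfg map[string]string, oldKey, newKey string) {
	if v, ok := cfg[oldKey]; ok {
		if _, ok := cfg[newKey]; !ok {
			cfg[newKey] = v
		}
		delete(cfg, oldKey)
	}
}

func main() {
	cfg := map[string]string{"fluentd-async-connect": "true"}
	migrateLogOpt(cfg, "fluentd-async-connect", "fluentd-async")
	fmt.Println(cfg) // map[fluentd-async:true]
}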

View File

@@ -1,5 +1,5 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.19
//go:build go1.23
package daemon // import "github.com/docker/docker/daemon"

View File

@@ -1,5 +1,5 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.19
//go:build go1.23
package daemon // import "github.com/docker/docker/daemon"

View File

@@ -224,17 +224,17 @@ func (j *Journal) Data() (map[string]string, error) {
j.restartData()
for {
var (
data unsafe.Pointer
len C.size_t
data unsafe.Pointer
length C.size_t
)
rc := C.sd_journal_enumerate_data(j.j, &data, &len)
rc := C.sd_journal_enumerate_data(j.j, &data, &length)
if rc == 0 {
return m, nil
} else if rc < 0 {
return m, fmt.Errorf("journald: error enumerating entry data: %w", syscall.Errno(-rc))
}
k, v, _ := strings.Cut(C.GoStringN((*C.char)(data), C.int(len)), "=")
k, v, _ := strings.Cut(C.GoStringN((*C.char)(data), C.int(length)), "=")
m[k] = v
}
}
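Several renames in this compare (len here; cap, max, new, and copy in other files) share one motivation: these are predeclared Go identifiers, and declaring a local variable with the same name shadows the builtin for the rest of the scope. A minimal illustration of the failure mode:

package main

import "fmt"

func main() {
	len := 3 // shadows the predeclared len for the rest of this scope
	b := []byte("abc")
	// fmt.Println(len(b)) // does not compile here: len is an int, not a function
	fmt.Println(len, b)
}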

View File

@@ -102,10 +102,10 @@ func New(info logger.Info) (logger.Logger, error) {
return nil, fmt.Errorf("journald is not enabled on this host")
}
return new(info)
return newJournald(info)
}
func new(info logger.Info) (*journald, error) {
func newJournald(info logger.Info) (*journald, error) {
// parse log tag
tag, err := loggerutils.ParseLogTag(info, loggerutils.DefaultTemplate)
if err != nil {

View File

@@ -24,7 +24,7 @@ func TestLogRead(t *testing.T) {
// LogReader needs to filter out.
rotatedJournal := fake.NewT(t, journalDir+"/rotated.journal")
rotatedJournal.AssignEventTimestampFromSyslogTimestamp = true
l, err := new(logger.Info{
l, err := newJournald(logger.Info{
ContainerID: "wrongone0001",
ContainerName: "fake",
})
@@ -36,7 +36,7 @@ func TestLogRead(t *testing.T) {
activeJournal := fake.NewT(t, journalDir+"/fake.journal")
activeJournal.AssignEventTimestampFromSyslogTimestamp = true
l, err = new(logger.Info{
l, err = newJournald(logger.Info{
ContainerID: "wrongone0002",
ContainerName: "fake",
})
@@ -47,7 +47,7 @@ func TestLogRead(t *testing.T) {
assert.NilError(t, rotatedJournal.Send("a log message from a totally different process in the active journal", journal.PriInfo, nil))
return func(t *testing.T) logger.Logger {
l, err := new(info)
l, err := newJournald(info)
assert.NilError(t, err)
l.journalReadDir = journalDir
sl := &syncLogger{journald: l, waiters: map[uint64]chan<- struct{}{}}

View File

@@ -510,12 +510,12 @@ func logMessages(t *testing.T, l logger.Logger, messages []*logger.Message) []*l
// existing behavior of the json-file log driver.
func transformToExpected(m *logger.Message) *logger.Message {
// Copy the log message again so as not to mutate the input.
copy := copyLogMessage(m)
logMessageCopy := copyLogMessage(m)
if m.PLogMetaData == nil || m.PLogMetaData.Last {
copy.Line = append(copy.Line, '\n')
logMessageCopy.Line = append(logMessageCopy.Line, '\n')
}
return copy
return logMessageCopy
}
func copyLogMessage(src *logger.Message) *logger.Message {

View File

@@ -22,7 +22,7 @@ const extName = "LogDriver"
type logPlugin interface {
StartLogging(streamPath string, info Info) (err error)
StopLogging(streamPath string) (err error)
Capabilities() (cap Capability, err error)
Capabilities() (capability Capability, err error)
ReadLogs(info Info, config ReadConfig) (stream io.ReadCloser, err error)
}
@@ -90,9 +90,9 @@ func makePluginCreator(name string, l logPlugin, scopePath func(s string) string
logInfo: logCtx,
}
cap, err := a.plugin.Capabilities()
caps, err := a.plugin.Capabilities()
if err == nil {
a.capabilities = cap
a.capabilities = caps
}
stream, err := openPluginStream(a)
@@ -107,7 +107,7 @@ func makePluginCreator(name string, l logPlugin, scopePath func(s string) string
return nil, errors.Wrapf(err, "error creating logger")
}
if cap.ReadLogs {
if caps.ReadLogs {
return &pluginAdapterWithRead{a}, nil
}

View File

@@ -80,13 +80,11 @@ func (pp *logPluginProxy) Capabilities() (cap Capability, err error) {
return
}
cap = ret.Cap
if ret.Err != "" {
err = errors.New(ret.Err)
}
return
return ret.Cap, err
}
type logPluginProxyReadLogsRequest struct {
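Besides dropping the cap name, the explicit return avoids a classic named-return pitfall: a bare return silently ships whatever the named result variables currently hold, which is invisible at the return site. A small illustration (hypothetical, not from this patch):

package main

import "fmt"

func bare() (n int, err error) {
	n = 42
	return // returns (42, nil); nothing at the return site says so
}

func explicit() (int, error) {
	return 42, nil // the returned values are spelled out where they leave
}

func main() {
	a, _ := bare()
	b, _ := explicit()
	fmt.Println(a, b)
}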

View File

@@ -864,15 +864,11 @@ func withCgroups(daemon *Daemon, daemonCfg *dconfig.Config, c *container.Contain
p := cgroupsPath
if useSystemd {
initPath, err := cgroups.GetInitCgroup("cpu")
path, err := cgroups.GetOwnCgroup("cpu")
if err != nil {
return errors.Wrap(err, "unable to init CPU RT controller")
}
_, err = cgroups.GetOwnCgroup("cpu")
if err != nil {
return errors.Wrap(err, "unable to init CPU RT controller")
}
p = filepath.Join(initPath, s.Linux.CgroupsPath)
p = filepath.Join(path, s.Linux.CgroupsPath)
}
// Clean path to guard against things like ../../../BAD

View File

@@ -641,7 +641,7 @@ func getMaxMountAndExistenceCheckAttempts(layer PushLayer) (maxMountAttempts, ma
func getRepositoryMountCandidates(
repoInfo reference.Named,
hmacKey []byte,
max int,
maxCandidates int,
v2Metadata []metadata.V2Metadata,
) []metadata.V2Metadata {
candidates := []metadata.V2Metadata{}
@@ -658,9 +658,9 @@ func getRepositoryMountCandidates(
}
sortV2MetadataByLikenessAndAge(repoInfo, hmacKey, candidates)
if max >= 0 && len(candidates) > max {
if maxCandidates >= 0 && len(candidates) > maxCandidates {
// select the youngest metadata
candidates = candidates[:max]
candidates = candidates[:maxCandidates]
}
return candidates

View File

@@ -52,9 +52,9 @@ type DownloadOption func(*LayerDownloadManager)
// WithMaxDownloadAttempts configures the maximum number of download
// attempts for a download manager.
func WithMaxDownloadAttempts(max int) DownloadOption {
func WithMaxDownloadAttempts(maxDownloadAttempts int) DownloadOption {
return func(dlm *LayerDownloadManager) {
dlm.maxDownloadAttempts = max
dlm.maxDownloadAttempts = maxDownloadAttempts
}
}

View File

@@ -172,11 +172,16 @@ variable "SYSTEMD" {
default = "false"
}
variable "FIREWALLD" {
default = "false"
}
target "dev" {
inherits = ["_common"]
target = "dev"
args = {
SYSTEMD = SYSTEMD
FIREWALLD = FIREWALLD
}
tags = ["docker-dev"]
output = ["type=docker"]

View File

@@ -56,12 +56,27 @@ if [ -d /sys/kernel/security ] && ! mountpoint -q /sys/kernel/security; then
}
fi
# Allow connections coming from the host (through eth0). This is needed to
# access the daemon port (regardless of which port is used), to run a
# 'remote' Delve session, and so on.
if [ "${FIREWALLD:-}" = "true" ]; then
cat > /etc/firewalld/zones/trusted.xml << EOF
<?xml version="1.0" encoding="utf-8"?>
<zone target="ACCEPT">
<short>Trusted</short>
<description>All network connections are accepted.</description>
<interface name="eth0"/>
<forward/>
</zone>
EOF
fi
env > /etc/docker-entrypoint-env
cat > /etc/systemd/system/docker-entrypoint.target << EOF
[Unit]
Description=the target for docker-entrypoint.service
Requires=docker-entrypoint.service systemd-logind.service systemd-user-sessions.service
Requires=docker-entrypoint.service systemd-logind.service systemd-user-sessions.service $([ "${FIREWALLD:-}" = "true" ] && echo firewalld.service)
EOF
quoted_args="$(printf " %q" "${@}")"

View File

@@ -15,7 +15,7 @@ set -e
# the binary version you may also need to update the vendor version to pick up
# bug fixes or new APIs, however, usually the Go packages are built from a
# commit from the master branch.
: "${CONTAINERD_VERSION:=v1.7.22}"
: "${CONTAINERD_VERSION:=v1.7.27}"
install_containerd() (
echo "Install containerd version $CONTAINERD_VERSION"

View File

@@ -9,7 +9,7 @@ set -e
# the containerd project first, and update both after that is merged.
#
# When updating RUNC_VERSION, consider updating runc in vendor.mod accordingly
: "${RUNC_VERSION:=v1.1.12}"
: "${RUNC_VERSION:=v1.2.5}"
install_runc() {
RUNC_BUILDTAGS="${RUNC_BUILDTAGS:-"seccomp"}"

View File

@@ -1,6 +1,6 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.22.8
ARG GO_VERSION=1.23.9
ARG BASE_DEBIAN_DISTRO="bookworm"
ARG PROTOC_VERSION=3.11.4

View File

@@ -327,10 +327,26 @@ Function Run-UnitTests() {
$pkgList = $pkgList | Select-String -NotMatch "github.com/docker/docker/integration"
$pkgList = $pkgList -replace "`r`n", " "
$jsonFilePath = $bundlesDir + "\go-test-report-unit-flaky-tests.json"
$xmlFilePath = $bundlesDir + "\junit-report-unit-flaky-tests.xml"
$coverageFilePath = $bundlesDir + "\coverage-report-unit-flaky-tests.txt"
$goTestArg = "--rerun-fails=4 --format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath """ + "--packages=$pkgList" + """ -- " + $raceParm + " -coverprofile=$coverageFilePath -covermode=atomic -ldflags -w -a -test.timeout=10m -test.run=TestFlaky.*"
Write-Host "INFO: Invoking unit tests run with $GOTESTSUM_LOCATION\gotestsum.exe $goTestArg"
$pinfo = New-Object System.Diagnostics.ProcessStartInfo
$pinfo.FileName = "$GOTESTSUM_LOCATION\gotestsum.exe"
$pinfo.WorkingDirectory = "$($PWD.Path)"
$pinfo.UseShellExecute = $false
$pinfo.Arguments = $goTestArg
$p = New-Object System.Diagnostics.Process
$p.StartInfo = $pinfo
$p.Start() | Out-Null
$p.WaitForExit()
if ($p.ExitCode -ne 0) { Throw "Unit tests (flaky) failed" }
$jsonFilePath = $bundlesDir + "\go-test-report-unit-tests.json"
$xmlFilePath = $bundlesDir + "\junit-report-unit-tests.xml"
$coverageFilePath = $bundlesDir + "\coverage-report-unit-tests.txt"
$goTestArg = "--format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath -- " + $raceParm + " -coverprofile=$coverageFilePath -covermode=atomic -ldflags -w -a """ + "-test.timeout=10m" + """ $pkgList"
$goTestArg = "--format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath -- " + $raceParm + " -coverprofile=$coverageFilePath -covermode=atomic -ldflags -w -a -test.timeout=10m -test.skip=TestFlaky.*" + " $pkgList"
Write-Host "INFO: Invoking unit tests run with $GOTESTSUM_LOCATION\gotestsum.exe $goTestArg"
$pinfo = New-Object System.Diagnostics.ProcessStartInfo
$pinfo.FileName = "$GOTESTSUM_LOCATION\gotestsum.exe"

View File

@@ -72,12 +72,6 @@ if [ "$DOCKER_EXPERIMENTAL" ]; then
fi
dockerd="dockerd"
if [ -f "/sys/fs/cgroup/cgroup.controllers" ]; then
if [ -z "$TEST_IGNORE_CGROUP_CHECK" ] && [ -z "$TEST_SKIP_INTEGRATION_CLI" ]; then
echo >&2 '# cgroup v2 requires TEST_SKIP_INTEGRATION_CLI to be set'
exit 1
fi
fi
if [ -n "$DOCKER_ROOTLESS" ]; then
if [ -z "$TEST_SKIP_INTEGRATION_CLI" ]; then

View File

@@ -38,15 +38,38 @@ if [ -n "${base_pkg_list}" ]; then
${base_pkg_list}
fi
if [ -n "${libnetwork_pkg_list}" ]; then
rerun_flaky=1
gotest_extra_flags="-skip=TestFlaky.*"
# Custom -run passed, don't run flaky tests separately.
if echo "$TESTFLAGS" | grep -Eq '(-run|-test.run)[= ]'; then
rerun_flaky=0
gotest_extra_flags=""
fi
# libnetwork tests invoke iptables and cannot be run in parallel. Execute
# tests within /libnetwork with '-p=1' to run them sequentially. See
# https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details.
gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \
"${BUILDFLAGS[@]}" \
gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml \
-- "${BUILDFLAGS[@]}" \
-cover \
-coverprofile=bundles/coverage-libnetwork.out \
-covermode=atomic \
-p=1 \
${gotest_extra_flags} \
${TESTFLAGS} \
${libnetwork_pkg_list}
if [ $rerun_flaky -eq 1 ]; then
gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork-flaky.json --junitfile=bundles/junit-report-libnetwork-flaky.xml \
--packages "${libnetwork_pkg_list}" \
--rerun-fails=4 \
-- "${BUILDFLAGS[@]}" \
-cover \
-coverprofile=bundles/coverage-libnetwork-flaky.out \
-covermode=atomic \
-p=1 \
-test.run 'TestFlaky.*' \
${TESTFLAGS}
fi
fi
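The split is driven purely by test naming: the main pass excludes TestFlaky.* via -skip, and the second gotestsum invocation selects only those tests and retries failures up to four times. A hypothetical test opting into that bucket:

package libnetwork_test

import "testing"

// The TestFlaky prefix routes this test into the rerun bucket above: it is
// excluded by -skip=TestFlaky.* in the main pass and picked up by the
// follow-up run with -test.run 'TestFlaky.*' and --rerun-fails=4.
func TestFlakyExample(t *testing.T) {
	t.Log("timing-sensitive assertions would go here")
}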

View File

@@ -5,27 +5,30 @@ set -e
SCRIPTDIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPTDIR}/.validate"
tidy_files=('vendor.mod' 'vendor.sum')
modules_files=('man/go.mod' 'vendor.mod')
tidy_files=("${modules_files[@]}" 'man/go.sum' 'vendor.sum')
vendor_files=("${tidy_files[@]}" 'vendor/')
validate_vendor_tidy() {
validate_tidy_modules() {
# check that all go.mod files exist in HEAD; go.sum files are generated by 'go mod tidy'
# so we don't need to check for their existence beforehand
for f in "${modules_files[@]}"; do
if [ ! -f "$f" ]; then
echo >&2 "ERROR: missing $f"
return 1
fi
done
# run mod tidy
./hack/vendor.sh tidy
# check if any files have changed
git diff --quiet HEAD -- "${tidy_files[@]}"
git diff --quiet HEAD -- "${tidy_files[@]}" && [ -z "$(git ls-files --others --exclude-standard)" ]
}
validate_vendor_diff() {
mapfile -t changed_files < <(validate_diff --diff-filter=ACMR --name-only -- "${vendor_files[@]}")
if [ -n "${TEST_FORCE_VALIDATE:-}" ] || [ "${#changed_files[@]}" -gt 0 ]; then
# recreate vendor/
./hack/vendor.sh vendor
# check if any files have changed
git diff --quiet HEAD -- "${vendor_files[@]}"
else
echo >&2 'INFO: no vendor changes in diff; skipping vendor check.'
fi
# recreate vendor/
./hack/vendor.sh vendor
# check if any files have changed
git diff --quiet HEAD -- "${vendor_files[@]}" && [ -z "$(git ls-files --others --exclude-standard)" ]
}
validate_vendor_license() {
@@ -37,16 +40,22 @@ validate_vendor_license() {
done < <(awk '/^# /{ print $2 }' vendor/modules.txt)
}
if validate_vendor_tidy && validate_vendor_diff && validate_vendor_license; then
if validate_tidy_modules && validate_vendor_diff && validate_vendor_license; then
echo >&2 'PASS: Vendoring has been performed correctly!'
else
{
echo 'FAIL: Vendoring was not performed correctly!'
echo
echo 'The following files changed during re-vendor:'
echo
git diff --name-status HEAD -- "${vendor_files[@]}"
echo
if [ -n "$(git ls-files --others --exclude-standard)" ]; then
echo 'The following files are missing:'
git ls-files --others --exclude-standard
echo
fi
if [ -n "$(git diff --name-status HEAD -- "${vendor_files[@]}")" ]; then
echo 'The following files changed during re-vendor:'
git diff --name-status HEAD -- "${vendor_files[@]}"
echo
fi
echo 'Please revendor with hack/vendor.sh'
echo
git diff --diff-filter=M -- "${vendor_files[@]}"

View File

@@ -7,15 +7,32 @@
set -e
SCRIPTDIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPTDIR/.." && pwd)"
tidy() (
(
set -x
"${SCRIPTDIR}"/with-go-mod.sh go mod tidy -modfile vendor.mod -compat 1.18
)
(
set -x
cd man
go mod tidy
)
)
vendor() (
(
set -x
"${SCRIPTDIR}"/with-go-mod.sh go mod vendor -modfile vendor.mod
)
(
set -x
cd man
go mod vendor
)
)
help() {

View File

@@ -25,7 +25,7 @@ else
tee "${ROOTDIR}/go.mod" >&2 <<- EOF
module github.com/docker/docker
go 1.20
go 1.23.0
EOF
trap 'rm -f "${ROOTDIR}/go.mod"' EXIT
fi

View File

@@ -27,6 +27,7 @@ var expectedNetworkInterfaceStats = strings.Split("rx_bytes rx_dropped rx_errors
func (s *DockerAPISuite) TestAPIStatsNoStreamGetCpu(c *testing.T) {
skip.If(c, RuntimeIsWindowsContainerd(), "FIXME: Broken on Windows + containerd combination")
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
out := cli.DockerCmd(c, "run", "-d", "busybox", "/bin/sh", "-c", "while true;usleep 100; do echo 'Hello'; done").Stdout()
id := strings.TrimSpace(out)
cli.WaitRun(c, id)

View File

@@ -3959,6 +3959,7 @@ func (s *DockerCLIBuildSuite) TestBuildEmptyStringVolume(c *testing.T) {
func (s *DockerCLIBuildSuite) TestBuildContainerWithCgroupParent(c *testing.T) {
testRequires(c, testEnv.IsLocalDaemon, DaemonIsLinux)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
cgroupParent := "test"
data, err := os.ReadFile("/proc/self/cgroup")

View File

@@ -1739,6 +1739,7 @@ func (s *DockerDaemonSuite) TestDaemonRestartContainerLinksRestart(c *testing.T)
func (s *DockerDaemonSuite) TestDaemonCgroupParent(c *testing.T) {
testRequires(c, DaemonIsLinux)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
cgroupParent := "test"
name := "cgroup-test"

View File

@@ -3204,6 +3204,7 @@ func (s *DockerCLIRunSuite) TestRunWithUlimits(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunContainerWithCgroupParent(c *testing.T) {
// Not applicable on Windows as uses Unix specific functionality
testRequires(c, DaemonIsLinux)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
// cgroup-parent relative path
testRunContainerWithCgroupParent(c, "test", "cgroup-test")
@@ -3239,6 +3240,7 @@ func testRunContainerWithCgroupParent(c *testing.T, cgroupParent, name string) {
func (s *DockerCLIRunSuite) TestRunInvalidCgroupParent(c *testing.T) {
// Not applicable on Windows as uses Unix specific functionality
testRequires(c, DaemonIsLinux)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
testRunInvalidCgroupParent(c, "../../../../../../../../SHOULD_NOT_EXIST", "SHOULD_NOT_EXIST", "cgroup-invalid-test")
@@ -3279,6 +3281,7 @@ func (s *DockerCLIRunSuite) TestRunContainerWithCgroupMountRO(c *testing.T) {
// Not applicable on Windows as uses Unix specific functionality
// --read-only + userns has remount issues
testRequires(c, DaemonIsLinux, NotUserNamespace)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
filename := "/sys/fs/cgroup/devices/test123"
out, _, err := dockerCmdWithError("run", "busybox", "touch", filename)
@@ -4428,6 +4431,7 @@ func (s *DockerCLIRunSuite) TestRunHostnameInHostMode(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunAddDeviceCgroupRule(c *testing.T) {
testRequires(c, DaemonIsLinux)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const deviceRule = "c 7:128 rwm"

View File

@@ -26,6 +26,7 @@ import (
"github.com/moby/sys/mount"
"gotest.tools/v3/assert"
"gotest.tools/v3/icmd"
"gotest.tools/v3/skip"
)
// #6509
@@ -450,6 +451,7 @@ func (s *DockerCLIRunSuite) TestRunAttachInvalidDetachKeySequencePreserved(c *te
// "test" should be printed
func (s *DockerCLIRunSuite) TestRunWithCPUQuota(c *testing.T) {
testRequires(c, cpuCfsQuota)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const file = "/sys/fs/cgroup/cpu/cpu.cfs_quota_us"
out := cli.DockerCmd(c, "run", "--cpu-quota", "8000", "--name", "test", "busybox", "cat", file).Combined()
@@ -461,6 +463,7 @@ func (s *DockerCLIRunSuite) TestRunWithCPUQuota(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunWithCpuPeriod(c *testing.T) {
testRequires(c, cpuCfsPeriod)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const file = "/sys/fs/cgroup/cpu/cpu.cfs_period_us"
out := cli.DockerCmd(c, "run", "--cpu-period", "50000", "--name", "test", "busybox", "cat", file).Combined()
@@ -491,6 +494,7 @@ func (s *DockerCLIRunSuite) TestRunWithInvalidCpuPeriod(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunWithCPUShares(c *testing.T) {
testRequires(c, cpuShare)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const file = "/sys/fs/cgroup/cpu/cpu.shares"
out := cli.DockerCmd(c, "run", "--cpu-shares", "1000", "--name", "test", "busybox", "cat", file).Combined()
@@ -511,6 +515,7 @@ func (s *DockerCLIRunSuite) TestRunEchoStdoutWithCPUSharesAndMemoryLimit(c *test
func (s *DockerCLIRunSuite) TestRunWithCpusetCpus(c *testing.T) {
testRequires(c, cgroupCpuset)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const file = "/sys/fs/cgroup/cpuset/cpuset.cpus"
out := cli.DockerCmd(c, "run", "--cpuset-cpus", "0", "--name", "test", "busybox", "cat", file).Combined()
@@ -522,6 +527,7 @@ func (s *DockerCLIRunSuite) TestRunWithCpusetCpus(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunWithCpusetMems(c *testing.T) {
testRequires(c, cgroupCpuset)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const file = "/sys/fs/cgroup/cpuset/cpuset.mems"
out := cli.DockerCmd(c, "run", "--cpuset-mems", "0", "--name", "test", "busybox", "cat", file).Combined()
@@ -533,6 +539,7 @@ func (s *DockerCLIRunSuite) TestRunWithCpusetMems(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunWithBlkioWeight(c *testing.T) {
testRequires(c, blkioWeight)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const file = "/sys/fs/cgroup/blkio/blkio.weight"
out := cli.DockerCmd(c, "run", "--blkio-weight", "300", "--name", "test", "busybox", "cat", file).Combined()
@@ -544,6 +551,7 @@ func (s *DockerCLIRunSuite) TestRunWithBlkioWeight(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunWithInvalidBlkioWeight(c *testing.T) {
testRequires(c, blkioWeight)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
out, _, err := dockerCmdWithError("run", "--blkio-weight", "5", "busybox", "true")
assert.ErrorContains(c, err, "", out)
expected := "Range of blkio weight is from 10 to 1000"
@@ -602,6 +610,7 @@ func (s *DockerCLIRunSuite) TestRunOOMExitCode(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunWithMemoryLimit(c *testing.T) {
testRequires(c, memoryLimitSupport)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const file = "/sys/fs/cgroup/memory/memory.limit_in_bytes"
cli.DockerCmd(c, "run", "-m", "32M", "--name", "test", "busybox", "cat", file).Assert(c, icmd.Expected{
@@ -646,6 +655,7 @@ func (s *DockerCLIRunSuite) TestRunWithSwappinessInvalid(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunWithMemoryReservation(c *testing.T) {
testRequires(c, testEnv.IsLocalDaemon, memoryReservationSupport)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const file = "/sys/fs/cgroup/memory/memory.soft_limit_in_bytes"
out := cli.DockerCmd(c, "run", "--memory-reservation", "200M", "--name", "test", "busybox", "cat", file).Combined()
@@ -729,6 +739,7 @@ func (s *DockerCLIRunSuite) TestRunInvalidCpusetMemsFlagValue(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunInvalidCPUShares(c *testing.T) {
testRequires(c, cpuShare, DaemonIsLinux)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
out, _, err := dockerCmdWithError("run", "--cpu-shares", "1", "busybox", "echo", "test")
assert.ErrorContains(c, err, "", out)
expected := "minimum allowed cpu-shares is 2"
@@ -1383,6 +1394,7 @@ func (s *DockerCLIRunSuite) TestRunDeviceSymlink(c *testing.T) {
// TestRunPIDsLimit makes sure the pids cgroup is set with --pids-limit
func (s *DockerCLIRunSuite) TestRunPIDsLimit(c *testing.T) {
testRequires(c, testEnv.IsLocalDaemon, pidsLimit)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const file = "/sys/fs/cgroup/pids/pids.max"
out := cli.DockerCmd(c, "run", "--name", "skittles", "--pids-limit", "4", "busybox", "cat", file).Combined()
@@ -1394,6 +1406,7 @@ func (s *DockerCLIRunSuite) TestRunPIDsLimit(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunPrivilegedAllowedDevices(c *testing.T) {
testRequires(c, DaemonIsLinux, NotUserNamespace)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const file = "/sys/fs/cgroup/devices/devices.list"
out := cli.DockerCmd(c, "run", "--privileged", "busybox", "cat", file).Combined()
@@ -1548,6 +1561,7 @@ func (s *DockerDaemonSuite) TestRunWithDaemonDefaultSeccompProfile(c *testing.T)
func (s *DockerCLIRunSuite) TestRunWithNanoCPUs(c *testing.T) {
testRequires(c, cpuCfsQuota, cpuCfsPeriod)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const file1 = "/sys/fs/cgroup/cpu/cpu.cfs_quota_us"
const file2 = "/sys/fs/cgroup/cpu/cpu.cfs_period_us"

View File

@@ -18,6 +18,7 @@ import (
"github.com/docker/docker/testutil"
"github.com/docker/docker/testutil/request"
"gotest.tools/v3/assert"
"gotest.tools/v3/skip"
)
func (s *DockerCLIUpdateSuite) TearDownTest(ctx context.Context, c *testing.T) {
@@ -31,6 +32,7 @@ func (s *DockerCLIUpdateSuite) OnTimeout(c *testing.T) {
func (s *DockerCLIUpdateSuite) TestUpdateRunningContainer(c *testing.T) {
testRequires(c, DaemonIsLinux)
testRequires(c, memoryLimitSupport)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const name = "test-update-container"
cli.DockerCmd(c, "run", "-d", "--name", name, "-m", "300M", "busybox", "top")
@@ -46,6 +48,7 @@ func (s *DockerCLIUpdateSuite) TestUpdateRunningContainer(c *testing.T) {
func (s *DockerCLIUpdateSuite) TestUpdateRunningContainerWithRestart(c *testing.T) {
testRequires(c, DaemonIsLinux)
testRequires(c, memoryLimitSupport)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const name = "test-update-container"
cli.DockerCmd(c, "run", "-d", "--name", name, "-m", "300M", "busybox", "top")
@@ -62,6 +65,7 @@ func (s *DockerCLIUpdateSuite) TestUpdateRunningContainerWithRestart(c *testing.
func (s *DockerCLIUpdateSuite) TestUpdateStoppedContainer(c *testing.T) {
testRequires(c, DaemonIsLinux)
testRequires(c, memoryLimitSupport)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const name = "test-update-container"
const file = "/sys/fs/cgroup/memory/memory.limit_in_bytes"
@@ -77,6 +81,7 @@ func (s *DockerCLIUpdateSuite) TestUpdateStoppedContainer(c *testing.T) {
func (s *DockerCLIUpdateSuite) TestUpdatePausedContainer(c *testing.T) {
testRequires(c, DaemonIsLinux)
testRequires(c, cpuShare)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const name = "test-update-container"
cli.DockerCmd(c, "run", "-d", "--name", name, "--cpu-shares", "1000", "busybox", "top")
@@ -95,6 +100,7 @@ func (s *DockerCLIUpdateSuite) TestUpdateWithUntouchedFields(c *testing.T) {
testRequires(c, DaemonIsLinux)
testRequires(c, memoryLimitSupport)
testRequires(c, cpuShare)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const name = "test-update-container"
cli.DockerCmd(c, "run", "-d", "--name", name, "-m", "300M", "--cpu-shares", "800", "busybox", "top")
@@ -135,6 +141,7 @@ func (s *DockerCLIUpdateSuite) TestUpdateSwapMemoryOnly(c *testing.T) {
testRequires(c, DaemonIsLinux)
testRequires(c, memoryLimitSupport)
testRequires(c, swapMemorySupport)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const name = "test-update-container"
cli.DockerCmd(c, "run", "-d", "--name", name, "--memory", "300M", "--memory-swap", "500M", "busybox", "top")
@@ -151,6 +158,7 @@ func (s *DockerCLIUpdateSuite) TestUpdateInvalidSwapMemory(c *testing.T) {
testRequires(c, DaemonIsLinux)
testRequires(c, memoryLimitSupport)
testRequires(c, swapMemorySupport)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const name = "test-update-container"
cli.DockerCmd(c, "run", "-d", "--name", name, "--memory", "300M", "--memory-swap", "500M", "busybox", "top")
@@ -244,6 +252,7 @@ func (s *DockerCLIUpdateSuite) TestUpdateNotAffectMonitorRestartPolicy(c *testin
func (s *DockerCLIUpdateSuite) TestUpdateWithNanoCPUs(c *testing.T) {
testRequires(c, cpuCfsQuota, cpuCfsPeriod)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
const file1 = "/sys/fs/cgroup/cpu/cpu.cfs_quota_us"
const file2 = "/sys/fs/cgroup/cpu/cpu.cfs_period_us"

View File

@@ -6,6 +6,7 @@ import (
"os"
"strings"
"github.com/containerd/cgroups/v3"
"github.com/docker/docker/pkg/sysinfo"
)
@@ -67,6 +68,11 @@ func bridgeNfIptables() bool {
return !sysInfo.BridgeNFCallIPTablesDisabled
}
func onlyCgroupsv2() bool {
// Only check for unified mode; cgroup v1 tests can run under other modes.
return cgroups.Mode() == cgroups.Unified
}
func unprivilegedUsernsClone() bool {
content, err := os.ReadFile("/proc/sys/kernel/unprivileged_userns_clone")
return err != nil || !strings.Contains(string(content), "0")

View File

@@ -2,3 +2,7 @@ package main
func setupLocalInfo() {
}
func onlyCgroupsv2() bool {
return false
}

View File

@@ -16,6 +16,7 @@ import (
"github.com/docker/docker/testutil/fakecontext"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/v3/poll"
"gotest.tools/v3/skip"
)
@@ -85,6 +86,8 @@ func TestBuildSquashParent(t *testing.T) {
container.WithImage(name),
container.WithCmd("/bin/sh", "-c", "cat /hello"),
)
poll.WaitOn(t, container.IsStopped(ctx, client, cid))
reader, err := client.ContainerLogs(ctx, cid, containertypes.LogsOptions{
ShowStdout: true,
})

View File

@@ -18,6 +18,7 @@ import (
"github.com/docker/docker/testutil/fakecontext"
"github.com/docker/docker/testutil/fixtures/load"
"gotest.tools/v3/assert"
"gotest.tools/v3/poll"
"gotest.tools/v3/skip"
)
@@ -117,6 +118,8 @@ func TestBuildUserNamespaceValidateCapabilitiesAreV2(t *testing.T) {
container.WithImage(imageTag),
container.WithCmd("/sbin/getcap", "-n", "/bin/sleep"),
)
poll.WaitOn(t, container.IsStopped(ctx, clientNoUserRemap, cid))
logReader, err := clientNoUserRemap.ContainerLogs(ctx, cid, containertypes.LogsOptions{
ShowStdout: true,
})

View File

@@ -16,6 +16,7 @@ import (
"github.com/docker/docker/testutil/daemon"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/v3/poll"
"gotest.tools/v3/skip"
)
@@ -54,6 +55,7 @@ func TestCreateWithCDIDevices(t *testing.T) {
}
assert.Check(t, is.DeepEqual(inspect.HostConfig.DeviceRequests, expectedRequests))
poll.WaitOn(t, container.IsStopped(ctx, apiClient, id))
reader, err := apiClient.ContainerLogs(ctx, id, containertypes.LogsOptions{
ShowStdout: true,
})

View File

@@ -7,6 +7,7 @@ import (
"io"
"os"
"path/filepath"
"strings"
"testing"
"github.com/docker/docker/api/types"
@@ -31,6 +32,8 @@ func TestCopyFromContainerPathDoesNotExist(t *testing.T) {
assert.Check(t, is.ErrorContains(err, "Could not find the file /dne in container "+cid))
}
// TestCopyFromContainerPathIsNotDir tests that an error is returned when
// trying to copy from a file path that has a trailing separator (i.e.,
// addressing a file as if it were a directory).
func TestCopyFromContainerPathIsNotDir(t *testing.T) {
skip.If(t, testEnv.UsingSnapshotter(), "FIXME: https://github.com/moby/moby/issues/47107")
ctx := setupTest(t)
@@ -38,14 +41,29 @@ func TestCopyFromContainerPathIsNotDir(t *testing.T) {
apiClient := testEnv.APIClient()
cid := container.Create(ctx, t, apiClient)
path := "/etc/passwd/"
expected := "not a directory"
// Pick a path that already exists as a file; on Linux "/etc/passwd"
// is expected to be there, so we pick that for convenience.
existingFile := "/etc/passwd/"
expected := []string{"not a directory"}
if testEnv.DaemonInfo.OSType == "windows" {
path = "c:/windows/system32/drivers/etc/hosts/"
expected = "The filename, directory name, or volume label syntax is incorrect."
existingFile = "c:/windows/system32/drivers/etc/hosts/"
// Depending on the version of Windows, this produces a "ERROR_INVALID_NAME" (Windows < 2025),
// or a "ERROR_DIRECTORY" (Windows 2025); https://learn.microsoft.com/en-us/windows/win32/debug/system-error-codes--0-499-
expected = []string{
"The directory name is invalid.", // ERROR_DIRECTORY
"The filename, directory name, or volume label syntax is incorrect.", // ERROR_INVALID_NAME
}
}
_, _, err := apiClient.CopyFromContainer(ctx, cid, path)
assert.Assert(t, is.ErrorContains(err, expected))
_, _, err := apiClient.CopyFromContainer(ctx, cid, existingFile)
var found bool
for _, expErr := range expected {
if err != nil && strings.Contains(err.Error(), expErr) {
found = true
break
}
}
assert.Check(t, found, "Expected error to be one of %v, but got %v", expected, err)
}
func TestCopyToContainerPathDoesNotExist(t *testing.T) {

View File

@@ -23,6 +23,7 @@ func TestDiff(t *testing.T) {
{Kind: containertypes.ChangeAdd, Path: "/foo/bar"},
}
poll.WaitOn(t, container.IsStopped(ctx, apiClient, cID))
items, err := apiClient.ContainerDiff(ctx, cID)
assert.NilError(t, err)
assert.DeepEqual(t, expected, items)

View File

@@ -0,0 +1,412 @@
package container // import "github.com/docker/docker/integration/container"
import (
"context"
"strings"
"testing"
"time"
"github.com/docker/docker/api/types"
containertypes "github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/volume"
"github.com/docker/docker/integration/internal/container"
"github.com/docker/docker/testutil"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
// TestWindowsProcessIsolation validates process isolation on Windows.
func TestWindowsProcessIsolation(t *testing.T) {
ctx := setupTest(t)
apiClient := testEnv.APIClient()
testcases := []struct {
name string
description string
validate func(t *testing.T, ctx context.Context, id string)
}{
{
name: "Process isolation basic container lifecycle",
description: "Validate container can start, run, and stop with process isolation",
validate: func(t *testing.T, ctx context.Context, id string) {
// Verify container is running
ctrInfo := container.Inspect(ctx, t, apiClient, id)
assert.Check(t, is.Equal(ctrInfo.State.Running, true))
assert.Check(t, is.Equal(ctrInfo.HostConfig.Isolation, containertypes.IsolationProcess))
execCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
res := container.ExecT(execCtx, t, apiClient, id, []string{"cmd", "/c", "echo", "test"})
assert.Check(t, is.Equal(res.ExitCode, 0))
assert.Check(t, strings.Contains(res.Stdout(), "test"))
},
},
{
name: "Process isolation filesystem access",
description: "Validate filesystem operations work correctly with process isolation",
validate: func(t *testing.T, ctx context.Context, id string) {
execCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
// Create a test file
res := container.ExecT(execCtx, t, apiClient, id,
[]string{"cmd", "/c", "echo test123 > C:\\testfile.txt"})
assert.Check(t, is.Equal(res.ExitCode, 0))
// Read the test file
execCtx2, cancel2 := context.WithTimeout(ctx, 10*time.Second)
defer cancel2()
res2 := container.ExecT(execCtx2, t, apiClient, id,
[]string{"cmd", "/c", "type", "C:\\testfile.txt"})
assert.Check(t, is.Equal(res2.ExitCode, 0))
assert.Check(t, strings.Contains(res2.Stdout(), "test123"))
},
},
{
name: "Process isolation network connectivity",
description: "Validate network connectivity works with process isolation",
validate: func(t *testing.T, ctx context.Context, id string) {
execCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
// Test localhost connectivity
res := container.ExecT(execCtx, t, apiClient, id,
[]string{"ping", "-n", "1", "-w", "3000", "localhost"})
assert.Check(t, is.Equal(res.ExitCode, 0))
assert.Check(t, strings.Contains(res.Stdout(), "Reply from") ||
strings.Contains(res.Stdout(), "Received = 1"))
},
},
{
name: "Process isolation environment variables",
description: "Validate environment variables are properly isolated",
validate: func(t *testing.T, ctx context.Context, id string) {
execCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
// Check that container has expected environment variables
res := container.ExecT(execCtx, t, apiClient, id,
[]string{"cmd", "/c", "set"})
assert.Check(t, is.Equal(res.ExitCode, 0))
// Should have Windows-specific environment variables
stdout := res.Stdout()
assert.Check(t, strings.Contains(stdout, "COMPUTERNAME") ||
strings.Contains(stdout, "OS=Windows"))
},
},
{
name: "Process isolation CPU access",
description: "Validate container can access CPU information",
validate: func(t *testing.T, ctx context.Context, id string) {
execCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
// Check NUMBER_OF_PROCESSORS environment variable
res := container.ExecT(execCtx, t, apiClient, id,
[]string{"cmd", "/c", "echo", "%NUMBER_OF_PROCESSORS%"})
assert.Check(t, is.Equal(res.ExitCode, 0))
// Should return a number
output := strings.TrimSpace(res.Stdout())
assert.Check(t, output != "" && output != "%NUMBER_OF_PROCESSORS%",
"NUMBER_OF_PROCESSORS not set")
},
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
ctx := testutil.StartSpan(ctx, t)
// Create and start container with process isolation
id := container.Run(ctx, t, apiClient,
container.WithIsolation(containertypes.IsolationProcess),
container.WithCmd("ping", "-t", "localhost"),
)
defer apiClient.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
tc.validate(t, ctx, id)
})
}
}
// TestWindowsHyperVIsolation validates Hyper-V isolation on Windows.
func TestWindowsHyperVIsolation(t *testing.T) {
ctx := setupTest(t)
apiClient := testEnv.APIClient()
testcases := []struct {
name string
description string
validate func(t *testing.T, ctx context.Context, id string)
}{
{
name: "Hyper-V isolation basic container lifecycle",
description: "Validate container can start, run, and stop with Hyper-V isolation",
validate: func(t *testing.T, ctx context.Context, id string) {
// Verify container is running
ctrInfo := container.Inspect(ctx, t, apiClient, id)
assert.Check(t, is.Equal(ctrInfo.State.Running, true))
assert.Check(t, is.Equal(ctrInfo.HostConfig.Isolation, containertypes.IsolationHyperV))
// Execute a simple command
execCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
res := container.ExecT(execCtx, t, apiClient, id, []string{"cmd", "/c", "echo", "hyperv-test"})
assert.Check(t, is.Equal(res.ExitCode, 0))
assert.Check(t, strings.Contains(res.Stdout(), "hyperv-test"))
},
},
{
name: "Hyper-V isolation filesystem operations",
description: "Validate filesystem isolation with Hyper-V",
validate: func(t *testing.T, ctx context.Context, id string) {
execCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
// Test file creation
res := container.ExecT(execCtx, t, apiClient, id,
[]string{"cmd", "/c", "echo hyperv-file > C:\\hvtest.txt"})
assert.Check(t, is.Equal(res.ExitCode, 0))
// Test file read
execCtx2, cancel2 := context.WithTimeout(ctx, 15*time.Second)
defer cancel2()
res2 := container.ExecT(execCtx2, t, apiClient, id,
[]string{"cmd", "/c", "type", "C:\\hvtest.txt"})
assert.Check(t, is.Equal(res2.ExitCode, 0))
assert.Check(t, strings.Contains(res2.Stdout(), "hyperv-file"))
},
},
{
name: "Hyper-V isolation network connectivity",
description: "Validate network works with Hyper-V isolation",
validate: func(t *testing.T, ctx context.Context, id string) {
execCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
// Test localhost connectivity
res := container.ExecT(execCtx, t, apiClient, id,
[]string{"ping", "-n", "1", "-w", "5000", "localhost"})
assert.Check(t, is.Equal(res.ExitCode, 0))
},
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
ctx := testutil.StartSpan(ctx, t)
// Create and start container with Hyper-V isolation
id := container.Run(ctx, t, apiClient,
container.WithIsolation(containertypes.IsolationHyperV),
container.WithCmd("ping", "-t", "localhost"),
)
defer apiClient.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
tc.validate(t, ctx, id)
})
}
}
// TestWindowsIsolationComparison validates that both isolation modes can coexist
// and that containers can be created with different isolation modes on Windows.
func TestWindowsIsolationComparison(t *testing.T) {
ctx := setupTest(t)
apiClient := testEnv.APIClient()
// Create container with process isolation
processID := container.Run(ctx, t, apiClient,
container.WithIsolation(containertypes.IsolationProcess),
container.WithCmd("ping", "-t", "localhost"),
)
defer apiClient.ContainerRemove(ctx, processID, containertypes.RemoveOptions{Force: true})
processInfo := container.Inspect(ctx, t, apiClient, processID)
assert.Check(t, is.Equal(processInfo.HostConfig.Isolation, containertypes.IsolationProcess))
assert.Check(t, is.Equal(processInfo.State.Running, true))
// Create container with Hyper-V isolation
hypervID := container.Run(ctx, t, apiClient,
container.WithIsolation(containertypes.IsolationHyperV),
container.WithCmd("ping", "-t", "localhost"),
)
defer apiClient.ContainerRemove(ctx, hypervID, containertypes.RemoveOptions{Force: true})
hypervInfo := container.Inspect(ctx, t, apiClient, hypervID)
assert.Check(t, is.Equal(hypervInfo.HostConfig.Isolation, containertypes.IsolationHyperV))
assert.Check(t, is.Equal(hypervInfo.State.Running, true))
// Verify both containers can run simultaneously
processInfo2 := container.Inspect(ctx, t, apiClient, processID)
hypervInfo2 := container.Inspect(ctx, t, apiClient, hypervID)
assert.Check(t, is.Equal(processInfo2.State.Running, true))
assert.Check(t, is.Equal(hypervInfo2.State.Running, true))
}
// TestWindowsProcessIsolationResourceConstraints validates that resource
// constraints work correctly with process isolation on Windows.
func TestWindowsProcessIsolationResourceConstraints(t *testing.T) {
ctx := setupTest(t)
apiClient := testEnv.APIClient()
testcases := []struct {
name string
cpuShares int64
nanoCPUs int64
memoryLimit int64
cpuCount int64
validateConfig func(t *testing.T, ctrInfo types.ContainerJSON)
}{
{
name: "CPU shares constraint - config only",
cpuShares: 512,
// Note: CPU shares are accepted by the API but NOT enforced on Windows;
// containers get equal CPU regardless of shares. This test only verifies
// that the configuration is stored correctly. Use NanoCPUs (the --cpus
// flag) for actual CPU limiting on Windows.
validateConfig: func(t *testing.T, ctrInfo types.ContainerJSON) {
assert.Check(t, is.Equal(ctrInfo.HostConfig.CPUShares, int64(512)))
},
},
{
name: "CPU limit (NanoCPUs) constraint",
nanoCPUs: 2000000000, // 2.0 CPUs
// NanoCPUs enforce hard CPU limits on Windows (unlike CPUShares which don't work)
validateConfig: func(t *testing.T, ctrInfo types.ContainerJSON) {
assert.Check(t, is.Equal(ctrInfo.HostConfig.NanoCPUs, int64(2000000000)))
},
},
{
name: "Memory limit constraint",
memoryLimit: 512 * 1024 * 1024, // 512MB
// Memory limits enforce hard limits on container memory usage
validateConfig: func(t *testing.T, ctrInfo types.ContainerJSON) {
assert.Check(t, is.Equal(ctrInfo.HostConfig.Memory, int64(512*1024*1024)))
},
},
{
name: "CPU count constraint",
cpuCount: 2,
// CPU count limits the number of CPUs available to the container
validateConfig: func(t *testing.T, ctrInfo types.ContainerJSON) {
assert.Check(t, is.Equal(ctrInfo.HostConfig.CPUCount, int64(2)))
},
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
ctx := testutil.StartSpan(ctx, t)
opts := []func(*container.TestContainerConfig){
container.WithIsolation(containertypes.IsolationProcess),
container.WithCmd("ping", "-t", "localhost"),
}
if tc.cpuShares > 0 {
opts = append(opts, func(config *container.TestContainerConfig) {
config.HostConfig.CPUShares = tc.cpuShares
})
}
if tc.nanoCPUs > 0 {
opts = append(opts, func(config *container.TestContainerConfig) {
config.HostConfig.NanoCPUs = tc.nanoCPUs
})
}
if tc.memoryLimit > 0 {
opts = append(opts, func(config *container.TestContainerConfig) {
config.HostConfig.Memory = tc.memoryLimit
})
}
if tc.cpuCount > 0 {
opts = append(opts, func(config *container.TestContainerConfig) {
config.HostConfig.CPUCount = tc.cpuCount
})
}
id := container.Run(ctx, t, apiClient, opts...)
defer apiClient.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
ctrInfo := container.Inspect(ctx, t, apiClient, id)
tc.validateConfig(t, ctrInfo)
})
}
}
// TestWindowsProcessIsolationVolumeMount validates volume mounting with process isolation on Windows.
func TestWindowsProcessIsolationVolumeMount(t *testing.T) {
ctx := setupTest(t)
apiClient := testEnv.APIClient()
volumeName := "process-iso-test-volume"
volRes, err := apiClient.VolumeCreate(ctx, volume.CreateOptions{
Name: volumeName,
})
assert.NilError(t, err)
defer func() {
// Force volume removal in case container cleanup fails
apiClient.VolumeRemove(ctx, volRes.Name, true)
}()
// Create container with volume mount
id := container.Run(ctx, t, apiClient,
container.WithIsolation(containertypes.IsolationProcess),
container.WithCmd("ping", "-t", "localhost"),
container.WithMount(mount.Mount{
Type: mount.TypeVolume,
Source: volumeName,
Target: "C:\\data",
}),
)
defer apiClient.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
// Write data to mounted volume
execCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
res := container.ExecT(execCtx, t, apiClient, id,
[]string{"cmd", "/c", "echo volume-test > C:\\data\\test.txt"})
assert.Check(t, is.Equal(res.ExitCode, 0))
// Read data from mounted volume
execCtx2, cancel2 := context.WithTimeout(ctx, 10*time.Second)
defer cancel2()
res2 := container.ExecT(execCtx2, t, apiClient, id,
[]string{"cmd", "/c", "type", "C:\\data\\test.txt"})
assert.Check(t, is.Equal(res2.ExitCode, 0))
assert.Check(t, strings.Contains(res2.Stdout(), "volume-test"))
// Verify container has volume mount
ctrInfo := container.Inspect(ctx, t, apiClient, id)
assert.Check(t, len(ctrInfo.Mounts) == 1)
assert.Check(t, is.Equal(ctrInfo.Mounts[0].Type, mount.TypeVolume))
assert.Check(t, is.Equal(ctrInfo.Mounts[0].Name, volumeName))
}
// TestWindowsHyperVIsolationResourceLimits validates that resource limits work with Hyper-V isolation.
// This ensures Windows can properly enforce resource constraints on Hyper-V containers.
func TestWindowsHyperVIsolationResourceLimits(t *testing.T) {
ctx := setupTest(t)
apiClient := testEnv.APIClient()
// Create container with memory limit
memoryLimit := int64(512 * 1024 * 1024) // 512MB
id := container.Run(ctx, t, apiClient,
container.WithIsolation(containertypes.IsolationHyperV),
container.WithCmd("ping", "-t", "localhost"),
func(config *container.TestContainerConfig) {
config.HostConfig.Memory = memoryLimit
},
)
defer apiClient.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
// Verify resource limit is set
ctrInfo := container.Inspect(ctx, t, apiClient, id)
assert.Check(t, is.Equal(ctrInfo.HostConfig.Memory, memoryLimit))
assert.Check(t, is.Equal(ctrInfo.HostConfig.Isolation, containertypes.IsolationHyperV))
}

View File

@@ -392,6 +392,37 @@ func TestContainerVolumesMountedAsSlave(t *testing.T) {
}
}
// TestContainerVolumeAnonymous verifies that anonymous volumes created through
// the Mounts API get a randomly generated name and have the "AnonymousLabel"
// (com.docker.volume.anonymous) label set.
//
// regression test for https://github.com/moby/moby/issues/48748
func TestContainerVolumeAnonymous(t *testing.T) {
skip.If(t, testEnv.IsRemoteDaemon)
ctx := setupTest(t)
mntOpts := mounttypes.Mount{Type: mounttypes.TypeVolume, Target: "/foo"}
apiClient := testEnv.APIClient()
cID := container.Create(ctx, t, apiClient, container.WithMount(mntOpts))
inspect := container.Inspect(ctx, t, apiClient, cID)
assert.Assert(t, is.Len(inspect.HostConfig.Mounts, 1))
assert.Check(t, is.Equal(inspect.HostConfig.Mounts[0], mntOpts))
assert.Assert(t, is.Len(inspect.Mounts, 1))
volName := inspect.Mounts[0].Name
assert.Check(t, is.Len(volName, 64), "volume name should be 64 bytes (from stringid.GenerateRandomID())")
volInspect, err := apiClient.VolumeInspect(ctx, volName)
assert.NilError(t, err)
// see [daemon.AnonymousLabel]; we don't want to import the daemon package here.
const expectedAnonymousLabel = "com.docker.volume.anonymous"
assert.Check(t, is.Contains(volInspect.Labels, expectedAnonymousLabel))
}
// Regression test for #38995 and #43390.
func TestContainerCopyLeaksMounts(t *testing.T) {
ctx := setupTest(t)

View File

@@ -6,6 +6,7 @@ import (
"fmt"
"io"
"net"
"strconv"
"strings"
"testing"
"time"
@@ -25,13 +26,25 @@ func TestNetworkNat(t *testing.T) {
ctx := setupTest(t)
msg := "it works"
startServerContainer(ctx, t, msg, 8080)
const msg = "it works"
const port = 8080
startServerContainer(ctx, t, msg, port)
endpoint := getExternalAddress(t)
conn, err := net.Dial("tcp", net.JoinHostPort(endpoint.String(), "8080"))
assert.NilError(t, err)
defer conn.Close()
var conn net.Conn
addr := net.JoinHostPort(endpoint.String(), strconv.Itoa(port))
poll.WaitOn(t, func(t poll.LogT) poll.Result {
var err error
conn, err = net.Dial("tcp", addr)
if err != nil {
return poll.Continue("waiting for %s to be accessible: %v", addr, err)
}
return poll.Success()
})
defer func() {
assert.Check(t, conn.Close())
}()
data, err := io.ReadAll(conn)
assert.NilError(t, err)
@@ -43,12 +56,23 @@ func TestNetworkLocalhostTCPNat(t *testing.T) {
ctx := setupTest(t)
msg := "hi yall"
startServerContainer(ctx, t, msg, 8081)
const msg = "hi yall"
const port = 8081
startServerContainer(ctx, t, msg, port)
conn, err := net.Dial("tcp", "localhost:8081")
assert.NilError(t, err)
defer conn.Close()
var conn net.Conn
addr := net.JoinHostPort("localhost", strconv.Itoa(port))
poll.WaitOn(t, func(t poll.LogT) poll.Result {
var err error
conn, err = net.Dial("tcp", addr)
if err != nil {
return poll.Continue("waiting for %s to be accessible: %v", addr, err)
}
return poll.Success()
})
defer func() {
assert.Check(t, conn.Close())
}()
data, err := io.ReadAll(conn)
assert.NilError(t, err)

View File

@@ -10,6 +10,7 @@ import (
"github.com/docker/docker/testutil/daemon"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/v3/poll"
"gotest.tools/v3/skip"
)
@@ -61,7 +62,8 @@ func TestUsernsCommit(t *testing.T) {
clientUserRemap := dUserRemap.NewClientT(t)
defer clientUserRemap.Close()
container.Run(ctx, t, clientUserRemap, container.WithName(t.Name()), container.WithImage("busybox"), container.WithCmd("sh", "-c", "echo hello world > /hello.txt && chown 1000:1000 /hello.txt"))
cID := container.Run(ctx, t, clientUserRemap, container.WithName(t.Name()), container.WithImage("busybox"), container.WithCmd("sh", "-c", "echo hello world > /hello.txt && chown 1000:1000 /hello.txt"))
poll.WaitOn(t, container.IsStopped(ctx, clientUserRemap, cID))
img, err := clientUserRemap.ContainerCommit(ctx, t.Name(), containertypes.CommitOptions{})
assert.NilError(t, err)

View File

@@ -1,6 +1,7 @@
package container
import (
"slices"
"strings"
"github.com/docker/docker/api/types/container"
@@ -56,6 +57,16 @@ func WithExposedPorts(ports ...string) func(*TestContainerConfig) {
}
}
// WithPortMap sets/replaces port mappings.
func WithPortMap(pm nat.PortMap) func(*TestContainerConfig) {
return func(c *TestContainerConfig) {
c.HostConfig.PortBindings = nat.PortMap{}
for p, b := range pm {
c.HostConfig.PortBindings[p] = slices.Clone(b)
}
}
}
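// Design note (illustrative): the bindings are cloned so that a caller which
// reuses or mutates its nat.PortMap after creating the container cannot alias
// the container's HostConfig. Hypothetical usage:
//
//	pm := nat.PortMap{"80/tcp": {{HostPort: "8000"}}}
//	id := container.Run(ctx, t, apiClient, container.WithPortMap(pm))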
// WithTty sets the TTY mode of the container
func WithTty(tty bool) func(*TestContainerConfig) {
return func(c *TestContainerConfig) {

View File

@@ -6,11 +6,17 @@ import (
"testing"
"time"
containertypes "github.com/docker/docker/api/types/container"
networktypes "github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/versions"
ctr "github.com/docker/docker/integration/internal/container"
"github.com/docker/docker/integration/internal/network"
"github.com/docker/docker/internal/testutils/networking"
"github.com/docker/docker/libnetwork/drivers/bridge"
"github.com/docker/docker/testutil/daemon"
"github.com/docker/go-connections/nat"
"gotest.tools/v3/assert"
"gotest.tools/v3/icmd"
"gotest.tools/v3/skip"
)
@@ -43,3 +49,62 @@ func TestCreateWithMultiNetworks(t *testing.T) {
ifacesWithAddress := strings.Count(res.Stdout.String(), "\n")
assert.Equal(t, ifacesWithAddress, 3)
}
// TestFirewalldReloadNoZombies checks that when firewalld is reloaded, rules
// belonging to deleted networks/containers do not reappear.
func TestFirewalldReloadNoZombies(t *testing.T) {
skip.If(t, testEnv.DaemonInfo.OSType == "windows")
skip.If(t, !networking.FirewalldRunning(), "firewalld is not running")
skip.If(t, testEnv.IsRootless, "no firewalld in rootless netns")
ctx := setupTest(t)
d := daemon.New(t)
d.StartWithBusybox(ctx, t)
defer d.Stop(t)
c := d.NewClientT(t)
const bridgeName = "br-fwdreload"
removed := false
nw := network.CreateNoError(ctx, t, c, "testnet",
network.WithOption(bridge.BridgeName, bridgeName))
defer func() {
if !removed {
network.RemoveNoError(ctx, t, c, nw)
}
}()
cid := ctr.Run(ctx, t, c,
ctr.WithExposedPorts("80/tcp", "81/tcp"),
ctr.WithPortMap(nat.PortMap{"80/tcp": {{HostPort: "8000"}}}))
defer func() {
if !removed {
ctr.Remove(ctx, t, c, cid, containertypes.RemoveOptions{Force: true})
}
}()
iptablesSave := icmd.Command("iptables-save")
resBeforeDel := icmd.RunCmd(iptablesSave)
assert.NilError(t, resBeforeDel.Error)
assert.Check(t, strings.Contains(resBeforeDel.Combined(), bridgeName),
"With container: expected rules for %s in: %s", bridgeName, resBeforeDel.Combined())
// Delete the container and its network.
ctr.Remove(ctx, t, c, cid, containertypes.RemoveOptions{Force: true})
network.RemoveNoError(ctx, t, c, nw)
removed = true
// Check the network does not appear in iptables rules.
resAfterDel := icmd.RunCmd(iptablesSave)
assert.NilError(t, resAfterDel.Error)
assert.Check(t, !strings.Contains(resAfterDel.Combined(), bridgeName),
"After deletes: did not expect rules for %s in: %s", bridgeName, resAfterDel.Combined())
// firewall-cmd --reload, and wait for the daemon to restore rules.
networking.FirewalldReload(t, d)
// Check that rules for the deleted container/network have not reappeared.
resAfterReload := icmd.RunCmd(iptablesSave)
assert.NilError(t, resAfterReload.Error)
assert.Check(t, !strings.Contains(resAfterReload.Combined(), bridgeName),
"After deletes: did not expect rules for %s in: %s", bridgeName, resAfterReload.Combined())
}

View File

@@ -10,6 +10,7 @@ import (
containertypes "github.com/docker/docker/api/types/container"
"github.com/docker/docker/integration/internal/container"
"github.com/docker/docker/integration/internal/network"
"github.com/docker/docker/internal/testutils/networking"
"github.com/docker/docker/testutil"
"github.com/docker/docker/testutil/daemon"
"gotest.tools/v3/assert"
@@ -160,6 +161,8 @@ func TestBridgeICC(t *testing.T) {
Force: true,
})
networking.FirewalldReload(t, d)
pingHost := tc.pingHost
if pingHost == "" {
if tc.linkLocal {
@@ -235,7 +238,7 @@ func TestBridgeICCWindows(t *testing.T) {
pingCmd := []string{"ping", "-n", "1", "-w", "3000", ctr1Name}
const ctr2Name = "ctr2"
attachCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
attachCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
res := container.RunAttach(attachCtx, t, c,
container.WithName(ctr2Name),
@@ -351,6 +354,7 @@ func TestBridgeINC(t *testing.T) {
defer c.ContainerRemove(ctx, id1, containertypes.RemoveOptions{
Force: true,
})
networking.FirewalldReload(t, d)
ctr1Info := container.Inspect(ctx, t, c, id1)
targetAddr := ctr1Info.NetworkSettings.Networks[bridge1].IPAddress
@@ -575,6 +579,7 @@ func TestInternalNwConnectivity(t *testing.T) {
container.WithNetworkMode(bridgeName),
)
defer c.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
networking.FirewalldReload(t, d)
execCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()

View File

@@ -0,0 +1,378 @@
package networking
import (
"context"
"fmt"
"io"
"net/http"
"strings"
"testing"
"time"
"github.com/docker/docker/api/types"
containertypes "github.com/docker/docker/api/types/container"
"github.com/docker/docker/integration/internal/container"
"github.com/docker/docker/integration/internal/network"
"github.com/docker/docker/testutil"
"github.com/docker/go-connections/nat"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/v3/poll"
"gotest.tools/v3/skip"
)
// TestWindowsNetworkDrivers validates the Windows-specific network drivers:
// NAT, Transparent, and L2Bridge.
func TestWindowsNetworkDrivers(t *testing.T) {
ctx := setupTest(t)
c := testEnv.APIClient()
testcases := []struct {
name string
driver string
}{
{
// NAT connectivity is already tested in TestBridgeICCWindows (bridge_test.go),
// so we only validate network creation here.
name: "NAT driver network creation",
driver: "nat",
},
{
// Only test creation of a Transparent driver network, connectivity depends on external
// network infrastructure.
name: "Transparent driver network creation",
driver: "transparent",
},
{
// L2Bridge driver requires specific host network adapter configuration, test will skip
// if host configuration is missing.
name: "L2Bridge driver network creation",
driver: "l2bridge",
},
}
for tcID, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
ctx := testutil.StartSpan(ctx, t)
netName := fmt.Sprintf("test-%s-%d", tc.driver, tcID)
// Create network with specified driver
netResp, err := c.NetworkCreate(ctx, netName, types.NetworkCreate{
Driver: tc.driver,
})
if err != nil {
// L2Bridge may fail if host network configuration is not available
if tc.driver == "l2bridge" {
errStr := strings.ToLower(err.Error())
if strings.Contains(errStr, "the network does not have a subnet for this endpoint") {
t.Skipf("Driver %s requires host network configuration: %v", tc.driver, err)
}
}
t.Fatalf("Failed to create network with %s driver: %v", tc.driver, err)
}
defer network.RemoveNoError(ctx, t, c, netName)
// Inspect network to validate driver is correctly set
netInfo, err := c.NetworkInspect(ctx, netResp.ID, types.NetworkInspectOptions{})
assert.NilError(t, err)
assert.Check(t, is.Equal(netInfo.Driver, tc.driver), "Network driver mismatch")
assert.Check(t, is.Equal(netInfo.Name, netName), "Network name mismatch")
})
}
}
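For orientation (an assumption drawn from general HNS documentation, not from this diff): each libnetwork driver name exercised above corresponds to an HNS network type on the Windows host.

// Informal reference mapping; the values are HNS network types.
var hnsNetworkType = map[string]string{
	"nat":         "NAT",
	"transparent": "Transparent",
	"l2bridge":    "L2Bridge",
}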
// TestWindowsNATDriverPortMapping validates NAT port mapping by testing host connectivity.
func TestWindowsNATDriverPortMapping(t *testing.T) {
ctx := setupTest(t)
c := testEnv.APIClient()
// Use default NAT network which supports port mapping
netName := "nat"
// PowerShell HTTP listener on port 80
psScript := `
$listener = New-Object System.Net.HttpListener
$listener.Prefixes.Add('http://+:80/')
$listener.Start()
while ($listener.IsListening) {
$context = $listener.GetContext()
$response = $context.Response
$content = [System.Text.Encoding]::UTF8.GetBytes('OK')
$response.ContentLength64 = $content.Length
$response.OutputStream.Write($content, 0, $content.Length)
$response.OutputStream.Close()
}
`
// Create container with port mapping 80->8080
ctrName := "port-mapping-test"
id := container.Run(ctx, t, c,
container.WithName(ctrName),
container.WithCmd("powershell", "-Command", psScript),
container.WithNetworkMode(netName),
container.WithExposedPorts("80/tcp"),
container.WithPortMap(nat.PortMap{
"80/tcp": []nat.PortBinding{{HostPort: "8080"}},
}),
)
defer c.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
// Verify port mapping metadata
ctrInfo := container.Inspect(ctx, t, c, id)
portKey := nat.Port("80/tcp")
assert.Check(t, ctrInfo.NetworkSettings.Ports[portKey] != nil, "Port mapping not found")
assert.Check(t, len(ctrInfo.NetworkSettings.Ports[portKey]) > 0, "No host port binding")
assert.Check(t, is.Equal(ctrInfo.NetworkSettings.Ports[portKey][0].HostPort, "8080"))
// Test actual connectivity from host to container via mapped port
httpClient := &http.Client{Timeout: 2 * time.Second}
checkHTTP := func(t poll.LogT) poll.Result {
resp, err := httpClient.Get("http://localhost:8080")
if err != nil {
return poll.Continue("connection failed: %v", err)
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return poll.Continue("failed to read body: %v", err)
}
if !strings.Contains(string(body), "OK") {
return poll.Continue("unexpected response body: %s", string(body))
}
return poll.Success()
}
poll.WaitOn(t, checkHTTP, poll.WithTimeout(10*time.Second))
}
// TestWindowsNetworkDNSResolution validates DNS resolution on Windows networks.
func TestWindowsNetworkDNSResolution(t *testing.T) {
ctx := setupTest(t)
c := testEnv.APIClient()
testcases := []struct {
name string
driver string
customDNS bool
dnsServers []string
}{
{
name: "Default NAT network DNS resolution",
driver: "nat",
},
{
name: "Custom DNS servers on NAT network",
driver: "nat",
customDNS: true,
dnsServers: []string{"8.8.8.8", "8.8.4.4"},
},
}
for tcID, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
ctx := testutil.StartSpan(ctx, t)
netName := fmt.Sprintf("test-dns-%s-%d", tc.driver, tcID)
// Create network with optional custom DNS
netOpts := []func(*types.NetworkCreate){
network.WithDriver(tc.driver),
}
if tc.customDNS {
// On Windows, DNS servers are set via the HNS shim network option,
// which takes a single comma-separated list; appending the same
// option key once per server would overwrite the previous value.
netOpts = append(netOpts, network.WithOption("com.docker.network.windowsshim.dnsservers", strings.Join(tc.dnsServers, ",")))
}
network.CreateNoError(ctx, t, c, netName, netOpts...)
defer network.RemoveNoError(ctx, t, c, netName)
// Create container and verify DNS resolution
ctrName := fmt.Sprintf("dns-test-%d", tcID)
id := container.Run(ctx, t, c,
container.WithName(ctrName),
container.WithNetworkMode(netName),
)
defer c.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
// Test DNS resolution by pinging container by name from another container
pingCmd := []string{"ping", "-n", "1", "-w", "3000", ctrName}
attachCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
res := container.RunAttach(attachCtx, t, c,
container.WithCmd(pingCmd...),
container.WithNetworkMode(netName),
)
defer c.ContainerRemove(ctx, res.ContainerID, containertypes.RemoveOptions{Force: true})
assert.Check(t, is.Equal(res.ExitCode, 0), "DNS resolution failed")
assert.Check(t, is.Contains(res.Stdout.String(), "Sent = 1, Received = 1, Lost = 0"))
})
}
}
// TestWindowsNetworkLifecycle validates network lifecycle operations on Windows.
// Tests network creation, container attachment, detachment, and deletion.
func TestWindowsNetworkLifecycle(t *testing.T) {
// Skip this test on Windows Containerd because NetworkConnect operations fail with an
// unsupported platform request error:
// https://github.com/moby/moby/issues/51589
skip.If(t, testEnv.RuntimeIsWindowsContainerd(),
"Skipping test: fails on Containerd due to unsupported platform request error during NetworkConnect operations")
ctx := setupTest(t)
c := testEnv.APIClient()
netName := "lifecycle-test-nat"
netID := network.CreateNoError(ctx, t, c, netName,
network.WithDriver("nat"),
)
netInfo, err := c.NetworkInspect(ctx, netID, types.NetworkInspectOptions{})
assert.NilError(t, err)
assert.Check(t, is.Equal(netInfo.Name, netName))
// Create container on network
ctrName := "lifecycle-ctr"
id := container.Run(ctx, t, c,
container.WithName(ctrName),
container.WithNetworkMode(netName),
)
ctrInfo := container.Inspect(ctx, t, c, id)
assert.Check(t, ctrInfo.NetworkSettings.Networks[netName] != nil)
// Disconnect container from network
err = c.NetworkDisconnect(ctx, netID, id, false)
assert.NilError(t, err)
ctrInfo = container.Inspect(ctx, t, c, id)
assert.Check(t, ctrInfo.NetworkSettings.Networks[netName] == nil, "Container still connected after disconnect")
// Reconnect container to network
err = c.NetworkConnect(ctx, netID, id, nil)
assert.NilError(t, err)
ctrInfo = container.Inspect(ctx, t, c, id)
assert.Check(t, ctrInfo.NetworkSettings.Networks[netName] != nil, "Container not reconnected")
c.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
network.RemoveNoError(ctx, t, c, netName)
_, err = c.NetworkInspect(ctx, netID, types.NetworkInspectOptions{})
assert.Check(t, err != nil, "Network still exists after deletion")
}
// TestWindowsNetworkIsolation validates network isolation between containers on different networks.
// Ensures containers on different networks cannot communicate, validating Windows network driver isolation.
func TestWindowsNetworkIsolation(t *testing.T) {
ctx := setupTest(t)
c := testEnv.APIClient()
// Create two separate NAT networks
net1Name := "isolation-net1"
net2Name := "isolation-net2"
network.CreateNoError(ctx, t, c, net1Name, network.WithDriver("nat"))
defer network.RemoveNoError(ctx, t, c, net1Name)
network.CreateNoError(ctx, t, c, net2Name, network.WithDriver("nat"))
defer network.RemoveNoError(ctx, t, c, net2Name)
// Create container on first network
ctr1Name := "isolated-ctr1"
id1 := container.Run(ctx, t, c,
container.WithName(ctr1Name),
container.WithNetworkMode(net1Name),
)
defer c.ContainerRemove(ctx, id1, containertypes.RemoveOptions{Force: true})
ctr1Info := container.Inspect(ctx, t, c, id1)
ctr1IP := ctr1Info.NetworkSettings.Networks[net1Name].IPAddress
assert.Check(t, ctr1IP != "", "Container IP not assigned")
// Create container on second network and try to ping first container
pingCmd := []string{"ping", "-n", "1", "-w", "2000", ctr1IP}
attachCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
res := container.RunAttach(attachCtx, t, c,
container.WithCmd(pingCmd...),
container.WithNetworkMode(net2Name),
)
defer c.ContainerRemove(ctx, res.ContainerID, containertypes.RemoveOptions{Force: true})
// Ping should fail, demonstrating network isolation
assert.Check(t, res.ExitCode != 0, "Ping succeeded unexpectedly - networks are not isolated")
// Windows ping failure can have various error messages, but we should see some indication of failure
stdout := res.Stdout.String()
stderr := res.Stderr.String()
// Check for common Windows ping failure indicators
hasFailureIndicator := strings.Contains(stdout, "Destination host unreachable") ||
strings.Contains(stdout, "Request timed out") ||
strings.Contains(stdout, "100% loss") ||
strings.Contains(stdout, "Lost = 1") ||
strings.Contains(stderr, "unreachable") ||
strings.Contains(stderr, "timeout")
assert.Check(t, hasFailureIndicator,
"Expected ping failure indicators not found. Exit code: %d, stdout: %q, stderr: %q",
res.ExitCode, stdout, stderr)
}
// TestWindowsNetworkEndpointManagement validates endpoint creation and management on Windows networks.
// Tests that multiple containers can be created and managed on the same network.
func TestWindowsNetworkEndpointManagement(t *testing.T) {
ctx := setupTest(t)
c := testEnv.APIClient()
netName := "endpoint-test-nat"
network.CreateNoError(ctx, t, c, netName, network.WithDriver("nat"))
defer network.RemoveNoError(ctx, t, c, netName)
// Create multiple containers on the same network
const numContainers = 3
containerIDs := make([]string, numContainers)
for i := 0; i < numContainers; i++ {
ctrName := fmt.Sprintf("endpoint-ctr-%d", i)
id := container.Run(ctx, t, c,
container.WithName(ctrName),
container.WithNetworkMode(netName),
)
containerIDs[i] = id
defer c.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
}
netInfo, err := c.NetworkInspect(ctx, netName, types.NetworkInspectOptions{})
assert.NilError(t, err)
assert.Check(t, is.Equal(len(netInfo.Containers), numContainers),
"Expected %d containers, got %d", numContainers, len(netInfo.Containers))
// Verify each container has network connectivity to others
for i := 0; i < numContainers-1; i++ {
targetName := fmt.Sprintf("endpoint-ctr-%d", i)
pingCmd := []string{"ping", "-n", "1", "-w", "3000", targetName}
sourceName := fmt.Sprintf("endpoint-ctr-%d", i+1)
attachCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
res := container.RunAttach(attachCtx, t, c,
container.WithName(fmt.Sprintf("%s-pinger", sourceName)),
container.WithCmd(pingCmd...),
container.WithNetworkMode(netName),
)
defer c.ContainerRemove(ctx, res.ContainerID, containertypes.RemoveOptions{Force: true})
assert.Check(t, is.Equal(res.ExitCode, 0),
"Container %s failed to ping %s", sourceName, targetName)
}
}

View File

@@ -9,10 +9,12 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
testContainer "github.com/docker/docker/integration/internal/container"
"github.com/docker/docker/pkg/stdcopy"
"github.com/docker/docker/testutil"
"github.com/docker/docker/testutil/daemon"
"gotest.tools/v3/assert"
"gotest.tools/v3/poll"
)
// TestReadPluginNoRead tests that reads are supported even if the plugin isn't capable.
@@ -65,6 +67,7 @@ func TestReadPluginNoRead(t *testing.T) {
err = client.ContainerStart(ctx, c.ID, container.StartOptions{})
assert.Assert(t, err)
poll.WaitOn(t, testContainer.IsStopped(ctx, client, c.ID))
logs, err := client.ContainerLogs(ctx, c.ID, container.LogsOptions{ShowStdout: true})
if !test.logsSupported {
assert.Assert(t, err != nil)

View File

@@ -0,0 +1,71 @@
package service
import (
stdnet "net"
"strings"
"testing"
"time"
swarmtypes "github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/integration/internal/swarm"
"github.com/docker/docker/internal/testutils/networking"
"gotest.tools/v3/assert"
"gotest.tools/v3/icmd"
"gotest.tools/v3/poll"
"gotest.tools/v3/skip"
)
func TestRestoreIngressRulesOnFirewalldReload(t *testing.T) {
skip.If(t, testEnv.IsRemoteDaemon)
skip.If(t, testEnv.IsRootless, "rootless mode doesn't support Swarm-mode")
//skip.If(t, testEnv.FirewallBackendDriver() == "iptables")
skip.If(t, !networking.FirewalldRunning(), "Need firewalld to test restoration of ingress rules")
ctx := setupTest(t)
// Check the published port is accessible.
checkHTTP := func(_ poll.LogT) poll.Result {
res := icmd.RunCommand("curl", "-v", "-o", "/dev/null", "-w", "%{http_code}\n",
"http://"+stdnet.JoinHostPort("localhost", "8080"))
// A "404 Not Found" means the server responded, but it's got nothing to serve.
if !strings.Contains(res.Stdout(), "404") {
return poll.Continue("404 - not found in: %s, %+v", res.Stdout(), res)
}
return poll.Success()
}
d := swarm.NewSwarm(ctx, t, testEnv)
defer d.Stop(t)
c := d.NewClientT(t)
defer c.Close()
serviceID := swarm.CreateService(ctx, t, d,
swarm.ServiceWithName("test-ingress-on-firewalld-reload"),
swarm.ServiceWithCommand([]string{"httpd", "-f"}),
swarm.ServiceWithEndpoint(&swarmtypes.EndpointSpec{
Ports: []swarmtypes.PortConfig{
{
Protocol: "tcp",
TargetPort: 80,
PublishedPort: 8080,
PublishMode: swarmtypes.PortConfigPublishModeIngress,
},
},
}),
)
defer func() {
err := c.ServiceRemove(ctx, serviceID)
assert.NilError(t, err)
}()
t.Log("Waiting for the service to start")
poll.WaitOn(t, swarm.RunningTasksCount(ctx, c, serviceID, 1), swarm.ServicePoll)
t.Log("Checking http access to the service")
poll.WaitOn(t, checkHTTP, poll.WithTimeout(30*time.Second))
t.Log("Firewalld reload")
networking.FirewalldReload(t, d)
t.Log("Checking http access to the service")
// It takes a while before this works ...
poll.WaitOn(t, checkHTTP, poll.WithTimeout(30*time.Second))
}

View File

@@ -0,0 +1,32 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.23
package iterutil
import (
"iter"
"maps"
)
// SameValues reports whether a and b yield the same multiset of values, independent of order.
func SameValues[T comparable](a, b iter.Seq[T]) bool {
m, n := make(map[T]int), make(map[T]int)
for v := range a {
m[v]++
}
for v := range b {
n[v]++
}
return maps.Equal(m, n)
}
// Deref adapts an iterator of pointers to an iterator of values.
func Deref[T any, P *T](s iter.Seq[P]) iter.Seq[T] {
return func(yield func(T) bool) {
for p := range s {
if !yield(*p) {
return
}
}
}
}
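A minimal usage sketch for the two helpers above, mirroring how EquivalentTo later in this diff compares []*PortConfig slices order-insensitively (the import path is the one added in agent.go below):

package main

import (
	"fmt"
	"slices"

	"github.com/docker/docker/internal/iterutil"
)

func main() {
	x, y := 80, 443
	a := []*int{&x, &y}
	b := []*int{&y, &x}
	// Deref turns the iterators of *int into iterators of int;
	// SameValues then compares the yielded values as multisets,
	// ignoring order.
	fmt.Println(iterutil.SameValues(
		iterutil.Deref(slices.Values(a)),
		iterutil.Deref(slices.Values(b)))) // true
}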

View File

@@ -0,0 +1,31 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.23
package iterutil
import (
"slices"
"testing"
"gotest.tools/v3/assert"
)
func TestSameValues(t *testing.T) {
a := []int{1, 2, 3, 4, 3}
b := []int{3, 4, 3, 2, 1}
c := []int{1, 2, 3, 4}
assert.Check(t, SameValues(slices.Values(a), slices.Values(a)))
assert.Check(t, SameValues(slices.Values(c), slices.Values(c)))
assert.Check(t, SameValues(slices.Values(a), slices.Values(b)))
assert.Check(t, !SameValues(slices.Values(a), slices.Values(c)))
}
func TestDeref(t *testing.T) {
a := make([]*int, 3)
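// Go's per-iteration loop variables (guaranteed here by the go1.23
// build constraint) make each &i below a pointer to a distinct value.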
for i := range a {
a[i] = &i
}
b := slices.Collect(Deref(slices.Values(a)))
assert.DeepEqual(t, b, []int{0, 1, 2})
}

View File

@@ -1,5 +1,5 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.19
//go:build go1.23
package sliceutil

View File

@@ -0,0 +1,60 @@
package networking
import (
"fmt"
"os/exec"
"regexp"
"strings"
"testing"
"time"
"github.com/docker/docker/testutil/daemon"
"golang.org/x/net/context"
"gotest.tools/v3/assert"
"gotest.tools/v3/icmd"
"gotest.tools/v3/poll"
)
func FirewalldRunning() bool {
state, err := exec.Command("firewall-cmd", "--state").CombinedOutput()
return err == nil && strings.TrimSpace(string(state)) == "running"
}
func extractLogTime(s string) (time.Time, error) {
// time="2025-07-15T13:46:13.414214418Z" level=info msg=""
re := regexp.MustCompile(`time="([^"]+)"`)
matches := re.FindStringSubmatch(s)
if len(matches) < 2 {
return time.Time{}, fmt.Errorf("timestamp not found in log line: %s, matches: %+v", s, matches)
}
return time.Parse(time.RFC3339Nano, matches[1])
}
// FirewalldReload reloads firewalld and waits for the daemon to re-create its rules.
// It's a no-op if firewalld is not running, and the test fails if the reload does
// not complete.
func FirewalldReload(t *testing.T, d *daemon.Daemon) {
t.Helper()
if !FirewalldRunning() {
return
}
timeBeforeReload := time.Now()
res := icmd.RunCommand("firewall-cmd", "--reload")
assert.NilError(t, res.Error)
ctx := context.Background()
poll.WaitOn(t, d.PollCheckLogs(ctx, func(s string) bool {
if !strings.Contains(s, "Firewalld reload completed") {
return false
}
lastReload, err := extractLogTime(s)
if err != nil {
return false
}
// Only accept a "Firewalld reload completed" message logged after this
// reload started, so that a message from an earlier reload cannot
// satisfy the poll.
return lastReload.After(timeBeforeReload)
}))
}

View File

@@ -0,0 +1,47 @@
package networking
import (
"reflect"
"testing"
"time"
)
func Test_extractLogTime(t *testing.T) {
tests := []struct {
name string
s string
want time.Time
wantErr bool
}{
{
name: "valid time",
s: `time="2025-07-15T13:46:13.414214418Z" level=info msg=""`,
want: time.Date(2025, 7, 15, 13, 46, 13, 414214418, time.UTC),
wantErr: false,
},
{
name: "invalid format",
s: `time="invalid-time-format" level=info msg=""`,
want: time.Time{},
wantErr: true,
},
{
name: "missing time",
s: `level=info msg=""`,
want: time.Time{},
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := extractLogTime(tt.s)
if (err != nil) != tt.wantErr {
t.Errorf("getTimeFromLogMsg() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("getTimeFromLogMsg() got = %v, want %v", got, tt.want)
}
})
}
}

View File

@@ -1,3 +1,6 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.23
package libnetwork
//go:generate protoc -I=. -I=../vendor/ --gogofaster_out=import_path=github.com/docker/docker/libnetwork:. agent.proto
@@ -7,10 +10,13 @@ import (
"encoding/json"
"fmt"
"net"
"net/netip"
"slices"
"sort"
"sync"
"github.com/containerd/log"
"github.com/docker/docker/internal/iterutil"
"github.com/docker/docker/libnetwork/cluster"
"github.com/docker/docker/libnetwork/discoverapi"
"github.com/docker/docker/libnetwork/driverapi"
@@ -490,17 +496,19 @@ func (n *Network) Services() map[string]ServiceInfo {
// Walk through the driver's tables, have the driver decode the entries
// and return the tuple {ep ID, value}. value is a string that coveys
// relevant info about the endpoint.
for _, table := range n.driverTables {
if table.objType != driverapi.EndpointObject {
continue
}
for key, value := range agent.networkDB.GetTableByNetwork(table.name, nwID) {
epID, info := d.DecodeTableEntry(table.name, key, value.Value)
if ep, ok := eps[epID]; !ok {
log.G(context.TODO()).Errorf("Inconsistent driver and libnetwork state for endpoint %s", epID)
} else {
ep.info = info
eps[epID] = ep
if d, ok := d.(driverapi.TableWatcher); ok {
for _, table := range n.driverTables {
if table.objType != driverapi.EndpointObject {
continue
}
for key, value := range agent.networkDB.GetTableByNetwork(table.name, nwID) {
epID, info := d.DecodeTableEntry(table.name, key, value.Value)
if ep, ok := eps[epID]; !ok {
log.G(context.TODO()).Errorf("Inconsistent driver and libnetwork state for endpoint %s", epID)
} else {
ep.info = info
eps[epID] = ep
}
}
}
}
@@ -777,23 +785,6 @@ func (n *Network) addDriverWatches() {
agent.driverCancelFuncs[n.ID()] = append(agent.driverCancelFuncs[n.ID()], cancel)
agent.mu.Unlock()
go c.handleTableEvents(ch, n.handleDriverTableEvent)
d, err := n.driver(false)
if err != nil {
log.G(context.TODO()).Errorf("Could not resolve driver %s while walking driver tabl: %v", n.networkType, err)
return
}
err = agent.networkDB.WalkTable(table.name, func(nid, key string, value []byte, deleted bool) bool {
// skip the entries that are marked for deletion; this is safe because this function is
// called at initialization time, so there is no state to delete
if nid == n.ID() && !deleted {
d.EventNotify(driverapi.Create, nid, table.name, key, value)
}
return false
})
if err != nil {
log.G(context.TODO()).WithError(err).Warn("Error while walking networkdb")
}
}
}
@@ -830,33 +821,14 @@ func (n *Network) handleDriverTableEvent(ev events.Event) {
log.G(context.TODO()).Errorf("Could not resolve driver %s while handling driver table event: %v", n.networkType, err)
return
}
var (
etype driverapi.EventType
tname string
key string
value []byte
)
switch event := ev.(type) {
case networkdb.CreateEvent:
tname = event.Table
key = event.Key
value = event.Value
etype = driverapi.Create
case networkdb.DeleteEvent:
tname = event.Table
key = event.Key
value = event.Value
etype = driverapi.Delete
case networkdb.UpdateEvent:
tname = event.Table
key = event.Key
value = event.Value
etype = driverapi.Delete
ed, ok := d.(driverapi.TableWatcher)
if !ok {
log.G(context.TODO()).Errorf("Could not notify driver %s about table event: driver does not implement TableWatcher interface", n.networkType)
return
}
d.EventNotify(etype, n.ID(), tname, key, value)
event := ev.(networkdb.WatchEvent)
ed.EventNotify(n.ID(), event.Table, event.Key, event.Prev, event.Value)
}
func (c *Controller) handleNodeTableEvent(ev events.Event) {
@@ -865,13 +837,14 @@ func (c *Controller) handleNodeTableEvent(ev events.Event) {
isAdd bool
nodeAddr networkdb.NodeAddr
)
switch event := ev.(type) {
case networkdb.CreateEvent:
event := ev.(networkdb.WatchEvent)
switch {
case event.IsCreate():
value = event.Value
isAdd = true
case networkdb.DeleteEvent:
value = event.Value
case networkdb.UpdateEvent:
case event.IsDelete():
value = event.Prev
case event.IsUpdate():
log.G(context.TODO()).Errorf("Unexpected update node table event = %#v", event)
}
@@ -883,94 +856,139 @@ func (c *Controller) handleNodeTableEvent(ev events.Event) {
c.processNodeDiscovery([]net.IP{nodeAddr.Addr}, isAdd)
}
type endpointEvent struct {
EndpointRecord
// Virtual IP of the service to which this endpoint belongs.
VirtualIP netip.Addr
// IP assigned to this endpoint.
EndpointIP netip.Addr
}
func unmarshalEndpointRecord(data []byte) (*endpointEvent, error) {
var epRec EndpointRecord
if err := proto.Unmarshal(data, &epRec); err != nil {
return nil, fmt.Errorf("failed to unmarshal endpoint record: %w", err)
}
vip, _ := netip.ParseAddr(epRec.VirtualIP)
eip, _ := netip.ParseAddr(epRec.EndpointIP)
if epRec.Name == "" || !eip.IsValid() {
return nil, fmt.Errorf("invalid endpoint name/ip in service table event %s", data)
}
return &endpointEvent{
EndpointRecord: epRec,
VirtualIP: vip,
EndpointIP: eip,
}, nil
}
// EquivalentTo returns true if ev is semantically equivalent to other.
func (ev *endpointEvent) EquivalentTo(other *endpointEvent) bool {
return ev.Name == other.Name &&
ev.ServiceName == other.ServiceName &&
ev.ServiceID == other.ServiceID &&
ev.VirtualIP == other.VirtualIP &&
ev.EndpointIP == other.EndpointIP &&
ev.ServiceDisabled == other.ServiceDisabled &&
iterutil.SameValues(
iterutil.Deref(slices.Values(ev.IngressPorts)),
iterutil.Deref(slices.Values(other.IngressPorts))) &&
iterutil.SameValues(slices.Values(ev.Aliases), slices.Values(other.Aliases)) &&
iterutil.SameValues(slices.Values(ev.TaskAliases), slices.Values(other.TaskAliases))
}
func (c *Controller) handleEpTableEvent(ev events.Event) {
var (
nid string
eid string
value []byte
epRec EndpointRecord
)
event := ev.(networkdb.WatchEvent)
nid := event.NetworkID
eid := event.Key
switch event := ev.(type) {
case networkdb.CreateEvent:
nid = event.NetworkID
eid = event.Key
value = event.Value
case networkdb.DeleteEvent:
nid = event.NetworkID
eid = event.Key
value = event.Value
case networkdb.UpdateEvent:
nid = event.NetworkID
eid = event.Key
value = event.Value
default:
log.G(context.TODO()).Errorf("Unexpected update service table event = %#v", event)
return
var prev, epRec *endpointEvent
if event.Prev != nil {
var err error
prev, err = unmarshalEndpointRecord(event.Prev)
if err != nil {
log.G(context.TODO()).WithError(err).Error("error unmarshaling previous value from service table event")
return
}
}
if event.Value != nil {
var err error
epRec, err = unmarshalEndpointRecord(event.Value)
if err != nil {
log.G(context.TODO()).WithError(err).Error("error unmarshaling service table event")
return
}
}
err := proto.Unmarshal(value, &epRec)
if err != nil {
log.G(context.TODO()).Errorf("Failed to unmarshal service table value: %v", err)
return
}
logger := log.G(context.TODO()).WithFields(log.Fields{
"evt": event,
"R": epRec,
"prev": prev,
})
logger.Debug("handleEpTableEvent")
containerName := epRec.Name
svcName := epRec.ServiceName
svcID := epRec.ServiceID
vip := net.ParseIP(epRec.VirtualIP)
ip := net.ParseIP(epRec.EndpointIP)
ingressPorts := epRec.IngressPorts
serviceAliases := epRec.Aliases
taskAliases := epRec.TaskAliases
if prev != nil {
if epRec != nil && prev.EquivalentTo(epRec) {
// Avoid flapping if we would otherwise remove a service
// binding then immediately replace it with an equivalent one.
return
}
if containerName == "" || ip == nil {
log.G(context.TODO()).Errorf("Invalid endpoint name/ip received while handling service table event %s", value)
return
}
switch ev.(type) {
case networkdb.CreateEvent:
log.G(context.TODO()).Debugf("handleEpTableEvent ADD %s R:%v", eid, epRec)
if svcID != "" {
if prev.ServiceID != "" {
// This is a remote task part of a service
if err := c.addServiceBinding(svcName, svcID, nid, eid, containerName, vip, ingressPorts, serviceAliases, taskAliases, ip, "handleEpTableEvent"); err != nil {
log.G(context.TODO()).Errorf("failed adding service binding for %s epRec:%v err:%v", eid, epRec, err)
return
if !prev.ServiceDisabled {
err := c.rmServiceBinding(prev.ServiceName, prev.ServiceID, nid, eid,
prev.Name, prev.VirtualIP.AsSlice(), prev.IngressPorts,
prev.Aliases, prev.TaskAliases, prev.EndpointIP.AsSlice(),
"handleEpTableEvent", true, true)
if err != nil {
logger.WithError(err).Error("failed removing service binding")
}
}
} else {
// This is a remote container simply attached to an attachable network
if err := c.addContainerNameResolution(nid, eid, containerName, taskAliases, ip, "handleEpTableEvent"); err != nil {
log.G(context.TODO()).Errorf("failed adding container name resolution for %s epRec:%v err:%v", eid, epRec, err)
err := c.delContainerNameResolution(nid, eid, prev.Name, prev.TaskAliases,
prev.EndpointIP.AsSlice(), "handleEpTableEvent")
if err != nil {
logger.WithError(err).Errorf("failed removing container name resolution")
}
}
}
case networkdb.DeleteEvent:
log.G(context.TODO()).Debugf("handleEpTableEvent DEL %s R:%v", eid, epRec)
if svcID != "" {
if epRec != nil {
if epRec.ServiceID != "" {
// This is a remote task part of a service
if err := c.rmServiceBinding(svcName, svcID, nid, eid, containerName, vip, ingressPorts, serviceAliases, taskAliases, ip, "handleEpTableEvent", true, true); err != nil {
log.G(context.TODO()).Errorf("failed removing service binding for %s epRec:%v err:%v", eid, epRec, err)
return
if epRec.ServiceDisabled {
// Don't double-remove a service binding
if prev == nil || prev.ServiceID != epRec.ServiceID || !prev.ServiceDisabled {
err := c.rmServiceBinding(epRec.ServiceName, epRec.ServiceID,
nid, eid, epRec.Name, epRec.VirtualIP.AsSlice(),
epRec.IngressPorts, epRec.Aliases, epRec.TaskAliases,
epRec.EndpointIP.AsSlice(), "handleEpTableEvent", true, false)
if err != nil {
logger.WithError(err).Error("failed disabling service binding")
return
}
}
} else {
err := c.addServiceBinding(epRec.ServiceName, epRec.ServiceID, nid, eid,
epRec.Name, epRec.VirtualIP.AsSlice(), epRec.IngressPorts,
epRec.Aliases, epRec.TaskAliases, epRec.EndpointIP.AsSlice(),
"handleEpTableEvent")
if err != nil {
logger.WithError(err).Error("failed adding service binding")
return
}
}
} else {
// This is a remote container simply attached to an attachable network
if err := c.delContainerNameResolution(nid, eid, containerName, taskAliases, ip, "handleEpTableEvent"); err != nil {
log.G(context.TODO()).Errorf("failed removing container name resolution for %s epRec:%v err:%v", eid, epRec, err)
err := c.addContainerNameResolution(nid, eid, epRec.Name, epRec.TaskAliases,
epRec.EndpointIP.AsSlice(), "handleEpTableEvent")
if err != nil {
logger.WithError(err).Errorf("failed adding container name resolution")
}
}
case networkdb.UpdateEvent:
log.G(context.TODO()).Debugf("handleEpTableEvent UPD %s R:%v", eid, epRec)
// We currently should only get these to inform us that an endpoint
// is disabled. Report if otherwise.
if svcID == "" || !epRec.ServiceDisabled {
log.G(context.TODO()).Errorf("Unexpected update table event for %s epRec:%v", eid, epRec)
return
}
// This is a remote task that is part of a service that is now disabled
if err := c.rmServiceBinding(svcName, svcID, nid, eid, containerName, vip, ingressPorts, serviceAliases, taskAliases, ip, "handleEpTableEvent", true, false); err != nil {
log.G(context.TODO()).Errorf("failed disabling service binding for %s epRec:%v err:%v", eid, epRec, err)
return
}
}
}
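The rewrite above keys everything off a single networkdb.WatchEvent carrying both the previous and the current table value. A sketch of how the three event kinds fall out of that pair, inferred from this diff rather than from the networkdb sources (the classify helper is hypothetical):

package networkdbsketch

// classify mirrors the IsCreate/IsDelete/IsUpdate distinction used above:
// a create has no previous value, a delete has no current value, and an
// update has both.
func classify(prev, value []byte) string {
	switch {
	case prev == nil && value != nil:
		return "create"
	case prev != nil && value == nil:
		return "delete"
	default:
		return "update"
	}
}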

libnetwork/agent_test.go
View File

@@ -0,0 +1,93 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.23
package libnetwork
import (
"net/netip"
"slices"
"testing"
"gotest.tools/v3/assert"
)
func TestEndpointEvent_EquivalentTo(t *testing.T) {
assert.Check(t, (&endpointEvent{}).EquivalentTo(&endpointEvent{}))
a := endpointEvent{
EndpointRecord: EndpointRecord{
Name: "foo",
ServiceName: "bar",
ServiceID: "baz",
IngressPorts: []*PortConfig{
{
Protocol: ProtocolTCP,
TargetPort: 80,
},
{
Name: "dns",
Protocol: ProtocolUDP,
TargetPort: 5353,
PublishedPort: 53,
},
},
},
VirtualIP: netip.MustParseAddr("10.0.0.42"),
EndpointIP: netip.MustParseAddr("192.168.69.42"),
}
assert.Check(t, a.EquivalentTo(&a))
reflexiveEquiv := func(a, b *endpointEvent) bool {
t.Helper()
assert.Check(t, a.EquivalentTo(b) == b.EquivalentTo(a), "reflexive equivalence")
return a.EquivalentTo(b)
}
b := a
b.ServiceDisabled = true
assert.Check(t, !reflexiveEquiv(&a, &b), "differing by ServiceDisabled")
c := a
c.IngressPorts = slices.Clone(a.IngressPorts)
slices.Reverse(c.IngressPorts)
assert.Check(t, reflexiveEquiv(&a, &c), "IngressPorts order should not matter")
d := a
d.IngressPorts = append(d.IngressPorts, a.IngressPorts[0])
assert.Check(t, !reflexiveEquiv(&a, &d), "Differing number of copies of IngressPort entries should not be equivalent")
d.IngressPorts = a.IngressPorts[:1]
assert.Check(t, !reflexiveEquiv(&a, &d), "Removing an IngressPort entry should not be equivalent")
e := a
e.Aliases = []string{"alias1", "alias2"}
assert.Check(t, !reflexiveEquiv(&a, &e), "Differing Aliases should not be equivalent")
f := a
f.TaskAliases = []string{"taskalias1", "taskalias2"}
assert.Check(t, !reflexiveEquiv(&a, &f), "Adding TaskAliases should not be equivalent")
g := a
g.TaskAliases = []string{"taskalias2", "taskalias1"}
assert.Check(t, reflexiveEquiv(&f, &g), "TaskAliases order should not matter")
g.TaskAliases = g.TaskAliases[:1]
assert.Check(t, !reflexiveEquiv(&f, &g), "Differing number of TaskAliases should not be equivalent")
h := a
h.EndpointIP = netip.MustParseAddr("192.168.69.43")
assert.Check(t, !reflexiveEquiv(&a, &h), "Differing EndpointIP should not be equivalent")
i := a
i.VirtualIP = netip.MustParseAddr("10.0.0.69")
assert.Check(t, !reflexiveEquiv(&a, &i), "Differing VirtualIP should not be equivalent")
j := a
j.ServiceID = "qux"
assert.Check(t, !reflexiveEquiv(&a, &j), "Differing ServiceID should not be equivalent")
k := a
k.ServiceName = "quux"
assert.Check(t, !reflexiveEquiv(&a, &k), "Differing ServiceName should not be equivalent")
l := a
l.Name = "aaaaa"
assert.Check(t, !reflexiveEquiv(&a, &l), "Differing Name should not be equivalent")
}

View File

@@ -232,7 +232,7 @@ func (h *Bitmap) IsSet(ordinal uint64) bool {
}
// set/reset the bit
func (h *Bitmap) set(ordinal, start, end uint64, any bool, release bool, serial bool) (uint64, error) {
func (h *Bitmap) set(ordinal, start, end uint64, isAvailable bool, release bool, serial bool) (uint64, error) {
var (
bitPos uint64
bytePos uint64
@@ -248,7 +248,7 @@ func (h *Bitmap) set(ordinal, start, end uint64, any bool, release bool, serial
if release {
bytePos, bitPos = ordinalToPos(ordinal)
} else {
if any {
if isAvailable {
bytePos, bitPos, err = getAvailableFromCurrent(h.head, start, curr, end)
ret = posToOrdinal(bytePos, bitPos)
if err == nil {
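A likely motivation for the any → isAvailable rename (not stated in the diff, so an assumption): since Go 1.18, any is a predeclared alias for interface{}, and a parameter named any shadows that alias within the function body.

package bitmapsketch

// With a parameter named "any", the predeclared type alias is shadowed,
// so e.g. `var x any` inside this function would not compile.
func set(any bool) bool {
	return any
}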

View File

@@ -80,13 +80,6 @@ func watchTableEntries(w http.ResponseWriter, r *http.Request) {
}
func handleTableEvents(tableName string, ch *events.Channel) {
var (
// nid string
eid string
value []byte
isAdd bool
)
log.G(context.TODO()).Infof("Started watching table:%s", tableName)
for {
select {
@@ -95,27 +88,17 @@ func handleTableEvents(tableName string, ch *events.Channel) {
return
case evt := <-ch.C:
log.G(context.TODO()).Infof("Recevied new event on:%s", tableName)
switch event := evt.(type) {
case networkdb.CreateEvent:
// nid = event.NetworkID
eid = event.Key
value = event.Value
isAdd = true
case networkdb.DeleteEvent:
// nid = event.NetworkID
eid = event.Key
value = event.Value
isAdd = false
default:
log.G(context.TODO()).Infof("Received new event on:%s", tableName)
event, ok := evt.(networkdb.WatchEvent)
if !ok {
log.G(context.TODO()).Fatalf("Unexpected table event = %#v", event)
}
if isAdd {
// log.G(ctx).Infof("Add %s %s", tableName, eid)
clientWatchTable[tableName].entries[eid] = string(value)
if event.Value != nil {
// log.G(ctx).Infof("Add %s %s", tableName, event.Key)
clientWatchTable[tableName].entries[event.Key] = string(event.Value)
} else {
// log.G(ctx).Infof("Del %s %s", tableName, eid)
delete(clientWatchTable[tableName].entries, eid)
// log.G(ctx).Infof("Del %s %s", tableName, event.Key)
delete(clientWatchTable[tableName].entries, event.Key)
}
}
}

View File

@@ -0,0 +1,14 @@
package cnmallocator
import (
"runtime"
"testing"
"github.com/moby/swarmkit/v2/manager/allocator"
"gotest.tools/v3/skip"
)
func TestAllocator(t *testing.T) {
skip.If(t, runtime.GOOS == "windows", "Allocator tests are hardcoded to use Linux network driver names")
allocator.RunAllocatorTests(t, NewProvider(nil))
}

View File

@@ -11,6 +11,6 @@ var initializers = map[string]func(driverapi.Registerer) error{
}
// PredefinedNetworks returns the list of predefined network structures
func PredefinedNetworks() []networkallocator.PredefinedNetworkData {
func (*Provider) PredefinedNetworks() []networkallocator.PredefinedNetworkData {
return nil
}

View File

@@ -5,14 +5,15 @@ import (
"strconv"
"strings"
"github.com/containerd/log"
"github.com/docker/docker/libnetwork/ipamapi"
builtinIpam "github.com/docker/docker/libnetwork/ipams/builtin"
nullIpam "github.com/docker/docker/libnetwork/ipams/null"
"github.com/docker/docker/libnetwork/ipamutils"
"github.com/moby/swarmkit/v2/log"
"github.com/moby/swarmkit/v2/manager/allocator/networkallocator"
)
func initIPAMDrivers(r ipamapi.Registerer, netConfig *NetworkConfig) error {
func initIPAMDrivers(r ipamapi.Registerer, netConfig *networkallocator.Config) error {
var addressPool []*ipamutils.NetworkToSplit
var str strings.Builder
str.WriteString("Subnetlist - ")
@@ -36,7 +37,7 @@ func initIPAMDrivers(r ipamapi.Registerer, netConfig *NetworkConfig) error {
return err
}
if addressPool != nil {
log.G(context.TODO()).Infof("Swarm initialized global default address pool to: " + str.String())
log.G(context.TODO()).Info("Swarm initialized global default address pool to: " + str.String())
}
for _, fn := range [](func(ipamapi.Registerer) error){

View File

@@ -19,7 +19,7 @@ var initializers = map[string]func(driverapi.Registerer) error{
}
// PredefinedNetworks returns the list of predefined network structures
func PredefinedNetworks() []networkallocator.PredefinedNetworkData {
func (*Provider) PredefinedNetworks() []networkallocator.PredefinedNetworkData {
return []networkallocator.PredefinedNetworkData{
{Name: "bridge", Driver: "bridge"},
{Name: "host", Driver: "host"},

Some files were not shown because too many files have changed in this diff.