Compare commits

...

9 Commits

Author SHA1 Message Date
Sebastiaan van Stijn
aab38e91a7 Merge pull request #51844 from corhere/v25/fix-service-binding-soft-disable
[25.0 backport] libnetwork: fix graceful service endpoint removal
2026-01-15 00:07:39 +01:00
Cory Snider
2bb8af04d9 libnetwork: fix graceful service endpoint removal
When a container is stopped, new connections to a published port will
not be routed to that container, but any existing connections are
permitted to continue uninterrupted for the duration of the container's
grace period. Unfortunately, recent fixes to overlay networks introduced
a regression: existing connections routed over the service mesh to
containers on remote nodes are dropped immediately when the container is
stopped, irrespective of the grace period.

Fix the handling of NetworkDB endpoint table events so that the endpoint
is disabled in the load balancer when a service endpoint transitions to
ServiceDisabled, instead of deleting the endpoint and re-adding it. Also
fix the other buggy state transitions, with the help of a unit test that
exhaustively covers all permutations of endpoint events.

Signed-off-by: Cory Snider <csnider@mirantis.com>
2026-01-13 18:20:25 -05:00
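
As a rough sketch of the intended behaviour described above (a hypothetical loadBalancer interface with illustrative method names, not libnetwork's actual API; the real change is in the libnetwork diff further down): a transition to ServiceDisabled only disables the backend so existing connections can drain during the grace period, while deletion of the record removes it outright.

    package main

    import "fmt"

    // endpointState is a simplified stand-in for a NetworkDB endpoint record.
    type endpointState struct {
        ServiceID       string
        ServiceDisabled bool
    }

    // loadBalancer is a hypothetical interface standing in for the service-mesh
    // load balancer; the method names are illustrative only.
    type loadBalancer interface {
        DisableBackend(serviceID string)
        RemoveBackend(serviceID string)
    }

    // applyTransition captures the intent of the fix: ServiceDisabled merely
    // disables the backend (existing connections drain during the grace
    // period); only deletion from NetworkDB removes the backend outright.
    func applyTransition(lb loadBalancer, prev, cur *endpointState) {
        switch {
        case prev != nil && cur != nil && !prev.ServiceDisabled && cur.ServiceDisabled:
            lb.DisableBackend(cur.ServiceID)
        case prev != nil && cur == nil:
            lb.RemoveBackend(prev.ServiceID)
        }
    }

    type logLB struct{}

    func (logLB) DisableBackend(id string) { fmt.Println("disable backend for", id) }
    func (logLB) RemoveBackend(id string)  { fmt.Println("remove backend for", id) }

    func main() {
        enabled := &endpointState{ServiceID: "id1"}
        disabled := &endpointState{ServiceID: "id1", ServiceDisabled: true}
        applyTransition(logLB{}, enabled, disabled) // graceful stop: disable only
        applyTransition(logLB{}, disabled, nil)     // record deleted: full removal
    }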
Cory Snider
f415784c1a Merge pull request #51575 from smerkviladze/25.0-add-windows-integration-tests
[25.0 backport] integration: add Windows network driver and isolation tests
2025-11-26 18:55:24 -05:00
Sopho Merkviladze
4ef26e4c35 integration: add Windows network driver and isolation tests
Add integration tests for Windows container functionality, focusing on network drivers and container isolation modes.

Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-11-26 13:24:13 +04:00
Sebastiaan van Stijn
2b409606ac Merge pull request #51344 from smerkviladze/50179-25.0-windows-gha-updates
[25.0 backport] gha: update to windows 2022 / 2025
2025-10-30 22:20:37 +01:00
Sebastiaan van Stijn
00fbff3423 integration/networking: increase context timeout for attach
The TestBridgeICCWindows test was failing on Windows due to a context timeout:

=== FAIL: github.com/docker/docker/integration/networking TestBridgeICCWindows/User_defined_nat_network (9.02s)
    bridge_test.go:243: assertion failed: error is not nil: Post "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.44/containers/62a4ed964f125e023cc298fde2d4d2f8f35415da970fd163b24e181b8c0c6654/start": context deadline exceeded
    panic.go:635: assertion failed: error is not nil: Error response from daemon: error while removing network: network mynat id 25066355c070294c1d8d596c204aa81f056cc32b3e12bf7c56ca9c5746a85b0c has active endpoints

=== FAIL: github.com/docker/docker/integration/networking TestBridgeICCWindows (17.65s)

Windows appears to be slower starting containers, so hitting these timeouts
is not surprising. Increase the context timeout to give it a little more time.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0ea28fede0)
Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-30 21:58:10 +04:00
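
The same pattern in isolation (a self-contained sketch; startSlowContainer is a placeholder for the container-start call that hit the deadline, not a Docker client method):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // startSlowContainer stands in for the call that exceeded the deadline;
    // here it simply takes a while to return.
    func startSlowContainer(ctx context.Context) error {
        select {
        case <-time.After(8 * time.Second): // pretend the start takes ~8s
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }

    func main() {
        // A 5-second deadline can be exceeded by a slow Windows start; the
        // diff below bumps the test's attach context from 5s to 15s.
        ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
        defer cancel()

        if err := startSlowContainer(ctx); errors.Is(err, context.DeadlineExceeded) {
            fmt.Println("start timed out:", err)
            return
        }
        fmt.Println("started within the deadline")
    }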
Sebastiaan van Stijn
92df858a5b integration-cli: TestCopyFromContainerPathIsNotDir: adjust for win 2025
It looks like the error returned by Windows changed in Windows 2025; before
Windows 2025, this produced an `ERROR_INVALID_NAME`:

    The filename, directory name, or volume label syntax is incorrect.

But Windows 2025 produces an `ERROR_DIRECTORY` ("The directory name is invalid."):

    CreateFile \\\\?\\Volume{d9f06b05-0405-418b-b3e5-4fede64f3cdc}\\windows\\system32\\drivers\\etc\\hosts\\: The directory name is invalid.

Docs: https://learn.microsoft.com/en-us/windows/win32/debug/system-error-codes--0-499-

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d3d20b9195)
Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-30 14:19:54 +04:00
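
For reference, a version-tolerant check can simply accept either message, as the updated test below does. A standalone sketch (the helper name is illustrative; the numeric codes in the comments are the Win32 values from the linked docs page):

    package main

    import (
        "errors"
        "fmt"
        "strings"
    )

    // Messages for the two Win32 error codes involved (see the docs link above):
    //   ERROR_INVALID_NAME (123): pre-2025 servers
    //   ERROR_DIRECTORY    (267): Windows Server 2025
    var pathIsNotDirMessages = []string{
        "The filename, directory name, or volume label syntax is incorrect.",
        "The directory name is invalid.",
    }

    // isPathNotDirError accepts either message so a check keeps working across
    // Windows releases; illustrative helper, not part of the test suite.
    func isPathNotDirError(err error) bool {
        if err == nil {
            return false
        }
        for _, msg := range pathIsNotDirMessages {
            if strings.Contains(err.Error(), msg) {
                return true
            }
        }
        return false
    }

    func main() {
        err := errors.New("CreateFile C:\\windows\\system32\\drivers\\etc\\hosts\\: The directory name is invalid.")
        fmt.Println(isPathNotDirError(err)) // true
    }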
Sebastiaan van Stijn
00f9f839c6 gha: run windows 2025 on PRs, 2022 scheduled
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9316396db0)
Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-30 13:37:45 +04:00
Sebastiaan van Stijn
acd2546285 gha: update to windows 2022 / 2025
The hosted Windows 2019 runners reach EOL on June 30;
https://github.com/actions/runner-images/issues/12045

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6f484d0d4c)
Signed-off-by: Sopho Merkviladze <smerkviladze@mirantis.com>
2025-10-30 13:07:40 +04:00
9 changed files with 1190 additions and 44 deletions

View File

@@ -32,8 +32,8 @@ env:
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.25
WINDOWS_BASE_IMAGE: mcr.microsoft.com/windows/servercore
WINDOWS_BASE_TAG_2019: ltsc2019
WINDOWS_BASE_TAG_2022: ltsc2022
WINDOWS_BASE_TAG_2025: ltsc2025
TEST_IMAGE_NAME: moby:test
TEST_CTN_NAME: moby
DOCKER_BUILDKIT: 0
@@ -65,8 +65,8 @@ jobs:
run: |
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go-build"
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go\pkg\mod"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
If ("${{ inputs.os }}" -eq "windows-2025") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2025 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
@@ -145,8 +145,8 @@ jobs:
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go-build"
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go\pkg\mod"
New-Item -ItemType "directory" -Path "bundles"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
If ("${{ inputs.os }}" -eq "windows-2025") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2025 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
@@ -319,8 +319,8 @@ jobs:
name: Init
run: |
New-Item -ItemType "directory" -Path "bundles"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
If ("${{ inputs.os }}" -eq "windows-2025") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2025 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}

View File

@@ -14,12 +14,9 @@ concurrency:
cancel-in-progress: true
on:
schedule:
- cron: '0 10 * * *'
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
pull_request:
jobs:
validate-dco:

View File

@@ -1,4 +1,4 @@
name: windows-2019
name: windows-2025
# Default to 'contents: read', which grants actions to read commits.
#
@@ -14,9 +14,13 @@ concurrency:
cancel-in-progress: true
on:
schedule:
- cron: '0 10 * * *'
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
jobs:
validate-dco:
@@ -37,6 +41,6 @@ jobs:
matrix:
storage: ${{ fromJson(needs.test-prepare.outputs.matrix) }}
with:
os: windows-2019
os: windows-2025
storage: ${{ matrix.storage }}
send_coverage: false

View File

@@ -7,6 +7,7 @@ import (
"io"
"os"
"path/filepath"
"strings"
"testing"
"github.com/docker/docker/api/types"
@@ -31,6 +32,8 @@ func TestCopyFromContainerPathDoesNotExist(t *testing.T) {
assert.Check(t, is.ErrorContains(err, "Could not find the file /dne in container "+cid))
}
// TestCopyFromContainerPathIsNotDir tests that an error is returned when
// copying from a path that exists but is a file, not a directory.
func TestCopyFromContainerPathIsNotDir(t *testing.T) {
skip.If(t, testEnv.UsingSnapshotter(), "FIXME: https://github.com/moby/moby/issues/47107")
ctx := setupTest(t)
@@ -38,14 +41,29 @@ func TestCopyFromContainerPathIsNotDir(t *testing.T) {
apiClient := testEnv.APIClient()
cid := container.Create(ctx, t, apiClient)
path := "/etc/passwd/"
expected := "not a directory"
// Pick a path that already exists as a file; on Linux "/etc/passwd"
// is expected to be there, so we pick that for convenience.
existingFile := "/etc/passwd/"
expected := []string{"not a directory"}
if testEnv.DaemonInfo.OSType == "windows" {
path = "c:/windows/system32/drivers/etc/hosts/"
expected = "The filename, directory name, or volume label syntax is incorrect."
existingFile = "c:/windows/system32/drivers/etc/hosts/"
// Depending on the version of Windows, this produces a "ERROR_INVALID_NAME" (Windows < 2025),
// or a "ERROR_DIRECTORY" (Windows 2025); https://learn.microsoft.com/en-us/windows/win32/debug/system-error-codes--0-499-
expected = []string{
"The directory name is invalid.", // ERROR_DIRECTORY
"The filename, directory name, or volume label syntax is incorrect.", // ERROR_INVALID_NAME
}
}
_, _, err := apiClient.CopyFromContainer(ctx, cid, path)
assert.Assert(t, is.ErrorContains(err, expected))
_, _, err := apiClient.CopyFromContainer(ctx, cid, existingFile)
var found bool
for _, expErr := range expected {
if err != nil && strings.Contains(err.Error(), expErr) {
found = true
break
}
}
assert.Check(t, found, "Expected error to be one of %v, but got %v", expected, err)
}
func TestCopyToContainerPathDoesNotExist(t *testing.T) {

View File

@@ -0,0 +1,412 @@
package container // import "github.com/docker/docker/integration/container"
import (
"context"
"strings"
"testing"
"time"
"github.com/docker/docker/api/types"
containertypes "github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/volume"
"github.com/docker/docker/integration/internal/container"
"github.com/docker/docker/testutil"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
// TestWindowsProcessIsolation validates process isolation on Windows.
func TestWindowsProcessIsolation(t *testing.T) {
ctx := setupTest(t)
apiClient := testEnv.APIClient()
testcases := []struct {
name string
description string
validate func(t *testing.T, ctx context.Context, id string)
}{
{
name: "Process isolation basic container lifecycle",
description: "Validate container can start, run, and stop with process isolation",
validate: func(t *testing.T, ctx context.Context, id string) {
// Verify container is running
ctrInfo := container.Inspect(ctx, t, apiClient, id)
assert.Check(t, is.Equal(ctrInfo.State.Running, true))
assert.Check(t, is.Equal(ctrInfo.HostConfig.Isolation, containertypes.IsolationProcess))
execCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
res := container.ExecT(execCtx, t, apiClient, id, []string{"cmd", "/c", "echo", "test"})
assert.Check(t, is.Equal(res.ExitCode, 0))
assert.Check(t, strings.Contains(res.Stdout(), "test"))
},
},
{
name: "Process isolation filesystem access",
description: "Validate filesystem operations work correctly with process isolation",
validate: func(t *testing.T, ctx context.Context, id string) {
execCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
// Create a test file
res := container.ExecT(execCtx, t, apiClient, id,
[]string{"cmd", "/c", "echo test123 > C:\\testfile.txt"})
assert.Check(t, is.Equal(res.ExitCode, 0))
// Read the test file
execCtx2, cancel2 := context.WithTimeout(ctx, 10*time.Second)
defer cancel2()
res2 := container.ExecT(execCtx2, t, apiClient, id,
[]string{"cmd", "/c", "type", "C:\\testfile.txt"})
assert.Check(t, is.Equal(res2.ExitCode, 0))
assert.Check(t, strings.Contains(res2.Stdout(), "test123"))
},
},
{
name: "Process isolation network connectivity",
description: "Validate network connectivity works with process isolation",
validate: func(t *testing.T, ctx context.Context, id string) {
execCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
// Test localhost connectivity
res := container.ExecT(execCtx, t, apiClient, id,
[]string{"ping", "-n", "1", "-w", "3000", "localhost"})
assert.Check(t, is.Equal(res.ExitCode, 0))
assert.Check(t, strings.Contains(res.Stdout(), "Reply from") ||
strings.Contains(res.Stdout(), "Received = 1"))
},
},
{
name: "Process isolation environment variables",
description: "Validate environment variables are properly isolated",
validate: func(t *testing.T, ctx context.Context, id string) {
execCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
// Check that container has expected environment variables
res := container.ExecT(execCtx, t, apiClient, id,
[]string{"cmd", "/c", "set"})
assert.Check(t, is.Equal(res.ExitCode, 0))
// Should have Windows-specific environment variables
stdout := res.Stdout()
assert.Check(t, strings.Contains(stdout, "COMPUTERNAME") ||
strings.Contains(stdout, "OS=Windows"))
},
},
{
name: "Process isolation CPU access",
description: "Validate container can access CPU information",
validate: func(t *testing.T, ctx context.Context, id string) {
execCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
// Check NUMBER_OF_PROCESSORS environment variable
res := container.ExecT(execCtx, t, apiClient, id,
[]string{"cmd", "/c", "echo", "%NUMBER_OF_PROCESSORS%"})
assert.Check(t, is.Equal(res.ExitCode, 0))
// Should return a number
output := strings.TrimSpace(res.Stdout())
assert.Check(t, output != "" && output != "%NUMBER_OF_PROCESSORS%",
"NUMBER_OF_PROCESSORS not set")
},
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
ctx := testutil.StartSpan(ctx, t)
// Create and start container with process isolation
id := container.Run(ctx, t, apiClient,
container.WithIsolation(containertypes.IsolationProcess),
container.WithCmd("ping", "-t", "localhost"),
)
defer apiClient.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
tc.validate(t, ctx, id)
})
}
}
// TestWindowsHyperVIsolation validates Hyper-V isolation on Windows.
func TestWindowsHyperVIsolation(t *testing.T) {
ctx := setupTest(t)
apiClient := testEnv.APIClient()
testcases := []struct {
name string
description string
validate func(t *testing.T, ctx context.Context, id string)
}{
{
name: "Hyper-V isolation basic container lifecycle",
description: "Validate container can start, run, and stop with Hyper-V isolation",
validate: func(t *testing.T, ctx context.Context, id string) {
// Verify container is running
ctrInfo := container.Inspect(ctx, t, apiClient, id)
assert.Check(t, is.Equal(ctrInfo.State.Running, true))
assert.Check(t, is.Equal(ctrInfo.HostConfig.Isolation, containertypes.IsolationHyperV))
// Execute a simple command
execCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
res := container.ExecT(execCtx, t, apiClient, id, []string{"cmd", "/c", "echo", "hyperv-test"})
assert.Check(t, is.Equal(res.ExitCode, 0))
assert.Check(t, strings.Contains(res.Stdout(), "hyperv-test"))
},
},
{
name: "Hyper-V isolation filesystem operations",
description: "Validate filesystem isolation with Hyper-V",
validate: func(t *testing.T, ctx context.Context, id string) {
execCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
// Test file creation
res := container.ExecT(execCtx, t, apiClient, id,
[]string{"cmd", "/c", "echo hyperv-file > C:\\hvtest.txt"})
assert.Check(t, is.Equal(res.ExitCode, 0))
// Test file read
execCtx2, cancel2 := context.WithTimeout(ctx, 15*time.Second)
defer cancel2()
res2 := container.ExecT(execCtx2, t, apiClient, id,
[]string{"cmd", "/c", "type", "C:\\hvtest.txt"})
assert.Check(t, is.Equal(res2.ExitCode, 0))
assert.Check(t, strings.Contains(res2.Stdout(), "hyperv-file"))
},
},
{
name: "Hyper-V isolation network connectivity",
description: "Validate network works with Hyper-V isolation",
validate: func(t *testing.T, ctx context.Context, id string) {
execCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
// Test localhost connectivity
res := container.ExecT(execCtx, t, apiClient, id,
[]string{"ping", "-n", "1", "-w", "5000", "localhost"})
assert.Check(t, is.Equal(res.ExitCode, 0))
},
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
ctx := testutil.StartSpan(ctx, t)
// Create and start container with Hyper-V isolation
id := container.Run(ctx, t, apiClient,
container.WithIsolation(containertypes.IsolationHyperV),
container.WithCmd("ping", "-t", "localhost"),
)
defer apiClient.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
tc.validate(t, ctx, id)
})
}
}
// TestWindowsIsolationComparison validates that both isolation modes can coexist
// and that containers can be created with different isolation modes on Windows.
func TestWindowsIsolationComparison(t *testing.T) {
ctx := setupTest(t)
apiClient := testEnv.APIClient()
// Create container with process isolation
processID := container.Run(ctx, t, apiClient,
container.WithIsolation(containertypes.IsolationProcess),
container.WithCmd("ping", "-t", "localhost"),
)
defer apiClient.ContainerRemove(ctx, processID, containertypes.RemoveOptions{Force: true})
processInfo := container.Inspect(ctx, t, apiClient, processID)
assert.Check(t, is.Equal(processInfo.HostConfig.Isolation, containertypes.IsolationProcess))
assert.Check(t, is.Equal(processInfo.State.Running, true))
// Create container with Hyper-V isolation
hypervID := container.Run(ctx, t, apiClient,
container.WithIsolation(containertypes.IsolationHyperV),
container.WithCmd("ping", "-t", "localhost"),
)
defer apiClient.ContainerRemove(ctx, hypervID, containertypes.RemoveOptions{Force: true})
hypervInfo := container.Inspect(ctx, t, apiClient, hypervID)
assert.Check(t, is.Equal(hypervInfo.HostConfig.Isolation, containertypes.IsolationHyperV))
assert.Check(t, is.Equal(hypervInfo.State.Running, true))
// Verify both containers can run simultaneously
processInfo2 := container.Inspect(ctx, t, apiClient, processID)
hypervInfo2 := container.Inspect(ctx, t, apiClient, hypervID)
assert.Check(t, is.Equal(processInfo2.State.Running, true))
assert.Check(t, is.Equal(hypervInfo2.State.Running, true))
}
// TestWindowsProcessIsolationResourceConstraints validates resource constraints
// work correctly with process isolation on Windows.
func TestWindowsProcessIsolationResourceConstraints(t *testing.T) {
ctx := setupTest(t)
apiClient := testEnv.APIClient()
testcases := []struct {
name string
cpuShares int64
nanoCPUs int64
memoryLimit int64
cpuCount int64
validateConfig func(t *testing.T, ctrInfo types.ContainerJSON)
}{
{
name: "CPU shares constraint - config only",
cpuShares: 512,
// Note: CPU shares are accepted by the API but NOT enforced on Windows.
// This test only verifies the configuration is stored correctly.
// Actual enforcement does not work - containers get equal CPU regardless of shares.
// Use NanoCPUs (--cpus flag) for actual CPU limiting on Windows.
validateConfig: func(t *testing.T, ctrInfo types.ContainerJSON) {
assert.Check(t, is.Equal(ctrInfo.HostConfig.CPUShares, int64(512)))
},
},
{
name: "CPU limit (NanoCPUs) constraint",
nanoCPUs: 2000000000, // 2.0 CPUs
// NanoCPUs enforce hard CPU limits on Windows (unlike CPUShares which don't work)
validateConfig: func(t *testing.T, ctrInfo types.ContainerJSON) {
assert.Check(t, is.Equal(ctrInfo.HostConfig.NanoCPUs, int64(2000000000)))
},
},
{
name: "Memory limit constraint",
memoryLimit: 512 * 1024 * 1024, // 512MB
// Memory limits enforce hard limits on container memory usage
validateConfig: func(t *testing.T, ctrInfo types.ContainerJSON) {
assert.Check(t, is.Equal(ctrInfo.HostConfig.Memory, int64(512*1024*1024)))
},
},
{
name: "CPU count constraint",
cpuCount: 2,
// CPU count limits the number of CPUs available to the container
validateConfig: func(t *testing.T, ctrInfo types.ContainerJSON) {
assert.Check(t, is.Equal(ctrInfo.HostConfig.CPUCount, int64(2)))
},
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
ctx := testutil.StartSpan(ctx, t)
opts := []func(*container.TestContainerConfig){
container.WithIsolation(containertypes.IsolationProcess),
container.WithCmd("ping", "-t", "localhost"),
}
if tc.cpuShares > 0 {
opts = append(opts, func(config *container.TestContainerConfig) {
config.HostConfig.CPUShares = tc.cpuShares
})
}
if tc.nanoCPUs > 0 {
opts = append(opts, func(config *container.TestContainerConfig) {
config.HostConfig.NanoCPUs = tc.nanoCPUs
})
}
if tc.memoryLimit > 0 {
opts = append(opts, func(config *container.TestContainerConfig) {
config.HostConfig.Memory = tc.memoryLimit
})
}
if tc.cpuCount > 0 {
opts = append(opts, func(config *container.TestContainerConfig) {
config.HostConfig.CPUCount = tc.cpuCount
})
}
id := container.Run(ctx, t, apiClient, opts...)
defer apiClient.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
ctrInfo := container.Inspect(ctx, t, apiClient, id)
tc.validateConfig(t, ctrInfo)
})
}
}
// TestWindowsProcessIsolationVolumeMount validates volume mounting with process isolation on Windows.
func TestWindowsProcessIsolationVolumeMount(t *testing.T) {
ctx := setupTest(t)
apiClient := testEnv.APIClient()
volumeName := "process-iso-test-volume"
volRes, err := apiClient.VolumeCreate(ctx, volume.CreateOptions{
Name: volumeName,
})
assert.NilError(t, err)
defer func() {
// Force volume removal in case container cleanup fails
apiClient.VolumeRemove(ctx, volRes.Name, true)
}()
// Create container with volume mount
id := container.Run(ctx, t, apiClient,
container.WithIsolation(containertypes.IsolationProcess),
container.WithCmd("ping", "-t", "localhost"),
container.WithMount(mount.Mount{
Type: mount.TypeVolume,
Source: volumeName,
Target: "C:\\data",
}),
)
defer apiClient.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
// Write data to mounted volume
execCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
res := container.ExecT(execCtx, t, apiClient, id,
[]string{"cmd", "/c", "echo volume-test > C:\\data\\test.txt"})
assert.Check(t, is.Equal(res.ExitCode, 0))
// Read data from mounted volume
execCtx2, cancel2 := context.WithTimeout(ctx, 10*time.Second)
defer cancel2()
res2 := container.ExecT(execCtx2, t, apiClient, id,
[]string{"cmd", "/c", "type", "C:\\data\\test.txt"})
assert.Check(t, is.Equal(res2.ExitCode, 0))
assert.Check(t, strings.Contains(res2.Stdout(), "volume-test"))
// Verify container has volume mount
ctrInfo := container.Inspect(ctx, t, apiClient, id)
assert.Check(t, len(ctrInfo.Mounts) == 1)
assert.Check(t, is.Equal(ctrInfo.Mounts[0].Type, mount.TypeVolume))
assert.Check(t, is.Equal(ctrInfo.Mounts[0].Name, volumeName))
}
// TestWindowsHyperVIsolationResourceLimits validates resource limits work with Hyper-V isolation.
// This ensures Windows can properly enforce resource constraints on Hyper-V containers.
func TestWindowsHyperVIsolationResourceLimits(t *testing.T) {
ctx := setupTest(t)
apiClient := testEnv.APIClient()
// Create container with memory limit
memoryLimit := int64(512 * 1024 * 1024) // 512MB
id := container.Run(ctx, t, apiClient,
container.WithIsolation(containertypes.IsolationHyperV),
container.WithCmd("ping", "-t", "localhost"),
func(config *container.TestContainerConfig) {
config.HostConfig.Memory = memoryLimit
},
)
defer apiClient.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
// Verify resource limit is set
ctrInfo := container.Inspect(ctx, t, apiClient, id)
assert.Check(t, is.Equal(ctrInfo.HostConfig.Memory, memoryLimit))
assert.Check(t, is.Equal(ctrInfo.HostConfig.Isolation, containertypes.IsolationHyperV))
}

View File

@@ -238,7 +238,7 @@ func TestBridgeICCWindows(t *testing.T) {
pingCmd := []string{"ping", "-n", "1", "-w", "3000", ctr1Name}
const ctr2Name = "ctr2"
attachCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
attachCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
res := container.RunAttach(attachCtx, t, c,
container.WithName(ctr2Name),

View File

@@ -0,0 +1,378 @@
package networking
import (
"context"
"fmt"
"io"
"net/http"
"strings"
"testing"
"time"
"github.com/docker/docker/api/types"
containertypes "github.com/docker/docker/api/types/container"
"github.com/docker/docker/integration/internal/container"
"github.com/docker/docker/integration/internal/network"
"github.com/docker/docker/testutil"
"github.com/docker/go-connections/nat"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/v3/poll"
"gotest.tools/v3/skip"
)
// TestWindowsNetworkDrivers validates Windows-specific network drivers.
// Tests: NAT, Transparent, and L2Bridge network drivers.
func TestWindowsNetworkDrivers(t *testing.T) {
ctx := setupTest(t)
c := testEnv.APIClient()
testcases := []struct {
name string
driver string
}{
{
// NAT connectivity is already tested in TestBridgeICCWindows (bridge_test.go),
// so we only validate network creation here.
name: "NAT driver network creation",
driver: "nat",
},
{
// Only test creation of a Transparent driver network, connectivity depends on external
// network infrastructure.
name: "Transparent driver network creation",
driver: "transparent",
},
{
// L2Bridge driver requires specific host network adapter configuration, test will skip
// if host configuration is missing.
name: "L2Bridge driver network creation",
driver: "l2bridge",
},
}
for tcID, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
ctx := testutil.StartSpan(ctx, t)
netName := fmt.Sprintf("test-%s-%d", tc.driver, tcID)
// Create network with specified driver
netResp, err := c.NetworkCreate(ctx, netName, types.NetworkCreate{
Driver: tc.driver,
})
if err != nil {
// L2Bridge may fail if host network configuration is not available
if tc.driver == "l2bridge" {
errStr := strings.ToLower(err.Error())
if strings.Contains(errStr, "the network does not have a subnet for this endpoint") {
t.Skipf("Driver %s requires host network configuration: %v", tc.driver, err)
}
}
t.Fatalf("Failed to create network with %s driver: %v", tc.driver, err)
}
defer network.RemoveNoError(ctx, t, c, netName)
// Inspect network to validate driver is correctly set
netInfo, err := c.NetworkInspect(ctx, netResp.ID, types.NetworkInspectOptions{})
assert.NilError(t, err)
assert.Check(t, is.Equal(netInfo.Driver, tc.driver), "Network driver mismatch")
assert.Check(t, is.Equal(netInfo.Name, netName), "Network name mismatch")
})
}
}
// TestWindowsNATDriverPortMapping validates NAT port mapping by testing host connectivity.
func TestWindowsNATDriverPortMapping(t *testing.T) {
ctx := setupTest(t)
c := testEnv.APIClient()
// Use default NAT network which supports port mapping
netName := "nat"
// PowerShell HTTP listener on port 80
psScript := `
$listener = New-Object System.Net.HttpListener
$listener.Prefixes.Add('http://+:80/')
$listener.Start()
while ($listener.IsListening) {
$context = $listener.GetContext()
$response = $context.Response
$content = [System.Text.Encoding]::UTF8.GetBytes('OK')
$response.ContentLength64 = $content.Length
$response.OutputStream.Write($content, 0, $content.Length)
$response.OutputStream.Close()
}
`
// Create container with port mapping 80->8080
ctrName := "port-mapping-test"
id := container.Run(ctx, t, c,
container.WithName(ctrName),
container.WithCmd("powershell", "-Command", psScript),
container.WithNetworkMode(netName),
container.WithExposedPorts("80/tcp"),
container.WithPortMap(nat.PortMap{
"80/tcp": []nat.PortBinding{{HostPort: "8080"}},
}),
)
defer c.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
// Verify port mapping metadata
ctrInfo := container.Inspect(ctx, t, c, id)
portKey := nat.Port("80/tcp")
assert.Check(t, ctrInfo.NetworkSettings.Ports[portKey] != nil, "Port mapping not found")
assert.Check(t, len(ctrInfo.NetworkSettings.Ports[portKey]) > 0, "No host port binding")
assert.Check(t, is.Equal(ctrInfo.NetworkSettings.Ports[portKey][0].HostPort, "8080"))
// Test actual connectivity from host to container via mapped port
httpClient := &http.Client{Timeout: 2 * time.Second}
checkHTTP := func(t poll.LogT) poll.Result {
resp, err := httpClient.Get("http://localhost:8080")
if err != nil {
return poll.Continue("connection failed: %v", err)
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return poll.Continue("failed to read body: %v", err)
}
if !strings.Contains(string(body), "OK") {
return poll.Continue("unexpected response body: %s", string(body))
}
return poll.Success()
}
poll.WaitOn(t, checkHTTP, poll.WithTimeout(10*time.Second))
}
// TestWindowsNetworkDNSResolution validates DNS resolution on Windows networks.
func TestWindowsNetworkDNSResolution(t *testing.T) {
ctx := setupTest(t)
c := testEnv.APIClient()
testcases := []struct {
name string
driver string
customDNS bool
dnsServers []string
}{
{
name: "Default NAT network DNS resolution",
driver: "nat",
},
{
name: "Custom DNS servers on NAT network",
driver: "nat",
customDNS: true,
dnsServers: []string{"8.8.8.8", "8.8.4.4"},
},
}
for tcID, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
ctx := testutil.StartSpan(ctx, t)
netName := fmt.Sprintf("test-dns-%s-%d", tc.driver, tcID)
// Create network with optional custom DNS
netOpts := []func(*types.NetworkCreate){
network.WithDriver(tc.driver),
}
if tc.customDNS {
// Note: DNS options may need to be set via network options on Windows
for _, dns := range tc.dnsServers {
netOpts = append(netOpts, network.WithOption("com.docker.network.windowsshim.dnsservers", dns))
}
}
network.CreateNoError(ctx, t, c, netName, netOpts...)
defer network.RemoveNoError(ctx, t, c, netName)
// Create container and verify DNS resolution
ctrName := fmt.Sprintf("dns-test-%d", tcID)
id := container.Run(ctx, t, c,
container.WithName(ctrName),
container.WithNetworkMode(netName),
)
defer c.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
// Test DNS resolution by pinging container by name from another container
pingCmd := []string{"ping", "-n", "1", "-w", "3000", ctrName}
attachCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
res := container.RunAttach(attachCtx, t, c,
container.WithCmd(pingCmd...),
container.WithNetworkMode(netName),
)
defer c.ContainerRemove(ctx, res.ContainerID, containertypes.RemoveOptions{Force: true})
assert.Check(t, is.Equal(res.ExitCode, 0), "DNS resolution failed")
assert.Check(t, is.Contains(res.Stdout.String(), "Sent = 1, Received = 1, Lost = 0"))
})
}
}
// TestWindowsNetworkLifecycle validates network lifecycle operations on Windows.
// Tests network creation, container attachment, detachment, and deletion.
func TestWindowsNetworkLifecycle(t *testing.T) {
// Skip this test on Windows Containerd because NetworkConnect operations fail with an
// unsupported platform request error:
// https://github.com/moby/moby/issues/51589
skip.If(t, testEnv.RuntimeIsWindowsContainerd(),
"Skipping test: fails on Containerd due to unsupported platform request error during NetworkConnect operations")
ctx := setupTest(t)
c := testEnv.APIClient()
netName := "lifecycle-test-nat"
netID := network.CreateNoError(ctx, t, c, netName,
network.WithDriver("nat"),
)
netInfo, err := c.NetworkInspect(ctx, netID, types.NetworkInspectOptions{})
assert.NilError(t, err)
assert.Check(t, is.Equal(netInfo.Name, netName))
// Create container on network
ctrName := "lifecycle-ctr"
id := container.Run(ctx, t, c,
container.WithName(ctrName),
container.WithNetworkMode(netName),
)
ctrInfo := container.Inspect(ctx, t, c, id)
assert.Check(t, ctrInfo.NetworkSettings.Networks[netName] != nil)
// Disconnect container from network
err = c.NetworkDisconnect(ctx, netID, id, false)
assert.NilError(t, err)
ctrInfo = container.Inspect(ctx, t, c, id)
assert.Check(t, ctrInfo.NetworkSettings.Networks[netName] == nil, "Container still connected after disconnect")
// Reconnect container to network
err = c.NetworkConnect(ctx, netID, id, nil)
assert.NilError(t, err)
ctrInfo = container.Inspect(ctx, t, c, id)
assert.Check(t, ctrInfo.NetworkSettings.Networks[netName] != nil, "Container not reconnected")
c.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
network.RemoveNoError(ctx, t, c, netName)
_, err = c.NetworkInspect(ctx, netID, types.NetworkInspectOptions{})
assert.Check(t, err != nil, "Network still exists after deletion")
}
// TestWindowsNetworkIsolation validates network isolation between containers on different networks.
// Ensures containers on different networks cannot communicate, validating Windows network driver isolation.
func TestWindowsNetworkIsolation(t *testing.T) {
ctx := setupTest(t)
c := testEnv.APIClient()
// Create two separate NAT networks
net1Name := "isolation-net1"
net2Name := "isolation-net2"
network.CreateNoError(ctx, t, c, net1Name, network.WithDriver("nat"))
defer network.RemoveNoError(ctx, t, c, net1Name)
network.CreateNoError(ctx, t, c, net2Name, network.WithDriver("nat"))
defer network.RemoveNoError(ctx, t, c, net2Name)
// Create container on first network
ctr1Name := "isolated-ctr1"
id1 := container.Run(ctx, t, c,
container.WithName(ctr1Name),
container.WithNetworkMode(net1Name),
)
defer c.ContainerRemove(ctx, id1, containertypes.RemoveOptions{Force: true})
ctr1Info := container.Inspect(ctx, t, c, id1)
ctr1IP := ctr1Info.NetworkSettings.Networks[net1Name].IPAddress
assert.Check(t, ctr1IP != "", "Container IP not assigned")
// Create container on second network and try to ping first container
pingCmd := []string{"ping", "-n", "1", "-w", "2000", ctr1IP}
attachCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
res := container.RunAttach(attachCtx, t, c,
container.WithCmd(pingCmd...),
container.WithNetworkMode(net2Name),
)
defer c.ContainerRemove(ctx, res.ContainerID, containertypes.RemoveOptions{Force: true})
// Ping should fail, demonstrating network isolation
assert.Check(t, res.ExitCode != 0, "Ping succeeded unexpectedly - networks are not isolated")
// Windows ping failure can have various error messages, but we should see some indication of failure
stdout := res.Stdout.String()
stderr := res.Stderr.String()
// Check for common Windows ping failure indicators
hasFailureIndicator := strings.Contains(stdout, "Destination host unreachable") ||
strings.Contains(stdout, "Request timed out") ||
strings.Contains(stdout, "100% loss") ||
strings.Contains(stdout, "Lost = 1") ||
strings.Contains(stderr, "unreachable") ||
strings.Contains(stderr, "timeout")
assert.Check(t, hasFailureIndicator,
"Expected ping failure indicators not found. Exit code: %d, stdout: %q, stderr: %q",
res.ExitCode, stdout, stderr)
}
// TestWindowsNetworkEndpointManagement validates endpoint creation and management on Windows networks.
// Tests that multiple containers can be created and managed on the same network.
func TestWindowsNetworkEndpointManagement(t *testing.T) {
ctx := setupTest(t)
c := testEnv.APIClient()
netName := "endpoint-test-nat"
network.CreateNoError(ctx, t, c, netName, network.WithDriver("nat"))
defer network.RemoveNoError(ctx, t, c, netName)
// Create multiple containers on the same network
const numContainers = 3
containerIDs := make([]string, numContainers)
for i := 0; i < numContainers; i++ {
ctrName := fmt.Sprintf("endpoint-ctr-%d", i)
id := container.Run(ctx, t, c,
container.WithName(ctrName),
container.WithNetworkMode(netName),
)
containerIDs[i] = id
defer c.ContainerRemove(ctx, id, containertypes.RemoveOptions{Force: true})
}
netInfo, err := c.NetworkInspect(ctx, netName, types.NetworkInspectOptions{})
assert.NilError(t, err)
assert.Check(t, is.Equal(len(netInfo.Containers), numContainers),
"Expected %d containers, got %d", numContainers, len(netInfo.Containers))
// Verify each container has network connectivity to others
for i := 0; i < numContainers-1; i++ {
targetName := fmt.Sprintf("endpoint-ctr-%d", i)
pingCmd := []string{"ping", "-n", "1", "-w", "3000", targetName}
sourceName := fmt.Sprintf("endpoint-ctr-%d", i+1)
attachCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
res := container.RunAttach(attachCtx, t, c,
container.WithName(fmt.Sprintf("%s-pinger", sourceName)),
container.WithCmd(pingCmd...),
container.WithNetworkMode(netName),
)
defer c.ContainerRemove(ctx, res.ContainerID, containertypes.RemoveOptions{Force: true})
assert.Check(t, is.Equal(res.ExitCode, 0),
"Container %s failed to ping %s", sourceName, targetName)
}
}

View File

@@ -357,7 +357,7 @@ func (c *Controller) agentInit(listenAddr, bindAddrOrInterface, advertiseAddr, d
}
c.mu.Unlock()
go c.handleTableEvents(ch, c.handleEpTableEvent)
go c.handleTableEvents(ch, func(ev events.Event) { handleEpTableEvent(c, ev) })
go c.handleTableEvents(nodeCh, c.handleNodeTableEvent)
keys, tags := c.getKeys(subsysIPSec)
@@ -884,14 +884,17 @@ func unmarshalEndpointRecord(data []byte) (*endpointEvent, error) {
}, nil
}
// EquivalentTo returns true if ev is semantically equivalent to other.
// EquivalentTo returns true if ev is semantically equivalent to other,
// ignoring the ServiceDisabled field.
func (ev *endpointEvent) EquivalentTo(other *endpointEvent) bool {
if ev == nil || other == nil {
return (ev == nil) == (other == nil)
}
return ev.Name == other.Name &&
ev.ServiceName == other.ServiceName &&
ev.ServiceID == other.ServiceID &&
ev.VirtualIP == other.VirtualIP &&
ev.EndpointIP == other.EndpointIP &&
ev.ServiceDisabled == other.ServiceDisabled &&
iterutil.SameValues(
iterutil.Deref(slices.Values(ev.IngressPorts)),
iterutil.Deref(slices.Values(other.IngressPorts))) &&
@@ -899,7 +902,14 @@ func (ev *endpointEvent) EquivalentTo(other *endpointEvent) bool {
iterutil.SameValues(slices.Values(ev.TaskAliases), slices.Values(other.TaskAliases))
}
func (c *Controller) handleEpTableEvent(ev events.Event) {
type serviceBinder interface {
addContainerNameResolution(nID, eID, containerName string, taskAliases []string, ip net.IP, method string) error
delContainerNameResolution(nID, eID, containerName string, taskAliases []string, ip net.IP, method string) error
addServiceBinding(svcName, svcID, nID, eID, containerName string, vip net.IP, ingressPorts []*PortConfig, serviceAliases, taskAliases []string, ip net.IP, method string) error
rmServiceBinding(svcName, svcID, nID, eID, containerName string, vip net.IP, ingressPorts []*PortConfig, serviceAliases []string, taskAliases []string, ip net.IP, method string, deleteSvcRecords bool, fullRemove bool) error
}
func handleEpTableEvent(c serviceBinder, ev events.Event) {
event := ev.(networkdb.WatchEvent)
nid := event.NetworkID
eid := event.Key
@@ -929,8 +939,9 @@ func (c *Controller) handleEpTableEvent(ev events.Event) {
})
logger.Debug("handleEpTableEvent")
equivalent := prev.EquivalentTo(epRec)
if prev != nil {
if epRec != nil && prev.EquivalentTo(epRec) {
if equivalent && prev.ServiceDisabled == epRec.ServiceDisabled {
// Avoid flapping if we would otherwise remove a service
// binding then immediately replace it with an equivalent one.
return
@@ -938,7 +949,11 @@ func (c *Controller) handleEpTableEvent(ev events.Event) {
if prev.ServiceID != "" {
// This is a remote task part of a service
if !prev.ServiceDisabled {
if epRec == nil || !equivalent {
// Either the endpoint is deleted from NetworkDB or has
// been replaced with a different one. Remove the old
// binding. The new binding, if any, will be added
// below.
err := c.rmServiceBinding(prev.ServiceName, prev.ServiceID, nid, eid,
prev.Name, prev.VirtualIP.AsSlice(), prev.IngressPorts,
prev.Aliases, prev.TaskAliases, prev.EndpointIP.AsSlice(),
@@ -947,7 +962,7 @@ func (c *Controller) handleEpTableEvent(ev events.Event) {
logger.WithError(err).Error("failed removing service binding")
}
}
} else {
} else if !prev.ServiceDisabled && (!equivalent || epRec == nil || epRec.ServiceDisabled) {
// This is a remote container simply attached to an attachable network
err := c.delContainerNameResolution(nid, eid, prev.Name, prev.TaskAliases,
prev.EndpointIP.AsSlice(), "handleEpTableEvent")
@@ -960,19 +975,16 @@ func (c *Controller) handleEpTableEvent(ev events.Event) {
if epRec != nil {
if epRec.ServiceID != "" {
// This is a remote task part of a service
if epRec.ServiceDisabled {
// Don't double-remove a service binding
if prev == nil || prev.ServiceID != epRec.ServiceID || !prev.ServiceDisabled {
err := c.rmServiceBinding(epRec.ServiceName, epRec.ServiceID,
nid, eid, epRec.Name, epRec.VirtualIP.AsSlice(),
epRec.IngressPorts, epRec.Aliases, epRec.TaskAliases,
epRec.EndpointIP.AsSlice(), "handleEpTableEvent", true, false)
if err != nil {
logger.WithError(err).Error("failed disabling service binding")
return
}
if equivalent && !prev.ServiceDisabled && epRec.ServiceDisabled {
err := c.rmServiceBinding(epRec.ServiceName, epRec.ServiceID,
nid, eid, epRec.Name, epRec.VirtualIP.AsSlice(),
epRec.IngressPorts, epRec.Aliases, epRec.TaskAliases,
epRec.EndpointIP.AsSlice(), "handleEpTableEvent", true, false)
if err != nil {
logger.WithError(err).Error("failed disabling service binding")
return
}
} else {
} else if !epRec.ServiceDisabled {
err := c.addServiceBinding(epRec.ServiceName, epRec.ServiceID, nid, eid,
epRec.Name, epRec.VirtualIP.AsSlice(), epRec.IngressPorts,
epRec.Aliases, epRec.TaskAliases, epRec.EndpointIP.AsSlice(),
@@ -982,7 +994,7 @@ func (c *Controller) handleEpTableEvent(ev events.Event) {
return
}
}
} else {
} else if !epRec.ServiceDisabled && (!equivalent || prev == nil || prev.ServiceDisabled) {
// This is a remote container simply attached to an attachable network
err := c.addContainerNameResolution(nid, eid, epRec.Name, epRec.TaskAliases,
epRec.EndpointIP.AsSlice(), "handleEpTableEvent")

View File

@@ -4,10 +4,14 @@
package libnetwork
import (
"fmt"
"net"
"net/netip"
"slices"
"testing"
"github.com/docker/docker/libnetwork/networkdb"
"github.com/gogo/protobuf/proto"
"gotest.tools/v3/assert"
)
@@ -43,9 +47,12 @@ func TestEndpointEvent_EquivalentTo(t *testing.T) {
return a.EquivalentTo(b)
}
assert.Check(t, reflexiveEquiv(nil, nil), "nil should be equivalent to nil")
assert.Check(t, !reflexiveEquiv(&a, nil), "non-nil should not be equivalent to nil")
b := a
b.ServiceDisabled = true
assert.Check(t, !reflexiveEquiv(&a, &b), "differing by ServiceDisabled")
assert.Check(t, reflexiveEquiv(&a, &b), "ServiceDisabled value should not matter")
c := a
c.IngressPorts = slices.Clone(a.IngressPorts)
@@ -91,3 +98,321 @@ func TestEndpointEvent_EquivalentTo(t *testing.T) {
l.Name = "aaaaa"
assert.Check(t, !reflexiveEquiv(&a, &l), "Differing Name should not be equivalent")
}
type mockServiceBinder struct {
actions []string
}
func (m *mockServiceBinder) addContainerNameResolution(nID, eID, containerName string, _ []string, ip net.IP, _ string) error {
m.actions = append(m.actions, fmt.Sprintf("addContainerNameResolution(%v, %v, %v, %v)", nID, eID, containerName, ip))
return nil
}
func (m *mockServiceBinder) delContainerNameResolution(nID, eID, containerName string, _ []string, ip net.IP, _ string) error {
m.actions = append(m.actions, fmt.Sprintf("delContainerNameResolution(%v, %v, %v, %v)", nID, eID, containerName, ip))
return nil
}
func (m *mockServiceBinder) addServiceBinding(svcName, svcID, nID, eID, containerName string, vip net.IP, _ []*PortConfig, _, _ []string, ip net.IP, _ string) error {
m.actions = append(m.actions, fmt.Sprintf("addServiceBinding(%v, %v, %v, %v, %v, %v, %v)", svcName, svcID, nID, eID, containerName, vip, ip))
return nil
}
func (m *mockServiceBinder) rmServiceBinding(svcName, svcID, nID, eID, containerName string, vip net.IP, _ []*PortConfig, _, _ []string, ip net.IP, _ string, deleteSvcRecords bool, fullRemove bool) error {
m.actions = append(m.actions, fmt.Sprintf("rmServiceBinding(%v, %v, %v, %v, %v, %v, %v, deleteSvcRecords=%v, fullRemove=%v)", svcName, svcID, nID, eID, containerName, vip, ip, deleteSvcRecords, fullRemove))
return nil
}
func TestHandleEPTableEvent(t *testing.T) {
svc1 := EndpointRecord{
Name: "ep1",
ServiceName: "svc1",
ServiceID: "id1",
VirtualIP: "10.0.0.1",
EndpointIP: "192.168.12.42",
}
svc1disabled := svc1
svc1disabled.ServiceDisabled = true
svc2 := EndpointRecord{
Name: "ep2",
ServiceName: "svc2",
ServiceID: "id2",
VirtualIP: "10.0.0.2",
EndpointIP: "172.16.69.5",
}
svc2disabled := svc2
svc2disabled.ServiceDisabled = true
ctr1 := EndpointRecord{
Name: "ctr1",
EndpointIP: "172.18.1.1",
}
ctr1disabled := ctr1
ctr1disabled.ServiceDisabled = true
ctr2 := EndpointRecord{
Name: "ctr2",
EndpointIP: "172.18.1.2",
}
ctr2disabled := ctr2
ctr2disabled.ServiceDisabled = true
tests := []struct {
name string
prev, ev *EndpointRecord
expectedActions []string
}{
{
name: "Insert/Service/ServiceDisabled=false",
ev: &svc1,
expectedActions: []string{
"addServiceBinding(svc1, id1, network1, endpoint1, ep1, 10.0.0.1, 192.168.12.42)",
},
},
{
name: "Insert/Service/ServiceDisabled=true",
ev: &svc1disabled,
},
{
name: "Insert/Container/ServiceDisabled=false",
ev: &ctr1,
expectedActions: []string{
"addContainerNameResolution(network1, endpoint1, ctr1, 172.18.1.1)",
},
},
{
name: "Insert/Container/ServiceDisabled=true",
ev: &ctr1disabled,
},
{
name: "Update/Service/ServiceDisabled=ft",
prev: &svc1,
ev: &svc1disabled,
expectedActions: []string{
"rmServiceBinding(svc1, id1, network1, endpoint1, ep1, 10.0.0.1, 192.168.12.42, deleteSvcRecords=true, fullRemove=false)",
},
},
{
name: "Update/Service/ServiceDisabled=tf",
prev: &svc1disabled,
ev: &svc1,
expectedActions: []string{
"addServiceBinding(svc1, id1, network1, endpoint1, ep1, 10.0.0.1, 192.168.12.42)",
},
},
{
name: "Update/Service/ServiceDisabled=ff",
prev: &svc1,
ev: &svc1,
},
{
name: "Update/Service/ServiceDisabled=tt",
prev: &svc1disabled,
ev: &svc1disabled,
},
{
name: "Update/Container/ServiceDisabled=ft",
prev: &ctr1,
ev: &ctr1disabled,
expectedActions: []string{
"delContainerNameResolution(network1, endpoint1, ctr1, 172.18.1.1)",
},
},
{
name: "Update/Container/ServiceDisabled=tf",
prev: &ctr1disabled,
ev: &ctr1,
expectedActions: []string{
"addContainerNameResolution(network1, endpoint1, ctr1, 172.18.1.1)",
},
},
{
name: "Update/Container/ServiceDisabled=ff",
prev: &ctr1,
ev: &ctr1,
},
{
name: "Update/Container/ServiceDisabled=tt",
prev: &ctr1disabled,
ev: &ctr1disabled,
},
{
name: "Delete/Service/ServiceDisabled=false",
prev: &svc1,
expectedActions: []string{
"rmServiceBinding(svc1, id1, network1, endpoint1, ep1, 10.0.0.1, 192.168.12.42, deleteSvcRecords=true, fullRemove=true)",
},
},
{
name: "Delete/Service/ServiceDisabled=true",
prev: &svc1disabled,
expectedActions: []string{
"rmServiceBinding(svc1, id1, network1, endpoint1, ep1, 10.0.0.1, 192.168.12.42, deleteSvcRecords=true, fullRemove=true)",
},
},
{
name: "Delete/Container/ServiceDisabled=false",
prev: &ctr1,
expectedActions: []string{
"delContainerNameResolution(network1, endpoint1, ctr1, 172.18.1.1)",
},
},
{
name: "Delete/Container/ServiceDisabled=true",
prev: &ctr1disabled,
},
{
name: "Replace/From=Service/To=Service/ServiceDisabled=ff",
prev: &svc1,
ev: &svc2,
expectedActions: []string{
"rmServiceBinding(svc1, id1, network1, endpoint1, ep1, 10.0.0.1, 192.168.12.42, deleteSvcRecords=true, fullRemove=true)",
"addServiceBinding(svc2, id2, network1, endpoint1, ep2, 10.0.0.2, 172.16.69.5)",
},
},
{
name: "Replace/From=Service/To=Service/ServiceDisabled=ft",
prev: &svc1,
ev: &svc2disabled,
expectedActions: []string{
"rmServiceBinding(svc1, id1, network1, endpoint1, ep1, 10.0.0.1, 192.168.12.42, deleteSvcRecords=true, fullRemove=true)",
},
},
{
name: "Replace/From=Service/To=Service/ServiceDisabled=tf",
prev: &svc1disabled,
ev: &svc2,
expectedActions: []string{
"rmServiceBinding(svc1, id1, network1, endpoint1, ep1, 10.0.0.1, 192.168.12.42, deleteSvcRecords=true, fullRemove=true)",
"addServiceBinding(svc2, id2, network1, endpoint1, ep2, 10.0.0.2, 172.16.69.5)",
},
},
{
name: "Replace/From=Service/To=Service/ServiceDisabled=tt",
prev: &svc1disabled,
ev: &svc2disabled,
expectedActions: []string{
"rmServiceBinding(svc1, id1, network1, endpoint1, ep1, 10.0.0.1, 192.168.12.42, deleteSvcRecords=true, fullRemove=true)",
},
},
{
name: "Replace/From=Service/To=Container/ServiceDisabled=ff",
prev: &svc1,
ev: &ctr2,
expectedActions: []string{
"rmServiceBinding(svc1, id1, network1, endpoint1, ep1, 10.0.0.1, 192.168.12.42, deleteSvcRecords=true, fullRemove=true)",
"addContainerNameResolution(network1, endpoint1, ctr2, 172.18.1.2)",
},
},
{
name: "Replace/From=Service/To=Container/ServiceDisabled=ft",
prev: &svc1,
ev: &ctr2disabled,
expectedActions: []string{
"rmServiceBinding(svc1, id1, network1, endpoint1, ep1, 10.0.0.1, 192.168.12.42, deleteSvcRecords=true, fullRemove=true)",
},
},
{
name: "Replace/From=Service/To=Container/ServiceDisabled=tf",
prev: &svc1disabled,
ev: &ctr2,
expectedActions: []string{
"rmServiceBinding(svc1, id1, network1, endpoint1, ep1, 10.0.0.1, 192.168.12.42, deleteSvcRecords=true, fullRemove=true)",
"addContainerNameResolution(network1, endpoint1, ctr2, 172.18.1.2)",
},
},
{
name: "Replace/From=Service/To=Container/ServiceDisabled=tt",
prev: &svc1disabled,
ev: &ctr2disabled,
expectedActions: []string{
"rmServiceBinding(svc1, id1, network1, endpoint1, ep1, 10.0.0.1, 192.168.12.42, deleteSvcRecords=true, fullRemove=true)",
},
},
{
name: "Replace/From=Container/To=Service/ServiceDisabled=ff",
prev: &ctr1,
ev: &svc2,
expectedActions: []string{
"delContainerNameResolution(network1, endpoint1, ctr1, 172.18.1.1)",
"addServiceBinding(svc2, id2, network1, endpoint1, ep2, 10.0.0.2, 172.16.69.5)",
},
},
{
name: "Replace/From=Container/To=Service/ServiceDisabled=ft",
prev: &ctr1,
ev: &svc2disabled,
expectedActions: []string{
"delContainerNameResolution(network1, endpoint1, ctr1, 172.18.1.1)",
},
},
{
name: "Replace/From=Container/To=Service/ServiceDisabled=tf",
prev: &ctr1disabled,
ev: &svc2,
expectedActions: []string{
"addServiceBinding(svc2, id2, network1, endpoint1, ep2, 10.0.0.2, 172.16.69.5)",
},
},
{
name: "Replace/From=Container/To=Service/ServiceDisabled=tt",
prev: &ctr1disabled,
ev: &svc2disabled,
},
{
name: "Replace/From=Container/To=Container/ServiceDisabled=ff",
prev: &ctr1,
ev: &ctr2,
expectedActions: []string{
"delContainerNameResolution(network1, endpoint1, ctr1, 172.18.1.1)",
"addContainerNameResolution(network1, endpoint1, ctr2, 172.18.1.2)",
},
},
{
name: "Replace/From=Container/To=Container/ServiceDisabled=ft",
prev: &ctr1,
ev: &ctr2disabled,
expectedActions: []string{
"delContainerNameResolution(network1, endpoint1, ctr1, 172.18.1.1)",
},
},
{
name: "Replace/From=Container/To=Container/ServiceDisabled=tf",
prev: &ctr1disabled,
ev: &ctr2,
expectedActions: []string{
"addContainerNameResolution(network1, endpoint1, ctr2, 172.18.1.2)",
},
},
{
name: "Replace/From=Container/To=Container/ServiceDisabled=tt",
prev: &ctr1disabled,
ev: &ctr2disabled,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
msb := &mockServiceBinder{}
event := networkdb.WatchEvent{
NetworkID: "network1",
Key: "endpoint1",
}
var err error
if tt.prev != nil {
event.Prev, err = proto.Marshal(tt.prev)
assert.NilError(t, err)
}
if tt.ev != nil {
event.Value, err = proto.Marshal(tt.ev)
assert.NilError(t, err)
}
handleEpTableEvent(msb, event)
assert.DeepEqual(t, tt.expectedActions, msb.actions)
})
}
}