Description
We use VXLAN to provide network connectivity between containers in our "compute clusters". Containers have an IPv6 overlay network that we implement on our hosts, and we use VXLAN to provide an IPv4 network inside that IPv6 network.

Right now, we're doing this by running these clustered containers with `--network=host`. During the `createRuntime` OCI hook, we enter the container's network namespace from the host and run `iproute2` commands to create the VXLAN interfaces. This is working OK for us right now, but we're wondering if we'll ever be able to do this without `--network=host`.
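For context, the hook does roughly the following (a minimal sketch; `$CONTAINER_PID`, the VNI, and the addresses are illustrative placeholders, not our exact configuration):

```sh
# Run from the host during the createRuntime hook: enter the container's
# network namespace and build the VXLAN interface there.
nsenter --net=/proc/"$CONTAINER_PID"/ns/net sh -c '
  ip link add vxlan0 type vxlan id 42 dev eth0 remote 172.18.0.3 dstport 4789
  ip addr add 10.0.0.2/24 dev vxlan0
  ip link set vxlan0 up
'
```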
It seems to me like there are two distinct issues here:
- gVisor does not support creating L2 interfaces via `iproute2`. In both netstack and host networking mode (with `--cap-add all`), the following command fails. It also fails when creating a `dummy` interface:
  ```
  root@ebcff70f2461:/# ip link add vxlan0 type vxlan id 42 dev eth0 remote 172.18.0.3 dstport 4789
  RTNETLINK answers: Operation not supported
  ```
- Netstack does not have VXLAN support. When we move a VXLAN interface into a container's network namespace with netstack, it fails to pass packets on that interface (see the sketch after this list).
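Concretely, the move that fails looks something like this (`$SANDBOX_PID`, the interface names, and the VNI are placeholders):

```sh
# On the host: create the VXLAN device against the host underlay, then
# hand it to the sandbox's network namespace.
ip link add vxlan0 type vxlan id 42 dev eth0 remote 172.18.0.3 dstport 4789
ip link set vxlan0 netns "$SANDBOX_PID"
# Bring it up inside the namespace; netstack then fails to pass packets
# over the interface.
nsenter --net=/proc/"$SANDBOX_PID"/ns/net ip link set vxlan0 up
```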
Is this feature related to a specific bug?
No response
Do you have a specific solution in mind?
I'm curious whether you can ride the host kernel's VXLAN support to do the heavy lifting here and just treat the interface inside the container's namespace as a virtual ethernet device. If you need to implement VXLAN in userspace, however, this is certainly more complicated.
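For what it's worth, a plain kernel network namespace already behaves this way: as I understand it, a VXLAN device keeps encapsulating via the namespace it was created in even after being moved, so the host kernel keeps doing the VXLAN work. A minimal sketch of that property (all names, the VNI, and addresses are placeholders):

```sh
# Throwaway namespace standing in for the sandbox.
ip netns add demo
# Create the VXLAN device in the host namespace, against the host's eth0.
ip link add vxlan0 type vxlan id 42 dev eth0 remote 172.18.0.3 dstport 4789
# Move it; encap/decap should continue via the host's underlay.
ip link set vxlan0 netns demo
ip -n demo addr add 10.0.0.2/24 dev vxlan0
ip -n demo link set vxlan0 up
```

If that holds, netstack would only need to treat the device as an ordinary L2 endpoint and leave the VXLAN processing to the host kernel.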