The instances that land on one of our compute nodes (let's call it kvmB)
and use the network test-network
have a connectivity issue:
cloud-init cannot reach the OpenStack metadata service at 169.254.169.254 for those instances ( as you see above )
Interface configuration entries are missing on one of the control planes ( below )
root@controller1:~# ip netns exec qdhcp-3caceee8-e1e3-4d89-b2a1-f392df57a5e0 ip a s dev tapae5f7381-e1
11795: tapae5f7381-e1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether fa:16:3e:77:e9:30 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.4/24 brd 10.0.2.255 scope global tapae5f7381-e1
valid_lft forever preferred_lft forever
inet 169.254.169.254/32 brd 169.254.169.254 scope global tapae5f7381-e1
valid_lft forever preferred_lft forever
inet6 fe80::a9fe:a9fe/128 scope link
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe77:e930/64 scope link
valid_lft forever preferred_lft forever
root@controller2:~# ip netns exec qdhcp-3caceee8-e1e3-4d89-b2a1-f392df57a5e0 ip a s dev tap174eb54a-12
12148: tap174eb54a-12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether fa:16:3e:e9:02:92 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.2/24 brd 10.0.2.255 scope global tap174eb54a-12
valid_lft forever preferred_lft forever
inet 169.254.169.254/32 brd 169.254.169.254 scope global tap174eb54a-12
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fee9:292/64 scope link
valid_lft forever preferred_lft forever
root@controller3:~# ip netns exec qdhcp-3caceee8-e1e3-4d89-b2a1-f392df57a5e0 ip a s dev tap0489c152-59
12951: tap0489c152-59: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether fa:16:3e:12:cc:6b brd ff:ff:ff:ff:ff:ff
inet 10.0.2.3/24 brd 10.0.2.255 scope global tap0489c152-59
valid_lft forever preferred_lft forever
inet 169.254.169.254/32 brd 169.254.169.254 scope global tap0489c152-59
valid_lft forever preferred_lft forever
inet6 fe80::a9fe:a9fe/128 scope link
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe12:cc6b/64 scope link
valid_lft forever preferred_lft forever
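To spot the discrepancy without eyeballing three dumps, the expected metadata entries can be grepped out of each controller's output. A minimal sketch (the sample dump mirrors controller2's tap174eb54a-12; saving the dump to a file is illustrative, not part of the original workflow):

```shell
# Save the namespace's address dump, then check it for the two metadata
# entries the DHCP port is expected to carry. On a controller the dump
# would come from:
#   ip netns exec qdhcp-<net-id> ip a s dev <tap>
cat > /tmp/tap_dump.txt <<'EOF'
    inet 10.0.2.2/24 brd 10.0.2.255 scope global tap174eb54a-12
    inet 169.254.169.254/32 brd 169.254.169.254 scope global tap174eb54a-12
    inet6 fe80::f816:3eff:fee9:292/64 scope link
EOF
for want in '169.254.169.254/32' 'fe80::a9fe:a9fe/128'; do
    grep -qF "$want" /tmp/tap_dump.txt || echo "missing: $want"
done
```

Run against controller2's dump this reports the absent fe80::a9fe:a9fe/128; against controller1 or controller3 it reports nothing.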
Did you see it? On controller2, we are missing the inet6 fe80::a9fe:a9fe/128 entry.
Also,
root@controller1:~# ip netns exec qdhcp-3caceee8-e1e3-4d89-b2a1-f392df57a5e0 ip -6 r
fe80::a9fe:a9fe dev tapae5f7381-e1 proto kernel metric 256 pref medium
fe80::/64 dev tapae5f7381-e1 proto kernel metric 256 pref medium
root@controller2:~# ip netns exec qdhcp-3caceee8-e1e3-4d89-b2a1-f392df57a5e0 ip -6 r
fe80::/64 dev tap174eb54a-12 proto kernel metric 256 pref medium
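The same asymmetry shows up in the v6 routing table: controller1 has the /128 host route for fe80::a9fe:a9fe, controller2 only has the fe80::/64 on-link route. A quick check against a saved route dump (sample mirrors controller2; the dump file is illustrative):

```shell
# controller2's `ip -6 r` output from the qdhcp namespace, saved to a
# file; check it for the metadata host route that controller1 carries.
cat > /tmp/v6_routes.txt <<'EOF'
fe80::/64 dev tap174eb54a-12 proto kernel metric 256 pref medium
EOF
if grep -q '^fe80::a9fe:a9fe ' /tmp/v6_routes.txt; then
    echo "metadata host route present"
else
    echo "metadata host route missing"
fi
```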
I added those inet6 entries manually and rebooted the instances:
ip netns exec qdhcp-3caceee8-e1e3-4d89-b2a1-f392df57a5e0 ip -6 route add fe80::a9fe:a9fe dev tap174eb54a-12
ip netns exec qdhcp-3caceee8-e1e3-4d89-b2a1-f392df57a5e0 ip -6 addr add fe80::a9fe:a9fe/128 dev tap174eb54a-12
But the issue still persists.
As mentioned in the Slack chat, this probably only happens on one of the compute nodes when using the test-network.
If we migrate the instance elsewhere, there is no issue, and instances on the compute node kvmB using other networks are OK.
[stable/zed]
neutron version:
$ neutron-dhcp-agent --version
neutron-dhcp-agent 21.2.1.dev40
Chart version: 0.3.29
nova version:
$ nova-api --version
26.2.2
Chart version: 0.3.27
reference: https://kubernetes.slack.com/archives/C056YSPJB7U/p1729689879392439
vxlan project network:
Name: test-network
ID: 3caceee8-e1e3-4d89-b2a1-f392df57a5e0
Subnet:
Name: test-subnet
ID: d800ee64-c0ed-4b6b-b7fb-43cda3097bc0
allocation_pools:
start: 10.0.2.5
cidr: 10.0.2.0/24
dns_nameservers:
enable_dhcp: true
gateway_ip: 10.0.2.1
ip_version: 4