Determine the networking mode the container operates in.
See the settings of the Hardware Node's adapter the container is bridged to.
See the network and the network mode.
vzlist -a -o ctid,nettype,network,ip
See the list of container/node interfaces. If there is a vethCTID.x interface, the container is bridged.
vznetcfg if list
See all interfaces.
ip a l
See the routing table.
ip r l
See more detailed information about the bridge (brX) interface.
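One way to inspect a bridge is with iproute2; a minimal sketch (on older systems, `brctl show` from the bridge-utils package serves the same purpose):

```shell
# Driver-level details for all bridge interfaces (iproute2)
ip -d link show type bridge

# Ports (e.g. vethCTID.x) attached to each bridge
bridge link show
```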
Determine the correctness of the container's network settings
The container IP address, netmask, and default gateway should be set up correctly -- the same way as if you were adding a physical host to the same network.
The router in the LAN segment to which we are adding the bridged container should be able to route packets to the container interface -- that is, there must not be a static ARP table configuration on the router.
Always remember the container belongs to the same network segment as the physical adapter on the Hardware Node to which it is bridged.
Make sure the container has the correct point-to-point (p-t-p) settings -- the default route in the container is via venet0, and the netmask is 255.255.255.255.
ifconfig | grep venet
Make sure there is a static ARP entry on the Hardware Node for the container IP address.
arp -a | grep IP
Make sure there is a route on the node for the container's IP that routes packets via venet0.
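This can be verified on the node by inspecting the routing table; a sketch (the container IP shown in the comment is a placeholder):

```shell
# Show the node's routing table; for a host-routed container you should see
# a host route like "CT_IP dev venet0" (CT_IP is a placeholder)
ip route show
```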
If there is no ARP entry or route, check whether you can ARP the container's IP address from the node's interface. If you see an IP address conflict, the IP is already assigned to another host in the same LAN segment. If ARPing fails, it is likely that there is a router misconfiguration, or that Parallels Virtuozzo Containers (PVC) cannot send ARP announcements (arpsend) on the node's default interface.
arpsend -c 1 -w 1 -D -e IP eth0
In case a VE is reachable via ping but its resources are not accessible, make sure there is no IP conflict:
traceroute (or tracert on Windows) can be used to check whether the ICMP packets are going to the correct host, and arping can be used to check the MAC address of the host that answers the pings. For containers in host-routed mode, the MAC address should be the same as the Hardware Node's:
# arping 192.168.55.82
ARPING 192.168.55.82 from 192.168.55.81 eth0
Unicast reply from 192.168.55.82 [BC:AE:C5:0A:66:86] 0.712ms
Unicast reply from 192.168.55.82 [BC:AE:C5:0A:66:86] 0.756ms
Frequently Used KBs
The container network will lose connectivity randomly.
Check whether both bridged and host-routed networks are assigned to the container (for example, with vzlist -a -o ctid,nettype,network,ip). If both are assigned, remove one network and assign a new IP address:
# vzctl set CTID --netif_del venet1 --save
Deleting virtual adapters: veth127.1
Saved parameters for Container 127
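Assigning the new host-routed IP can then be done with vzctl's --ipadd option; a sketch, only runnable on a PVC node (CTID and the address are placeholders):

```shell
# Add a venet (host-routed) IP address to the container and persist it
vzctl set CTID --ipadd 192.168.55.90 --save
```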
IPv6 inside containers works if the node has an IPv6 address assigned, but does not work if the node only has IPv4. Containers in bridged mode use only the node's NIC, while containers in host-routed mode use the node's routing as well. Therefore, if you set an IPv6 address on the container but do not set an IPv6 address on the node, there will be no IPv6 routing for the containers, by design.
After rebooting, the server shows the error "Bringing up interface eth1: bnx2 device eth1 does not seem to be present, delaying initialization." The absence of the 70-persistent-net.rules file causes NICs to be randomly assigned to interfaces.
This actually is not related to Parallels Virtuozzo Containers. In fact, it is a common problem with udev and NICs that occurs when there are no persistent rules for the NIC-interface assignment.
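A minimal sketch of such a persistent naming rule, assuming the file /etc/udev/rules.d/70-persistent-net.rules (the MAC address and interface name are placeholders):

```
# Pin the NIC with this MAC address to the name eth1 (MAC is a placeholder)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="bc:ae:c5:0a:66:86", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
```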
Occasionally, the ping test fails between containers. Container 1 has an external connection in host-routed mode and a LAN connection in bridged mode; Container 2 has the opposite settings.