
IPv6 for hosts behind UTM

Hi list,

I got an IPv6 /48 from my provider. I assigned one IPv6 address to the UTM interface connected to the provider and a second one to the UTM's internal interface, with the IPv6 gateway being the provider-facing UTM interface. I don't use prefix advertisement, which is limited to /64. By the way, would it work with another mask such as /96?
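A side note on why the limit is /64: SLAAC interface identifiers are 64 bits, classically derived from the MAC address via EUI-64, so a longer prefix such as /96 leaves no room for them. A minimal sketch of the EUI-64 construction (the MAC below is just an example):

```python
# Sketch: why SLAAC needs a /64 -- the interface identifier is 64 bits,
# built from the MAC via EUI-64 (flip the universal/local bit, insert ff:fe).
def eui64_interface_id(mac: str) -> str:
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                      # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = [f"{eui[i] << 8 | eui[i + 1]:04x}" for i in range(0, 8, 2)]
    return ":".join(groups)

print(eui64_interface_id("52:54:00:12:34:56"))  # -> 5054:00ff:fe12:3456
```

The 64-bit identifier then fills the low half of the advertised /64, which is why prefix advertisement refuses anything longer.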

My setup: host running Debian 9 with libvirt/KVM. The UTM is software UTM 9.510-5 in a VM. A second VM acts as a server for OpenVPN, DHCP, DNS, and so on. Everything works fine with IPv4. I created a firewall rule allowing all IPv6-to-IPv6 traffic for all services. I manually set an IPv6 address on a host behind the UTM (that is, connected to the internal interface), and from there I can ping, ssh, or telnet to the outside; all is good.

The problem is that I can't connect the other way, from outside to internal. I can ping the UTM's provider interface, but that's all. I can also ssh to an outside port redirected to the host's IPv6 address, but the session never completes. With tshark I can see the traffic arriving, and on the client side (ssh -vvv) I get after a while:

debug2: compression ctos: none,zlib@openssh.com
debug2: compression stoc: none,zlib@openssh.com
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug3: send packet: type 30
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
Connection closed by <UTM external ipv6 addr> port <ssh port>

The client is a VM in a data center with the same setup (Debian 9 host, Debian 9 VM, IPv6 in a /64 subnet). From the host behind the UTM I can ping, ssh, and telnet to this client.

Any clues?

Daniel



  • Nobody on this? Does anyone use IPv6 behind UTM?

    I restarted the whole setup; everything is now in a /64, including the ISP interface. From inside it almost works (see *): I can ping all hosts, including the UTM on the internal interface. Hosts without a fixed IPv6 address get one from prefix advertisement; all is good. The problem is that I can't ping the IPv6 address of the UTM's ISP interface :(

    From outside it's the same: I can ping the ISP IPv6 address but none of the internal ones! From the UTM, using Support => Tools, I can ping an outside IPv6 address using nearest routing (i.e. via the ISP interface) but not if I select the internal interface.

    It seems that the firewall is blocking internal IPv6 to external and vice versa. I even tried giving the internal interface an IPv6 gateway (the ISP interface), with no change. I also see this in ip -6 r on the UTM:

    2a01:xxxx:yyyy::1 dev eth0.1002 metric 1024   ; ISP ipv6 GW
    2a01:xxxx:yyyy::2 dev eth2 metric 1024           ; ISP interface
    2a01:xxxx:yyyy::10:254 dev eth2 metric 1024  ; *** This entry should not be here, that's the IP of the second VM! *** The internal IPv6 is ::10:1
    2a01:xxxx:yyyy::/64 dev eth2 proto kernel metric 256
    2a01:xxxx:yyyy::/64 dev eth0.1002 proto kernel metric 256
    fe80::/64 dev eth2 proto kernel metric 256
    fe80::/64 dev eth0 proto kernel metric 256
    fe80::/64 dev eth0.1002 proto kernel metric 256
    fe80::/64 dev eth0.1001 proto kernel metric 256
    fe80::/64 dev eth0.100 proto kernel metric 256
    fe80::/64 dev eth0.2 proto kernel metric 256
    fe80::/64 dev eth0.1000 proto kernel metric 256
    fe80::/64 dev eth0.210 proto kernel metric 256
    fe80::/64 dev ifb0 proto kernel metric 256

    Any help or tip is appreciated.

    (*) the second VM can only be reached by the UTM VM and the physical host. The other way around it works like a charm. But that's another story.

     

    Daniel

  • Me again ;)

    What I see in the logs is that neighbor solicitations for the external IPv6 address never get answered (capture on the internal interface):

    19:01:35.455562 IP6 2a01:xxxx:yyyy::10:254 > ff02::1:ff00:2: ICMP6, neighbor solicitation, who has guava, length 32
    19:01:35.578057 IP6 2a01:xxxx:yyyy::10:254 > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2a01:xxxx:yyyy::1, length 32
    19:01:36.474502 IP6 2a01:xxxx:yyyy::10:254 > ff02::1:ff00:2: ICMP6, neighbor solicitation, who has guava, length 32
    19:01:36.602368 IP6 2a01:xxxx:yyyy::10:254 > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2a01:xxxx:yyyy::1, length 32
    19:01:37.498390 IP6 2a01:xxxx:yyyy::10:254 > ff02::1:ff00:2: ICMP6, neighbor solicitation, who has guava, length 32
    19:01:37.626370 IP6 2a01:xxxx:yyyy::10:254 > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2a01:xxxx:yyyy::1, length 32
    19:01:38.522412 IP6 2a01:xxxx:yyyy::10:254 > ff02::1:ff00:2: ICMP6, neighbor solicitation, who has guava, length 32
    19:01:39.034301 IP6 2a01:xxxx:yyyy::10:254 > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2a01:xxxx:yyyy::1, length 32
    19:01:39.546318 IP6 2a01:xxxx:yyyy::10:254 > ff02::1:ff00:2: ICMP6, neighbor solicitation, who has guava, length 32
    19:01:40.058373 IP6 2a01:xxxx:yyyy::10:254 > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2a01:xxxx:yyyy::1, length 32
    19:01:40.457457 IP6 fe80::xyz:ff:zyx:1234 > 2a01:xxxx:yyyy::10:254: ICMP6, neighbor solicitation, who has 2a01:xxxx:yyyy::10:254, length 32
    19:01:40.457574 IP6 2a01:xxxx:yyyy::10:254 > fe80::xyz:ff:zyx:1234: ICMP6, neighbor advertisement, tgt is 2a01:xxxx:yyyy::10:254, length 24

    but they are answered for the internal IPv6 address (last two lines).

    Is there a firewall rule I should add to allow those neighbor solicitations/advertisements?

    Daniel

  • Hello MasterRoshi,

    there is something I don't understand. Above you say:

    MasterRoshi said:

    The ISP modem needs a route that points the 2a01:xxxx:yyyy:10::/64 network to the UTM's 2a01:xxxx:yyyy::2 (external interface).  

    and in a previous message you wrote

    MasterRoshi said:

    Neighbor solicitation is done within the same broadcast domain so if you have two interfaces with the overlapping networks that could be the issue. 

    which seems contradictory to me: if the ISP router has to know the route to the internal /64, the networks are overlapping! I tried setting the external IPv6 with a /48, then adding 2a01:xxxx:yyyy:10::2/64 as an additional IPv6 address, and finally assigning this IPv6 to the external interface. No luck :(

    If I ask the ISP to give me a /64, I will still have to split my internal network to avoid overlapping. So, same situation, no?

    I did a traceroute from UTM like

    guava:/root # traceroute6 -i eth0.1002 -n <an external host> ; OK (external interface)

    guava:/root # traceroute6 -i eth2 -n <an external host> ; not OK (internal interface): Network is unreachable

    Then I replaced <an external host> with my external IPv6 2a01:xxxx:yyyy::2 and got the same result, despite the fact that a ping6 to this IPv6 address responds!

    I even tried a traceroute over the internal interface with the external IPv6 as source address. Network unreachable.

    Well well well ...

    Daniel

  • If RADVD is not working correctly, the Internet cannot "find" your /48 netblock, and that, in my opinion, is why IPv6 doesn't work.

    In my setup, I can ping the internal interface using IPv6 just fine, I can ping the outside gateway just fine from the outside interface, but I cannot ping the outside gateway from the inside. This is consistent with RADVD not working.

    So the question is: why aren't the assigned netblocks advertised to the adjacent ISP router? If I connect my laptop directly to the modem, I get an IPv6 address and everything works. It is only IPv6 traversing the firewall from the inside that is the issue. For me, this has been going on since late 2017. I suspect it is a low-priority issue, though.

  • The only way to get around splitting the networks is an NDP proxy (the equivalent of proxy ARP), which the UTM does not have as a feature.

    The problem is that you are running a dual-stack LAN, but for IPv4 you are obviously NATing, so there are no overlapping routes.

    https://community.sophos.com/products/unified-threat-management/f/general-discussion/22186/best-way-to-use-ndp-proxy

    Think of it in IPV4 terms:

    Your ISP is 1.1.1.1/24

    Your UTM External interface is 1.1.1.2/25

    Your UTM Internal interface is 1.1.1.129/25 

    Your VM is 1.1.1.130/25 with 1.1.1.129 as the default gateway.

    If you ping from 1.1.1.130 to 1.1.1.1, the ISP will try to ARP for 1.1.1.130 instead of routing it back to 1.1.1.2, because it thinks it is on the same subnet. How would it ever work?

    Now, in IPv4 you can enable proxy ARP to get around this issue, so that the UTM external interface answers ARP requests for the hosts behind it.

    The link above does have a workaround but it is not a supported method.
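The overlap in this IPv4 analogy can be checked directly with Python's ipaddress module (a sketch using the example addresses above):

```python
import ipaddress

isp_net = ipaddress.ip_network("1.1.1.0/24")         # ISP's on-link network
internal_net = ipaddress.ip_network("1.1.1.128/25")  # UTM internal side
vm = ipaddress.ip_address("1.1.1.130")               # host behind the UTM

# From the ISP's point of view the VM is directly attached,
# so it ARPs for it instead of routing via the UTM:
print(vm in isp_net)                    # True -> ISP treats 1.1.1.130 as on-link
print(internal_net.subnet_of(isp_net))  # True -> the networks overlap
```

Both checks come back True, which is exactly the overlap that makes the ISP bypass the UTM.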

  • I understand your point of view, but remember that IPv6 was (in part) created to avoid all that NATing!

    The UTM can only deal with a /64 mask (why?), so if I understand you correctly, if I had a /64 instead of a /48 it should work. If so, I can test this solution, as I have a second provider that gives me a /64 which I don't use.

    I believe Edward is right: something is broken/not working/not implemented (RADVD or something else). The only way to use IPv6 with UTM would be to NAT IPv6 or use 6to4. That's not what I want; it's a regression, not the future.

    Daniel

  • Hi Daniel,

    It's not that you need a /64 for some particular reason, but you need to split the /48 into smaller networks and route them instead, since there is no layer-2 connection from the ISP side to the hosts behind the UTM.
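As a sketch of that split (using the 2001:db8::/48 documentation prefix in place of the real 2a01:xxxx:yyyy::/48): the /48 divides into 65536 non-overlapping /64s, so the external link and the internal LAN can each get their own:

```python
import ipaddress

# Documentation prefix standing in for the ISP-assigned /48
block = ipaddress.ip_network("2001:db8::/48")

# A /48 contains 2**(64-48) = 65536 distinct /64 networks
subnets = block.subnets(new_prefix=64)
external = next(subnets)   # e.g. for the UTM <-> ISP link
internal = next(subnets)   # e.g. for the LAN behind the UTM

print(external)                     # 2001:db8::/64
print(internal)                     # 2001:db8:0:1::/64
print(external.overlaps(internal))  # False -> cleanly routable
```

Because the two /64s don't overlap, the ISP can route the internal one to the UTM's external address without any NDP ambiguity.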

  • The UTM should announce that it is the router for the whole /48; when the firewall then sees traffic, it will forward the packets accordingly. The problem (I think) is that the UTM never tells the ISP router, and therefore the rest of the Internet, that it is the router for the /48 (or /64, if that is how you set it up). This is why pinging the outside interface works (the ISP router, and the world, can route to that IP address), why devices on the inside can ping the gateway (they are in the same broadcast domain), and why devices on the inside cannot reach devices on the Internet (there is no route back). Since things work without the UTM, that strongly implies the UTM is the problem.

  • I agree; for me too it's a UTM problem, since everything works fine *FROM inside* the UTM.

    I did another test on a server I have in a data center: a host plus two VMs. The ISP gave me a /64, which is connected to eth0 with the default gateway being the ISP router. I created a /96 on virbr0 and gave IPv6 addresses to my VMs with the default route being the virbr0 IPv6 address. Everything works like a charm, despite the fact that the two networks overlap.

     

    Daniel

  • Hi,

    finally I got it working! It was an ISP problem: they didn't route the /48 correctly. After they modified their rules, everything worked as it should.

    To summarize:

    . the UTM needs a /64 for internal services such as DHCP, which means you need at least two /64 ranges (external and internal), unless the next point is not true

    . the networks should not overlap, but I'm not sure about this (I didn't test with a /96 inside the /64, for instance, knowing that DHCP and friends wouldn't work)

    . the IPv6 gateway of the internal UTM interface has to be the external UTM IPv6 interface

    . clients inside the LAN use the internal UTM IPv6 address as their gateway

    Thanks to MasterRoshi and Edward for their help.

     

    Regards

  • Hi Daniel,


    Glad you got it working.

    In the end, IPv6 is not much different from IPv4 in terms of L2/L3 communication, and almost all of the same rules still apply.

    Have a good one!

  • Daniel Huhardeaux said:

     

    . client from inside the lan have internal UTM ipv6 GW 

    Another point: I discovered that ssh from outside to an internal IPv6 host stalled partway through the initial connection. This was due to an MTU of 1500 on the internal IPv6 device; changing it to 1492 made everything work flawlessly.
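For what it's worth, 1492 is the typical path MTU behind a PPPoE link (1500 minus 8 bytes of PPPoE/PPP overhead), which would explain why that value works here; a quick sketch of the arithmetic, assuming a PPPoE hop upstream:

```python
# Sketch of the MTU arithmetic (assuming a PPPoE hop upstream, which is the
# usual reason 1492 works where 1500 does not).
ETHERNET_MTU = 1500
PPPOE_OVERHEAD = 8   # 6 bytes PPPoE header + 2 bytes PPP protocol ID
IPV6_HEADER = 40
TCP_HEADER = 20      # without options

path_mtu = ETHERNET_MTU - PPPOE_OVERHEAD
mss = path_mtu - IPV6_HEADER - TCP_HEADER

print(path_mtu)  # 1492
print(mss)       # 1432 -> max TCP payload per segment over IPv6
```

IPv6 routers never fragment in transit, so if ICMPv6 "Packet Too Big" messages are filtered, full-size packets silently disappear mid-handshake, which matches the stalled ssh session described above.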

    Daniel
