MTU Troubleshooting on Cisco IOS

Hi Laz,

Thanks for replying.

I came to 1436 because I also deducted 20 bytes for the TCP header, as I understood the MSS setting applies to the TCP payload only.

1500 - 20 bytes (outside IP header) - 4 bytes (GRE header) - 20 Bytes (inside IP header)- 20 bytes (TCP header) = 1436.
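As a sanity check, here is the same arithmetic as a small Python sketch (the byte counts assume IPv4 with no options and a basic GRE header without optional fields):

```python
# Sketch of the MSS arithmetic for TCP inside a GRE tunnel (IPv4, no options assumed).
ETH_MTU = 1500   # default Ethernet payload MTU
OUTER_IP = 20    # outer IPv4 header added by GRE encapsulation
GRE_HDR = 4      # basic GRE header, no optional fields
INNER_IP = 20    # original IPv4 header
TCP_HDR = 20     # TCP header, no options

# The GRE tunnel's IP MTU: what fits once encapsulation overhead is subtracted.
tunnel_ip_mtu = ETH_MTU - OUTER_IP - GRE_HDR    # 1476
# The MSS counts only the TCP payload, so both inner headers come off as well.
mss = tunnel_ip_mtu - INNER_IP - TCP_HDR        # 1436
print(tunnel_ip_mtu, mss)
```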

Thanks.

Sam


Hello Guys,

I have set up a route-based VPN on my ASA using VTI interfaces. I would like to know the best practice for setting the connection TCP MSS for an IPsec tunnel. Do I even need to adjust the TCP MSS? By default, the ASA sets the TCP MSS option in the SYN packets to 1380 for a site-to-site IPsec tunnel, and I know that tcpmss forces the TCP connection to have a maximum segment size no larger than 1380 bytes. So should I leave it as it is, or is it necessary to adjust it?

sysopt connection tcpmss 1350 (1360)

regards,
Sul

Hello Sulemani

When using a route-based VPN with VTI interfaces on your ASA, it’s important to consider the impact of the MSS on the performance of the VPN tunnel. The default MSS value of 1380 bytes set by the ASA for site-to-site IPsec tunnels is typically sufficient for most scenarios. This value takes into consideration the overhead introduced by the IPsec encapsulation and helps prevent issues like fragmentation and inefficient use of bandwidth.

You should only make adjustments if you observe problems on the network. If you're experiencing performance issues or connectivity problems on that particular VPN, then investigating the MSS values may be a good troubleshooting step. Otherwise, leaving the default value should be perfectly fine.

I hope this has been helpful!

Laz

Hello Laz,

Thank you so much for the info. This helps a lot.

Thanks,
Sul


Great topic and great questions. I just want to add something: the 1500-byte MTU is what you will see on 100 Mbit and 1 Gbit links.

Because we live in the 100/1000 Mbit zone (our PCs, home routers, and office switches all operate in this range), the 1500-byte MTU is always with us. In data centers and on 10 Gbit links, you will often see an MTU of 9000 instead.

Hello, everyone!

I have this topology here

When it comes to GRE tunnels, what exactly are these MTU values here?
The first one is 17916 bytes while the second one is 1476 bytes. What is the difference here? And is the second one based on the physical interface, R3’s G0/0?

When GRE is used, a GRE header (4 bytes) and an additional IP header (20 bytes) are added to the original packets, so if the MTU were kept at 1500, it would be exceeded, correct? That’s why there is an MTU of 1476.

If the original MTU were kept, then any packets that exceed the MTU would have to be fragmented, which would slow down performance, correct?

Two more things. Is there any point in using the ip mtu command on the tunnel interface? Since it doesn’t affect the actual MTU unless we change it on the physical interface itself.

And the last thing. The bandwidth on GRE tunnel interfaces is set to 100 kbit/sec. This should be changed, right? Since it might affect calculations related to our IGPs or even QoS.

Thank you!

Hello Barakat

Thanks for sharing your observation with us. The MTU size at Layer 2 by default is 1500 for all Ethernet types, even for 10 Mbps and 10 Gbps Ethernet. This standard has remained consistent, and it is a convention that’s been carried forward to faster speeds for compatibility reasons. When moving up to 10, 40, and even 100 Gbps, the conventional MTU is still 1500, but at these speeds it is typically best practice to use jumbo frames, which can be in excess of 9000 bytes. This makes communications much more efficient, especially in data center scenarios where higher speeds are the norm.

However, using a larger MTU size also means that if a packet is lost or corrupted, more data will need to be retransmitted, which can lead to inefficiencies. Therefore, for most home and office networks, a smaller MTU size of 1500 is used to balance efficiency and reliability.

I hope this has been helpful!

Laz

Hello David

The tunnel transport MTU of 1476 is as you described it. It is essentially the maximum size that the unencapsulated packet can have before it is encapsulated on the Layer 2 physical interface. The fact that GRE uses 4 additional bytes, plus 20 bytes for the additional IP header requires that the maximum MTU of the packet that will then be encapsulated over Ethernet be 1500-24=1476. This is the default of a GRE tunnel transport MTU.

Now the MTU of 17916 is that of the virtual tunnel interface. That tunnel interface can handle packets of that size. If a packet of that size arrives on the tunnel interface, it would need to be fragmented before it can be encapsulated into the tunneled packets, based on the tunnel transport MTU. So yes, your understanding is correct.

You can set the ip mtu of the tunnel interface, but this will not change the MTU or the transport MTU of the tunnel interface as shown in the output of your post. This is because these are different values and serve different purposes.

The ip mtu setting focuses on the size of the IP payload before tunnel encapsulation while the tunnel transport MTU focuses on the size of the packet after it has been encapsulated and is ready to be transported over the underlying network.

On a Cisco device, when you configure a GRE tunnel interface, the default IP MTU of the tunnel is derived from the physical interface MTU minus the encapsulation overhead, which gives 1476 bytes for plain GRE. When tunnel protection such as IPsec is applied, the value is reduced further, often to around 1400 bytes on some platforms. These conservative values reduce the likelihood of fragmentation due to the overhead introduced by tunneling protocols.

As for the bandwidth, yes: the typical default value on GRE tunnel interfaces on Cisco devices is 100 kbit/s. You should consider changing this to make sure that your IGPs and QoS mechanisms work correctly.
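As an illustration, adjusting the tunnel bandwidth might look like this (the tunnel number and value here are hypothetical; use a figure that reflects the real capacity of the underlay path):

```
interface Tunnel0
 bandwidth 50000   ! in kbit/s, i.e. 50 Mbps here
```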

I hope this has been helpful!

Laz


Hello, I have an ASR 1001-X acting as a PPPoE server, with the PPPoE service on VLAN 6. That VLAN goes out to a Mikrotik server, then through an EoIP tunnel, and then to an access switch.
Can I change the MTU of the PPPoE service? And what is the correct value, please?

Hello Qudama

Yes, you can change the MTU of the PPPoE service on your ASR 1001X. The correct MTU size depends on your specific network environment and needs. I am not clear as to what your setup is exactly. You state that you have an ASR and a VLAN, but the VLAN cannot be assigned to the ASR, unless you’re using router on a stick. Also, where is the EoIP tunnel created, on the ASR or the Mikrotik device?

Regardless of your setup, the standard MTU size for PPPoE is typically 1492 bytes, which is smaller than the 1500 bytes used by Ethernet due to the 8-byte overhead of the PPPoE header. If you have additional overhead from your EoIP tunnel and other services, you should use a small enough MTU so that IP fragmentation will be avoided.

To change the MTU on the PPPoE service, you can use the following command in interface configuration mode:

ip mtu 1492

Remember to apply the same MTU size on all devices in the path (Mikrotik server, EoIP tunnel, switch) to avoid fragmentation and potential connectivity issues.

Let us know a little bit more about your setup so that we can help you more specifically with the MTU issues you face.

I hope this has been helpful!

Laz

Hello,
Thanks for the reply. I will show you my network diagram.

MY Network

Hello Qudama

Thanks for the topology, that was helpful! I am assuming that the PPPoE session is between the end user and the ASR, correct? And that in turn is encapsulated within an EoIP tunnel between the Mikrotik devices.

If that is the case, then we must take into account the overhead introduced by each technology involved. Here is a breakdown of these overheads:

  • Ethernet has a default MTU of 1500 bytes.
  • PPPoE adds an 8-byte overhead, as I mentioned before.
  • The EoIP tunnel adds another 42 bytes of overhead (14 from Ethernet, 20 from IP, and 8 from GRE).

Subtracting the PPPoE overhead from the Ethernet default MTU we get 1500 - 8 = 1492 bytes, which is typical if we’re just using PPPoE.

From that we subtract the overhead of the EoIP tunnel and we get 1492 - 42 = 1450 bytes.

So, to avoid fragmentation, the MTU of the ASR should be set to 1450 bytes. Depending upon the software used on the end device, you should be able to set the MTU on the end device to this value as well.
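The same subtraction chain as a quick Python sketch (overhead values as listed above):

```python
# Sketch of the MTU arithmetic for PPPoE carried over an EoIP tunnel.
ETH_MTU = 1500
PPPOE_HDR = 8                  # PPPoE header
EOIP_HDR = 14 + 20 + 8         # EoIP overhead: inner Ethernet + IPv4 + GRE = 42

pppoe_mtu = ETH_MTU - PPPOE_HDR          # 1492: plain PPPoE
end_to_end_mtu = pppoe_mtu - EOIP_HDR    # 1450: PPPoE inside EoIP
print(pppoe_mtu, end_to_end_mtu)
```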

A very helpful tool that will aid you in troubleshooting is using the ping command with sweep ranges of sizes, to determine the largest MTU that can traverse a particular path.
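For reference, a sweep is started from the IOS extended ping dialog; a session might look roughly like this (the address, sizes, and interval are only illustrative, and prompts can vary slightly between IOS versions):

```
R1# ping
Protocol [ip]:
Target IP address: 192.0.2.2
Repeat count [5]: 1
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface:
Type of service [0]:
Set DF bit in IP header? [no]: yes
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]: y
Sweep min size [36]: 1400
Sweep max size [18024]: 1500
Sweep interval [1]: 10
```

Setting the DF bit ensures that the first size at which pings fail reveals the path MTU, rather than being masked by fragmentation.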

I hope this has been helpful!

Laz

Hello again lagapides :smiley:

All you said was correct, and I have three questions:

1. Did we forget that the VLAN adds 4 more bytes? And TCP 20? So the MTU is 1426?

2. Why subtract? Some people said you should add to the 1500.

3. How do I change the MTU on the ASR 1001-X running version 16?

BR

Hello Qudama

The original 1500 bytes of Ethernet already account for the 20-byte TCP header, so there is no need to adjust for it. Remember, when we adjust the MTU we are adjusting based on the additional overhead added by the PPPoE and EoIP mechanisms, not the already existing overhead of TCP or IP. Those have already been taken into account.

As for the overhead of the VLAN tag, if you are passing VLAN tags between the ASR and the end device, then yes you should subtract those four bytes as well. (Although it is unusual to pass VLAN tags to the end device.) If that is the case, then the MTU should be set to 1450 - 4 = 1446 bytes.

We must subtract because the maximum MTU that is allowed by the Ethernet Interface is 1500. We are setting the IP MTU here, which is the maximum size of IP packet that can be encapsulated within the Ethernet frame. So if we make that smaller, we leave room for the overhead introduced by the other technologies.

For more information, take a look at this Cisco documentation.

I hope this has been helpful!

Laz

Hi all,
Thanks for explaining this topic.
Here is where I get confused, though.
If the default MTU value on an interface is 1500, and I use a protocol that increases the headers (e.g. PPPoE), then why decrease the MTU (as per PPPoE, 1500 - 8 = 1492) and not increase it (1500 + 8 = 1508, or 9000 for jumbo frames for that matter), thus accommodating the extra headers?

Thanks in advance

Hello Vasileios!

Keep in mind that there are various MTU sizes that we can adjust:

  • Interface MTU - the MTU at Layer 2, which is the maximum size of the payload of an Ethernet frame
  • IP MTU - the maximum size of the IP packet (including the IP header); this is what’s used to determine IP fragmentation
  • TCP MSS - the maximum size of the TCP segment payload (excluding the TCP header)

When we decrease the size of the MTU to accommodate additional headers that are added by protocols such as PPPoE, we are not decreasing the Interface MTU. If we were, then it would make things worse as you have already understood. We are actually making the IP MTU smaller. If we make the IP MTU 1492, then that means that during encapsulation from the Network Layer to the Data Link layer (from IP to Ethernet) the largest payload that the Ethernet frame will receive is 1492 bytes. That means that if we are employing other protocols such as PPPoE, we have room for an extra 8 bytes to add additional headers.

So we are not making the interface MTU smaller; we are making the IP packet, and therefore the Ethernet frame’s payload, smaller in order to accommodate the additional headers. Does that make sense?
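To make the three knobs concrete, here is a hedged illustration of where each one is set on an IOS interface (the interface name and values are hypothetical, chosen for a PPPoE scenario):

```
interface GigabitEthernet0/1
 mtu 1500                 ! interface MTU: maximum Ethernet payload (Layer 2)
 ip mtu 1492              ! IP MTU: leaves 8 bytes of room for the PPPoE header
 ip tcp adjust-mss 1452   ! clamps the MSS in transiting SYNs: 1492 - 20 (IP) - 20 (TCP)
```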

I hope this has been helpful!

Laz


Hello Rene/Laz,
By default the maximum size of a payload/MSS is 1460 bytes. What is the default maximum payload size for UDP traffic, since UDP uses an 8-byte header at Layer 4?

Thanks a lot

Hello Azm

First of all, the typical default TCP MSS for IPv4 is 536 and for IPv6 it is 1220. These values derive from the protocols’ minimum sizes: IPv4 hosts must accept 576-byte datagrams (576 - 20 - 20 = 536), and IPv6’s minimum link MTU is 1280 bytes (1280 - 40 - 20 = 1220). But this value can obviously be changed. The value of 1460 is a commonly used value, as seen in the lesson, because of the size of the underlying MTUs at the network layer (IP) and the data link layer (Ethernet). 1460 is used simply because the Ethernet payload is limited to 1500, and we subtract the IP header size (20) and the TCP header size (20) from that to get 1460. It is the most efficient size, making full use of the available payload capacity of each PDU.
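As a quick check of that arithmetic in Python (header sizes assume IPv4/IPv6 and TCP without options):

```python
# Sketch: deriving the common 1460-byte MSS and the conservative defaults.
ETH_MTU = 1500
IPV4_HDR, IPV6_HDR, TCP_HDR = 20, 40, 20

common_mss = ETH_MTU - IPV4_HDR - TCP_HDR   # 1460 on a standard Ethernet link
default_v4_mss = 576 - IPV4_HDR - TCP_HDR   # 536, from IPv4's 576-byte minimum
default_v6_mss = 1280 - IPV6_HDR - TCP_HDR  # 1220, from IPv6's 1280-byte minimum MTU
print(common_mss, default_v4_mss, default_v6_mss)
```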

Now as far as UDP goes, there is no counterpart to the MSS for this protocol, because it doesn’t use sessions or segments, so no maximum segment size needs to be negotiated. The UDP header has a Length field that specifies the total length of the UDP datagram (header + payload), and that field is 16 bits long. So theoretically, the whole UDP datagram can be up to 65535 bytes. However, other factors make the datagram size much smaller in real-world scenarios.

For UDP, it is the applications themselves, as well as the operation of the NIC cards of the end hosts that determine the actual size of a datagram. These processes take into account the underlying IP and Ethernet MTUs as well, to ensure an efficient transmission of data. But even if the datagram size is large, fragmentation will take place to ensure that the data “fits” in the payload of the underlying protocols.
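A small Python sketch of those UDP limits (assuming IPv4 with no options):

```python
# Sketch of UDP datagram size limits.
UDP_LEN_MAX = 65535       # the 16-bit Length field covers header + payload
UDP_HDR = 8
max_udp_payload = UDP_LEN_MAX - UDP_HDR              # 65527: theoretical ceiling

# In practice the link MTU dominates: the largest UDP payload that avoids
# IPv4 fragmentation on a standard 1500-byte Ethernet link is:
ETH_MTU, IPV4_HDR = 1500, 20
unfragmented_payload = ETH_MTU - IPV4_HDR - UDP_HDR  # 1472
print(max_udp_payload, unfragmented_payload)
```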

I hope this has been helpful!

Laz

Hi,

I’m seeing some strange results when playing around with mtu on a couple of directly connected routers:

R1—155.1.12.0/24----R2

R1 has the default mtu/ip mtu of 1500:

R1#sh int Gi0/1 | i MTU
  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec, 
R1#sh ip int Gi0/1 | i MTU
  MTU is 1500 bytes

R2 has had its mtu/ip mtu changed to 1400:

R2#sh int Gi0/1 | i MTU
  MTU 1400 bytes, BW 1000000 Kbit/sec, DLY 10 usec, 
R2#sh ip int Gi0/1 | i MTU
  MTU is 1400 bytes

So, I would expect the following ping from R1 to R2 to work, which it does:

R1#ping 155.1.12.2 size 1400
Type escape sequence to abort.
Sending 5, 1400-byte ICMP Echos to 155.1.12.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 ms

and I would expect the following ping to fail (because I have exceeded R2’s MTU), but it doesn’t:

R1#ping 155.1.12.2 size 1401
Type escape sequence to abort.
Sending 5, 1401-byte ICMP Echos to 155.1.12.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 2/2/3 ms

It is only when I increase the datagram size to more than 1410 that it starts to fail:

R1#ping 155.1.12.2 size 1410
Type escape sequence to abort.
Sending 5, 1410-byte ICMP Echos to 155.1.12.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 2/2/3 ms
R1#ping 155.1.12.2 size 1411
Type escape sequence to abort.
Sending 5, 1411-byte ICMP Echos to 155.1.12.2, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)

I did a packet capture using Wireshark and the frame size is indeed 1424.

So, why is an interface with an MTU of 1400 accepting frames with a payload of 1410?

Thanks,

Sam

Hello Samir

When you do these kinds of experiments, you must make sure that you set the Do Not Fragment (DF) bit to 1. Otherwise, your pings will still be successful even if the size of your ping is larger than the allowed IP MTU, because the packet will be fragmented. So it is normal to see a successful ping, and a Wireshark capture of larger frames, because fragmentation is taking place.

Try your experimentation again using the following ping command:

R1#ping 155.1.12.2 size 1401 df-bit

You should find that starting at 1401 your pings will fail.

Now having said all of that, I find it very strange that your pings failed at 1411 and on! Fragmentation should take care of that. I was able to ping with a size of 18024 which is the largest allowed on the platform I’m using, and I was still successful in my pings, due to fragmentation (into 13 fragments!). Can you do some more experimentation and see if you still get this result with and without the DF bit set? I’d be interested to see the results.
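As a rough cross-check of that fragment count (assuming IPv4 with 20-byte headers, and that the IOS ping size is the whole IP packet including its header):

```python
import math

# Sketch: estimating the IPv4 fragment count for an oversized ping.
size = 18024            # total IP packet size as given to the IOS ping command
IP_HDR = 20
MTU = 1500

payload = size - IP_HDR        # 18004 bytes of ICMP header + data to carry
per_fragment = MTU - IP_HDR    # 1480 bytes of payload fit in each fragment
fragments = math.ceil(payload / per_fragment)
print(fragments)               # 13
```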

I hope this has been helpful!

Laz