TCP Header

Thanks Laz, that is indeed helpful.

The sequence number indicates the amount of data that has been sent in one window, not in the entire TCP session. I was confused because Rene said the sequence number indicates how much data is sent during the TCP session.

Hello Hussein

Yes I understand the confusion. Keep in mind that if you take the first sequence number that was used when the session was initiated and the last sequence number that was used before termination, if you calculate the difference between them, it will indeed be the total amount of data in bytes that have been sent over the whole session (taking into account the number of times the sequence number has to be reset to zero when it reaches the upper limit of the 32 bit field).
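
As a quick illustration, here is a minimal Python sketch of that arithmetic. The sequence numbers and wrap count below are made-up values rather than ones taken from a real capture; in practice you would have to count the wraps yourself, since (as discussed further down) the header carries no such counter.

```python
# Sketch: total bytes transferred over a session, derived from the first and
# last sequence numbers. Values are illustrative only.
SEQ_SPACE = 2**32   # the sequence number field is 32 bits wide

def total_bytes(first_seq, last_seq, wraps=0):
    """Distance the sequence number travelled, adding a full 2**32
    for every time the counter wrapped back to zero."""
    return wraps * SEQ_SPACE + (last_seq - first_seq)

print(total_bytes(1000, 5000))                     # 4000 bytes, no wrap
print(total_bytes(4_000_000_000, 1_000, wraps=1))  # 294968296 bytes, one wrap
```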

I hope this has been helpful!

Laz

Hello @lagapidis

Thank you very much, now everything is clear. Only one thing: how can I find out the number of times the sequence number has been reset to zero? I mean, is there any field or option in the TCP header that determines that?

Hello Hussein

Unfortunately there isn’t. Because the window size is always going to be much, much smaller than the largest available sequence number, the counter will never reset to zero within a single segment; segments are many orders of magnitude smaller. Only the two hosts involved keep track of when the counter resets to zero, and even then, they only detect it at that specific segment. Once the segment is received and acknowledged, there is no need to keep track of the reset from the host’s point of view.

If you want to keep track of the total amount of data that has been sent in a session, there are other mechanisms at higher layers that can do that. For example, in an FTP transaction, FTP keeps track of bytes transferred and other such statistics.
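
To make that idea concrete, here is a minimal Python sketch of counting transferred bytes at the application layer; this is essentially where totals like FTP’s transfer statistics come from. The host name and request below are placeholders.

```python
import socket

# Sketch: count payload bytes received over one TCP session at the
# application layer. Host and request are placeholders.
def download_and_count(host, port=80):
    total = 0
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        while True:
            chunk = s.recv(4096)
            if not chunk:           # server closed the connection
                break
            total += len(chunk)
    return total                     # bytes received in this session

# print(download_and_count("example.com"))
```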

I hope this has been helpful!

Laz

Thank you very much @lagapidis, your answer is very clear and helpful for me.

Hi there,
Could you please tell me what the Urgent Pointer field is, and what the URG bit in the flags field means?
Thanks.

Hello Muhammad

A host can have many TCP sessions occurring at the same time. Hosts will generally process TCP segments on a first come, first served (FIFO) basis, even when these segments come from multiple TCP sessions. When large volumes of data are being transferred, this can impact the responsiveness of some of the TCP sessions.

If the URG flag is set to zero, segments are treated in a FIFO manner. When the URG flag is set to 1, this tells the receiving host that the segment carries urgent data that should be processed out of band, ahead of any data still waiting in the receive buffer, rather than strictly in sequence. How much of the segment is urgent? That is what the Urgent Pointer field specifies. Its value is an offset from the segment’s sequence number that points to where the urgent data ends within the segment, so the receiving TCP stack knows exactly which bytes to hand to the application immediately.
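
To make the field layout concrete, here is a small Python sketch that extracts the URG bit and the Urgent Pointer from a raw 20-byte TCP header. The header bytes are hand-crafted for illustration, not taken from a real capture.

```python
import struct

# Sketch: pull the URG flag and the Urgent Pointer out of a raw TCP header.
def parse_urgent(tcp_header: bytes):
    (src, dst, seq, ack, offset_flags, window,
     checksum, urg_ptr) = struct.unpack("!HHIIHHHH", tcp_header[:20])
    urg_set = bool(offset_flags & 0x0020)   # URG is bit 5 of the flags field
    return urg_set, urg_ptr, seq

# Example header: URG set, urgent pointer = 4, i.e. the urgent data ends
# 4 bytes into this segment's payload (an offset from the sequence number).
hdr = struct.pack("!HHIIHHHH", 23, 50000, 1000, 2000,
                  (5 << 12) | 0x0020, 29200, 0, 4)
print(parse_urgent(hdr))   # (True, 4, 1000)
```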

A classic example of the use of the URG flag and the Urgent Pointer is Telnet, where an interrupt typed by the user (such as Ctrl+C) must be acted upon immediately, rather than waiting behind data already queued in the buffer.

I hope this has been helpful!

Laz

Thank you Lazaros,
now it is clear.

Hi Rene and staff,
please, could you add some explanation about the TCP header in TCP connections with MD5 authentication?
Regards

Hello Dominique

TCP authentication using MD5 is a feature that is carried in the Options portion of the TCP header. It is primarily used to protect BGP sessions with an MD5 signature, as described in RFC 2385. However, this mechanism is now considered obsolete and has been replaced by the TCP Authentication Option (TCP-AO), which is described in RFC 5925.
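
As a rough, hedged sketch of what RFC 2385 describes (the helper name and values are my own, and a real implementation lives inside the TCP stack), the MD5 digest carried in the option covers the IPv4 pseudo-header, the TCP header with a zeroed checksum and options excluded, the payload, and the shared key:

```python
import hashlib
import socket
import struct

# Rough sketch of the RFC 2385 digest (carried in TCP option kind 19).
def tcp_md5_digest(src_ip, dst_ip, tcp_header_20, payload, key, options_len=0):
    # IPv4 pseudo-header: source IP, destination IP, zero, protocol, TCP length.
    seg_len = len(tcp_header_20) + options_len + len(payload)
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) +
              struct.pack("!BBH", 0, socket.IPPROTO_TCP, seg_len))
    # TCP header with the checksum field (bytes 16-17) zeroed, options excluded.
    header = tcp_header_20[:16] + b"\x00\x00" + tcp_header_20[18:]
    return hashlib.md5(pseudo + header + payload + key).digest()
```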

Cisco has support for the Authentication Option in its Nexus platforms which can be seen here:

Most of the info concerning TCP header fields for both AO and MD5 authentication can be found in the RFCs.

I hope this has been helpful!

Laz

Hi Rene,
please,
I have a question about the TCP header, especially the destination port.
How can I know the destination port?
Thanks…

Hello Abd

When a TCP session begins, the initiator sends a TCP segment which includes a source port and a destination port. The destination port is determined based on the service that is being requested.

If the TCP session is initiated by a client towards a server, then the destination port will be that of the service that is being requested. For example, if a PC is connecting to a web server such as www.networklessons.com, then the default destination port used will be 80, because that is what is used for HTTP. If you are using a secure connection, using HTTPS, then the default destination port will be 443. You can change this port by using the following syntax: www.networklessons.com:8080, which will connect you to port 8080. However, if the server is not offering services on this port, you will obviously be denied.

When a server responds to a request, the destination port is that of the original request. So if your PC uses source port 53988 and destination port 80 for the initial communication, the return communication will use a source port of 80 and a destination port of 53988.
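
As a quick illustration (the host name is just an example and running this requires network access), a Python socket shows both sides of that port pairing:

```python
import socket

# Sketch: the destination port is the well-known (or explicitly requested)
# port of the service; the source port is an ephemeral port chosen by the
# client's operating system.
with socket.create_connection(("www.networklessons.com", 443), timeout=5) as s:
    src_ip, src_port = s.getsockname()    # ephemeral source port, e.g. 53988
    dst_ip, dst_port = s.getpeername()    # 443, because HTTPS was requested
    print(f"{src_ip}:{src_port} -> {dst_ip}:{dst_port}")
```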

I hope this has been helpful!

Laz

Thank you so much Lazaros,
now it is clear.

Hi Rene,

can you explain the Wireshark capture file below to me?

https://www.cloudshark.org/captures/77c411643fad

The first packet shows the initiation of the three-way handshake, and in it the window size shows 29200. How does that value come about?

Regards
Gowtham

Hello Gowtham

The initial window size that is indicated in the SYN portion of the three-way handshake is determined based on the size of the receive buffer of the device, the speed of the device, and the speed of the device’s interface to the network. All of these factors are taken into account by the device’s operating system or firmware in order for the device to specify an initial window size. This will then change over time, as traffic is evaluated and as flow control mechanisms kick in.
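
If you are curious about one of those inputs, here is a small Python sketch (output is OS-dependent) that shows the default receive buffer the operating system assigns to a new TCP socket. The advertised window in the SYN, such as the 29200 in your capture, is derived by the OS from factors like this.

```python
import socket

# Sketch: inspect the default receive buffer of a new TCP socket. This value
# is only one input into the initial advertised window.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"Default receive buffer: {rcvbuf} bytes")
s.close()
```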

I hope this has been helpful!

Laz

Thank you @lagapidis.

One more question: why are we using the scaling factor?
How many bits of the TCP header are used for the scaling factor?

Regards
Gowtham

Ya, I got the answer: in the TCP header, the Options field can be 1 to 40 bytes, and from that we can use 24 bits for the scaling factor (multiplier) as per RFC 1323. Is that right?

From the above picture, the PC’s window size gets incremented each and every time it replies with an ACK. Is that based on the OS, the physical memory, and the TCP sessions on the PC, is it like that?
From the server’s side, is the window size fixed?
The window size from the receiver is used to tell the sender how much its receive buffer is capable of holding, right?
Why do they send an acknowledgement for each and every packet, before the full window size has been received?

Regards
Gowtham

Hello Gowthamraj

Yes, the scaling factor is contained within the TCP Options. The Window Scale option takes up 3 bytes in total: a kind byte, a length byte, and a one-byte shift count. The 16-bit Window field is multiplied by 2 raised to the shift count, in order to be able to advertise larger windows and improve the performance of TCP. This was a necessary addition to TCP as data transfer rates increased in speed.
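
As a worked example (values chosen purely for illustration), the shift count is applied as a power of two to the 16-bit Window field:

```python
# Sketch: the Window Scale option carries a one-byte shift count (at most 14);
# the effective window is the 16-bit Window field shifted left by that count.
def effective_window(window_field, shift_count):
    return window_field << shift_count    # window_field * 2**shift_count

# Example: a raw window of 29200 with a shift count of 7 advertises an
# effective window of 29200 * 128 = 3,737,600 bytes.
print(effective_window(29200, 7))
```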

This increase in the window size is a phenomenon that occurs in all TCP transmissions. As data is being sent reliably, and acknowledgements are being received without error, the hosts attempt to increase the amount of data that is to be sent before getting an acknowledgement, thus increasing speed and efficiency of the transfer. This is done until some segments are lost. When this happens, window size is reduced and then increased slowly once again. This whole process is called TCP Slow Start, and is further described in the following lesson:

The algorithm used to determine the window size increase works the same way regardless of the OS, memory, or resources of the sender. However, the largest window size reached does depend on the system resources of the devices that are communicating.
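
As a very rough illustration of that grow-until-loss behaviour, here is a toy Python model (not a real TCP stack; real implementations switch to linear growth after a threshold) in which the window doubles each round trip until a simulated loss forces it back down:

```python
# Toy sketch: window doubles per round trip until a (simulated) loss,
# after which it is cut back and growth resumes.
def simulate(rtts=10, loss_at=7, mss=1460):
    window = mss
    for rtt in range(1, rtts + 1):
        if rtt == loss_at:
            window = max(mss, window // 2)   # back off after the loss
            print(f"RTT {rtt}: loss, window cut to {window} bytes")
        else:
            window *= 2                      # exponential growth phase
            print(f"RTT {rtt}: window = {window} bytes")

simulate()
```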

The window size is something that is defined in each direction. It will only increase if there is a need for it to increase, that is, if there is a demand for higher throughput. If there isn’t, it will remain the same. In this instance, it seems that most traffic was taking place from the client to the server, so the window size for that direction of traffic was increasing.

Although the window size does depend on things like buffer size, NIC speed, memory, and CPU resources, it is not a measure of the buffer itself. It is a value that is used to regulate the flow, or provide flow control of data. A host may be able to handle more traffic when its CPU is idle, and its memory is free, but at other times, its CPU will be occupied, and the memory will be used up, so the window size at that point will not get as large.

Remember that the window size is the maximum amount of data that can be sent before an acknowledgement is received. This doesn’t mean that ACKs can’t be sent for smaller amounts of data. The window is the maximum, but the OS will often send ACKs before that maximum is reached, simply because it has the resources to do so.

I hope this has been helpful!

Laz

Thank you @lagapidis. What are the protocols that will use the PSH flag, for example Telnet, HTTP…

Any other protocols?

I am not getting this point. In my scenario, the host is sending an acknowledgement for every packet it receives from the server, and each and every time it sends an acknowledgement, the host increases the TCP window size. Is that right?

During the final ACK of the three-way handshake, the client sends a window size to the server, for example 400 bytes. That means the server will send 400 bytes, and after sending 400 bytes it waits for an ACK from the client; only once it gets an ACK will it send the next 400 bytes, from the server’s point of view. But the client sends an ACK for each and every segment. Is that right?

You say the window is the maximum amount of data that can be sent before an acknowledgement is received; does that mean that somewhere within that window the host will send an acknowledgement to the server, is it like that?

Regards
gowtham

Hello Gowthamraj

Any application protocol that uses TCP can make use of the PSH flag. It is useful whenever a small amount of data is sent at a time. Examples are Telnet and SSH, which should send data, such as a character typed on the keyboard, immediately, without waiting for the buffer to fill up. But depending on its use, HTTP, FTP, or any other protocol may also use it. It can be set dynamically whenever small bits of information need to be sent immediately. More information about the PSH flag can be found at the following post:
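
Separately, as a small hedged illustration of how an application influences this behaviour (the application cannot set PSH directly, and the host and port below are placeholders): disabling Nagle’s algorithm with TCP_NODELAY makes the stack transmit small interactive writes immediately, and such segments typically carry the PSH flag.

```python
import socket

# Sketch: prepare a socket for interactive, "push immediately" traffic.
# The stack, not the application, sets the PSH flag on the resulting segments.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # disable Nagle
# s.connect(("example.com", 23))   # e.g. a Telnet-style interactive session
# s.sendall(b"a")                  # a single keystroke, sent without buffering
```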

Now concerning this question:

Let’s use the example you have mentioned. Let’s say during the three way handshake, a window size of 400 bytes is established. Now this is the maximum amount of data that can be sent before an ACK is returned. So one scenario is:

  1. Host 1 begins sending data to Host 2, and sends a segment of size 40
  2. Host 1 continues sending data to Host 2, sending an additional 9 segments of size 40
  3. Host 1 stops sending data to Host 2 until an ACK is returned acknowledging the successful receipt of these 10 segments of 40 bytes (400 bytes total)
  4. Host 2 sends an ACK with a value of 401, indicating that it is waiting for the next set of data starting at byte 401.
  5. Host 1 begins sending data to Host 2 and continues to do so…

In the above scenario, Host 1 has sent the full “window” before receiving an ACK. Only when the window is exhausted will Host 2 send the ACK. Now, even though this scenario is the one that is most often described in training material (in order for trainees to understand the logic), this is rarely how hosts behave.

The important thing to understand here is that the window size is the maximum amount of data that can be sent before receiving an ACK. That doesn’t mean that the receiver cannot send an ACK before the window size is exhausted. The receiver is actually free to respond whenever it wants, even with an ACK for every segment if it chooses to, as long as it responds before the window size is exhausted. This is clearly stated in RFC 793 under “flow control”:

TCP provides a means for the receiver to govern the amount of data
sent by the sender. This is achieved by returning a “window” with
every ACK indicating a range of acceptable sequence numbers beyond
the last segment successfully received. The window indicates an
allowed number of octets that the sender may transmit before
receiving further permission.

In other words, when Host 2 receives a segment with a sequence number of 1000 and a window size of 400 for example, the range of acceptable ACKs to be returned is anywhere between 1001 and 1400. This means it can send an ACK after it receives the next 40 bytes (ACK = 1040), the next 120 bytes (ACK = 1120) or 240 bytes (ACK = 1240), for example.
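
Here is that arithmetic as a tiny Python sketch, with segment sizes chosen to reproduce the ACK values above:

```python
# Sketch of cumulative ACKs within one advertised window.
start_seq = 1000       # sequence number of the first byte in flight
window = 400           # advertised window in bytes

received = 0
for size in (40, 80, 120):              # illustrative segment payload sizes
    received += size
    ack = start_seq + received          # next byte the receiver expects
    assert ack <= start_seq + window    # always within the advertised window
    print(f"received {received} bytes so far, ACK = {ack}")
# Prints ACK = 1040, 1120, 1240, matching the examples above.
```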

What you see in your example is an ACK for every segment. This ACK is well within the range of the window size provided. The receiver doesn’t have to respond so early with so many ACKs, but it simply does so because it has the resources to do so.

I hope this has been helpful!

Laz