The task is the following:
An application on host A wants to transmit two files (pictures) to host B. The user at host B waits for either picture and starts viewing whichever becomes available first; the order in which the files (pictures) arrive does not matter.
The files could be anything: PDF books, software, and so on.
The problem I have is due to TCP's in-order delivery. If one TCP segment gets lost, the following segments stored in host B's receive buffer can't be delivered to the application. The user must wait until a fast retransmit or a timeout recovers the missing segment, even though picture 2 (the moon) is already completely stored in the receive buffer.
Is this the behaviour of TCP?
Would the use of the PSH flag after one file change TCP's behaviour? (I don't think so.)
Would I usually use one TCP socket per file to deal with this problem? (For small files I would probably see noticeable delays because of the extra handshakes.)
Wouldn't it be better to use one UDP socket to cope with this problem (and implement the reliability myself)?
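One connection per file is the simplest way to avoid one file's loss stalling the other, since each TCP connection recovers independently. Below is a minimal loopback sketch of that idea; the file names, payloads, and helper functions are all made up for illustration, not part of any real protocol:

```python
import socket
import threading

# Hypothetical in-memory "files"; in practice these would be read from disk.
FILES = {"earth.jpg": b"earth-bytes", "moon.jpg": b"moon-bytes"}

def serve_file(conn):
    name = conn.recv(1024).decode()   # client sends the requested file name
    conn.sendall(FILES[name])
    conn.close()

def server(listener):
    # Accept one connection per file, each served in its own thread.
    for _ in range(len(FILES)):
        conn, _ = listener.accept()
        threading.Thread(target=serve_file, args=(conn,)).start()

def fetch(addr, name, results):
    # One dedicated TCP connection per file: a retransmission delay on one
    # connection cannot block delivery on the other.
    with socket.create_connection(addr) as s:
        s.sendall(name.encode())
        s.shutdown(socket.SHUT_WR)
        data = b""
        while chunk := s.recv(4096):
            data += chunk
        results[name] = data

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
addr = listener.getsockname()
threading.Thread(target=server, args=(listener,), daemon=True).start()

results = {}
threads = [threading.Thread(target=fetch, args=(addr, n, results)) for n in FILES]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The trade-off mentioned above still applies: for small files, the connection setup per file may cost more than the head-of-line blocking it avoids.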
This is basically how TCP works, yes. Keep in mind that TCP is just a transport; it's up to the application layer (HTTP in this example) to decide how it wants to use TCP.
TCP's job is to ensure that whatever you hand it ends up at the other side, intact and in order.
For example, HTTP/0.9 would establish a new TCP connection for each GET request and then terminate it. Nowadays we also have techniques like persistent connections and HTTP pipelining.
I found some examples for HTTP in the book "HTTP: The Definitive Guide". Take a look at this link:
Its section on HTTP connection handling explains some of the differences between serial and parallel transfers, and shows in more detail how HTTP uses TCP.
I agree that it is up to the application layer how to use TCP. But the application can't change TCP's fully ordered delivery within a single connection if it only needs a partially ordered transfer.
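If you go the UDP route, partial ordering has to be built in the application: tag each datagram with a file ID and a sequence number, and reassemble each file independently, so a lost chunk of one file never holds back the other. Here is a minimal sketch of just the reassembly side (the `Reassembler` class and its fields are invented for illustration; real code would also need acknowledgements and retransmission):

```python
class Reassembler:
    """Reassembles chunks tagged (file_id, seq, total) independently per file,
    giving partially ordered delivery: a gap in one file's chunks does not
    delay completion of another file."""

    def __init__(self):
        self.buffers = {}   # file_id -> {seq: payload}
        self.totals = {}    # file_id -> expected number of chunks

    def receive(self, file_id, seq, total, payload):
        """Store one chunk; return the complete file bytes once every chunk
        for this file_id has arrived, else None."""
        self.totals[file_id] = total
        chunks = self.buffers.setdefault(file_id, {})
        chunks[seq] = payload
        if len(chunks) == total:
            return b"".join(chunks[i] for i in range(total))
        return None

r = Reassembler()
# Chunks arrive interleaved and out of order; "moon" completes and can be
# shown to the user even though chunk 1 of "earth" is still missing
# (simulating a lost segment).
assert r.receive("earth", 0, 3, b"ea") is None
assert r.receive("moon", 1, 2, b"on") is None
assert r.receive("moon", 0, 2, b"mo") == b"moon"   # moon is viewable now
assert r.receive("earth", 2, 3, b"th") is None     # earth still waits for seq 1
```

This is essentially what multi-stream transports do internally: loss on one stream only blocks that stream, not the others.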