The web was developed by people who were interested in text processing rather than in networking and, unsurprisingly enough, the first versions of the HTTP protocol did not make very good use of network resources. The main problem in HTTP/0.9 and early versions of HTTP/1.0 was that a separate TCP connection (“virtual circuit” to telecom people) was created for every entity transferred.
Opening multiple TCP connections has significant performance implications. Obviously, connection setup and teardown require additional packet exchanges, which increase network usage and, more importantly, latency.
Less obviously, TCP is not optimised for that sort of usage. TCP aims to avoid network congestion, a situation in which the network becomes unusable due to overly aggressive traffic patterns. A correct TCP implementation will very carefully probe the network at the beginning of every connection, which means that a TCP connection is very slow during the first couple of kilobytes transferred, and only gets up to speed later. Because most HTTP entities are small (in the 1 to 10 kilobyte range), HTTP/0.9 uses TCP precisely where it is most inefficient.
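To see why this probing (TCP slow start) penalises small transfers, here is a back-of-the-envelope sketch in Python. It assumes a 1460-byte segment payload and a classic two-segment initial congestion window that doubles every round trip; real stacks differ (modern initial windows are often ten segments), so treat the numbers as illustrative only.

```python
SEGMENT = 1460       # typical TCP segment payload (MSS), in bytes — an assumption
INITIAL_WINDOW = 2   # classic initial congestion window, in segments — an assumption

def round_trips(size_bytes, mss=SEGMENT, iw=INITIAL_WINDOW):
    """Round trips needed to deliver size_bytes during slow start,
    with the congestion window doubling every round trip."""
    cwnd = iw * mss
    sent = 0
    rtts = 0
    while sent < size_bytes:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

for kb in (1, 10, 100):
    print(kb, "KB ->", round_trips(kb * 1024), "round trips")
# 1 KB -> 1, 10 KB -> 3, 100 KB -> 6
```

Under these assumptions a 1 KB entity pays one full round trip, and a 10 KB entity pays three — and a fresh connection pays that price every time, on top of the handshake.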
A number of techniques are used to work around this mismatch:

• Persistent connections: don’t shut connections down after each request.
• Pipelining: send a bunch of requests at once, without waiting for each reply.
• Poor Man’s Multiplexing: split requests into smaller ones.
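The first of these techniques can be sketched with Python’s standard library: several requests travel over one TCP connection, so connection setup and slow start are paid only once. The throwaway local server and the paths `/a`, `/b`, `/c` are illustrative assumptions, and note that `http.client` does not pipeline — this shows persistent connections only.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical local server, only here to make the example self-contained.
class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 connections are persistent by default

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One TCP connection, three requests: no per-request handshake or slow start.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
results = []
for path in ("/a", "/b", "/c"):
    conn.request("GET", path)
    resp = conn.getresponse()
    results.append((path, resp.status, resp.read()))
conn.close()
server.shutdown()
print(results)
```

`http.client.HTTPConnection` reuses its socket as long as each response is fully read and the server keeps the connection alive, which is exactly the persistent-connection behaviour described above.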