Distributed HTTP?

There are a number of reasons why people are interested in a distributed Web protocol.

  • The most practical reasons are scaling and reliability: if your Web traffic isn’t served by a single server, you don’t have to worry about that server going down in a flash crowd or when there’s a network problem nearby.
  • Having multiple copies of (and paths to) content is one way to make it more available despite attempts to censor it.
  • Cutting the server out of the equation is seen as an opportunity to reset the Web’s balance of power regarding cookies and other forms of tracking; if you don’t request content from its owner, but instead get it from a third party, the owner can’t track you.

As it is, HTTP is an inherently client/server protocol, in that the authority (the part of the link just after “http://” or “https://”) tells your browser where to go to get the content. Although HTTP (with the help of related systems like DNS) allows servers to delegate that authority to others so that they can serve the content (which is how CDNs are made), the server and its delegates still act as a single point of control, exposure, and failure.
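To make the authority’s role concrete, here’s a minimal sketch in Python (the URL is illustrative, not from this post): it pulls the authority out of a link and then resolves it with DNS, which is the step where a server can delegate to a CDN by pointing the name elsewhere.

```python
from urllib.parse import urlsplit
import socket

# The authority is the part of the link just after "https://".
url = "https://www.example.com/some/page"
authority = urlsplit(url).netloc
print(authority)  # -> "www.example.com"

# The browser resolves that name with DNS and connects to whatever
# addresses come back. Pointing the name at a CDN's servers is how
# delegation works, but the name itself remains a single point of
# control: whoever answers for it decides what you get.
for *_, sockaddr in socket.getaddrinfo(authority, 443, proto=socket.IPPROTO_TCP):
    print(sockaddr)
```

Note how nothing in the link identifies the content itself, only who to ask for it; that is the design choice a distributed protocol would have to revisit.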

Improving all of this sounds really interesting to me, both as a technical person and as a user. So why isn’t it just a simple matter of programming?
