There are a number of reasons why people are interested in a distributed Web protocol.
- The most practical are scaling and reliability: if you don’t rely on a single server for your Web traffic, you don’t have to worry about it going down in a flash crowd or when there’s a network problem nearby.
- Having multiple copies of (and paths to) content is one way to make it more available despite attempts to censor it.
- Cutting the server out of the equation is seen as an opportunity to reset the Web’s balance of power regarding cookies and other forms of tracking; if you don’t request content from its owner, but instead get it from a third party, the owner can’t track you.
As it is, HTTP is an inherently client/server protocol, in that the authority (the part of the link just after “http://” or “https://”) tells your browser where to go to get the content. Although HTTP (with the help of related systems like DNS) allows servers to delegate that authority to others so that they can serve the content (which is how CDNs work), the server and its delegates still act as a single point of control, exposure and failure.
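To make the authority component concrete, here is a minimal sketch using Python’s standard urllib.parse; the URL is just an illustrative example:

```python
from urllib.parse import urlsplit

# The authority ("netloc") is the part of the link just after the scheme;
# it names the one server (or CDN delegate) the client is told to contact.
parts = urlsplit("https://www.example.com:8443/index.html?q=1")

print(parts.scheme)    # https
print(parts.netloc)    # www.example.com:8443  <- the authority
print(parts.hostname)  # www.example.com       <- what gets looked up in DNS
print(parts.path)      # /index.html
```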
Improving all of this sounds really interesting to me, both as a technical person and as a user. Why isn’t it just a simple matter of programming?
Kill HTTP
The data retention mandate in this bill would treat every Internet user like a criminal and threaten the online privacy and free speech rights of every American, as lawmakers on both sides of the aisle have recognized. Requiring Internet companies to redesign and reconfigure their systems to facilitate government surveillance of Americans’ expressive activities is simply un-American.
2014-08-19: HTTP shaming. If app store “reviews” were actually serious, they’d block HTTP apps.
2014-12-14: The civil war of our time. Another shot being fired: Proposal: Marking HTTP As Non-Secure
The attacks on fundamental freedoms to communicate that are represented by various government repression of the Internet around the world, and in the US by hypocritical legislation like PROTECT IP and SOPA (E-PARASITE), are fundamentally fascist in nature, despite being wrapped in their various flags of national security, anti-piracy profit protection, motherhood, and apple pie. Anyone or anything that is an enabler of communications not willingly conforming to this model is subject to attack by authorities from a variety of levels — with the targets ranging from individuals like you and me, to unbiased enablers of organic knowledge availability like Google. For all the patriotic frosting, the attacks on the Internet are really attacks on what has become popularly known as the 99%, deployed by the 1% powers who are used to having their own way and claiming the largest chunks of the pie, regardless of how many ants (that’s us!) are stomped in the process.
2015-01-28: Amen. 2015 will be less forgettable if we can kill off most HTTP sites.

New favorite Chrome Canary flag: chrome://flags/#mark-non-secure-as … non-secure! The way it should have been from the start.
Digging Deeper with htracr
visualizing the binding between HTTP and TCP
There’s a lot of current activity on the binding between HTTP and TCP; from pipelining to SPDY, the frontier of Web performance lives between these layers. To get more visibility into exactly what’s happening down there, I decided to throw together a little tool to show how HTTP uses TCP: htracr.
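htracr works from packet captures, but the layering it visualizes is easy to poke at by hand. Here is a rough sketch (plain Python sockets, example.com as a stand-in host, not htracr itself) that sends one HTTP/1.1 request over a raw TCP connection and reads back whatever segments carry the response:

```python
import socket

HOST = "example.com"  # stand-in host; any server that answers plain HTTP will do

# Open the TCP connection that HTTP rides on.
with socket.create_connection((HOST, 80), timeout=10) as sock:
    # An HTTP/1.1 request is just text framed onto the TCP byte stream:
    # a request line, some headers, and a blank line.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: " + HOST + "\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))

    # Read until the server closes the connection (Connection: close),
    # collecting however many TCP segments it took to carry the response.
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)

print(b"".join(chunks)[:200])  # the status line and the first few header bytes
```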
HTTP Proxy Considerations
very good summary: performance, features, etc.
Dictionary Compression
40% better data reduction than gzip alone on Google search
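The idea behind shared-dictionary compression is that client and server agree ahead of time on a dictionary of strings that recur across responses, so each response only has to encode what differs. Here is a rough illustration of the effect using zlib’s preset-dictionary support rather than the actual encoding the linked work describes; the dictionary and response bodies are invented:

```python
import zlib

# A dictionary of boilerplate shared between client and server ahead of time.
# (Invented content; a real deployment would mine it from actual responses.)
shared_dict = (b"<html><head><title>Search results</title></head><body>"
               b'<div class="result"><a href="http://')

# An invented response that mostly repeats the shared boilerplate.
response = (b"<html><head><title>Search results</title></head><body>"
            b'<div class="result"><a href="http://example.com/">Example</a></div>'
            b"</body></html>")

def deflate(data, zdict=None):
    if zdict is not None:
        comp = zlib.compressobj(9, zdict=zdict)  # deflate seeded with the dictionary
    else:
        comp = zlib.compressobj(9)               # plain deflate, like gzip's payload
    return comp.compress(data) + comp.flush()

plain = deflate(response)
with_dict = deflate(response, shared_dict)

print(len(response), len(plain), len(with_dict))
# The dictionary-seeded output is smaller, because the shared boilerplate is
# encoded as back-references into the dictionary instead of being sent again.
```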
HTTP Debugging Pitfalls
curl, LiveHTTPHeaders and even Wireshark munge the HTTP you get. ugh
Async Page Loads
The latest WebKit nightlies contain some new optimizations to reduce the impact of network latency. When script loading halts the main parser, we start up a side parser that goes through the rest of the HTML source to find more resources to load. We also prioritize resources so that scripts and stylesheets load before images. The overall effect is that we are now able to load more resources in parallel with scripts, including other scripts.
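As a rough sketch of that preload-scanning idea (not WebKit’s implementation), here is a small Python scanner that walks HTML, collects referenced resources, and orders scripts and stylesheets ahead of images:

```python
from html.parser import HTMLParser

class PreloadScanner(HTMLParser):
    """Collects resource URLs from raw HTML, the way a speculative side
    parser might while the main parser is blocked on a script."""

    # Lower number means fetched earlier.
    PRIORITY = {"script": 0, "stylesheet": 0, "image": 1}

    def __init__(self):
        super().__init__()
        self.found = []  # list of (priority, kind, url)

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and attrs.get("src"):
            self._add("script", attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            # Simplified: a real scanner would handle multi-valued rel, media, etc.
            self._add("stylesheet", attrs["href"])
        elif tag == "img" and attrs.get("src"):
            self._add("image", attrs["src"])

    def _add(self, kind, url):
        self.found.append((self.PRIORITY[kind], kind, url))

    def preload_order(self):
        # Scripts and stylesheets first, then images; document order within a class.
        return [(kind, url) for _, kind, url in sorted(self.found, key=lambda t: t[0])]

html = """
<img src="/hero.jpg">
<link rel="stylesheet" href="/site.css">
<script src="/app.js"></script>
"""

scanner = PreloadScanner()
scanner.feed(html)
print(scanner.preload_order())
# [('stylesheet', '/site.css'), ('script', '/app.js'), ('image', '/hero.jpg')]
```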
Stale Cache Handling
The other issue we had was when services go down. In many cases, it’s preferable not to show users a “hard” error, but instead to use slightly stale content, if it’s available. Stale-if-error allows you to do this — again, in a way that’s controllable by you.
i like the stale-if-error in particular
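For reference, stale-if-error is a Cache-Control extension (RFC 5861): a cache may keep using a stored response for the indicated number of seconds after it becomes stale, but only while the origin is erroring. A minimal sketch of that freshness decision, with invented numbers:

```python
def can_serve(age, max_age, stale_if_error, origin_erroring):
    """Decide whether a cached response may be used, per the
    stale-if-error idea from RFC 5861 (simplified sketch)."""
    if age <= max_age:
        return True                  # still fresh: always usable
    if origin_erroring:
        # Stale, but the origin is failing (5xx, timeout, unreachable):
        # the response may be reused for stale_if_error extra seconds.
        return age <= max_age + stale_if_error
    return False                     # stale and the origin is healthy: revalidate

# Cache-Control: max-age=600, stale-if-error=1200
print(can_serve(age=700, max_age=600, stale_if_error=1200, origin_erroring=True))   # True
print(can_serve(age=700, max_age=600, stale_if_error=1200, origin_erroring=False))  # False
print(can_serve(age=2000, max_age=600, stale_if_error=1200, origin_erroring=True))  # False
```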
HTTP interoperability
I’ve been wondering about HTTP interoperability for some time now. What if a response has two Content-Type or Location headers? What if newlines are done using a bare U+000A (LF) instead of U+000D followed by U+000A (CRLF)?
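One way to see how a given parser answers those questions is to feed it a deliberately odd header block. This sketch uses Python’s stdlib email parser as a stand-in for a real HTTP client (it is the same machinery http.client leans on for header parsing); the header values are invented:

```python
from email.parser import BytesHeaderParser

# A response header block with two Content-Type and two Location fields,
# using LF-only line endings instead of CRLF; exactly the kind of input
# that exposes interoperability differences between parsers.
raw_headers = (b"Content-Type: text/html\n"
               b"Content-Type: application/json\n"
               b"Location: /a\n"
               b"Location: /b\n"
               b"\n")

msg = BytesHeaderParser().parsebytes(raw_headers)

print(msg.get_all("Content-Type"))  # ['text/html', 'application/json']
print(msg.get_all("Location"))      # ['/a', '/b']
print(msg.get("Location"))          # a single value; which one is parser-dependent
```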
HTTP/2
an update on waka
As I mentioned in an earlier post, Roy Fielding has started an ASF Lab for Web Architecture that is intended to be a place to work on documentation regarding Web Architecture. This includes improvements to existing protocols and Waka, a new HTTP upgrade. Waka is still in Roy Fielding’s head, and the changes have been alluded to over 8 years in various ApacheCon presentations; in various Apache 2.0 design notes and emails, focused mostly on the I/O layer and request-response processing chains in Apache 2.0; in emails to rest-discuss; and in references to various draft RFCs and previous HTTP next-generation efforts – rHTTP, W3C’s HTTP-NG and Spero’s HTTP-ng.
2007-12-07:
Another reason to revise HTTP is that there are a lot of things the spec doesn’t say. The people who were there in the late ’90s understand the context, and those who have been around HTTP enough have learned to understand the thinking behind its design and the intent of its features. However, there’s a whole new generation of implementers and extension builders who haven’t been exposed to this. If we can document the philosophy of HTTP with regard to extensibility, error handling, etc., they’ll have a better chance of understanding the right way to use it.
i want waka 🙂
2014-01-11:
I wrote up a wall of text about HTTP/2 tradeoffs. It makes for good bedtime reading, puts you to sleep in no time.