Tag: web

Web Photoshop

This has to be some kind of high-water mark for web capabilities. Still, it arrives pretty late in the game, and it’s unclear how much it matters; for example, will WebAssembly ever take full advantage of Apple’s custom silicon?

Over the last 3 years, Chrome has been working to empower web applications that want to push the boundaries of what’s possible in the browser. One such web application has been Photoshop. The idea of running software as complex as Photoshop directly in the browser would have been hard to imagine just a few years ago. However, by using various new standardized web technologies, Adobe has now brought a public beta of Photoshop to the web.

Memory leak debugging

Guided by BLeak, we identify and fix over 50 memory leaks in popular libraries and apps including Airbnb, AngularJS, Google Analytics, Google Maps SDK, and jQuery. BLeak’s median precision is 100%; fixing the leaks it identifies reduces heap growth by an average of 94%, saving from 0.5MB to 8MB per round trip.
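
The leaks BLeak surfaces are usually mundane; a minimal sketch (all names hypothetical, not from the paper) of the classic pattern: an event listener that keeps a closed view’s DOM and data alive across round trips.

```typescript
// Hypothetical sketch of a leak pattern BLeak-style tools catch:
// every "round trip" (open/close a view) registers a new listener
// that closes over the view's DOM and data, and never unregisters it.

type View = { root: HTMLElement; cache: number[] };

const listeners: Array<() => void> = [];

function openView(): View {
  const view: View = {
    root: document.createElement("div"),
    cache: new Array(100_000).fill(0), // per-view data
  };
  const onResize = () => view.root.getBoundingClientRect();
  window.addEventListener("resize", onResize); // leak: never removed
  listeners.push(onResize);                    // leak: grows every round trip
  return view;
}

function closeView(view: View): void {
  view.root.remove(); // DOM detached, but the listener still references `view`
}

// Fix: removeEventListener in closeView and drop the stored reference,
// so each round trip returns the heap to its previous size.
```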

WebAssembly

Today we’re happy to announce, in tandem with Firefox and Edge, a WebAssembly Browser Preview. WebAssembly or wasm is a new runtime and compilation target for the web, designed by collaborators from Google, Mozilla, Microsoft, Apple, and the W3C WebAssembly Community Group.
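
Loading a module from JavaScript is a small amount of glue; a minimal sketch using the standard WebAssembly JS API, where the module URL and its exported `add` function are assumptions for illustration.

```typescript
// Minimal sketch: fetch, compile, and call a wasm module from TypeScript.
// "add.wasm" and its exported `add` function are illustrative only.

async function runWasm(): Promise<void> {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("add.wasm"),
    {} // import object: functions/memory the module expects from the host
  );
  const add = instance.exports.add as (a: number, b: number) => number;
  console.log(add(2, 3)); // 5
}

runWasm();
```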

2018-08-16: WebAssembly Attacks

WebAssembly is a format that allows code written in assembly-like instructions to be run from JavaScript. It has recently been implemented in all 4 major browsers. We reviewed each browser’s WebAssembly implementation and found 3 vulnerabilities. This blog post gives an overview of the features and attack surface of WebAssembly, as well as the vulnerabilities we found.

2023-01-19: While I still think node.js is a dumb joke, this makes a good case for using wasm instead of containers

The following are a few of the reasons WASM is worth keeping an eye on.

  1. It’s Getting Faster
    Speed is a feature, and those behind the WASM specification have been hard at work. A little over 3 years ago we spoke to some of the core dev team, and their estimation was that WASM came with approximately a 20% performance hit versus native code. They speculated that within 2 years that difference could be erased, or at least made negligible enough to not matter. Today, depending on platform and workload, that has proven to be the case; one provider even claimed recently to run faster within WASM than natively. The performance limitations, therefore, that have held WASM back in the past are largely subsiding, making it viable for more and more workloads.
  2. It’s Quick
    If WASM has been compelled to work on its overall performance, there’s no such need with respect to its latency. Even from cold start situations, WASM’s latency is measured in milliseconds, not actual seconds as is typical with other application platforms from containers to function-as-a-service providers. This makes it highly suitable for workloads that are latency-sensitive, which is more and more workloads – and certainly the event-based workloads that are becoming more common within the enterprise.
  3. It’s (Relatively) Secure
    Granting that no software is immune to vulnerabilities, WASM is nevertheless distinguished in this area. Designed from day one to be secure enough to run executables within the context of an individual’s browser, it is based on sandbox principles, with no access to or from the outside by definition. At a minimum, the historical priority placed on security has been higher than other platforms, a fact likely to be appreciated by security-sensitive enterprise buyers.
  4. It’s Lightweight
    Relative to something like V8 isolates, WASM executables are sizable. But just as containers were much lighter weight than the virtual machines they supplanted, so too is WASM dramatically lighter weight than containers. This means that, properly orchestrated (a subject we’ll come back to), WASM deployments can be fantastically dense relative to their container based peers; one provider reports 20X-30X more WASM sandboxes than Kubernetes containers, for example, on a given piece of hardware. Similarly, Cloudflare has talked about their usage of Isolates to achieve the same goal.

    This density is, in part, why the popular assertion that a growth in WASM deployments will enable something of a renaissance of PaaS platforms seems correct. The unit economics of running platforms – potentially more safely – at dramatically higher densities than container-based alternatives make WASM-based PaaS platforms more viable not only technically but economically as well, both in terms of their overall end user pricing and potentially by making free or lower-cost tiers possible that have previously been deemed cost prohibitive by vendors such as Heroku.

  5. The Language Support is Improving
    For enterprises used to working with container-based platforms, or virtual machines before that, language limitations are non-existent. Whatever the language and runtime, a given application is wrapped in a container and then run on platforms like Kubernetes alongside hundreds or thousands of other workloads, covering a multitude of languages. But as Fermyon’s language support page indicates, WASM’s support for various programming languages varies, and widely. This is unlikely to be a fatal flaw for WASM-based providers, though. First, because the support for new languages is improving, and at an accelerating pace as more attention is focused on the technology. Second, because the set of core languages supported already (C/C++, C#, Go, Kotlin, Rust, Swift, etc.) covers a large number of potential workloads. And lastly because abstract models like PaaS have always imposed such constraints, and if anything that’s likely to become more common rather than less as more and more abstract models emerge.

AMP for standardized measurement

if amp v2 succeeds, we’ll drain the swamp that is today’s web and abp will be unnecessary. this is a far preferable outcome to a bunch of walled gardens.

AMP, through its established `amp-analytics` mechanism, already ships with all the code to perform these measurements. It is vendor neutral and supports a wide range of metrics. This means ads can take advantage of the same “instrument once, report many times” feature that benefits AMP pages today, completely eliminating the bandwidth and runtime cost outlined above.
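
The “instrument once” part comes from a declarative JSON config that the `amp-analytics` runtime interprets; a rough sketch of that shape, written here as a TypeScript literal (on a real page the JSON sits inside an `<amp-analytics>` element, and the endpoint and trigger names below are made up).

```typescript
// Rough sketch of the declarative shape amp-analytics consumes.
// On an AMP page this JSON is embedded in an <amp-analytics> element;
// the endpoint URL and trigger names here are illustrative only.

const analyticsConfig = {
  requests: {
    pageview: "https://example.com/collect?url=${canonicalUrl}",
  },
  triggers: {
    trackPageview: {
      on: "visible",       // fire when the page (or ad) becomes visible
      request: "pageview", // which request template to send
    },
  },
};

// Each vendor maps onto the same config shape, which is the
// "instrument once, report many times" property described above.
console.log(JSON.stringify(analyticsConfig, null, 2));
```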

Bloated web

Facebook has put everyone else on notice. Your content better load fast or you’re screwed. Publication websites have become an absolutely bloated mess. They range from beautiful (The Verge) to atrocious (Bloomberg) to unusable (Forbes). The common denominator: they’re all way too slow. Instant karma’s gonna get them

this is why i have javascript off by default, and only allowlist maybe 10 sites. it avoids all those stupid “widgets” that these sites love so much.

The price of efficiency for advertisers is the user experience of the reader. The problem for publishers, though, is that dollars and cents — which come from advertisers — are a far more scarce resource than are page views, leaving publishers with a binary choice: provide a great user experience and go out of business, or muddle along with all of the baggage that relying on advertising networks entails.

Web standards overview

For example, it’s long been held that when you define an extension point in a standard, you generally need some way to coordinate it. The IETF does this with registries; the W3C had a fashion for using URIs as namespaces for a time (and then vendor prefixes — but that’s another rant). If browsers themselves become that lynchpin, you don’t need registries or namespaces; you just edit the spec — provided that the spec is faithfully reflecting what the browsers implement. The argument goes that in a browser-ruled Web, other software using the specification doesn’t want to diverge from the behavior of a Web browser, because doing so would cause interoperability problems and thereby reduce that software’s value. So, just make sure the browsers are walking in lockstep and document what they do in the specs; you don’t need no stinking registry.

Nice overview of how Web standards work these days.

Website timestamps are unreliable

In theory, our publishing tools could capture timestamps for the creation and modification of pages. Our web servers could encode those timestamps in HTTP headers and/or in generated pages, using a standard format. Search engines could use those timestamps to reliably sort results. And we could all much more easily evaluate the currency of those results.
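
Sketching the “in theory” half doesn’t require new machinery; a minimal example, assuming a hypothetical URL, that reads the two carriers which already exist today: the Last-Modified header and an in-page meta timestamp.

```typescript
// Sketch: read a page's modification timestamp from the places that
// already exist today: the Last-Modified header and a meta tag.
// The URL and meta property name are assumptions for illustration.

async function pageTimestamps(url: string): Promise<void> {
  const res = await fetch(url);

  // 1. HTTP layer: servers may send Last-Modified (HTTP date format).
  const lastModified = res.headers.get("last-modified");

  // 2. Document layer: publishing tools may emit a timestamp in the markup,
  //    e.g. <meta property="article:modified_time" content="2024-01-01T...">.
  const html = await res.text();
  const match = html.match(
    /property="article:modified_time"\s+content="([^"]+)"/
  );

  console.log({ lastModified, inPage: match?.[1] ?? null });
}

pageTimestamps("https://example.com/post");
```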

In practice that’s not going to happen anytime soon. Makers of publishing tools, servers, and search engines would have to agree on a standard approach and form a critical mass in support of it. Don’t hold your breath waiting.

Website experiments

Rule #1: Small Changes can have a Big Impact to Key Metrics
Rule #2: Changes Rarely have a Big Positive Impact to Key Metrics
Rule #3: Your Mileage WILL Vary
Rule #4: Speed Matters a LOT
Rule #5: Reducing Abandonment is Hard, Shifting Clicks is Easy
Rule #6: Avoid Complex Designs: Iterate
Rule #7: Have Enough Users
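
Rule #7 is the easiest to quantify. A rough sizing sketch, not from the source, using the common 16·σ²/δ² rule of thumb for the users needed per variant to detect an absolute change δ in a metric with variance σ².

```typescript
// Rough sample-size sketch for Rule #7 ("Have Enough Users").
// Uses the common 16 * variance / delta^2 rule of thumb (~80% power at
// a 5% significance level); the inputs below are illustrative only.

function usersPerVariant(baselineRate: number, absoluteLift: number): number {
  const variance = baselineRate * (1 - baselineRate); // Bernoulli metric
  return Math.ceil((16 * variance) / absoluteLift ** 2);
}

// Detecting a 0.1pp lift on a 2% conversion rate takes ~314k users per
// variant, which is why small-change claims need large experiments.
console.log(usersPerVariant(0.02, 0.001));
```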