Last month the Web Performance WG had a face-to-face meeting (or F2F, for short), and I've been meaning to write something about it ever since. Now I'm stuck on an airplane, so here goes! :)
The F2F was extremely fun and productive. While most of the usual suspects (the group's regular attendees) were there, we also had a lot of industry folks who had recently joined, which made the event extremely valuable.
One of the biggest advantages of working on the web has always been the people. Ever since I started contributing to the web platform, back in 2013, working with browser engineers was always a great experience. An amazing collection of super smart, thoughtful and kind folks.
That's what kept me invested in the Chromium project as an external contributor throughout the years since: first as a nighttime hobby while I was working for myself, then as part of the Responsive Images Community Group, and finally as part of my job at Akamai, working on browsers and standards.
This is a republication of my Perf Calendar post, because I really like it and wanted it to be on my blog. Own your content and all that…
There’s a lot of talk these days about browser caches in relation to preload, HTTP/2 push and Service workers, but also a lot of confusion.
So, I’d like to tell you a story about one request’s journey to fulfill its destiny and find a matching resource.
A couple of weeks ago I attended TPAC, the annual week-long W3C festivities, and I'd like to share my notes and impressions from a few of the sessions we ran.
Responsive Images
While most days at TPAC are split into Working-Group-specific meetings, Wednesday is traditionally filled with breakout sessions that aren't necessarily affiliated with any WG in particular. As part of that, we ran a session about responsive images and their aspect ratios.
I spent a few days last week in Stockholm attending the HTTP Workshop, and took part in many fascinating discussions. One of them revolved around HTTP push: its advantages, its disadvantages, and the results we're seeing from early experiments on that front.
The general attitude towards push was skeptical, due to the not-so-great results presented from early deployments, so I’d like to share my slightly-more-optimistic opinion.
What can push do that preload can't?
A recurring theme from the skeptics was "push is only saving 1 RTT in comparison to preload".
There has been a lot of talk recently about the Network Info API.
Paul Kinlan published an article about using Service Worker along with the Network Info API to send network information up to the server and let the server adapt its responses to these network info bits. There is also an intent to implement the downlinkMax attribute in the Blink rendering engine.
Since I have Opinions™ on the matter, and Twitter and mailing lists aren’t always the ideal medium, I wrote them down here.
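To make that concrete, here's roughly what the Service-Worker-based idea looks like. This is a minimal sketch rather than Paul's actual code, and it assumes navigator.connection (including the proposed downlinkMax attribute) is available inside the worker; the header names are invented for illustration.

```js
// Minimal sketch (not Paul Kinlan's actual code): a Service Worker that forwards
// the current network information to the server on each GET request, so the
// server can adapt its responses. Assumes navigator.connection and the proposed
// downlinkMax attribute are exposed in worker contexts; the "DL-Max" and
// "Net-Type" header names are made up for this example.
self.addEventListener('fetch', (event) => {
  const connection = self.navigator.connection;
  if (!connection || event.request.method !== 'GET') {
    return; // let the browser handle it as usual
  }
  const headers = new Headers(event.request.headers);
  headers.set('DL-Max', String(connection.downlinkMax));
  headers.set('Net-Type', String(connection.type));
  // A real implementation would need to treat cross-origin and non-GET requests
  // more carefully; this is just the gist of the approach.
  event.respondWith(fetch(event.request.url, { headers }));
});
```

The interesting (and contentious) part is what the server then does with those hints.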
The Web platform is a wonderful thing. Its reach is unparalleled in human history. It enables people all over the world to access vital information, education and entertainment. People old and young, rich and poor. It makes their lives better than they would have been without it. It also enables commerce, banking, and improved supply chains. The world’s economy would have been very different without the Web platform.
The Web platform has over 3 billion users and that number keeps climbing.
Mozilla recently announced that they are planning to deprecate insecure HTTP, which includes withholding new features from sites that are served over plain HTTP connections. I believe that is a mistake.
I tweeted about it, but a longer form is in order, so here goes.
Why HTTPS everywhere is important
Let me start by saying that I strongly believe that the Web should move to HTTPS, and serving content over plain-text HTTP is a mistake.
I owe you fine folks a blog post.
I recently tidied up my blog and moved it to a new backend and in the process realized that I haven’t blogged in over 18 months. (!!!)
Where did I disappear?
A lot has happened during that time:
The markup-based responsive images solutions, which seemed to be facing a dead end in September 2013, were revived and significantly improved. To defuse initial resistance from the Blink project, I started implementing the infrastructure required to build the features there.
It’s been a year since I last wrote about it, but the dream of the “magical image format” that will solve world hunger and/or the responsive images problem (whichever one comes first) lives on.
A few weeks back I started wondering if such an image format can be used to solve both the art-direction and resolution-switching use-cases.
I had a few ideas on how this can be done, so I created a prototype to prove that it’s feasible.
For the impatient
Sizer-Soze is a utility that enables you to evaluate how much you could save by properly resizing your images to match their display size on various viewports.
Basically it shows you how much image data you could save if you deployed an ideal responsive images solution. If you already have a responsive images solution in place, it enables you to see how far it is from that ideal, and improve it accordingly.
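Sizer-Soze does the heavy lifting for you, measuring the actual image data you could save across viewports, but the question it answers can be illustrated with a rough in-browser sketch. The snippet below is not part of the tool and reports pixel overshoot, which is only a proxy for the byte savings the tool itself measures.

```js
// Rough illustration of the question Sizer-Soze answers (this is not the tool):
// for each image on the page, how much larger is the downloaded image than what
// is actually displayed at the current viewport size and device pixel ratio?
function auditImageSizing() {
  const dpr = window.devicePixelRatio || 1;
  return Array.from(document.images)
    .map((img) => {
      const neededPixels =
        Math.ceil(img.clientWidth * dpr) * Math.ceil(img.clientHeight * dpr);
      const downloadedPixels = img.naturalWidth * img.naturalHeight;
      return {
        src: img.currentSrc || img.src,
        overshoot: neededPixels > 0 ? downloadedPixels / neededPixels : null,
      };
    })
    .filter((entry) => entry.overshoot && entry.overshoot > 1);
}

console.table(auditImageSizing());
```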
For a while now, the art-direction use-case has been treated by browser vendors as resolution-switching's imaginary friend.
When talking to people who work for browser vendors about that use-case, I’ve heard phrases like “that’s a really obscure use-case” and “No one is really doing art-direction”.
This got me wondering — how big is that use-case? How many Web developers & designers are willing to go the extra mile, optimize their images (from a UI perspective), and adapt them so that they’d be a perfect fit to the layout they’re in?
I just read Jason Grigsby's post, and tried to answer it in the comments, but saw that my response had outgrown the limits of a reasonable comment. So here I am.
This post is a proposal for a file structure that will enable browsers to fetch images encoded using a responsive image format.
But which format?
Regardless of the image format that will eventually be used, a large part of the problem is coming up with a way to download only the required parts of the responsive image, without downloading unneeded image data and without resetting the TCP connection.
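In today's terms, the fetching side of that could look something like the sketch below: ask for an initial byte range of a (hypothetical) progressively-encoded image, and come back for more bytes only if the layout turns out to need a higher resolution. The URL and byte offsets are invented; the point is that HTTP range requests let the browser stop early without tearing down the connection.

```js
// Illustrative sketch only: fetch just part of a hypothetical progressive image
// file using an HTTP range request, so unneeded resolution layers are never
// downloaded. The URL and byte offsets are made up; in a real format they would
// come from an index describing where each resolution layer starts.
async function fetchImageChunk(url, firstByte, lastByte) {
  const response = await fetch(url, {
    headers: { Range: `bytes=${firstByte}-${lastByte}` },
  });
  if (response.status !== 206) {
    throw new Error(`Expected a partial (206) response, got ${response.status}`);
  }
  return response.arrayBuffer();
}

// Usage, inside an async function:
//   const lowRes = await fetchImageChunk('/img/hero.rimg', 0, 19999);
//   // ...later, only if the layout calls for more detail:
//   const nextLayer = await fetchImageChunk('/img/hero.rimg', 20000, 59999);
```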
Summary for the impatient
Lossless compression with current formats can reduce image size on the web by 12.5%.
PNG24 images with an alpha channel comprise 14% of images on the web. We can cut their size by 80% using WebP.
Lossless optimization can save 1.5% of overall Internet traffic!*
Converting PNG24 images with an alpha channel to WebP can save 2.1% of overall Internet traffic!!!
That’s 2.
Can't be done?
All along the responsive images debate, several people claimed that salvation would come in the form of a new image format that would enable images that are automagically responsive.
My response to these claims was always that it can't be done. It can't be done, since the browser needs to download the image in order to analyze which parts of the image it needs.
TL;DR
Spoiler: If you have inline scripts that must be inside your page, adding an empty <div> before all stylesheet links will avoid a CSS bottleneck!!!
Do we really need Async CSS?
I've stumbled upon a post by Guy Podjarny called Eliminating the CSS bottleneck, which talks about using async stylesheets to prevent external stylesheets from blocking the download of other resources. What bothered me about the entire post is that according to BrowserScope this issue is non-existent in all modern browsers.
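For context, "async stylesheets" here means loading CSS in a way that keeps it out of the initial markup entirely, along the lines of the sketch below. This is a generic illustration of the technique under discussion, not code from Guy's post.

```js
// One common way to load a stylesheet "asynchronously": inject the link element
// from script so it never appears in the markup the parser sees up front.
// Generic sketch of the technique, not code from Guy Podjarny's post.
function loadStylesheetAsync(href) {
  const link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  document.head.appendChild(link);
}

loadStylesheetAsync('/css/non-critical.css');
```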
Responsive Images and cookies
Jason Grigsby wrote a great post summarizing different approaches to responsive images and what sucks about them. Among other things, he discussed the problem of getting the first page load right. After a short discussion with him on Twitter, I decided to take a deeper look into the Filament Group's cookie-based method.
Tests
Testing their demo site using Firefox showed that the first browse brings in the smaller…
TL;DR
Adding a media attribute that supports queries to the base tag is all that's required to have responsive images with no performance penalty.
The thread
After my last post, Nicolas Gallagher pointed me towards a mail thread on the public-html mailing list that discusses appropriate solutions to the responsive images problem.* There were a few suggested solutions there:
- Each image tag will have child source tags with a media attribute in each
- A new…
TL;DR
Responsive images are important for mobile performance. Hacks may solve the problem, but they come with their own performance penalty. Browser vendors must step up to create a standard, supported method.
What we have
There have been several techniques published lately that enable responsive images using various hacks:
- Harry Roberts suggested using background images & media queries to deliver larger images to desktop browsers
- Keith Clark suggested using JS in the document head to plant cookies that then send the device dimensions to the server with every image request.
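The cookie hack boils down to something like the snippet below, a sketch of the general idea rather than Keith's actual code.

```js
// Sketch of the cookie-based hack (not Keith Clark's actual code): an inline
// script placed in <head>, before any <img> tags, so the cookie is set before
// the first image request goes out. Every subsequent image request then carries
// the device dimensions, and the server can pick an appropriately sized image.
document.cookie =
  'device-dims=' + screen.width + 'x' + screen.height + '; path=/';
```

It works, but it relies on cookies being enabled, adds bytes to every request, and ties the markup to server-side logic, which is the sort of penalty the TL;DR above alludes to.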
Proposal flood
In the last few days there have been a lot of proposals regarding ways to load lower-resolution images for lower-resolution devices: Nicolas Gallagher's Responsive images using CSS3, a proposal on the W3C list, and, as of yesterday, Robert Nyman's proposal. Obviously, I also have an opinion on the matter :)
My contribution to the flood
While the proposals from Nicolas & Robert are interesting, they throw a huge maintenance burden on the CSS, and make it practically uncacheable in many fast-updating pages.
I had a Twitter discussion today with Robert Nyman regarding how we should treat old IEs (6, 7 and 8) when we develop. The trigger was his tweet about his post from two years back arguing that we should stop developing for IE6. Since Twitter is fairly limited for this kind of thing, here's my real, chatty opinion on this subject.
The stats below are taken from StatCounter and cover the EU and North America.
There's been a lot of noise recently in the web dev world regarding UA sniffing vs. feature detection. It all started when Alex Russell wrote a post suggesting that there are cases where UA sniffing can be used, and where feature detection wastes precious time asking questions we already know the answer to. As he predicted, that stirred up a lot of controversy. Nicholas Zakas backed him up (more or less), Faruk Ates gave a history lecture, and the entire comment thread on Alex's post is very entertaining.
So, I've decided to try this blogging thing all the cool kids have been talking about. The last straw was PPK's reason for shutting down comments on his blog: "The opinions and musings of the average blog commenter are just not very interesting. If they were, they'd have a blog of their own." I have interesting opinions and musings. I do! Therefore, I must have a blog. So I built this thing.