Can't be done?
All along the responsive images debate, several people claimed that salvation would come in the form of a new image format that would make images automagically responsive.
My response to these claims was always that it can't be done.
It can't be done, since the browser needs to download the image in order to analyze which parts of it it needs. Yes, the browser can start downloading the image and reset the connection once it has enough data to display it properly, but that approach will always download much more than is actually necessary (not to mention that it's an extremely ugly solution).
Also, introducing new image formats to the web is far from trivial, and extremely slow at best. (If you're not convinced, see Mozilla's response to WebP a year ago.)
And don't get me started on the lack of fallback mechanisms for new image formats :)
So, in one of the latest Twitter discussions, when the subject came up, I was about to make all of the above claims once again. But then I realized I had been wrong all along. It can be done, it can be done gracefully, and it can be done with current image formats.
HOW?!?!
The web already has a "responsive" format: progressive JPEG. The only issue at hand is getting browsers to download only the necessary bytes of the progressive JPEG.
Here's how we can do this:
- The author will compress the progressive JPEG with multiple scans.
- The browser will download an initial buffer of each image (10-20K), using the "Range" request header.
- This initial buffer will contain the image's dimensions and (optionally) a "scan info" JPEG comment that states the byte breakpoints of each of the JPEG's scans (slightly similar to the MP4 video format's metadata).
- If the image is not a progressive JPEG, the browser will download the rest of the image's byte range.
- When the scan info comment is present, the browser will download only the byte range it actually needs, as soon as it knows the image's presentation size.
- When the scan info comment is not present, the browser can rely on dimension-based heuristics and the "Content-Length" header to guess how many bytes it really needs to download.
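The browser-side logic above can be sketched in a few dozen lines. This is a minimal simulation, not a spec: the `SCANS:` comment syntax, the demo header builder, and the breakpoint-picking heuristic are all made up for illustration; only the JPEG marker layout (SOI, COM = FF FE, progressive SOF2 = FF C2) follows the actual format.

```python
import struct

def build_demo_jpeg_header():
    """Build a tiny in-memory JPEG header for the demo: SOI, a
    hypothetical 'scan info' COM segment, and a progressive SOF2
    segment declaring 960x640 dimensions (dummy component params)."""
    scan_info = b"SCANS:1200,5400,18000,46000"  # hypothetical comment syntax
    com = b"\xff\xfe" + struct.pack(">H", len(scan_info) + 2) + scan_info
    sof2_payload = (struct.pack(">BHHB", 8, 640, 960, 3)
                    + b"\x01\x22\x00\x02\x11\x01\x03\x11\x01")
    sof2 = b"\xff\xc2" + struct.pack(">H", len(sof2_payload) + 2) + sof2_payload
    return b"\xff\xd8" + com + sof2

def parse_initial_buffer(buf):
    """Walk the marker segments of the initial 10-20K buffer, returning
    (width, height, scan_breakpoints or None, is_progressive)."""
    width = height = None
    breakpoints = None
    progressive = False
    i = 2  # skip the SOI marker
    while i + 4 <= len(buf) and buf[i] == 0xFF:
        marker = buf[i + 1]
        seglen = struct.unpack(">H", buf[i + 2:i + 4])[0]
        payload = buf[i + 4:i + 2 + seglen]
        if marker == 0xFE and payload.startswith(b"SCANS:"):
            breakpoints = [int(x) for x in payload[6:].split(b",")]
        elif marker in (0xC0, 0xC2):  # baseline / progressive frame header
            progressive = (marker == 0xC2)
            _, height, width, _ = struct.unpack(">BHHB", payload[:6])
        i += 2 + seglen
    return width, height, breakpoints, progressive

def bytes_needed(display_width, full_width, breakpoints, total_size):
    """Made-up heuristic: pick a scan breakpoint proportional to how
    much of the image's full width is actually displayed."""
    if display_width >= full_width or not breakpoints:
        return total_size  # no savings possible: fetch everything
    ratio = display_width / full_width
    idx = min(int(ratio * len(breakpoints)), len(breakpoints) - 1)
    return breakpoints[idx]
```

A browser displaying this 960px-wide image at 240px would stop after an early scan breakpoint, while a full-size display would fetch the whole byte range.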
The advantages of this approach:
- DRY and easy to maintain - no need to keep URLs for each resolution in sync between the image storage and the HTML/CSS. Only a single image must be stored on the server, which will significantly simplify authors' lives.
- The image optimization can be easily automated.
- Any progressive image used in a responsive design (or that its display dimensions are smaller than its real dimensions) can benefit from this, even if the author is not aware of responsive images.
A few possible downsides:
- The optimization burden with this approach lies on the shoulders of browser vendors, who will have to come up with heuristics that correlate the number of bits per scan with "visually acceptable" output dimensions.
- Two requests for every large image might have a negative effect on download speed and uplink bandwidth. Browser vendors will have to make sure this doesn't negatively affect speed. SPDY can resolve the uplink bandwidth concerns.
- It is not certain that the savings from the "responsive progressive" method are identical to the savings possible with resizing. If that proves to be an issue, it can probably be optimized in the encoder.
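On the automation point: tools in the libjpeg family (e.g. jpegtran's `-progressive` and `-scans` options) can already rewrite a JPEG with a chosen scan script, so the remaining piece is recording where each scan starts. A sketch of that step, assuming raw JPEG bytes as input - within entropy-coded data a 0xFF byte is always followed by 0x00 (byte stuffing) or a restart marker (D0-D7), so a literal FF DA sequence only ever appears as a real Start-Of-Scan header:

```python
def scan_offsets(jpeg_bytes):
    """Return the byte offset of every SOS (Start Of Scan, FF DA)
    marker - the breakpoints a build tool would embed in the
    hypothetical 'scan info' comment."""
    offsets = []
    i = 0
    while True:
        i = jpeg_bytes.find(b"\xff\xda", i)
        if i == -1:
            return offsets
        offsets.append(i)
        i += 2  # continue searching past this marker
```

An automated optimizer could run this over each progressive JPEG at build time and write the offsets into a COM segment, with no author involvement.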
This proposal does not claim that the current <picture> tag efforts are unnecessary. They are required to enable "art direction responsiveness" for images, and to give authors that need it more control over the actual images delivered to users.
With that said, most authors may not want to be bothered with the required markup changes. A new, complementary image convention (not really a new format) that provides most of the same benefits and can be applied using automated tools would have a huge advantage.
It is also worth noting that I have not conducted full byte-size comparison research between the responsive progressive method and full image resizing. See the example below for an anecdotal comparison using a single large image.
All of the images in the responsive progressive example are a single progressive JPEG that was truncated after several scans.
This is an attempt to simulate what a single progressive JPEG might look like at various resolutions when only some of its scans are used, and how much data browsers would have to download.
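The truncation used to produce these simulated images can be reproduced with a short script. This is a sketch under the same assumption as above (FF DA in the byte stream always marks a real scan): cut the file just before a given scan and close it with an EOI marker so decoders treat it as a complete, lower-quality image.

```python
def truncate_after_scans(jpeg_bytes, n):
    """Simulate the 'responsive progressive' download: keep everything
    up to the start of scan index n (0-based), then append an EOI
    marker (FF D9) so the truncated file still decodes cleanly."""
    offsets = []
    i = 0
    while (i := jpeg_bytes.find(b"\xff\xda", i)) != -1:
        offsets.append(i)
        i += 2
    if n >= len(offsets):
        return jpeg_bytes  # fewer scans than requested: keep the whole file
    return jpeg_bytes[:offsets[n]] + b"\xff\xd9"
```

Running this over one multi-scan progressive JPEG with increasing `n` yields the series of files whose sizes are compared below.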
We can see here that the thumbnail image below is significantly larger as a responsive progressive than as a resized thumbnail, while the largest image is about the same size either way.
IMO, the responsive progressive images look significantly better than their resized counterparts, so there's probably room for optimization here.
- 240x160: responsive progressive 17K, resized 5.2K
- 480x320: responsive progressive 21K, resized 15K
- 960x640: responsive progressive 57K, resized 59K
Update: I've just seen a slightly similar proposal here. My main problem with it is that a new format would take too long to implement and deploy, and would have no fallback for older browsers.