I had a Twitter discussion today with Robert Nyman regarding how we should treat old IEs (6, 7 and 8) when we develop. The trigger was his tweet regarding his post from 2 years back saying that we should stop developing for IE6. Since Twitter is fairly limited for this kind of thing, here's my real, chatty opinion on the subject.
Stats below are taken from StatCounter and cover the EU and North America. Other markets' mileage may vary :)
So, what's the problem?
While we certainly can simply ignore IE6 in most of the world (around 2% market share), IE7 still has 7%-11% in the western world. But then again, this is where IE6 was 2 years ago when Robert wrote his post. In any case IE7 can be ignored or simply nudged to upgrade, since it is by all means an obsolete browser.
The real trouble starts with IE8. While IE8 is much better than its older brothers, it is still a piece of crap in comparison to today's browsers, and it does not support any of the new APIs we need to make the web awesome. It has a market share of 26%-34% in the western world, and since IE9 is not available for XP, it is not going away anytime before the end of 2014, when XP is *finally* decommissioned. It will probably last a little while longer after that as well.
What can we do about it?
There are a few approaches that web developers can use in order to drive people away from the old IEs into the modern web:
- Advocate - Campaigns like HTML5forXP are trying to get the users to upgrade through awareness
- Nag - Display in-site messages notifying users that they would get a better experience if they upgraded to a modern browser or installed Chrome Frame
- Ignore - Stop testing on old IEs and stop trying to create a similar experience for these users using various polyfills
- Exclude - Block out old IE users from sites until they upgrade
While ignoring is tempting, you probably don't want 40% of your users to have a shitty experience on your site, so my personal favorite is the "nag and ignore the non-essential parts" approach (kinda like Twitter does with border-radius).
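The nag check itself can be a trivial sketch like the one below. The function name, regex and version cut-off are all mine and deliberately naive — correct detection is harder than this, which is exactly what the second half of this post is about:

```javascript
// Naive sketch: decide whether to show an "upgrade your browser" nag
// based on the UA string alone. Illustrative only, not production-grade.
function shouldNag(userAgent) {
  var match = /MSIE (\d+)/.exec(userAgent);
  if (!match) return false;           // not claiming to be IE at all
  return parseInt(match[1], 10) <= 8; // IE8 and below get the banner
}

shouldNag('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0)'); // true
shouldNag('Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.835.186 Safari/535.1'); // false
```

In practice you'd more likely do this with IE conditional comments in the markup, so only real old IEs ever see the banner.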
On the other hand, I can't help reflecting on the fact that Macromedia Flash (before it was bought by Adobe) gained 98% market share through exclusion. "If you want to see this website - you MUST install Flash" was the paradigm that got it there.
The big question is "Who was the first to exclude users without Flash?" (If anyone knows, I'd love to hear about it). A bigger question is "Will one of the big guns on the web today (Google, Yahoo, Facebook, Bing) start excluding old IEs from their services?". I know some of them don't support IE6, but who will be the first to not support IE8? I'm only guessing here, but it probably won't be Bing...
That's it for now.
UPDATE: I found some Wayback Machine stats that indicate that Macromedia Flash made a final market share leap from 90% to 95% in the summer of 2000. I could not find stats before that, though...
There's been a lot of noise recently in the web dev world regarding UA sniffing vs. feature detection. It all started when Alex Russell wrote a post suggesting that there are cases where UA sniffing can be used, and where feature detection wastes precious time asking questions we already know the answer to. As he predicted, that stirred up a lot of controversy. Nicholas Zakas backed him up (more or less), Faruk Ates gave a response, and the entire comment thread on Alex's post is very entertaining.
I agree with many of the points Alex makes, and detecting the UA on the server side has a *huge* advantage: We can avoid sending useless JS and image data to browsers/devices that will never use them. But, a couple of issues make good counter-arguments:
- Writing *correct* UA sniffing code is hard
- UA spoofers are left in the dark here. We would serve them content according to what they're pretending to be, rather than content according to their actual browser
The first problem can be solved by a reference project that does the actual detection for major server side languages. The second problem is more complicated. UA spoofing is a practice that came to be in order to circumvent badly written UA sniffing & UA-based blocking. While unfortunate, this technique is necessary for minority browser users, as well as in other cases. I for one have to use it when I'm using my phone for 3G tethering: my operator's network only allows phone UAs to go through the phone APN, so I fake it. And when I'm getting mobile sites on my desktop browser because of it... well, let's say it's not a great experience.
What we have so far is:
- Feature detection *all the time* slows things down
- UA sniffing kills UA spoofing
So, there must be a third way.
What if we could count on UA sniffing for major browsers UNLESS we detect spoofing is in place?
I thought long and hard about a generic solution here, but failed miserably. We can't trust UA strings (neither the one sent over the wire nor the window properties). We can't trust other window properties (such as vendor) to be 100% accurate either, since they may be spoofed as well.
So, do we raise a big white flag? Give up on the idea that a reliable method can be used to detect browsers and avoid feature detection for every single feature we want to use?
We can cover the most common use cases for UA spoofing and avoid messing them up. These cases are:
- Browsers that pretend to be IE so they won't be blocked by backwards sites
- Browsers that pretend to be mobile devices so they won't be blocked by deep packet inspection (DPI) on their network
If anyone ever reads this and finds other use cases for UA spoofing, please leave a comment.
With these use cases in mind we can do the following:
- Detect UAs on the server side
- If spoofing is suspected, add an appropriate code snippet to the page's top
- If UA unknown or spoofing detected, feature detect
- Otherwise (UA is known), send JSON with known features
That way, if an IE UA is seen on the server side, we add a conditional comment at the top of the page. If a recent mobile device UA is seen (iOS, Android), we can verify it by checking for touch events.
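To make the flow above concrete, here is a rough Node-flavoured sketch of the server-side decision. The function name and the feature table are mine and purely illustrative (real capability data would come from something like a caniuse-style database), and the sniffing regexes are deliberately simplistic:

```javascript
// Hypothetical sample of features we already know per recognized browser.
var KNOWN_FEATURES = {
  'chrome-14': { canvas: true, websockets: true }
};

function planForUA(userAgent) {
  if (/MSIE \d/.test(userAgent)) {
    // IE is a favourite spoofing target, so don't trust the string:
    // emit a conditional comment that only a real IE will execute,
    // and let everyone else fall through to feature detection.
    return {
      mode: 'verify-ie',
      snippet: '<!--[if IE]><script>window.__reallyIE = true;</script><![endif]-->'
    };
  }
  var chrome = /Chrome\/(\d+)/.exec(userAgent);
  if (chrome && KNOWN_FEATURES['chrome-' + chrome[1]]) {
    return { mode: 'known-features', features: KNOWN_FEATURES['chrome-' + chrome[1]] };
  }
  return { mode: 'feature-detect' }; // unknown UA: detect everything client side
}
```

The mobile case would get the same treatment: a suspected mobile UA triggers a client-side touch-event check (e.g. `'ontouchstart' in window`) instead of a conditional comment.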
There might still be cases we haven't thought about that will still get content according to their advertised UA, but hey, that's a risk you take when spoofing.
In most cases, that makes the UA string reliable. We can then serve a JSON feature set for everything we're absolutely sure the browser supports, and leave feature detection for everything else.
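On the client, that merge could look something like this sketch (the names are mine): trust whatever the server already answered, and pay the detection cost only for the gaps:

```javascript
// serverFeatures: the JSON the server sent for a recognized UA (may be empty).
// detectors: map of feature name -> detection function (possibly expensive).
function resolveFeatures(serverFeatures, detectors) {
  var result = {};
  for (var name in detectors) {
    result[name] = serverFeatures.hasOwnProperty(name)
      ? serverFeatures[name] // a question we already know the answer to
      : detectors[name]();   // only unknown features pay the detection cost
  }
  return result;
}

// Usage with mocked detectors:
var features = resolveFeatures(
  { canvas: true },
  {
    canvas: function () { throw new Error('should never run'); },
    websockets: function () { return false; }
  }
);
// features -> { canvas: true, websockets: false }
```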
So, thoughts? Ideas? Irrational emotional responses?
Bring it on...:)