Pretty misleading title, since IE10 isn't the latest version. IE11 is the latest and went from 11.51% to 12.80% (big jump compared to Chrome, 16.28% to 16.84%). Interesting (and slightly sad) that Firefox isn't gaining - in light of these stats, I'd say things don't look too shabby for IE.
I'm not too familiar with IE (heh), but is the upgrade mechanism better in IE10? That might explain the higher drop.
> Interesting (and slightly sad) that Firefox isn't gaining
I'm guessing this has more to do with Chrome being easier to integrate into enterprise/group-policy-based deployment than anything to do with the choices of individual users, though. The only place I see Firefox on a PC in a "computer lab" or "thin-client" setting any more is when those PCs are also running Linux.
Surely it's nothing to do with the fact that there are television commercials for Chrome, ads every time you go to the most visited website on the Internet, etc...
...and the nag ads on the Google.com homepage to install Chrome. They show up every time, and the general public accidentally clicks on them.
...and Adobe Acrobat (Reader), which tries to install Chrome like adware - you have to be careful to opt out on the Adobe website. Many freeware and even some open source programs like WinSCP come with this kind of shady adware-style auto-installer and often install Chrome (an opt-out is available, but it's a bit hidden in the advanced options).
Turns out "superior browser" is subjective. Firefox has great developer tools (the built in ones, not to mention Firebug). Firefox syncs nicely (and doesn't spy on what you sync). Firefox is easy to use.
For the vast majority of users, none of this stuff matters. What matters is brand recognition and stuff "just working". In-built Flash probably helps Chrome's case.
It doesn't matter which browser is "winning", the vast majority aren't picking which browser to use based on features. If Chrome is the "superior browser", why is IE dominating in market share?
I wonder if any of the gain can be attributed to the bundling deals. Had to install Adobe Acrobat Reader recently and noticed they try to install Chrome and the Google toolbar for IE.
Except when an upgrade to Flash breaks an important site for a lot of users. Then you are screwed. Reverting to the regular Windows plugin would be the first change I made for a large deployment.
IE is hardly evergreen. IE11 is curiously not available on Windows 8.0, although it is available on Windows 7 and Windows 8.1. They also publish update blockers [1]. Maybe this is why StatCounter still shows a huge IE10 presence [2]. The point of an evergreen browser is to make older browser version shares negligible as quickly as possible, and it doesn't seem that's really happening.
It's free, but, yeah, I had to open the Windows Store or whatever and actually make it update. It did not show up in the Windows Update list (not even in the "Suggested Updates").
What about corporate installations? Is 8.0 -> 8.1 seamless enough, or will there be a substantial delay in a standard enterprisey environment, thus creating a new browser island?
8.0 should (hopefully) be a very short-lived release in the wild. There's no reason at all not to upgrade to 8.1, even in a corporate setting: 8.1 is free, and the upgrade process is very smooth.
At one point CheckPoint VPN didn't support 8.1, now it does. One of my customers uses it so I held off upgrading. There could well be other apps with similar issues and big corporates would prefer others discover them.
Yes, this is a good thing for everyone. You can see the same pattern when Firefox shifted to faster upgrades, the people on the old regime decayed slowly, while people on the new regime transitioned quickly.
It's not even a new phenomenon: IE9 plunged quickly (well, for an IE version) when IE10 started getting uptake, falling below IE8:
If you want bad news for Microsoft, it's not in the rankings of their previous two versions, but the fact that their decreasing market share is spread across so many versions, each of which is a bit pitiful when compared with a single version of Firefox, with even the last version of Chrome and iPad Safari showing up Microsoft's efforts:
It is still interesting, though. It shows what I think we all know: Microsoft's most vicious competitor is Microsoft. It means you have a few users who upgrade everything and many, many more who never do.
In my opinion it's focusing on the wrong part. Between the two trends (IE11 growing, or IE10 vs. IE9), the former is more likely to continue, and the latter is likely to end as more machines reach end of life.
I don't think IE10 has a better mechanism. If memory serves you can't upgrade past IE9 on... what, XP? Vista? I forget which. But point being, IE9 is the end of the line for some of the Windows versions.
Net Applications apparently have a sample of 40,000 websites, whereas StatCounter have a sample of 3,000,000 [1]. StatCounter shows radically different numbers (e.g. Chrome by far the most used, whereas NA shows IE majority). IMO, StatCounter's numbers are more likely to be accurate.
The size of the sample is irrelevant for its accuracy. That seems odd and statistics can be counterintuitive, but consider how a random sample of a few hundred people is enough to estimate who will win the elections within a few %.
What matters - beyond a certain reasonable minimum size, which is a few hundred - is if the sample is random and unbiased. Both NetApps and StatCounter are far, far above the minimum size, so the only questions are the other factors.
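The intuition that a few hundred respondents is "enough" can be sketched with the standard margin-of-error formula for an estimated proportion (a rough illustration of the statistics, not either company's actual methodology):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a truly random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Error shrinks with the square root of n, so beyond a few hundred
# samples, extra size buys very little extra accuracy:
for n in (400, 40_000, 3_000_000):
    print(f"n = {n:>9}: +/- {margin_of_error(n):.4%}")
```

A sample of 400 already gets you within about 5 percentage points; three million only tightens that to hundredths of a point. That's why bias, not size, is the question for both NetApps and StatCounter. Note this formula assumes a random sample, which neither actually has.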
But more important to realize here is that they measure different things. StatCounter measures pages viewed, NetApps unique users. There is no reason to expect those to match up.
I (W3Counter, 70k sites) measure unique users, and it's always tracked much closer to StatCounter's numbers than NetApps. Same for every other site that tracks this stuff. NetApps has always been an outlier, especially on their IE numbers.
The difference for NetApps is how they track unique users, particularly for countries such as China.
My understanding is that they basically look at the total number of unique visitors from each country that they get, then weight that number by the number of reported internet users in that country. So if NetApps sees 10 users from country A and 5 users from country B, but country B has twice as many 'connected' people as country A, then they both get equal share.
What this basically leads to is that China, which has a very large population of people who infrequently use the internet but are still counted as internet-connected (and may be using shared computers), gets inflated numbers based on total population. This is also why IE8 numbers are so large: it's still in wide use in China.
The NetApps number probably makes sense if you were to ask 'what is the browser use of the individual users across the entire planet that could possibly load my website'. But when looking at browser usage by # of page loads or predicted visitors, the numbers could very well be much closer to StatCounter.
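The reweighting described above can be sketched like this, following the comment's arithmetic (hypothetical country names and population figures; NetApplications' actual formula isn't public in this detail):

```python
# Hypothetical data: unique visitors observed per country, and the
# reported number of internet users in each country. Country B has
# twice as many "connected" people as country A.
observed = {"A": 10, "B": 5}
internet_users = {"A": 1_000_000, "B": 2_000_000}

# Scale each country's raw count by its reported internet population,
# so countries the tracker under-samples get boosted.
weighted = {c: observed[c] * internet_users[c] for c in observed}
total = sum(weighted.values())
share = {c: weighted[c] / total for c in weighted}

print(share)  # both countries end up with equal share: {'A': 0.5, 'B': 0.5}
```

This is how a country full of infrequent (but "connected") users can pull the global numbers toward whatever browser dominates there.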
In that example, it is ok for A and B to get equal share - if you are measuring unique users and not usage. There is no inflation here, at least not in the methodology, which looks like a valid statistical one to me.
Of course it could be wrong in practice, if the data used to re-weight is misleading (while the unweighted raw data was more accurate). That's possible in theory, but seems less likely - there is fairly good information about internet usage in general which is what is used to re-weight, certainly compared to browser share.
It is very important for accuracy. Otherwise there wouldn't be a consistent bias between the two. Uniques vs. page views doesn't account for the large differences. As you said, independent random sampling would certainly result in no significant difference, but neither Net Applications' nor StatCounter's samples are random. Also, their samples are mostly consistent over time. When there are vast differences between countries, site topics, and markets, sample size is very important.
To your other point, election polling is much more delicate than you make it out to be. Those samples are also not truly random, but they differ for each poll, which helps. Pollsters also weight the results by location and expected turnout, which is a whole different set of assumptions. If surveying the first 1000 people to answer their phone with consent and using the raw numbers were enough, then there wouldn't be any significant differences between the polls.
A population is a population and a 1% sampling has much less meaning than a 99% sampling, whether or not it is random. A larger sample always has more statistical relevance/confidence.
GeneralMayhem pointed out that this is flat out wrong. Here's an example to demonstrate it:
Consider a population where 99% of users use IE, and 1% of users use Chrome.
From that population you can draw a biased sample consisting of the 99% IE users and conclude that there are no Chrome users at all.
Or you could draw a random sample of 1% of the population, and assuming that 1% adds up to enough people, your chances of drawing a random sample that did not include a reasonable number of Chrome users too would be extremely low.
So depending on methods used, even a 99% sample may be entirely meaningless when compared to a 1% sample.
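The random-sample case above can be checked directly: with 1% Chrome users, the chance that a uniform random sample of n people contains zero Chrome users is (0.99)^n, which collapses very quickly as n grows (a quick sketch, ignoring finite-population corrections):

```python
# Probability that a uniform random sample of size n, drawn from a
# population with 1% Chrome users, contains no Chrome users at all.
for n in (100, 1_000, 10_000):
    p_miss = 0.99 ** n
    print(f"sample of {n:>6}: P(no Chrome users) = {p_miss:.2e}")
```

Even a modest random sample almost certainly catches some Chrome users, while the biased 99% sample misses all of them by construction. That's the whole point: method beats size.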
If you extrapolate the raw numbers, then yes, you can conclude there are no Chrome users at all. But no one would accept this as fact knowing the sampling methodology.
You can, however, still build confidence intervals based on biased samples. In your example, with 100% certainty, between 0-1% of the population uses Chrome. Yet of course, even the 99.999% CI will be wrong due to the severe bias. Now, if in your example only a 1% biased sample were looked at, the 100% CI would be 0-99%. Much less information. Note that you may also still see trends in biased samples if the sample is consistent.
If biased samples were meaningless, then how are Stat Counter or Net Applications results valuable at all since they are not random samples?
That's true, but if the size of your sample is 99% of the population, that sample is always going to be close to random. For all practical purposes it's not actually a sample any longer.
> It is very important for accuracy. Otherwise there wouldn't be a consistent bias between the two.
As mentioned above, they measure different things - page views vs. unique users. That is, usage vs users. There is no reason to expect them to be identical.
That sample size difference could matter depending on their methodology. If they do country-wise segmentation, the sample size for many countries may be very small, and any resulting error would get amplified by the weighting.
40,000 sites / 200 countries = 200 sites per country on average. How many participating sites do they have in smaller countries?
It is true that if they have a tiny sample in a large country, their result could be very wrong for that country, and leaving it small and not size-corrected would at least leave that error small.
But, even 200 sites per country is enough for a proper random sample (that's about the same size as in election surveys). Even more so when you consider that a site is not a single observation, but reports data about all the users visiting it.
>But, even 200 sites per country is enough for a proper random sample
Yes, but my concern is that they may only have a handful of sites (or none at all) for a long tail of smaller countries.
>Even more so when you consider that a site is not a single observation, but reports data about all the users visiting it.
True, so it depends on whether or not users of popular international sites from a particular country are representative of all users from that country.
Exactly. Statcounter puts IE9 at 3.6% and IE10 at 4% which matches actual stats for most websites. It is true that in some countries IE has a much higher share than reported, but the aggregate is correct.
Net Applications was taken off the Wikipedia page about browser market share because their information sources contained fabricated data. They even admitted it in their technical documents.
I wouldn't trust any blogspam that references this data; it's so out of line with other sources that reference billions of pageviews.
These figures make no sense to me. The fact that we do not have meaningful browser statistics makes no sense either. Why is it so hard to get meaningful figures and why do the figures from the 'experts' vary so much?
In some ways 'what browser do people use' is a bit like 'what language do people speak', except it should be easier to measure. Yet, unlike the situation with languages, we have a really hazy idea of what browsers are used. As a developer, it is easy to get things working in your preferred browsers while remaining clueless about what percentage of your audience uses IE, and which versions. Does IE7 matter any more, or IE9? Do people with those browsers almost expect sites to look wrong?
>Yet, unlike the situation with languages, we have a really hazy idea of what browsers are used. //
I use FF27 on Ubuntu, no wait I use FF28 on XP, no IE9 on Win7, no it's Safari, no Opera on Wii, no Opera Mobile, no ... it's all of them. But then I'm a special case; however, I imagine that a lot of people use various browsers - home, work, mobile, etc.
As with language just having general stats is no use - what's it matter that Chinese is so popular across the world if I'm selling canoes to Amazonians?
IE7 matters if your users use it and you want to make sure they have a good experience. What's relevant is not that most sites don't have high IE7 numbers, but what your site has: what will that support cost, and what revenue will it gain?
> These figures make no sense to me. The fact that we do not have meaningful browser statistics makes no sense either. Why is it so hard to get meaningful figures and why do the figures from the 'experts' vary so much?
It is hard to get a random sample of internet usage. There is just no easy way - how do you do it?
In theory you would pick a random router around the world, and pick a packet going through it, and if it comes from a web browser, count that. Then normalize by traffic amount in that router. That would, in theory, give the right result with a large enough sample size. But no one (except perhaps the NSA) can do it.
On the other hand, there are at least two companies that pretty much know the truth - Google and Facebook. A huge percent of people visit those sites daily, so much that they don't need to randomly sample, worry about bias factors, etc. - they can just look at the data they have. So internally, they have the numbers. But apparently they consider that information valuable, and do not share it.
Google is much better positioned than Facebook for one reason: Google Analytics. Many sites, even those which aren't otherwise using Google services, embed it and there's really nothing like its reach because all of the competing services cost money.
Google and Facebook only have good data for the countries in which they have significant market share (think Russia, China… neither of which is a small country). I'll also posit that Facebook is significantly biased towards younger users.
The problem is that not all sites see the same kind of traffic. These figures are from NetMarketshare, which gets most of its data from a modest number of large corporate sites [1] and uses its estimates to adjust traffic country by country. Their numbers skew heavily towards IE, so I'm assuming that their sites must see a lot of business users. If you run a site aimed at home users, you'd see a much greater representation for WebKit, and I'm sure sites like HN, Ars Technica, etc. have a higher percentage of the latest browsers.
I have never worked on a site where the breakdown is close to NetMarketshare's figures. The ones from Akamai have been much closer, particularly internationally:
> Does IE7 matter any more, or IE9? Do people with those browsers almost expect sites to look wrong?
It depends what sort of sites they use the most, and just the sheer number of sites on the Internet, most of which have been around for a long time, means that only the newest ones tend to be incompatible (and as a data point, HN is usable with IE6.)
Can someone explain why the Net Applications numbers are so different from the StatCounter numbers? I would expect some variation in counting "users" versus "page views", but the difference here is stark.
I'm a little surprised that their stats show IE having >50% market share. Most stats show IE having between 18% and 25% market share. My own sites show between 10% (for a tech site) and 15% (for a health site). I guess Net Applications is only sampling corporate users, who still tend to use old IE. My own stats show that IE11 has a higher number of visitors than any other version of IE.
I'd be surprised if more than 50% of computer users (55% IE * 90% Windows) changed their default browser. The vast majority of computer users are non-technical people, and they use the default browser unless someone changes it for them.
But they also browse (far) less compared to people who use alternate browsers. So statistically, your sites have a much higher chance of registering Chrome and Firefox users than IE users.
Add: OTOH, I think StatCounter, Clicky, and Wikimedia stats are more useful. They represent the "active" Internet population better: the ones who are more likely to come to your (or any) website.
It's also so important to filter down to geographic regions you care most about when looking at these stats. It's common to see developers/companies making browser support decisions based on global stats, even when they often derive most of their profits from a much smaller region (with different browser usage stats).
Other commenters have mentioned differences between NetApplications and StatCounter, but I suspect there are other factors that can heavily skew the apparent market share of different browsers.
For example, it is already well known that programming-related websites get a higher than average share of non-IE visitors. I don't think this would affect the likes of NetApplications and StatCounter, because they're used by all sorts of websites. But ...
... it is also much easier to find and install an ad blocker in Chrome and Firefox than it is to do so in IE. Many of those ad blockers and similar extensions will also block the tracking beacons used by NetApplications and StatCounter.
This probably has a negative effect on the apparent market shares of Chrome and Firefox. Especially the latter, I suspect, because lots of people who consciously choose Firefox nowadays do so for privacy reasons. (Can't trust Google anymore, etc.)
FWIW, several DHS department level IT shops are moving to IE11 in a few months. I'm at one where after they've been at IE8 for so many years, they jumped to IE10 last fall and IE11 is around the corner. Love it. It's been so nice not having to support IE < 10 the last several months.
And IE11's dev tools, while still a little clunky to get around, are very comparable to Firefox/Chrome for most of the features.
We also still have some users that haven't upgraded (also smaller resolution monitors) but all less than 10%.
To me this just means that on more modern version of Windows more people than not have enabled auto-updates. I'd guess corporations make up a good chunk of the staid browsers.
IE9 is probably going to stick around for quite a while; it's the last version that Vista users have access to. IE8 should start dropping off at a faster rate after April, but will still be with us due to home users who don't wish to upgrade past XP. What will be interesting to see is whether home users gravitate towards new PCs or tablets.
Most here are forced on IE 8 because PeopleSoft doesn't work on anything greater. It sucks. Luckily, we installed Chrome as a secondary choice for those who want it. And I work in IT so I install whatever I want :P
Sometimes it is corporate policy, meaning if you are on a work computer you can't manually update it yourself.
There are still some legacy applications that require a specific version of IE, or specific IE functionality, in order to work correctly. The cost of "fixing" the application to work correctly may cost too much or be too much effort.
Of course, there may be others who have no idea how to update; no doubt there is a whole group of (older) users where auto-update is turned off and who have no idea the update process exists.