Given that I found the performance of Chrome, WebKit's SquirrelFish, and Firefox 3.1 to be on par, I started to wonder why so many sources claim Chrome has the fastest engine out there.
The reason seems to be this benchmark. Instead of using SunSpider or some other existing benchmark, the V8 team created their own benchmark and tuned V8 to perform well with that particular benchmark in mind. And of course, all the excited bloggers are quoting those numbers.
This has paid off on that benchmark, of course: WebKit scores 391 points, Firefox 3.1 gets 162, and V8 goes all the way to 1927.
So what’s wrong with this picture? The problem is that, looking at the benchmark’s description, nothing in the V8 benchmark is meaningful for a web browser. I’m sure getting big numbers is a cool thing to present at work and earns you the Nerd Bonus, but it also means the team has forgotten the goal they should have had when implementing the engine: a high-quality web browsing experience. This is in stark contrast with SunSpider, which aims to benchmark real-world use cases, so a good SunSpider score should translate into more comfort when actually using the browser.
In my opinion, the V8 team has defined their success metrics based on the wrong thing. They should have aimed for the best possible browsing experience, and hence tuned the engine against benchmarks like SunSpider and Dromaeo.
Setting the wrong goals seems to be a general issue in any industry, since choosing the right metrics for success requires a deep understanding of the problem at hand and an objective approach to solving it. At worst, you define goals merely to please someone in your organization with impressive numbers. Someone I know was working as a networking system developer and switched jobs after being forced to optimize for throughput benchmarks (to please his boss’s boss) rather than for real-world use cases.
Coming up soon: the questions I iterate through to understand a problem.