Check out this comparison between Misultin, Mochiweb, Cowboy, Node.JS, and Tornado. Tests such as this one tend to focus on either raw parsing speed or total number of concurrent connections, and which is more important depends on the ultimate application. Or they perfectly tune the tester’s favorite framework and use a stock configuration for the other competitors. For instance, this test doesn’t use multiple cores (which is one of the points of Erlang), and the test server was running locally on the same box as the test client, which could affect the results in unexpected ways. Ironically, Erlang often does poorly in this style of benchmark, as it’s not known for raw language performance and is even weaker at string parsing. However, in this case it comes out on top… and would have been much stronger had it been able to run on multiple cores.
Framework benchmarks are a joy to me for a different reason, however. In theory, dispassionate analysis should prevail in benchmark blog posts. In this case Roberto seems to do a good job of testing even-handedly. But what exactly is the goal of the test? Faster = better; and Erlang is rarely called fast. Pretty = better; and Erlang is never the belle of the ball. Would Roberto have published the results if Misultin were slower? I’m not implying he wouldn’t, but (and this is not to pick on him) I’ve never seen a framework battle royale where the author of one of the contenders published the results and their framework didn’t win. More informed readers, please enlighten me if there are obvious counterexamples.
Incidentally, one of the things I found fascinating about Tim Bray’s Wide Finder was that he didn’t really have a dog in that fight, so the results seemed…objective. I can’t fault his motives even though I can still wonder what exactly was the point of the project.
Taking a step back, I am always struck by the comment threads below these posts. They remind me that most people deal with insecurity; most people self-identify with groups for (often) arbitrary reasons; and most pick a group and then reason backwards as to why they’re a member of that group or school or team or church. Their motivations are mostly unclear internally, much less to others around them. This is very common in the software world. (See my recent interview with Tino Bredden on Erlang and other communities, where we discuss this.) Incidentally, I could easily see myself running a blog on programming sociology even though I’m not a sociologist, because I find group behavior -- particularly group geek behavior, as I am one -- fascinating.
We are all somewhat impervious to new information, preferring the beliefs in which we are already invested. We often ignore new contradictory information, actively argue against it or discount its source, all in an effort to maintain existing evaluations. Reasoning away contradictions this way is psychologically easier than revising our feelings. In this sense, our emotions color how we perceive facts. – David P. Redlawsk
What’s the point? In this project, Erlang seemed faster, uglier (according to some), and had one hand tied behind its back, as it wasn’t allowed to use multiple cores. Does that make me feel happy or sad? Does that make me feel like I need to justify my platform or library? Do I find myself engaging in Motivated Reasoning? Or should I take a step back and realize it probably doesn’t matter either way… and that my response to the test says more about me than about my choice of framework?