Wikipedia has been described as “the encyclopaedia that works in practice but not in theory”. In theory, an encyclopaedia that anyone can edit should suffer from out-of-control trolling and vandalism, and collapse under its own weight almost immediately. In practice, that hasn’t happened: vandals attack Wikipedia daily, yet it remains a very useful source of information, especially when you remember that its real competition is not Britannica or other paid-for sources but the rest of the Internet. Nonetheless, a recent controversy over the malicious insertion of false information into Wikipedia led some to conclude that the very process by which Wikipedia is built has “incurable flaws”.
Are these flaws incurable? The problem – that anyone can say anything – is one the Internet itself suffers from. But in 1996/7, something changed that preserved the Internet’s usefulness in the face of the avalanche of noise and spam threatening to bury it: Google arrived. To users increasingly finding that their searches returned nothing but synthesized advertising pages, Google provided a way to separate the wheat from the chaff. The fundamental innovation that made Google possible is what we now know as a trust metric.
All trust metrics work on the same principle: if I think well of Ann, and Ann thinks well of Bob, I might be more interested in what Bob has to say than just any random Joe. We operate by this principle all the time when we introduce people to others or make recommendations. A trust metric automates the principle, applying it to a very large number of people and recommendations, to find recommendations we would never have been able to find by the manual method. Google’s trust metric simply treated every hyperlink as a recommendation; if lots of trusted pages link to you, then you must be trusted too.
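The link-as-recommendation idea can be sketched as a tiny PageRank-style computation. This is a simplified illustration, not Google’s actual ranking algorithm, and the three-page web below is invented:

```python
# Toy PageRank-style trust metric: every hyperlink is a recommendation,
# and a page's score is fed by the scores of the pages linking to it.
# Simplified illustration only -- not Google's production algorithm.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline; the rest flows along links.
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Invented three-page web: "hub" is linked to by both other pages,
# so it accumulates more trust than "b", which nobody links to.
ranks = pagerank({"a": ["hub"], "b": ["hub", "a"], "hub": ["a"]})
```

The point is the feedback loop: a link from a highly ranked page is worth more than a link from an obscure one, which is exactly the “friend of a friend” principle applied at web scale.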
Better still, Google’s trust metric is attack resistant: it is designed to give good results even when active attackers try to artificially inflate their rank. This simple requirement is one that many trust metrics fail to meet; for example, until a couple of years ago the open source discussion site kuro5hin trivially allowed users to inflate their own rank by creating new “sock puppet” accounts that rated them highly.
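The difference can be sketched with invented data (this is neither kuro5hin’s actual algorithm nor Google’s): a naive metric that simply counts positive ratings is trivially inflated by sock puppets, while a metric that propagates trust outward from a trusted seed account gives the puppets’ ratings no weight, because no trusted account ever vouches for them.

```python
# Sock-puppet attack against two metrics (invented example data).
# "edges" maps each account to the accounts it rates positively.
edges = {
    "seed": ["honest"],                      # the trusted starting account
    "attacker": ["sock1", "sock2", "sock3"],
    "sock1": ["attacker"],                   # puppets all rate their creator
    "sock2": ["attacker"],
    "sock3": ["attacker"],
}

# Naive metric: count incoming positive ratings -- trivially gamed,
# since creating more puppets creates more ratings.
def incoming_count(edges):
    counts = {}
    for rater, targets in edges.items():
        for t in targets:
            counts[t] = counts.get(t, 0) + 1
    return counts

# Attack-resistant sketch: trust flows outward from the seed along
# ratings (a personalized-PageRank-style walk). Puppets that no trusted
# account vouches for never receive any trust to pass on.
def trust_from_seed(edges, seed, damping=0.85, iterations=50):
    nodes = set(edges) | {t for ts in edges.values() for t in ts}
    trust = {n: 0.0 for n in nodes}
    trust[seed] = 1.0
    for _ in range(iterations):
        new = {n: 0.0 for n in nodes}
        new[seed] += 1.0 - damping   # trust is re-injected at the seed
        for node, targets in edges.items():
            if targets:
                share = damping * trust[node] / len(targets)
                for t in targets:
                    new[t] += share
        trust = new
    return trust

counts = incoming_count(edges)          # attacker: 3, honest: 1 -- gamed
trust = trust_from_seed(edges, "seed")  # attacker ends up with zero trust
```

Notice that the attacker can create a thousand puppets without changing the second result: however much the puppets rate each other, none of that trust originates anywhere the seed reaches.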
Can attack resistant trust metrics save Wikipedia? Noted free software author Raph Levien thinks so – he has been researching trust metrics for many years, and expresses his frustration that they are not treated as the primary solution to abusive users despite the amazing success of Google in applying them for this purpose.
I agree with him. Combining trust metrics with wikis opens up some interesting possibilities.