9 December 2004

Ben Franklin on Recommendations

I came across this: an interesting, and funny, letter - Benjamin Franklin on Recommendations! (PNG image, 2MB)

Thank you to Victor S. Grishchenko.

1 November 2004

Reputation systems promote bad behaviour

"When is reputation bad?". Ely, Fudenberg and Levine discusses a scenario, in a paper of the same title, where the presence of a reputation system may invoke bad behaviour in good players in order to maintain a good reputation. Framing their work within game theory, the authors describe a class of games where:

The key properties are that participation is optional for the short-run players, and that every action of the long-run player that makes the short-run players want to participate has a chance of being interpreted as a signal that the long-run player is "bad."


Paul Resnick has written a detailed review of the paper in his blog - also a recommended layman's read. There are interesting comments from readers along the lines of the age-old game theory vs. irrationality debate.

29 October 2004

Decentralised reputation systems: power to the little voices

I came across an old (July 2002) article by Annalee Newitz in Alternet ("Reputation System"). It throws light on an interesting property of reputation systems: unpopular community members are too quickly sidelined. And:

Sometimes we need to listen to people who have bad reputations. Often they are the critics, the people with a talent for seeing flaws and problems none of us wants to face. Communities can't thrive if they never answer to the least reputable of their members. So, for now I'm waiting for a new community system, one whose wisdom will destroy reputations and replace them with something more meaningful.


This was written when mainstream reputation systems were largely centralised systems, where filters work on the aggregate ratings of community members and unpopular members hardly ever rise above the user's filter threshold. Furthermore, unless a user feels very strongly about siding with an unpopular opinion, or she knows that person well, it is unlikely that she will try to promote these "outcasts" vocally, for fear of losing reputation by association, being ridiculed, or being seen as uncool.

By decentralising the reputation system, however, we are giving unpopular opinions a chance. RSS and decentralised rating models allow each of us to take control of whom we would like to listen to. Ad hoc communities are allowed to form around an opinion or meme which may be unpopular in other circles.

Preferences are also private in decentralised systems, so we can continue to be a fan of an "outcast" without fear of being uncool. Our aggregated RSS feeds, for example, are for our eyes only. But chances are there are others like us - all we have to do is find them on del.icio.us, and liking that unpopular source may not be so uncool anymore.
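
To make the contrast concrete, here is a minimal Python sketch (all source names and scores are hypothetical) of the difference between a centralised filter, which hides a source once its community-wide score drops below a single threshold, and a decentralised one, where each reader applies only her own private trust list:

```python
# Hypothetical feed items carrying a community-wide (aggregate) score.
items = [
    {"source": "popular-blog", "community_score": 4.6},
    {"source": "outcast-critic", "community_score": 1.2},  # unpopular with the crowd
]

def centralised_filter(items, threshold=3.0):
    """One global threshold for everyone: the outcast never gets through."""
    return [i for i in items if i["community_score"] >= threshold]

def decentralised_filter(items, my_trust):
    """Each reader keeps a private trust list that no one else sees."""
    return [i for i in items if my_trust.get(i["source"], 0) >= 1]

# My private preferences: I happen to value the unpopular critic.
my_trust = {"popular-blog": 3, "outcast-critic": 5}

print([i["source"] for i in centralised_filter(items)])              # ['popular-blog']
print([i["source"] for i in decentralised_filter(items, my_trust)])  # both sources
```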

This is the Net taken back to the grassroots level again, where such powers are to be found. Decentralised reputation-enabled systems allow the small voices to be heard again and give them a chance to make an impact.

27 October 2004

Book: Reputation in Artificial Societies: Social Beliefs for Social Order

This book sounds interesting and the topic should certainly be on the "to know" list of all social software developers.

Reputation in Artificial Societies: Social Beliefs for Social Order (Multiagent Systems, Artificial Societies & Simulated Organizations) by Rosaria Conte, Mario Paolucci.

24 October 2004

Reputation systems used by 26% of adult US internet users

"33 million American internet users have reviewed or rated something as part of an online rating system".

Pew Internet & American Life Project found 26% of adult internet users in the US have at some point used an online reputation system.

15 October 2004

Computational Trust

There are some very interesting threads on trust metrics on the trustcomp yahoogroup at the moment (the discussion group for ACM TRECK). Important questions - whether "trust" is the right concept to use, whether eBay's rating system counts as a trust system, and what a trust metric should look like - are being dissected.

If you are serious about these issues, join up.

13 October 2004

Make your life easier... ask someone.

An article on SocialTwister.com, "The Coming of the Database Economy - Hold Onto Your Opinions", reminded me of social science work I was reading on trust. Trust, as some have theorised(*), is what we use when faced with the challenge of managing life's complexities. And reputational information is an important factor when deciding whether an information source is trustworthy. In the context of online information, as the SocialTwister posting said:

... When there are millions upon millions of points of data to consider, knowing which the best is becomes far more important.

... What drives the value of Amazon, for example? In the beginning, it was simply enough to have the database of books since no one else had it. What pushes me back to Amazon, more often than not, however, is not the database (I assume everyone has it now). I am drawn in by things like the User Reviews and Ratings, not to mention, the Recommended Reading lists and other hooks like that. Given too many choices, I often find myself polling constantly for external benchmarks to evaluate with.


Thus it makes sense that the explosive interest (in research, development and usage) in reputation/trust systems should happen now. The amount of information online has crossed the line from scarce, or even manageable, to purely overwhelming. Users are not just connecting to information, but also to people, places and contexts. Information is not just available from well-known centralised repositories but from any connected machine, device and even object.

Something is needed to manage all this complexity, and the user has called on her trusted tools - reputational information and trust.

* see, for example, Niklas Luhmann, Trust: A Mechanism for the Reduction of Social Complexity, in N.Luhmann: Trust and Power: Two Works (Chichester: Wiley, 1979)

4 October 2004

Trust and the Creative Benefactor System

Jason Rohrer, developer of the anonymous P2P app MUTE, wrote an interesting piece (Free Distribution) on how to get paid for creative work without resorting to copyright. One of the suggested methods is the old benefactor system:

One support system used in the past was the benefactor system. In this system, a few high-profile creators received ongoing financial support from wealthy donors. Certain creators forged support relationships with individual donors over many years, and in some cases, over entire lifetimes. Some of these benefactors gave money, while others provided food, housing, and other necessities directly.


Rohrer's twist is that instead of having one wealthy supporter, a creator can use the Net to collect money from a group of donors, since a pure benefactor system is not reliable. Furthermore, instead of one-off payments, ongoing support is crucial to ensuring sufficient funds.

It is a nice idea and I like it. However, I think there is a trust issue that needs to be addressed for this to work, and it relates to how much the potential donor knows about the creator.

Before I decide to donate money to a creator, how will I know that she will continue to carry out productive work while I am supporting her? The amount of money I am willing to donate is irrelevant here; the more important question is the reliability of the creator and my trust in her. If I don't know this creator well enough, I will find it difficult to enter into an ongoing donation arrangement with her.

Perhaps some kind of reporting or monitoring mechanism could be employed, but this is only useful if the report is public and already has some history for me to make decisions on. Still, this is not a great solution because it is based on feelings of distrust and may breed animosity.

Another solution may involve publicising the number of people who are already supporting the creator - a kind of reputation system. Knowing that a creator is "popular" may give others confidence in also supporting her, although an empty list may have the opposite of the desired effect! The current total of funds raised may be another way of showing this, but it may work against donations if potential donors deem the total already more than enough.

Trust should be allowed to build progressively, so that prospective donors can reach a point where they are willing to donate. This is similar to the process described by Chris Allen in his blog, Progressive Trust. Transparency about the creator - a kind of trust CV - will help kick-start the trusting relationship. Then allow the prospective donor to find out for himself the benefits of the creative project the creator wants support for.

This is best achieved by ongoing dialogue between the two, which is critical in the early stages of any trust relationship. Dialogue can be carried out in many ways: through free use of the software/creation, community bulletin boards, personal emails or calls, and by listening to users and including their feedback in future creative work. These are all existing and proven techniques that can support effective trust-building dialogue.

An advantage of this progressive trust-building method is that, because you are allowing the donor time and space to build trust in you, the amount they end up donating may be greater than the minimum proposed amount. Creators are afraid to ask for larger amounts because it doesn't make sense to ask this of people (prospective donors) they don't know. But once trust has been built, the value of the relationship will dictate the amount the donor is willing to donate, and chances are it will be much greater than $1/month.


1 October 2004

Transitiwiki

A wiki has been set up. Called TransitiveWorld, its role is to collect real-world examples of transitive relationships and referrals in action. The lack of specific examples in the literature prompted this, and it will also be a useful reference.

Anyone can contribute: http://www.riters.com/transitiveworld

File sharing in Social Networks

Found this interesting article (OpenP2P.com: Next-Generation File Sharing with Social Networks) by Robert Kaye on OpenP2P that looks at what a file sharing network based on closed social networks would look like.

The "network policy" problem is probably the trickiest bit here - a "tribe leader" must come up with a policy that balances openness to new members and resistance to intruders. Robert hinted at continued monitoring to ensure network integrity but this seem like a lot of work for one "tribe leader" (even the mob hires an army of heavies to do this job).

The key is in the reputation model. Such a network would probably need a model that (see the sketch after this list):

- is expressive enough that natural trust assertions can be made
- allows policies to be expressed in its terms
- indicates accountability for recommendations
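
As a rough illustration of what those three properties might look like in practice, here is a small Python sketch; the schema, field names and admission policy are hypothetical, not taken from Robert Kaye's article:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrustAssertion:
    """One statement by one member about another (hypothetical schema)."""
    truster: str                       # who makes the assertion (accountability)
    trustee: str                       # who the assertion is about
    context: str                       # expressive: what the trust is for
    level: int                         # expressive: graded, not just yes/no
    referred_by: Optional[str] = None  # recommendation chain, so blame can be traced

def admit_to_tribe(assertions, candidate, min_level=3, min_vouchers=2):
    """A policy stated in the model's own terms: a candidate needs at least
    min_vouchers existing members asserting trust at min_level or above."""
    vouchers = {a.truster for a in assertions
                if a.trustee == candidate and a.level >= min_level}
    return len(vouchers) >= min_vouchers

# Example: two members vouch for a newcomer, one via a referral from the other.
assertions = [
    TrustAssertion("alice", "newcomer", "file-sharing", 4),
    TrustAssertion("bob", "newcomer", "file-sharing", 3, referred_by="alice"),
]
print(admit_to_tribe(assertions, "newcomer"))  # True
```

Because every assertion names its maker (and, optionally, who referred it), a "tribe leader" can trace a bad admission back to the members who vouched for it, which takes some of the monitoring burden off a single person.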

Also, Clay Shirky's article (File-sharing Goes Social) has a good example of how such a network can be attacked.