Dec 5, 2011

On "crowd-sourcing" and aid evaluation.

Crowd-sourcing is based on the idea that a task, usually a huge one, can be divided into a large number of very small pieces. Each piece is completed independently by an individual, and when all the small pieces are added together the huge task is completed. Although still a work in progress, Wikipedia is a clear example: people in different locations contribute bit by bit, making small changes here and there, and the result is an enormous website, probably the largest encyclopedia in human history. Another example of crowd-sourcing is Ushahidi, software that gathers information from people reporting from disaster zones. People report via text messages, among other channels, and Ushahidi plots the reports on a map, giving a graphic representation of where the disaster hits hardest. This allows assistance to be targeted more effectively. Other examples include apps that people can use to report corruption in government offices, so that a map of corruption density can be visualized. This matters because government reforms can then target precisely those areas most affected by corruption.
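To make the idea concrete, here is a minimal Python sketch of this kind of density mapping: bucketing geotagged reports into grid cells so the hardest-hit areas stand out. The coordinates and cell size are invented for illustration; this is not Ushahidi's actual implementation.

```python
from collections import Counter

# Hypothetical geotagged reports: (latitude, longitude) pairs,
# e.g. parsed from incoming text messages.
reports = [
    (-1.286, 36.817), (-1.285, 36.820), (-1.290, 36.815),
    (0.313, 32.581), (0.315, 32.583),
]

def density_grid(points, cell_size=0.01):
    """Bucket reports into grid cells so dense areas stand out."""
    grid = Counter()
    for lat, lon in points:
        cell = (round(lat / cell_size), round(lon / cell_size))
        grid[cell] += 1
    return grid

# Cells with the most reports are candidate targets for assistance.
for cell, count in density_grid(reports).most_common(3):
    print(cell, count)
```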

Crowd-sourcing still seems to be in its infancy, but it promises a lot. This article describes how crowd-sourcing is being used to identify the most effective NGOs:
The GlobalGiving Foundation is working on a method to do that. Already famous for its crowdsourcing of global development project funding, starting in 2009 GlobalGiving piloted the Storytelling Project as an experiment to crowdsource impact evaluation to target community members, seeking what they say or would say about the work of development organizations, international and local.
In GlobalGiving’s "game," scribes are told to collect at least two stories about two different events or NGOs who tried to help someone or change something in the community. Scribes are paid 10 to 15 cents per story and can collect 10 to 100 in a month. GlobalGiving then analyzes sets of stories in a dozen different ways to see who is performing the task and who is just sending back junk.
The stories then get fed into Sensemaker, software licensed from U.K.-based Cognitive Edge, along with Wordle and other semantic tools, to reveal patterns and potential biases across stories in aggregate that provide a snapshot of how people talk about change in their community, and to whom they attribute it.
So far, GlobalGiving has collected and analyzed over 26,000 stories from around 5,000 community members in Kenya and Uganda. They’re getting over 1,000 new stories a month from 50 towns and cities across the two countries, and they have plans to expand further.
HT: @owenbarder 
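
Purely as a sketch of the aggregation step, the Python below drops obvious junk submissions with a crude heuristic and counts word frequencies across the remaining stories, roughly the kind of summary a Wordle-style view gives. The sample stories and the junk rule are hypothetical; Sensemaker's pattern analysis is far richer than this.

```python
import re
from collections import Counter

# Hypothetical story texts collected by scribes.
stories = [
    "The water project helped our village get clean water",
    "A local NGO trained farmers and harvests improved",
    "asdf asdf",  # junk submission
]

def looks_like_junk(story, min_words=4):
    """Crude junk filter: very short or highly repetitive texts."""
    words = story.lower().split()
    return len(words) < min_words or len(set(words)) <= len(words) // 2

def word_frequencies(texts):
    """Aggregate word counts across stories, Wordle-style."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

valid = [s for s in stories if not looks_like_junk(s)]
print(word_frequencies(valid).most_common(5))
```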
