A study, funded by the William and Flora Hewlett Foundation, compared the software-generated ratings given to more than 22,000 short essays, written by junior high school students and high school sophomores, to the ratings given to the same essays by trained human readers.
The differences, across a number of different brands of automated essay scoring (AES) software and essay types, were minute. “The results demonstrated that over all, automated essay scoring was capable of producing scores similar to human scores for extended-response writing items,” the Akron researchers write, “with equal performance for both source-based and traditional writing genre.”
Read the full article here for more information. Imagine a future where undergraduate theses, PhD dissertations, and scientific journal articles are reviewed by robots.
HT: Matt Yglesias.