How Good are Humans at Solving CAPTCHAs? A Large Scale Evaluation
Elie Bursztein, Steven Bethard, Celine Fabry, John C. Mitchell, Dan Jurafsky
[email protected], [email protected], [email protected], [email protected], [email protected]
Stanford University
Abstract—Captchas are designed to be easy for humans but hard for machines. However, most recent research has focused only on making them hard for machines. In this paper, we present what is to the best of our knowledge the first large-scale evaluation of captchas from the human perspective, with the goal of assessing how much friction captchas present to the average user. For the purpose of this study we asked workers from Amazon's Mechanical Turk and an underground captcha-breaking service to solve more than 318,000 captchas issued from the 21 most popular captcha schemes (13 image schemes and 8 audio schemes). Analysis of the resulting data reveals that captchas are often difficult for humans, with audio captchas being particularly problematic. We also find some demographic trends indicating, for example, that non-native speakers of English are slower in general and less accurate on English-centric captcha schemes. Evidence from a week's worth of eBay captchas (14,000,000 samples) suggests that the solving accuracies found in our study are close to real-world values, and that improving audio captchas should become a priority, as nearly 1% of all captchas are delivered as audio rather than images. Finally, our study also reveals that it is more effective for an attacker to use Mechanical Turk to solve captchas than an underground service.
I. INTRODUCTION
Completely Automated Public Turing tests to tell Computers and Humans Apart (CAPTCHAs) are widely used by websites to distinguish abusive programs from real human users. Captchas typically present a user with a simple test like reading digits or listening to speech and then ask the user to type in what they saw or heard. The image.