Mechanical Turk is twenty years old today. What did you create with it?

44 comments

November 2, 2025

MTurk was built by two two-pizza teams at AWS over the course of a year and launched on Nov 2, 2005. It took a few days for people to find it and catch on, but then things got busy.

At the time, AWS was about 100 people (when you were on call, you were on call for all of AWS), Amazon had just hit 10,000 employees, S3 was still in private beta, and EC2 was a whitepaper.

What did you create with MTurk and the incredibly patient hard-working workforce behind it?

frmersdog

If there's any justice, a good number of comments will focus on the ethical nightmare MTurk turned out to be. Apologies to the people who worked on it, but it's fair and appropriate for observers to point out when someone has spent their time and energy creating something that is a net negative for the state of society. That's probably the case here.

linkregister

If mturk workers had better opportunities, they'd take them. mturk is competing with local economies in low-opportunity locales. It is rational to work in a cybercafe doing rote web tasks for 8 hours if you'd receive the same amount of money performing manual labor.

larodi

Happily, I can state that we created nothing based on MTurk, as it had this negative ethical side to it from day one.

maxrmk

What do you see as net negative about it? I’m familiar with the product but not that aware of how it’s been used.

akerl_

It's basically a way for people to externalize tasks that require a human but pay fractions of what it would cost to actually employ those humans.

Mechanical Turk was one of the early entrants into "how can we rebrand outsourcing low-skill labor to impoverished people and pay them the absolute bare minimum as the gig economy".

amelius

Yes, and "use the output of MTurk workers to make themselves redundant."

crossbody

"probably". Care to provide reasoning or is this just a knee jerk reaction? Are you familiar with the service and how it works?

edoceo

These are extraordinary claims (yea?). I'm sure there are great stories of opportunity creation and destruction - how could we even measure the net effect?

pvankessel

I used MTurk heavily in its heyday for data annotation - it was an invaluable tool for collecting training data for large-scale research projects, and I honestly have to credit it with enabling most of my early career triumphs. We labeled and classified hundreds of thousands of tweets, Facebook posts, news articles, YouTube videos - you name it.

Sure, there were bad actors who gave us fake data, but with the right qualifications and timing checks, and if you assigned multiple Turkers (3-5) to each task, you could get very reliable results, with inter-rater reliability that matched that of experts. Wisdom of the crowd, or the law of averages, I suppose. Paying a living wage also helped - the community always got extremely excited when our HITs dropped and was very engaged. I loved getting thank-yous and insightful clarifying questions in our inbox.

For most of this kind of work, I now use AI and get comparable results, but back in the day, MTurk was pure magic if you knew how to use it to its full potential. Truthfully, I really miss it - hitting a button to launch 50k HITs and seeing the results slowly pour in overnight (and frantically spot-checking to make sure you weren't setting $20k on fire) was about as much of a rush as you can get in the social science research world.
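
A common way to check a claim like "matched that of experts" is to majority-vote the redundant crowd labels and compare the winners against a small expert-labeled gold subset. A minimal sketch of that validation, with entirely made-up data:

```python
from collections import Counter

def majority(labels):
    """Most common label wins; ties break arbitrarily."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical crowd labels (3-5 Turkers per item) and expert gold labels.
crowd = {
    "post_1": ["toxic", "toxic", "ok"],
    "post_2": ["ok", "ok", "ok", "toxic"],
    "post_3": ["toxic", "toxic", "toxic"],
}
experts = {"post_1": "toxic", "post_2": "ok", "post_3": "toxic"}

matches = sum(majority(crowd[i]) == gold for i, gold in experts.items())
print(f"crowd-vs-expert agreement: {matches / len(experts):.0%}")  # 100%
```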

social_quotient

I’ve run millions of jobs on MTurk.

For a major mall operator in the USA, we had an issue with tenants keeping their store hours in sync between the mall site and their own site. So we deployed MTurk workers in redundant multiples for each retail listing… 22k stores at the time, checked weekly from October through mid-January.

Another use case: figuring out whether a restaurant had OpenTable as an option. This also changes from time to time, so we'd check weekly via MTurk, 52 weeks a year, across over 100 malls. Far fewer in quantity - think 200-300 - but still more work than you'd want to staff.

A fun, more nuanced use case: in retail mall listings, there's typically a link to the retailer's website. For GAP, no problem… it's stable. But for random retailers (think kiosk operators), sometimes they'd lose their domain, which would then get forwarded to an adult site. The risk here is extremely high. So daily we would hit all retailer website links to determine if they contained adult or objectionable content. If flagged, we'd first send to MTurk for confirmation, then to client management for final determination. In the age of AI this would be very different, but the number of false positives was comical. Take a typical lingerie retailer and send its homepage to a skin-detection algorithm… you'd maybe be surprised how many common retailers have NSFW homepages.
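
For flavor, here's roughly what that human-confirmation step would look like against today's MTurk API. boto3's create_hit and the ExternalQuestion XML schema are real; the review page URL, reward, and the upstream classifier are placeholders:

```python
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# ExternalQuestion is MTurk's XML wrapper around a form you host yourself.
QUESTION = """\
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://reviewer.example.com/check?url={url}</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

def confirm_with_humans(flagged_urls):
    """One HIT per machine-flagged URL, so humans make the final call."""
    hit_ids = []
    for url in flagged_urls:
        resp = mturk.create_hit(
            Title="Does this retailer page contain adult content?",
            Description="Open the linked page and answer yes or no.",
            Reward="0.05",                    # USD, passed as a string
            MaxAssignments=3,                 # redundant workers per URL
            LifetimeInSeconds=86400,
            AssignmentDurationInSeconds=300,
            Question=QUESTION.format(url=url),
        )
        hit_ids.append(resp["HIT"]["HITId"])
    return hit_ids
```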

Now some pro tips I’ll leave you with.

- Any job worth doing on mturk is worth paying a decent amount of money for.

- never run a job once. Run it 3-5 times and then build a consensus algo on the results to get confidence (see the sketch after this list)

- assume they will automate things you would not have assumed automatable, and be ready to get some junk results at scale

- think deeply on the flow and reduce the steps as much as possible.

- similar to how I manage AI now: consider how you can prove they did the work, if you needed a real human and not an automation.
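
The consensus step from the second tip, as a minimal sketch (toy answers, made-up agreement threshold):

```python
from collections import Counter

def consensus(answers, min_agreement=0.6):
    """Majority-vote one task's redundant answers. A winner below the
    agreement threshold means: re-run the task or escalate to review."""
    winner, votes = Counter(answers).most_common(1)[0]
    confidence = votes / len(answers)
    return (winner if confidence >= min_agreement else None), confidence

# E.g. one store-hours check, run five times:
print(consensus(["9-5", "9-5", "9-5", "9-6", "9-5"]))  # ('9-5', 0.8)
print(consensus(["yes", "no", "yes", "no"]))           # (None, 0.5)
```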

pvankessel

The automation one is so true! When I first deployed a huge job to MTurk, with so much money on the line, I wanted to be careful, so I wrote some heuristics to auto-ban Turkers who worked their way through the HITs suspiciously quickly (2 standard deviations above the norm, IIRC) - and damn did I wake up to a BUNCH of angry (but kind) emails. Turns out, there was a popular hotkey programming tool that Turk Masters used to work through the more prized HITs more efficiently, and on one of their forums someone had shared a script for ours. I checked their work and it was quality; they were just hyper-optimizing. It was reassuring to see how much they cared about doing a good job.
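
That heuristic is easy to picture: a z-score cutoff on per-worker completion times, something like this sketch (numbers invented, and flagging for review is kinder than auto-banning):

```python
import statistics

def flag_speeders(mean_seconds, z_cutoff=-2.0):
    """Flag workers whose average seconds-per-HIT sit 2+ standard
    deviations below the population mean, i.e. suspiciously fast."""
    mu = statistics.mean(mean_seconds.values())
    sigma = statistics.stdev(mean_seconds.values())
    return [w for w, m in mean_seconds.items() if (m - mu) / sigma <= z_cutoff]

# Hypothetical average completion times, in seconds per HIT:
times = {"w1": 42, "w2": 38, "w3": 45, "w4": 40,
         "w5": 44, "w6": 41, "w7": 39, "w8": 6}  # w8: bot, or hotkey wizard?
print(flag_speeders(times))  # ['w8']
```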

comrade1234

My wife had dozens (well, probably over 100) of handwritten recipes from a dead relative. They were pretty difficult to read. I scanned them and used mturk to have them transcribed.

Most of the work was done by one person - I think she was a woman in the Midwest; it's been like 15 years, so the details are hazy. A few recipes were transcribed by people overseas, but they didn't stick with it. I had to reject only one transcription.

I used mturk in some work projects too, but those were boring and maybe also a little unethical (basically paying people $0.50 to give us all of their Facebook graph data, for example).

cactusplant7374

Do you think ChatGPT could do the same work now? It would be interesting to try it.

fsniper

I used Gemini to decode and transcribe an old (and well-known) handwritten cursive letter. I couldn't read it at all. It managed to do this in a few seconds. I'm not sure if it used an already-available transcription or not; if not, it was amazing work.

mtmail

We asked users to evaluate 300x300-pixel maps. Users were shown two images and had to decide which better matched the title we chose. Answers were something like "left", "right", "both same", "I don't know". Due to a misconfiguration, the images didn't load for users (they only loaded on our internal network). Still, we got plenty of "left" and "right" answers. Random and unusable. Our own fault, of course.

ruralfam

Used it to capture respondent data for a unique research tool we run. Got good results. Had to custom-code all the server/client interactions to handle MTurk's requirements. Went well. Still use the content from MTurk users as a demo of "...how to get unique insights from your consumers". As things progressed, we stopped using it. However, all our project setup/server/client code still has variables/functions that start with mturk_. Not causing any issues, so there they sit. I feel guilty every time I think about not having cleansed the code. BTW: just added new custom code for Prolific, hoping to test their respondents this week. Prolific's effect on the code was nothing compared to interacting with MTurk's servers, though.

goykasi

During the beta, the only consistent HITs were to identify an album based on a picture and 4 or 5 choices (if I'm remembering correctly). These paid pretty well since the workforce volume was very low and the service was brand new. Well, I noticed that the image link contained an ASIN. So I wrote a Greasemonkey script that would look it up on Amazon and highlight the most likely correct answer. I then turned around and shared it with the forum I frequented. It became extremely popular and spread to other forums before we moved it to a private forum. The damage was already done, though.

People kept asking me to automate it, but I felt it was against the spirit of mTurk. So another member would take my updates and add an auto-clicker. That lasted for a couple of weeks at most before the HIT volume dried up and very few would be released. I guess Amazon caught on to what was happening. But before that, several forum members made enough to get some high-dollar items: laptops, speakers, etc. Eventually, I relented and created a wishlist. That's how I ended up with the box sets for the first run of Futurama seasons.

danpalmer

I have looked at MTurk many times throughout my career. In particular my previous company had a lot of data cleaning, scraping, product tagging, image description, and machine learning built on these. This was all pre-LLM. MTurk always felt like it would be a great solution.

But every time I looked at it, I persuaded myself out of it. The docs really downplayed the level of critical thinking we could expect; they made it clear that you couldn't trust any result to even human-error levels - you needed to run each task 3-5 times and "vote". You couldn't really get good results for unstructured outputs; instead, it was designed around classification across a small number of options. The bidding also made pricing hard to estimate.

In the end we hired a company that sat somewhere between MTurk and fully skilled outsourcing. We trained the team in our specific needs, and they would work through data processing when available, asking clarifying questions on Slack and referencing a huge Google Doc we kept with various disambiguations and edge cases documented. They were excellent. More expensive than MTurk on the surface, but likely cheaper in the long run, because the results were essentially as correct as anyone could get them and we didn't need to check their work much.

In this way I wonder if MTurk never found great product-market fit. It languished in AWS's portfolio for most of its 20 years. Maybe it was just too limited?

ebcase

When we first created Domainr (then domai.nr, now domainr.com) back in 2008, we needed a list of “zones under which domain registrations were somehow possible.” E.g. not just the root zone list from IANA, but all the .co., .edu., .net., etc. variants. We found what we could from Wikipedia, and used Mturk to find the rest from registry websites, etc.

It wasn’t perfect, but it didn’t need to be. We essentially needed a “good enough to start with” dataset that we could refine going forward. It got the job done.

stevejb

Using the Prosper.com data set (a peer-to-peer lending market), I used MTurk to analyze the images of people applying for a loan. This was used in a research project with three University of Washington professors of finance.

The idea was that the Prosper data set contained all of the information that a lending officer would have, but they also had user-submitted pictures. We wanted to see if there was value in the information conveyed in the pictures. For example, if they had a puppy or a child in the picture, did this increase the probability that the loan would get funded? That sort of thing. It was a very fun project!

Paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1343275

hiddencost

Yikes. Have you ever considered that you were hurting people?

xwowsersx

How so? Read the paper. The methodology was entirely observational. They did not intervene in the prosper.com loan market or interact with the borrowers. If anything, the paper identified a form of bias that exists in the real world, namely that people commonly "perceived" as less trustworthy are penalized despite their actual creditworthiness.

akerl_

The paper is a study of an existing market. They looked at data about people who had requested loans and data about which of those loans were funded, with the intent of seeing whether or not lenders were being biased by requester photos. They found that they were.

Say more about how studying that bias is hurting people?

electroly

Several times, I had MTurk workers transcribe a yearly printed pricing catalog that was a boon to our small business. Inconsistently-structured tabular data intermixed with pictures that OCR of the day did a terrible job with.

Later, we needed to choose the best-looking product picture from a series of possible pictures (collected online) for every SKU, for use in our website's inventory browser. MTurk to the rescue - their human taste was perfect, and it was effortless on my part.

Neither of these were earthshattering from a tech perspective, and I'm sure these days AI could do it, but back then MTurk was the perfect solution. Humans make both random and consistent errors and it was kinda fun to learn how to deal with both kinds of error. I learned lots of little tricks to lower the error rate. As a rule, I always paid out erroneous submissions (you can choose to reject them but it's easier to just pay for all submissions) and just worked to improve my prompts. I never had anyone maliciously or intentionally try to submit incomplete or wrong work, but lots of "junk" happens with the best of intentions.
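
The approve-everything policy maps to a couple of calls against the real MTurk API via boto3; this is just a sketch (pagination omitted, HIT ID hypothetical):

```python
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

def approve_all(hit_id):
    """Pay every submitted assignment on a HIT, junk included -
    rejections burn worker goodwill; better prompts fix error rates."""
    resp = mturk.list_assignments_for_hit(
        HITId=hit_id, AssignmentStatuses=["Submitted"]
    )
    for assignment in resp["Assignments"]:
        mturk.approve_assignment(AssignmentId=assignment["AssignmentId"])
```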

rzzzt

I'm neither a participant nor a creator, just remembering: "Bicycle Built for Two Thousand" recreated IBM's "Daisy Bell" by asking each person to take a short snippet and sing their part: https://youtu.be/Gz4OTFeE5JY

Delightful.