The joke wrote itself. “AI, Actually Indians.” Variations of that punchline ricocheted around the internet after a viral post claimed a buzzy startup, lavishly funded by Microsoft and SoftBank, had no artificial intelligence at all, just 700 engineers in India furiously typing behind the curtain. It was a perfect meme for the moment, combining suspicion of tech hype with the oldest illusion in computing, the human inside the machine. It was also, in the crucial details, wrong.
The company at the center of the pile-on was Builder.ai, a once high-flying software platform that promised to help anyone turn an app idea into running code. The meme hardened into a narrative: the AI was fake, the code secretly hand-built offshore. But engineers who worked there told The Pragmatic Engineer that the sensational claim was untrue, and that Builder.ai did in fact build and run a code-generation system, codenamed Natasha, that used large language models like GPT and Claude to scaffold projects, generate code, and run tests (link).
“AI, Actually Indians.” A viral quip, not a forensic finding.
So how did we get here? There is a real story: Builder.ai spiraled into crisis amid allegations that it misled investors on revenue and, according to The Economic Times, even booked fake business with a media partner to pad sales figures (link). The governance questions are serious. But the now-viral claim that the product itself was a Potemkin AI staffed by 700 secret coders collapses under scrutiny. The messier truth is both less meme-able and more instructive.
Humans in the loop are a feature, not a scandal
Modern AI systems, especially those deployed in high-stakes or messy real-world environments, rarely run unattended. They are paired with people who label data, audit outputs, and correct errors. This is not new. Amazon’s cashierless stores famously relied on a large team to review and reconcile edge cases. In 2024, Business Insider reported that “Just Walk Out” receipts were vetted by roughly 1,000 people in India (link).
Amazon’s cashierless vision still needed humans, about 1,000 of them, to keep the system honest.
There is a historical name for the illusion we keep falling for: the Mechanical Turk, the 18th-century automaton that seemed to play chess but hid a person inside. The metaphor endures because it describes a pattern: we over-ascribe competence to the machine and undercount the human labor that makes it work. The difference today is that people are not there to pretend to be the AI; they are there to supervise, to curate data, and to catch the inevitable mistakes when models meet reality.
When should you expect humans in the loop? At least in these cases:
- Safety-critical tasks, like medical billing or fraud detection, where false positives and negatives both cost money.
- Long-tail edge cases, where the data distribution shifts faster than models can be retrained.
- New domains, where ground truth is still being established and labeled.
None of this absolves companies of misrepresentation. But equating any human involvement with “no AI whatsoever” sets up the wrong standard. If anything, responsible builders advertise where humans sit in the loop, and why.
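To make that concrete, the most common version of the pattern is confidence-threshold routing: the model handles what it is sure about, and everything else goes to a person. The sketch below is purely illustrative; the Prediction type, the 0.9 threshold, and the handle function are assumptions, not code from any of the systems mentioned here.

```python
from dataclasses import dataclass

# Illustrative human-in-the-loop routing. All names and numbers are
# assumptions for the sake of the example, not any company's real code.

@dataclass
class Prediction:
    label: str
    confidence: float  # 0.0-1.0, as reported by the model


def handle(prediction: Prediction, threshold: float = 0.9) -> str:
    """Accept the model's answer when it is confident; otherwise escalate."""
    if prediction.confidence >= threshold:
        return f"auto-accepted: {prediction.label}"
    # Below the threshold, the item lands in a review queue staffed by people.
    return f"escalated to human review: {prediction.label}"


print(handle(Prediction("refund_approved", 0.97)))  # handled by the model
print(handle(Prediction("refund_approved", 0.62)))  # handled by a person
```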
What actually broke at Builder.ai
According to engineers who worked on Natasha, Builder.ai’s AI team was small, in the dozens, and focused on an orchestration stack that planned tasks, scaffolded tests, generated code, and created pull requests. The company also maintained a large network of outsourced developers to deliver bespoke features for customers and to stitch together reusable blocks, which is where the “700 engineers” figure likely comes from, not from a hidden call center pretending to be an AI (link).
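For a feel of what an orchestration stack like that means in practice, here is a deliberately simplified sketch: plan the work, scaffold failing tests, generate code, and hand the result to review. The call_llm helper and the stage breakdown are assumptions for illustration, not Natasha’s actual internals.

```python
# Simplified sketch of an LLM-driven build pipeline: plan -> tests -> code.
# call_llm() is a placeholder for a hosted model call (GPT, Claude, etc.);
# nothing here is taken from Builder.ai's codebase.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted large language model."""
    raise NotImplementedError("wire up a model provider here")


def build_feature(spec: str) -> dict:
    plan = call_llm(f"Break this feature into ordered tasks:\n{spec}")
    tests = call_llm(f"Write failing tests for these tasks:\n{plan}")
    code = call_llm(f"Write code that makes these tests pass:\n{tests}")
    # A production system would actually run the tests against the generated
    # code, iterate on failures, and open a pull request for human review.
    return {"plan": plan, "tests": tests, "code": code}
```

The point is not the specific prompts. It is that the humans downstream of a pipeline like this act as reviewers and integrators, not ghostwriters.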
The problem, say those same insiders, was not that there was no AI; it was that the company tried to be too many things at once. Instead of buying commodity tools, Builder.ai reportedly built its own video conferencing, whiteboards, chat, IDE, and even JIRA-like issue tracking. Focus diffused, costs mounted, and revenue growth did not keep pace. Meanwhile, reporting by The Economic Times alleges fabricated sales with a partner to inflate topline numbers, and coverage summarized by The Pragmatic Engineer points to lenders yanking support after troubling audit findings (link, link).
The scandal was not “no AI”; it was governance. If the allegations hold, the core failure mode was classic: aggressive growth narratives colliding with reality, and executives smoothing the gap with accounting tricks. That a misframed meme overshadowed the substantive issues is a symptom of our AI discourse, where the presence of people is treated as fraud rather than as part of the engineering.
How to see through the fog of AI-washing
Distinguishing hype from substance does not require a PhD. It requires better questions. If you are a buyer, investor, or partner sizing up an “AI-powered” product, ask for specifics in plain English. Good teams will have crisp answers, and they will not be offended by the questions.
- Where are humans in the loop, and why? Ask for the oversight model, error handling, and the human-to-machine ratio at steady state.
- What models are in use, and how are they evaluated? Look for internal benchmarks, regression tests, and how often the company swaps models.
- What are the unit economics? GPUs and inference cost money. A believable plan should include per-task costs, latency targets, and quality escape rates; a back-of-the-envelope sketch follows this list.
- What gets built in-house versus bought? Rebuilding commodity tools is a smell unless there is a clear moat.
- How are metrics reported? Revenue recognition, pipeline quality, and churn should be plain, auditable, and consistent.
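On unit economics, a quick calculation is often enough to test whether a vendor’s story hangs together: inference cost per task plus the amortized cost of human review. Every number below is made up for illustration; substitute real token prices, volumes, and review rates.

```python
# Back-of-the-envelope per-task cost with assumed, illustrative numbers.
tokens_per_task = 8_000        # prompt + completion tokens, assumed
price_per_1k_tokens = 0.01     # USD, assumed blended model rate
human_review_rate = 0.10       # fraction of tasks escalated to a person, assumed
cost_per_human_review = 3.00   # USD per reviewed task, assumed fully loaded

inference_cost = tokens_per_task / 1_000 * price_per_1k_tokens
expected_cost = inference_cost + human_review_rate * cost_per_human_review

print(f"inference:      ${inference_cost:.2f} per task")   # $0.08
print(f"expected total: ${expected_cost:.2f} per task")    # $0.38
```

If a pitch cannot survive this level of arithmetic, glossier slides will not save it.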
There are also tells of sincerity. Teams that publish model cards, explain failure modes, and describe labeling operations are usually the ones treating AI as an engineering discipline rather than a slogan. Teams that react defensively to questions about humans in the loop often have a marketing problem, or worse.
It is tempting to read the “Actually Indians” meme as harmless. It is not. It collapses a global, skilled workforce into a punchline and distracts from the real accountability story. More to the point, it misunderstands how frontier systems are built. The future will not be “all AI” or “all humans.” It will be composites, human judgment wrapped around statistical machinery, sometimes elegantly, sometimes clumsily.
Investors have been here before. In every boom, the picks-and-shovels sellers cash in while operators figure out viable use cases. This cycle is no different. The good news is that useful AI work is happening, quietly, inside companies that tell you exactly where the humans are. The lesson from Builder.ai is not that AI is a sham. It is that execution, focus, and honest bookkeeping still decide who makes it out of the hype cycle.
The meme will fade. The engineering tradeoffs will not. The next time someone whispers that an “AI” is just people, the right response is not outrage, it is the follow-up question: which people, doing what work, with what quality controls, and at what cost? The answers tell you more about the health of a business than any demo ever will.
