While the tech world often buzzes with the idea that artificial intelligence will eventually become self-sustaining, those on the front lines of AI training are offering a reality check. Matt Fitzpatrick, the CEO of Invisible Technologies, recently pushed back against the popular narrative that humans will soon be obsolete in the development process.
Speaking on the “20VC” podcast, Fitzpatrick explained that the notion of AI training itself on “synthetic data” without human oversight not only lacks evidence; in his view, it is logically unsound.
Matt Fitzpatrick has faith in human intervention in data creation
According to a Business Insider report, Fitzpatrick argued in a recent interview that human intervention remains essential to data creation. To understand why, it helps to look at how these systems actually learn.

Much of the industry is currently divided between using synthetic data—information created by other AI—and human-led training, often referred to as “Reinforcement Learning from Human Feedback” (RLHF).
Although synthetic data is useful when real-world information is scarce or sensitive, it cannot replace the nuanced judgment of a person. Fitzpatrick, who previously led AI research at McKinsey, argues that model development will likely require “humans in the loop” for decades to come to ensure these systems understand cultural context and specialized professional knowledge.
This perspective is fueling a massive, multibillion-dollar industry. Companies like Invisible, Scale AI, and Surge AI are no longer simply “labeling data.” Rather, they are essentially acting as elite universities for AI. They employ millions of people to teach models everything from advanced mathematics and coding to more human traits like empathy and a sense of humor.
These startups have seen their valuations skyrocket because tech giants have realized that the “internet’s worth of data” was merely the starting line. To get to the finish line—where AI is truly reliable—they need high-quality human guidance.
Human-driven workplaces undergo massive changes as new technologies emerge
However, the nature of this human work is changing. As Garrett Lord, CEO of Handshake, recently pointed out, the industry has moved past the need for generalists. Since AI has already “read” nearly every public book and website, it no longer needs a human to tell it how to summarize a basic paragraph.
Instead, the demand has shifted toward highly specialized experts—doctors, lawyers, and PhD-level scientists—who can provide the “ground truth” for complex problems that the internet alone cannot solve.

Other industry leaders, such as Brendan Foody of Mercor, have echoed this sentiment, noting that the real “secret sauce” of a successful AI company isn’t just the code, but the quality of the people training it. There is a growing consensus that as AI becomes more sophisticated, the “human touch” becomes more valuable, not less. The industry is moving away from simple data entry and toward a specialized economy where human expertise is the most precious resource in the AI supply chain.
Ultimately, the goal of this massive human effort is to ensure that AI remains grounded in reality. Without human mentors to correct errors and provide context, AI models risk becoming echo chambers of their own mistakes. By keeping people at the center of the training process, the industry is ensuring that as technology advances, it stays aligned with human values and real-world accuracy. Rather than being replaced by the machines they build, people are becoming the essential architects of machine intelligence.

