Expert systems emerge as the next AI frontier, with OpenAI's latest update enabling ChatGPT to leverage internal expertise. Discover how specialized AI solutions are driving 400% faster growth and unlocking enterprise value.
Are we running out of training data for AI?
Elon Musk says, “We've now exhausted basically the cumulative sum of human knowledge … in AI training.” And his solution—synthetic data, where models generate data themselves—has a place. According to Gartner, 60% of AI projects in 2024 relied on synthetic training data.
But there’s an alternative, more nuanced and specific solution based on this axiom: a technology’s effectiveness is directly proportional to its paired expertise.
Enter Expert Intelligence, an approach that creates an enterprise flywheel of expertise and AI systems: expert data trains AI systems, which experts then fine-tune and which, in turn, “listen” to new data those experts create. With OpenAI’s latest bombshell, our ability to create Expert Intelligence systems has increased dramatically.
In the final months of 2024, OpenAI quietly unveiled something revolutionary. On day two of their holiday-themed "12 Days of OpenAI" event, they introduced reinforcement fine-tuning to ChatGPT—democratizing a capability previously unavailable to external developers, one that will fundamentally change how organizations embed human-powered domain expertise directly into AI models.
Developments like this major ChatGPT announcement clearly indicate where we’re headed: nuanced, expert-driven, vertical AI solutions for domains like healthcare, finance, law, and engineering. Expert Intelligence—the seamless integration of domain expertise and AI systems to create solutions and expert AI models that neither could achieve independently—is the next wave of AI innovation.
It’s hard to pick out the signal from the noise in the AI space, and OpenAI’s announcement was just day two of twelve, so some missed its significance. After all, "reinforcement fine-tuning" is a technical description of an approach to training AI. But here’s what it means in practice: companies can train ChatGPT-powered AI solutions directly on their in-house expertise.
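The raw material for that kind of training is a set of expert-vetted examples. Here is a minimal sketch of how an organization might serialize in-house expert Q&A into chat-style JSONL, a common input format for fine-tuning pipelines; the record fields, helper function, and sample answers are illustrative assumptions, not OpenAI's official schema or tax advice:

```python
import json

# Hypothetical in-house records: questions answered and vetted by domain experts.
expert_records = [
    {
        "question": "Can a W-2 employee deduct a home office?",
        "expert_answer": "Generally no since 2018; the deduction remains "
                         "available to the self-employed.",
        "reviewer": "CPA",
    },
    {
        "question": "Does a single-member LLC need its own EIN?",
        "expert_answer": "Usually only once it has employees; otherwise the "
                         "owner's SSN can often be used for federal filing.",
        "reviewer": "CPA",
    },
]

def to_finetune_jsonl(records):
    """Serialize expert-vetted Q&A into chat-style JSONL lines:
    one JSON object per line, each holding a user/assistant exchange."""
    lines = []
    for r in records:
        example = {
            "messages": [
                {"role": "user", "content": r["question"]},
                {"role": "assistant", "content": r["expert_answer"]},
            ]
        }
        lines.append(json.dumps(example))
    return "\n".join(lines)

print(to_finetune_jsonl(expert_records))
```

The point of the format is that each line captures one unit of expert judgment, which is exactly the asset a general-purpose model lacks.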
https://www.youtube.com/watch?v=yCIYS9fx56U&ab_channel=OpenAI
Organizations implementing AI solutions face a choice defined by training data: general AI versus expert-trained, vertical AI. General AI applications have their uses (generating creative marketing copy, for example), but do you want your doctor using ChatGPT or a model fine-tuned by hundreds of healthcare professionals?
The results of general-AI-only implementations have been predictably disappointing. We've seen error-prone AI transcription tools, hallucinating chatbots, and massive company layoffs that correlate with mismanaged AI applications.
Introducing reinforcement fine-tuning to ChatGPT is a big step out of the Trough of Disillusionment (our current location in the hype cycle) and toward refined, verticalized, human-powered AI applications. Organizations can now amplify their human expertise directly in a publicly available AI application. OpenAI’s announcement will kick off a surge forward in AI designed to incorporate human feedback and expertise.
The result will be solutions that neither AI nor human-only systems could achieve independently: Expert Intelligence. It’s an approach that demands more input, but combining organizational understanding of internal data, expert engagement, and AI processes can create extraordinary outputs.
Behind every successful AI implementation lies a foundation of human expertise, and the quality of that expertise matters. We’re seeing companies move beyond a basic level of human touch, like when Amazon used a hidden workforce of 1,000+ employees in India to augment and monitor its AI stores. The requisite human expertise has increased exponentially for vertical-specific, higher-stakes applications—like coding, investing, and healthcare. (See the video in the prior section, which demonstrates models learning from experts in their respective fields.)
A recent study on AI-powered investment decisions perfectly illustrates this principle. The paper found that when GPT-4 summarizes earnings calls to match investor expertise levels, sophisticated investors (read: expert investors) achieve a remarkable 9.6% improvement in one-year returns, while novices see only a 1.7% gain. This is a crucial point about expert systems in AI: the technology's effectiveness is directly proportional to the domain expertise it's paired with.
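The mechanism behind that gap is easy to make concrete: the same transcript, summarized against different reader profiles. Below is a minimal sketch of an expertise-matched prompt builder; the level names and wording are illustrative assumptions, not the study's actual prompts:

```python
def summary_prompt(transcript: str, expertise: str) -> str:
    """Build an earnings-call summarization prompt tuned to the reader's
    expertise level. Levels and instructions are illustrative only."""
    styles = {
        "novice": "Use plain language and define any financial jargon.",
        "sophisticated": (
            "Assume fluency with financial metrics; focus on guidance "
            "revisions, margin drivers, and management tone."
        ),
    }
    if expertise not in styles:
        raise ValueError(f"unknown expertise level: {expertise!r}")
    return (
        f"Summarize the following earnings call for a {expertise} investor. "
        f"{styles[expertise]}\n\nTranscript:\n{transcript}"
    )
```

The study's finding survives the simplification: the model's value scales with how precisely the prompt encodes the reader's expertise.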
This "expert effect" isn't limited to financial markets. Take Intuit's transformation through Expert Intelligence integration. While many companies chase the latest AI headlines, Intuit quietly built an AI-driven expert platform that combines deep tax expertise with artificial intelligence. The results speak for themselves: they project $1.4 billion in revenue for fiscal year 2024 from their live, expert-based service alone—accounting for 30% of total revenue.
What makes Intuit's approach different? They've created a purpose-built system to amplify their vast store of expertise. Their platform seamlessly trains their AI with tax professionals' knowledge, creating what they call an "AI-driven expert platform." Their system is engineered to learn from and reinforce itself—the delivery of Intuit’s services is their reinforcement fine-tuning.
The net result is a fluid system that shifts work between humans and AI.
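One way to picture such a system is a confidence-gated loop: the model answers routine queries, escalates uncertain ones to a human expert, and banks each expert answer as training data for the next fine-tuning run. A minimal sketch, assuming a simple confidence threshold and record shape (neither reflects Intuit's or any vendor's actual design):

```python
from dataclasses import dataclass, field

@dataclass
class ExpertLoop:
    """Toy expertise flywheel: the AI handles high-confidence queries,
    a human expert handles the rest, and every expert answer is queued
    as a future fine-tuning example."""
    threshold: float = 0.85  # illustrative cutoff, not a recommended value
    training_queue: list = field(default_factory=list)

    def route(self, query, ai_answer, confidence, ask_expert):
        if confidence >= self.threshold:
            return ai_answer  # routine case: the AI responds directly
        expert_answer = ask_expert(query)  # hard case: escalate to a human
        # The expert's answer flows back into the next training run.
        self.training_queue.append({"query": query, "answer": expert_answer})
        return expert_answer
```

Delivering the service and generating the next round of training data become the same act, which is the flywheel the article describes.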
More organizations are turning to the specificity and nuance that expert systems in AI offer, solving domain-specific problems and creating vertical platforms and features. When training large language models, companies like OpenAI and Google increasingly use domain experts to guide and refine their models' understanding. Turing, which partners with OpenAI and Google, employs skilled programmers to "teach" AI coding software through supervised fine-tuning. Similarly, Scale AI hires PhDs, doctors, lawyers, and writers to train AI models in specialized domains.
We’re running out of valuable AI training data, yet many companies possess vast repositories of expert knowledge. This would seem to contradict the top of this piece, but the problem is the quality of that data. Companies have stockpiles of messy, disorganized enterprise data but struggle to utilize it, which only compounds the dearth of valuable training data. Too much noise, not enough signal.
Enterprises face a paradox in the AI era: the more data they accumulate, the harder it becomes to transform that expertise into scalable AI products. According to a recent Wall Street Journal report, organizations are drowning in data—collecting three to four terabytes daily—yet struggling to extract meaningful value from it. "The volume of data is not slowing down. We all are gathering a lot of data everywhere," says Shamim Mohammad, chief information and technology officer at CarMax. "Every company is getting overwhelmed with information."
“The volume of data is not slowing down. We all are gathering a lot of data everywhere. Every company is getting overwhelmed with information.”
—Shamim Mohammad, Chief Information and Technology Officer, CarMax
According to a 2023 Salesforce global survey of 10,000 business leaders, 33% can't generate meaningful insights from their data, and 30% feel overwhelmed by sheer volume. The problem is compounded by the quantity and complexity of modern business data infrastructure. Companies typically juggle hundreds of different cloud applications, creating data silos that fragment institutional knowledge and trap it where it cannot easily be extracted. As Juergen Mueller, chief technology officer of SAP, notes, "No company I meet is saying they have their data fully under control."
A Salesforce graph breaks down barriers to unlocking data value: “A lack of understanding of data” (41%), “A lack of ability to generate insights from data” (33%), and “Too much data” (30%). A primary barrier to effective expert systems in AI is data management.
Since AI relies primarily on a hefty diet of data, the fact that 70% of AI projects fail points to data issues as a central theme. Consider IBM's Watson for Oncology, which promised to revolutionize cancer treatment. By 2021, it was a $62 million cautionary tale, providing "unsafe and incorrect" cancer treatment advice. There has been some contention about whether the data Watson was trained on was real patient data, with reports that only one or two physicians were involved in training it. Additionally, Watson reportedly relied on synthetic training data based on cohorts of patients at Memorial Sloan Kettering Cancer Center rather than real patient data (though IBM continues to claim otherwise). Because Watson was trained on data from a single institution, the AI failed to align with local guidelines elsewhere.
The failure was destined by design—it couldn't effectively integrate broader medical expertise. Expertise wasn’t baked in.
Legacy systems compound these challenges. According to recent Bain & Company data, organizations face three critical barriers: insufficient in-house expertise, inadequate data infrastructure, and existing technology platforms not ready for AI integration. The result? Companies spend up to 80% of their data projects simply cleaning data and recreating context rather than extracting value from their expertise (according to Mueller).
Compounding these core issues is a widespread inability to execute change management well. Only 43% of employees rate their organizations as good at change management—down from 60% in 2019. AI-driven change naturally raises concerns about job security; unhelpfully, Microsoft has begun explicitly marketing AI's potential to reduce headcount, with chief marketing officer Jared Spataro noting that CFOs are asking, "Show me what you took out of our budget by using AI."
Until now, big tech has gone out of its way to avoid stating this so bluntly; a Google AI case study specifically axed the phrase “labor costs” from a blog post. Focusing on cost reduction through automation misses the larger opportunity: creating systems that amplify rather than replace human expertise.
General AI is no longer a meaningful competitive advantage when compared side by side with vertical use cases. The true differentiator will be how effectively organizations can capture and scale their unique domain expertise through AI systems. To be blunt: Expert Intelligence will make or break companies.
Some organizations innovate by creating structured programs that bridge the gap between domain experts and AI systems. For example, Uber's new "Scaled Solutions" division is a network of "nuanced analysts, testers, and independent data operators" across multiple countries, creating a flexible framework for integrating diverse expertise into AI systems. Unlike traditional approaches that treat AI implementation as a purely technical challenge, these programs recognize that success requires new ways of capturing, training, and scaling human knowledge.
Vertical AI solutions are growing 400% faster than traditional AI applications. Companies like PathAI have achieved remarkable success not by implementing better algorithms but by deeply integrating medical expertise into their AI systems. The result? Their technology is now used by 90% of the top 15 biopharma companies. Similarly, Luminance Legal's approach to AI-powered contract review has generated millions in annual savings for their clients while achieving accuracy rates that general-purpose AI solutions can't match.
OpenAI just democratized these strategies.
Better data leads to better AI, but the best way to unlock Expert Intelligence is through a system like the one that Intuit built. Organizations that fail to develop these systems risk becoming irrelevant in a market where competitors leverage their internal expertise for powerful AI-driven products and services.
OpenAI's reinforcement fine-tuning announcement, arriving quietly in the days leading up to Christmas, is another pebble that indicates a landslide. This is a gateway to a new era where organizations can embed their unique domain expertise directly into AI systems, both internal and external.
The tools are now available to anyone. How will you use them?