As we enter AI's Trough of Disillusionment, expert intelligence strategy can move companies beyond general LLMs to vertical solutions that deliver on AI's hype.
As enterprise software bloat reaches critical mass, with 80% of features going unused (representing $29.5B in waste), organizations are discovering that throwing general AI at the problem isn't the solution. While general AI models like Whisper show concerning error rates, a new approach combining human expertise with artificial intelligence is emerging: Expert Intelligence (EI).
Expert Intelligence /ˈekˌspərt inˈteləjəns/ (noun): A human-centric theory and system for digitizing connections with known intents within specific vertical domains, driving guaranteed outcomes and unlocking the value of higher-level human intelligence through AI and automation.
This article explores why today's software products and today's Generative AI implementations often fail to deliver value, and how organizations can leverage Expert Intelligence to create more effective, industry-specific solutions that actually solve real business problems.
In 2006, British mathematician Clive Humby coined the phrase “Data is the new oil,” suggesting that data would be the primary resource driving the economy in the same way oil revolutionized industry—and he was correct. With the advancement of AI, GPT-3.5—the model behind “Chat” (ChatGPT’s new moniker)—alone was trained on roughly 570GB of data pulled from disparate text sources, including Wikipedia.
For decades, we have worked to build AI technology that mimics and unlocks human insights, with data as the foundation of this work. Today, AI algorithms analyze medical imaging like X-rays, MRIs, and CT scans to detect diseases like cancer with increasing accuracy. Self-driving cars use AI to navigate complex environments, promising to reduce accidents and improve traffic flow. AI algorithms analyze billions of real-time transaction patterns to protect us from fraudulent activities. The launch of ChatGPT in November 2022 ushered in a new wave of promise, kicking off an AI arms race leveraging generalized Large Language Model (LLM) technology to improve efficiency, solve complex problems, and drive significant revenue growth across industries.
However, countless other attempts at AI implementation have been far from smooth. What many people miss in Humby’s metaphor is that data—like oil—needs to be refined, validated, and processed to be useful. Data can be dirty, wrong, fictional, or biased. What you put into your engine directly impacts how smoothly it runs—or whether it starts at all.
For many enterprises, the "check engine light" has been flashing for years—getting AI projects off the ground has been challenging or even impossible. This article will explore how companies can avoid the pitfalls of using general Large Language Models (LLMs) across verticals and instead leverage Expert Intelligence (EI) for more effective, industry-specific AI implementations.
In a world where we expect instant gratification, we must step back and understand the cycles of emerging technologies like AI. Yes, these cycles are accelerating, but they follow a familiar pattern that helps organizations manage expectations, develop strategies, and balance investments across their business.
Gartner’s 2024 Hype Cycle for Emerging Technologies illustrates where we are on the path to the Plateau of Productivity, which is reached when a technology generates significant revenue and the product is taken for granted (smartphones, broadband, social media). We are encouraged by the early “game-changing” promises as company after company rolls out flashy demos of their new AI investments (who can forget when Google Duplex's AI assistant successfully booked a haircut appointment in a main stage presentation?). However, as increasing numbers of organizations across the spectrum of sectors attempt to apply these promises to real-world use cases using general LLMs, the check engine light soon comes on.
To be blunt, simply lifting general LLMs and applying them across verticals is dangerous. Just look at the number of hospitals adopting error-prone AI transcription tools despite warnings. OpenAI claimed that Whisper approached "human-level robustness" in audio transcription accuracy. However, as a University of Michigan researcher told the AP (as reported by Ars Technica):
“Whisper created false text in 80 percent of public meeting transcripts examined.”
As Winston Churchill famously said, “Perfection is the enemy of progress.” With proper guardrails in place, a small percentage of errors (hallucinations) may be acceptable. However, as these new models move into mission-critical production environments, companies will have to find new ways to collect, structure, and train data as they navigate the Trough of Disillusionment and unlock the true potential of AI.
To do this, start by carefully examining the rise of enshittification already plaguing the enterprise software market. Enshittification is a concerning trend defined as a gradual decline in quality. Specific to SaaS, this happens in stages as an app or platform trades user experience for ads and other attempts to squeeze money from its user base with new features and pricing tiers, eventually leading to a less useful platform than was initially offered.
Unfortunately, enshittification parallels how most organizations approach AI. Most AI solutions are being integrated into bloated products with complex pricing—SaaS vendors broadcasting features rather than designing solutions based on listening to what their customers are trying to achieve. To make matters worse, these AI features are being built on today's standard foundation: LLMs with overpromised capabilities that, in many cases, produce significant errors.
As organizations grapple with these implementation challenges, a clear pattern emerges: success comes not from broader AI applications, but from more focused, expert-driven approaches.
This scenario will probably feel familiar:
You arrive at work early, ready to be "productive," and boot up the latest version of the enterprise software your team relies on. You're hoping to finally access last quarter's sales data without the system timing out—a persistent issue that's been "under review" for months. Instead, you're met with a cluttered interface and a dizzying array of new features you'll never use. And are those... premium data tiers? Seriously? The basic analytics your team once took for granted are now locked behind an additional subscription wall while flashy new AI-powered "insights" dominate the dashboard. As you sit there staring at the computer screen, watching the loading icon spin endlessly on a simple data query, you can't help but wonder—who exactly is this software built for? It sure doesn't seem to be for you and your team.
Welcome to enshittification.
We’re witnessing the enshittification of everything, especially enterprise software experiences. Technology companies are managing massive amounts of legacy code used to run some of the world's largest organizations while under pressure to grow revenue by adding functionality (features), entering new markets (go horizontal), and adopting emerging technologies (AI). What could possibly go wrong…
Here’s a dirty secret about enterprise software: 80% of features in the typical cloud software product are rarely or never used. The conclusion: about $29.5 billion is wasted. So, to answer the above question (“Who exactly is this software built for?”), it’s not for you anymore. It’s for the investors, the portfolio companies who want to see growth, growth, growth. It’s also for the numerous submarkets that the product has attempted to cater to over time (becoming more horizontal as a result), which frustrates users further as the product becomes increasingly generalized.
This feature-driven horizontal approach drives products further from the initial value they were created to achieve. A different approach was outlined in an analysis of the competition between Qualtrics and SurveyMonkey: Product-Led Growth's Failure - How a Scrappy Utah Software Company Ignored Every Silicon Valley Heuristic and Won Anyway. While many companies continue to invest in features for growth, others are focused on delivering outcomes:
Qualtrics succeeded by "selling impact vs selling technology" and repositioning surveys as "Experience Management," choosing to go "directly to the C-Suite" rather than taking a bottom-up approach through product managers and analysts. Their strategy was people, including a “large professional services arm that now accounts for roughly ⅓ of revenue.” Most importantly, the article highlights the power of positioning: "despite having a more expensive product with similar capabilities to competitors, their positioning empowered their sales teams to win the market and satisfy customers."
The results: Qualtrics outperformed SurveyMonkey by a large margin, achieving the highest rating for Experience Management in G2's Spring 2023 Enterprise Grid®. With over 18,000 customers, including 91 of the Fortune 100, Qualtrics has established itself as the clear market leader in customer and employee experience solutions, despite offering a product very similar to SurveyMonkey's. Read as: they built their company with expertise in the foundation. In dollars, that equated to a $14.8 billion higher valuation in 2021 (Qualtrics was acquired by Silver Lake and CPP Investments in 2023).
Adding professional services—people—to a model based on outcomes is the perfect bedrock for a system that can leverage AI to its full potential. It’s foundational to EI strategy and realizing true value from AI implementation. Companies striving for this must first understand that EI can’t be “tacked on” to a preexisting ineffective strategy; it’s an entirely new way of approaching product creation.
The performance gains of the current generation of LLMs are starting to flatten, and the underwhelming results point to a fundamental challenge facing the AI industry:
“The diminishing supply of high-quality training data and the need to remain relevant in a field as competitive as generative AI.”
We're at the stage of the hype cycle where alternative strategies for implementing standard foundation LLMs need to be considered. To get through the Trough of Disillusionment, companies will have to rethink how they train models as they integrate AI into their products. If not, they risk creating “Clippy 2.0.”
One of the most underutilized resources in the world is human intelligence; if leveraged skillfully, we can create systems and products that unlock our collective potential. I started exploring how organizations tap into a global pool of expert talent in my book Gig Mindset, and I have described how an approach rooted in Expert Intelligence (EI) can take B2B companies from data-rich but insight-poor to organizations that better serve customers and create sustainable value. So what is EI?
EI requires working backward from the desired customer outcome, shifting focus from product features to what customers are actually trying to achieve (The Future Of Customer Success Belongs To The Handyman, Not The Toolbox). Adopting this mindset helps organizations understand where automation and AI can provide the most value to their customers. Today, it might be a professional service; in the future, I believe it's a fully functioning system that blends human expertise with AI, creating a powerful listening mechanism. (Who else knows better what nuances a customer will want from a specific command or query than the experts themselves?)
Suppose you do apply a general LLM to a specific vertical. There's the issue of sheer cost, and scaling laws (the observation that returns from training an LLM on more data diminish past a certain point) also come into play—though this is sometimes the subject of debate.
And if you’re building a vertical AI, how do you know the data you’re feeding it isn’t itself AI-generated? Expert data is at a premium in a noisy data environment. The clear solution is to have the data vetted—or the AI trained—by an actual expert, as in the sketch below.
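As a minimal illustration of what that vetting gate might look like in practice—a sketch, not a prescribed pipeline, with hypothetical source labels and review fields—a simple Python filter can route low-trust samples to domain experts and admit only samples that enough experts have approved:

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    text: str
    source: str                                          # e.g. "customer_ticket", "web_scrape"
    expert_reviews: list = field(default_factory=list)   # list of (reviewer_id, approved) tuples

def needs_expert_review(sample: Sample) -> bool:
    """Route anything from low-trust sources, or anything unreviewed, to a human expert."""
    low_trust_sources = {"web_scrape", "forum", "unknown"}
    return sample.source in low_trust_sources or not sample.expert_reviews

def approved_for_training(sample: Sample, min_approvals: int = 2) -> bool:
    """Only keep samples that enough domain experts have signed off on."""
    approvals = sum(1 for _, ok in sample.expert_reviews if ok)
    return approvals >= min_approvals

# Usage: split a raw corpus into an expert-vetted training set and a review queue.
corpus = [
    Sample("Q4 churn fell after onboarding calls were added.", "customer_ticket",
           [("analyst_a", True), ("analyst_b", True)]),
    Sample("Unverified claim scraped from a forum thread.", "web_scrape"),
]
training_set = [s for s in corpus if approved_for_training(s)]
review_queue = [s for s in corpus if needs_expert_review(s)]
```

The specifics will vary by domain; the point is that provenance and expert sign-off become first-class attributes of the data, not an afterthought.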
With these examples in mind, organizations can begin planning their own EI strategy by considering the following:
Assess Your Vertical Focus - Map your industry's pain points by auditing current product usage, interviewing key customers, and identifying specific processes where expert-powered AI could drive immediate value.
Build Your Expert Network - Launch your expert recruitment campaign by targeting domain specialists in your vertical, establishing clear collaboration frameworks, and creating systems to capture and validate their expertise.
Design Your AI Training Approach - Deploy your AI development strategy by establishing expert-led training protocols, implementing rigorous quality controls, and building feedback mechanisms that continuously refine your AI's performance (see the sketch after this list).
Execute Implementation Timeline - Drive your EI strategy forward through a structured 12-16 month rollout, starting with expert onboarding, advancing through AI training and testing, and scaling to full implementation with regular performance reviews.
Start Small - Create a small, cross-functional team focused on solving a critical customer pain point. Learn from early experiments before investing significantly and scaling.
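To make the training-approach step concrete, here is a minimal Python sketch of one possible feedback mechanism—the file name, field names, and expert ID are hypothetical, and this is only one way such a loop could be wired up: every time a domain expert corrects a model answer, the correction is logged as a reusable example for later fine-tuning or evaluation.

```python
import json
from datetime import datetime, timezone

def capture_expert_correction(prompt: str, model_output: str,
                              expert_output: str, expert_id: str,
                              path: str = "ei_feedback.jsonl") -> dict:
    """Log an expert's correction of a model answer as a reusable training example."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "expert_id": expert_id,
        "prompt": prompt,
        "rejected": model_output,     # what the model said
        "preferred": expert_output,   # what the domain expert says it should be
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Usage: each correction becomes future fine-tuning or evaluation data.
capture_expert_correction(
    prompt="Summarize the renewal terms in this contract clause.",
    model_output="The contract renews automatically every year.",
    expert_output="Renewal is automatic only if notice is not given 60 days prior.",
    expert_id="legal_sme_01",
)
```

Over a structured 12-16 month rollout, a log like this also doubles as the evidence base for the regular performance reviews the timeline step calls for.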
These focus areas will help lay the groundwork for successful experimentation and prioritization of EI strategies. Now, let's dive into verticalization.
We are starting to see vertical AI solutions growing 4X faster than traditional AI applications as companies begin to evaluate implementation strategies. As companies enter the age of AI, they face decisions about the products they are building. Will it be a broad, scalable platform (think Amazon, Google) or a more hyper-focused vertical solution, like AI tailored directly for pathology and diagnostics (PathAI) or market intelligence and research (AlphaSense)? In the article Platforms vs. Verticals and the Next Great Unbundling, Andreessen Horowitz speaks to this trend:
“One of the most effective forms of that competition often comes in the form of newcos who aspire to take chunks out of that emergent platform by better addressing the needs of a specific vertical within that platform—by creating a user experience or business model that’s much more tailored to the unique attributes of that vertical.”
AI has accelerated the need for companies to better understand the strategies necessary to develop end-to-end vertical value and deliver more in-depth solutions. In all but a few circumstances, horizontal platforms become victims of their own success, as witnessed in the now-well-known diagram below documenting the unbundling of Craigslist; the same is happening with the rise of Gen AI.
According to Andreessen Horowitz, “As the platforms grow, their submarkets grow too; their product gets pulled in a million different directions. Users get annoyed with an experience and business that caters to the lowest common denominator. And suddenly, what was previously too small a market to care about is a very interesting place for a standalone company.”
The artificial intelligence landscape is experiencing a similar shift. While horizontal AI solutions continue to make headlines, a more targeted approach—vertical AI—is quietly revolutionizing how enterprises create and capture value using AI. Growing as fast as 400% year-over-year, vertical AI represents a fundamental reimagining of how organizations can leverage artificial intelligence for industry-specific transformation. With vertical SaaS companies already on the rise, incorporating AI in vertical offerings is natural and offers a significant growth opportunity. But how do companies get the valuable data needed to succeed?
To understand how Expert Intelligence works in practice, let's look at how one company is already successfully implementing this approach.
On January 20th, 2019, I watched the NFC Conference Championship at a friend’s house. The New Orleans Saints (NOLA’s my roots) were close to clinching their second Super Bowl appearance. Fate and a bad call would rob them of that chance, but it’s a tax commercial that has stuck with me from that day.
During the first quarter of the game, Intuit spent millions to share how they are adding “people to the software.” While putting people in software seems simple (it’s not), the strategy seemed revolutionary to me at the time, creating a new model for products that need to capture valuable data while improving customer outcomes.
There aren’t many things as complex as the US tax code, which has grown from 400 pages in 1913 to over 4 million words in 2023. Intuit has outlined an EI strategy to navigate this complexity: “Our AI-driven expert platform strategy and 5 Big Bets position Intuit as a mission-critical platform that delivers end-to-end solutions, driving sustained growth.” Intuit projects $1.4 billion in revenue for fiscal year 2024 from its live, expert-based service alone, which accounts for 30% of its total revenue.
As companies look to drive product adoption, we can see these types of strategies across industries, from Amazon's work to accelerate cloud adoption with AWS IQ to Google and others' work to create the next generation of AI software developers.
Much of the allure of AI's "magic" often obscures the human labor behind it (So, Amazon’s ‘AI-powered’ cashier-free shops use a lot of … humans), but human expertise is becoming integrated into AI training as companies start to understand the approaches to unlock AI’s promise.
Businesses rely on massive amounts of data to train AI models—sometimes up to a petabyte of data for a large language model. This data needs human labeling for accuracy, and data annotation teams are critical in ensuring AI models function correctly. However, we are seeing an increase in the level of domain-specific expertise these systems need. Volumes of data plucked from YouTube might work well enough for a general LLM application like ChatGPT. However, for expert applications, like the coding solutions developed by OpenAI and Google, real expertise from real experts is crucial.
For example, OpenAI has been leveraging programmers from a company called Turing to essentially “teach” its AI coding software through supervised fine-tuning (SFT). Google and Anthropic are now also partnered with Turing. They’re all hoping to dramatically improve their AIs’ coding abilities, because doing so would allow their programs to automate vast numbers of white-collar jobs globally. Companies like Scale AI and Surge AI compete with Turing as the demand for highly skilled AI trainers increases. Turing makes no attempt to disguise its long-term vision:
"Turing aims to leverage its code-evaluation business to fulfill its original mission of becoming a cheaper, better version of consulting firms such as Accenture or Boston Consulting Group, which provide businesses with software engineers for specific projects."
Applying AI to a specific domain—the vertical AI approach—requires particular expertise to train and validate your models. Coding and software development isn’t the only arena of AI development demanding human expertise—Scale AI is hiring PhDs, doctors, lawyers, and writers to train its AI. Want to build an AI assistant for surgery prep? Hire a team of doctors to teach your AI. Want to create a contract analysis tool? Hire legal experts to show the AI the nuances of certain legal documents. And that expertise offers an opportunity for more ethical application—experts can recommend training data provided with creator consent.
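For illustration, here is a minimal sketch of how expert-authored examples might be turned into supervised fine-tuning (SFT) records—chat-style prompt/response pairs in JSONL, a shape many fine-tuning pipelines accept. The example content, author IDs, and metadata field are hypothetical, not the actual format used by Turing, OpenAI, or Scale AI:

```python
import json

# Hypothetical expert-authored examples: a domain specialist writes both the
# question and the reference answer, so the fine-tuning signal comes from
# verified expertise rather than scraped text.
expert_examples = [
    {
        "question": "Which pre-op labs are typically reviewed before elective surgery?",
        "answer": "Commonly a CBC, basic metabolic panel, and coagulation studies, "
                  "per the surgeon's and anesthesiologist's protocols.",
        "author": "surgeon_sme_02",
    },
]

# Write chat-style prompt/response records, one JSON object per line.
with open("expert_sft.jsonl", "w", encoding="utf-8") as f:
    for ex in expert_examples:
        record = {
            "messages": [
                {"role": "user", "content": ex["question"]},
                {"role": "assistant", "content": ex["answer"]},
            ],
            "metadata": {"author": ex["author"]},
        }
        f.write(json.dumps(record) + "\n")
```

The point is less the file format than the provenance: every record traces back to a named expert, which is exactly the audit trail that general web-scraped corpora cannot provide.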
We can start to recover from the enshittification of everything, but it will require reevaluating the strategies that got us here. Systemically, EI means creating software based on desired customer outcomes and investing resources in new ways of product development. This fundamentally opposes launching new features to support “growth hacking” and the latest top-of-funnel email campaign.
We’re moving beyond general LLMs and toward more effective, industry-specific AI implementations that combine human expertise with AI. But we’re just getting started. In future installments, my colleagues and I will dive deeper into the more practical and tangible aspects of realizing an EI strategy: culture and change management as foundational aspects, how to approach data collection and management, initiating AI training processes with companies like Turing and Scale, case studies, and more.
We may be entering the Trough of Disillusionment in the hype cycle, but I like to think of it in more optimistic terms: the Great Reassessment. This is an opportunity for organizations and individuals not to become disillusioned but to clarify strategies that lead to the groundbreaking promises that caused the hype in the first place.