8 min read • Sep 30, 2020
James is a Big Data Engineer and certified AWS Solutions Architect with a passion for data-driven applications. He's spent the last seven years helping his clients design and implement huge-scale streaming Big Data platforms, Cloud-based analytics stacks, and serverless architectures.
The trendy pull to offer ML solutions may be tempting, but it’s important to avoid falling into a rabbit hole of model complexity and data quality gaps.
Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) seem to be what every tech company, startup, enterprise, and even IT department is doing these days. “We use Machine Learning and Artificial Intelligence to make __ better” - we’ve all heard it before.
To satisfy my curiosity, I decided to look at Google Trends for each of these terms - and sure enough, global search traffic for AI, ML, and DL has increased by more than 400% in the past five years alone. Validation that, if nothing else, this perception of Machine Learning hype is a real thing. Perhaps this is where CTOs and other tech deciders get drawn into a trap, as with so many other trends in tech.
If we’re not doing it, does that mean we’re not an exciting place to work?
If our products don’t incorporate ML, are they less appealing to consumers?
Are we not innovative and trailblazing?
This bears a striking resemblance to the last great thing in technology: Big Data. I’m reminded of something Dan Ariely, a former Economics professor at Duke University, posted on Facebook once. He humorously compared Big Data to teens having sex: “Everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.” This take on data is strikingly accurate for machine learning too.
The relationship between AI, ML, and DL.
Simply put, deep learning is a subset of machine learning, which in turn is a subset of artificial intelligence. The terms tend to be used interchangeably and incorrectly.
In general, AI refers to any system or machine that attempts to simulate part (or all) of human intelligence, including learning, reasoning, and self-correction. Broadly speaking, there are two classes of AI: Weak AI is developed for a specific purpose or task, whereas Strong AI attempts to mimic general intelligence.
Machine Learning is a specific application of AI in which systems learn and improve without explicit programming or fixed rules. For example, ML is used to power the recommendation engine that provides suggestions for customers after learning their previous behaviors. In this way, ML is simply a technique for realizing AI--it’s a method of training algorithms to learn how to make decisions.
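To make "learning from previous behaviors" concrete, here is a deliberately tiny sketch of a recommendation idea: suggest items that other customers bought alongside what a customer already has. The purchase data and item names are invented for illustration, and real recommendation engines use far richer models than simple co-occurrence counting.

```python
# Toy illustration of learning from behavior: count which item pairs
# appear together in past orders, then recommend frequent companions.
from collections import Counter
from itertools import combinations

purchases = [            # each inner list is one customer's order history (invented data)
    ["bread", "butter", "jam"],
    ["bread", "butter"],
    ["bread", "jam"],
    ["tea", "biscuits"],
]

# "Training": count how often each pair of items co-occurs in an order.
co_occurs = Counter()
for order in purchases:
    for a, b in combinations(sorted(set(order)), 2):
        co_occurs[(a, b)] += 1
        co_occurs[(b, a)] += 1

def recommend(item, k=2):
    """Suggest the k items most often bought together with `item`."""
    scores = Counter({b: n for (a, b), n in co_occurs.items() if a == item})
    return [name for name, _ in scores.most_common(k)]
```

No rules were hand-written about bread going with butter; the behavior emerges from the data, which is the essential point of ML.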
Deep learning is a very specific ML technique, referring to a particular approach used to create and train neural networks. DL algorithms are inspired by the structure and processing mechanisms found in the human brain; they attempt to identify patterns and classify various information types.
As an aside, the term data science is also misused and misunderstood. Data science uses scientific and mathematical methods and algorithms to extract knowledge and insight from data. Although not specific to AI, data science is a broader field of applying statistical methods to gain understanding from existing data. To learn more about these types of intelligent technologies, you can read our guidebook article here.
“Data is to ML what gasoline is to the internal combustion engine: essential and non-negotiable.”
So often, the decision to adopt AI is top-down and, as a result, potentially ineffective and costly. Generally, what happens is this: a high-level executive decides that AI is just what the company needs to solve problems X, Y, and Z. This gets passed down the corporate food chain to someone who then figures out how to execute. Usually, this manifests itself in hiring a bunch of brilliant data scientists--most of whom probably have a background in mathematics and have spent years experimenting with data, predicting the future, and solving complicated problems in cool and exciting ways. Driven by a large budget and a mandate for a significant change program, the teams will spend large amounts of time developing complex systems to help these data scientists revolutionize the business with AI’s power.
It’s critical to keep in mind that high-quality data is the fuel and lifeblood of complex data science. Data is to ML what gasoline is to the internal combustion engine: essential and non-negotiable.
The problem here is that the business is constrained not by ambition, technology, money, or ability, but by data. For advanced ML, and even more so DL, to really work, vast quantities of high-quality data are needed. Most businesses simply don’t collect it because they’ve never previously needed to, and storing and processing it was expensive. Sure, there are exceptions: tech-savvy startups, and of course, the Googles, Facebooks, and Amazons. Most enterprises are different, and often these large, ambitious analytics change programs fall flat purely for lack of quality data.
Companies are frequently wasting their time and money on the goal of making AI and ML a core part of their business. The goal is noble, but they’ve missed the point. They’ve picked the tool before looking at the problem. The answer requires confidence but is remarkably simple: focus on use cases. Obsess about use cases.
At Virtasant, we’ve developed a framework for developing and vetting AI and ML use cases. It’s simple, repeatable, and, most importantly - it works. We start not with the technology but with the business. First, we educate the business about what advanced analytics is, and then we ask them to come up with ideas for how it can be applied. Stakeholders know their businesses far better than IT ever can, which is why we always start here. Typically, this is met with enthusiasm and some far-fetched ideas, but we also gather great insight. Next, we ask stakeholders to give examples of how such a solution would improve their business line, and use this to derive value propositions.
This is your starting point. This is your long list.
In parallel with this, we’re going to need some technology - and remember, it doesn’t need to be super complex, since these are early-stage conversations. We’ll need somewhere to store raw business data (an S3 bucket or buckets), a place to perform data experiments (SageMaker notebooks), and a method for transforming and cleaning raw data (Athena queries and Glue jobs). With that, we’re left with the fundamental building blocks for a successful machine learning program.
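As a rough sketch of what that minimal stack looks like in practice, the snippet below submits a simple cleaning query to Athena with boto3, writing results back to S3. The bucket names, database, table, and column are all hypothetical placeholders; a real setup would use your own account's resources and IAM permissions.

```python
# Hypothetical sketch: an early data experiment on the minimal AWS stack.
# Raw data sits in S3; Athena runs SQL over it; results land back in S3.
RAW_TABLE = "raw_events"                            # placeholder Glue/Athena table
RESULTS_BUCKET = "s3://my-athena-results-bucket/"   # placeholder output bucket

def build_cleaning_query(table):
    """Return a simple Athena (Presto SQL) query that drops rows with a
    missing customer_id -- a typical first data-quality step."""
    return f"SELECT * FROM {table} WHERE customer_id IS NOT NULL"

def submit_query(query, database="analytics"):
    """Submit the query asynchronously; Athena writes results to S3.
    boto3 is imported here so the sketch can be read/adapted without
    AWS access until you actually call it."""
    import boto3

    athena = boto3.client("athena")
    return athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": RESULTS_BUCKET},
    )

query = build_cleaning_query(RAW_TABLE)
```

The point is not the specific services - the point is that a bucket, a query engine, and a notebook are enough to start answering "do we have the data for this use case?" without building a platform first.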
You have your use case long list, and you have the foundations of an analytics technology platform; the key now is to figure out which of your use cases can actually deliver value. Perform an initial assessment, spending a maximum of one day per use case: interview business stakeholders and data owners to understand each idea’s feasibility, then prioritize your list.
The next phase is what we call the EDA (exploratory data analysis) phase. An EDA is a time-boxed (usually 2-3 weeks) assessment of a use case. The EDA team should be small - typically a business expert, a data scientist, and a data engineer. The outcome of an EDA should be a “go/no go” on the use case, but it’s essential to answer some key questions:
- What is the business value of the use case?
- What is the chance of success?
- Does the available data support it?
- How much time and effort will it take?
Armed with this information, you’re then able to determine which use cases and ideas make sense to be developed further, and which can simply be thrown away. For some use cases, you may even discover that a custom solution can be replaced with a commoditized ML package, such as Amazon Personalize or Amazon Forecast.
Remember that data is key to AI, ML, and DL. Without significant quantities of high-quality data, your machine learning program will not succeed. The data landscape for almost all businesses is hugely complex, and in many cases nobody can answer the question: do we have enough data to support use case X? This is a problem, because you want to avoid spending hard-earned dollars and several months developing an all-singing, all-dancing analytics platform, only to realize you’ve come up short on data.
How do we solve this? We obsess over use cases, rapidly prioritizing them, assessing them, and starting data experiments early and fast. Remember to time-box these EDAs, and set clear goals at the start: business value, chance of success, data support, and length of time expended. Answer these questions, and while you might eliminate 80% of your initial list, you will be left with winners rather than losers. Follow this approach, and you set the stage for an ML program that is a pragmatic success, rather than an expensive failure.