What Is a Minimum Viable AI Product?
To be successful, an AI pilot needs to go beyond the basic requirements of standard IT projects.
One of the key attributes of the lean startup approach popularized by Steve Blank and Eric Ries is the development and refinement of a minimum viable product (MVP) that engages customer and investor attention without large product development expenditures. Initially defined by technologist Frank Robinson, an MVP may not meet all customer needs, but it offers enough capabilities for highly interested customers to get started. It’s a paradigm that has become well established in technology product development.
But what does the concept of an MVP mean for artificial intelligence? This is a question that is relevant not only to startups but also to large enterprises. Many companies are developing AI pilots in different areas of their business that they hope will demonstrate the potential value of AI and eventually lead to production deployment. An MVP for a large organization has many similarities to a pilot or proof of concept.
For any organization pursuing AI, then, it’s important to understand what constitutes a successful MVP. It’s equally important to a venture capital firm that invests primarily in AI companies — like Glasswing Ventures, with which we are both involved — to understand AI MVPs and what it takes to improve them. Based on several Glasswing portfolio companies and others we’ve researched, we’d argue that while some of the necessary attributes are true of IT products in general — that it’s useful even in its earliest stages, that customers’ early use can be monitored in order to improve the product, and that it can be developed relatively quickly and cheaply — early AI products have some unique requirements in terms of what qualifies them for MVP status.
Data and the MVP
Machine learning is a common underlying technology for AI, and it improves via copious amounts of data. Supervised learning, by far the most common type in business, requires data with labeled outcomes. Therefore, data is perhaps the single most critical resource for an AI product, and it is necessary even at the MVP stage. With no data, there is no trained machine learning algorithm.
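To make that dependency concrete, here is a minimal sketch of supervised learning on labeled outcomes. The churn framing, features, and labels are invented purely for illustration; the point is simply that without the labels there is nothing for the algorithm to learn from.

```python
# A minimal sketch of supervised learning on labeled outcomes.
# The features, labels, and churn framing are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row is one historical example (e.g., months of tenure, total spend).
X = [[12, 340.0], [3, 80.5], [24, 910.0], [1, 15.0], [18, 560.0], [2, 40.0]]
# Labeled outcomes: 1 = the customer churned, 0 = they stayed.
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)   # training is impossible without the labels in y
print(model.predict([[6, 120.0]]))       # predicted outcome for a new customer
```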
Anyone attempting to create an AI MVP should be able to answer the following types of questions — and investors or enterprise sponsors should be asking them:
- What data assets do your primary models rely on for training?
- Do you already have sufficient data to train a somewhat effective model? (More later on why “somewhat effective” may be ample.)
- How proprietary is the data used to train your models?
- How much data integration, cleaning, and other activities remain to be performed on your data before it is useful for training?
- Do you envision that additional data will become available to improve your models at some point?
Machine learning algorithms or models themselves are becoming somewhat commoditized. One provider of automated machine learning software, DataRobot, advertises that it has created over a billion models (though not all of them are being used, of course). But data remains a more challenging resource; cleaning, integrating, and transforming it into usable formats can require enormous effort. And if the data source used by an early minimally viable AI product is broadly available — for example, the ImageNet database of labeled images — it is unlikely to provide much competitive advantage.
An example of valuable proprietary data is the information used by Armored Things, a startup in the Glasswing portfolio. Armored Things’ customers are major event venues and campuses looking to improve their physical security as well as their facilities and operations management. The company’s AI combines data from existing video, Wi-Fi, smart door locks, and other sensors into a “spatial intelligence layer” that underpins a real-time crowd intelligence platform. This unique data set provides visibility into how people use and move through physical spaces, and it helped push this young company’s offering to MVP status.
The Los Angeles Football Club professional soccer team is using Armored Things to gain a real-time understanding of fan flow and to make smarter decisions about crowd density, sanitation, and security for the club’s 22,000-seat venue, one of the most high-tech settings for professional sports. Such technology is crucial as fans begin returning to sporting events after the disruption caused by COVID-19. Fast data analysis and action are integral to building trust and optimizing a safe fan experience.
Intelligence Beyond Data and an Algorithm
Machine learning alone — and deep learning in particular — is often not enough to create effective AI, even when coupled with clean, proprietary data. Machine learning solutions to problems involving perceptual tasks (speech, vision), control (robotics), and prediction (customer demand planning) vary greatly in tractability and complexity.
Early AI products may need to focus on the following four areas in order to achieve minimum viability.
1. AI MVPs may require complex hybrid models. Challenges such as modeling human dialogue, where training data is often sparse, are unlikely to be solved with brute-force approaches. In such cases, it may be more practical, when reaching for an MVP, to use hybrid solutions that combine deep learning with a priori knowledge modeling and rules-based logical reasoning. Such hybrids are less complex and require less data than deep learning alone, and they offer greater transparency. They are rarely available off the shelf, however, so founders should weigh the exploratory research they will require.
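As a rough illustration of the hybrid idea (not any particular vendor’s implementation), the sketch below combines a learned model’s prediction with explicit, transparent rules. The intent-detection framing, the rule, and the confidence threshold are assumptions made for this example.

```python
# A simplified hybrid: a learned model's output is checked against explicit
# rules that encode a priori knowledge. All names and thresholds are hypothetical.
def learned_intent(utterance):
    """Stand-in for a trained classifier returning (intent, confidence)."""
    return "cancel_account", 0.55

def apply_rules(utterance, intent, confidence):
    if "refund" in utterance.lower():
        return "refund_request"        # domain rule overrides a weak prediction
    if confidence < 0.6:
        return "route_to_human"        # low confidence falls back safely
    return intent

text = "I want my money back for last month"
intent, conf = learned_intent(text)
print(apply_rules(text, intent, conf))  # -> refund_request
```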
For instance, Cogito uses artificial intelligence to improve call center conversations by interpreting about 200 verbal and nonverbal behavioral cues in agents’ conversations. These include vocal volume, intensity, consistency, pitch, tone, pace, tension, and effort. The tool sends real-time signals to human workers to guide them to speak more confidently and empathetically so they can do their jobs at a higher level. As Cogito CEO Joshua Feast has said, the software “helps people be more charming in conversation,” which translates into higher Net Promoter Scores (28% higher, according to one study), shorter average call times, and fewer instances where customers escalate a call to a manager. The hybrid of natural language processing through machine learning, combined with the detection of social signals, creates substantially better recommendations than either technology alone.
2. AI MVP pilots need to show integration potential. Most organizations don’t want to use a separate AI application, so a new solution should allow easy integration with existing systems of record, typically through an application programming interface. This allows AI solutions to plug into existing data records and combine with transactional systems, reducing the need for behavior change.
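In the simplest possible terms, the pattern looks something like the sketch below: the AI product pushes its output into an existing system of record over a plain HTTP API. The endpoint URL, payload fields, and token are hypothetical placeholders, not any real vendor’s interface.

```python
# Pushing an AI-generated recommendation into an existing system of record
# through a REST-style API. The URL, fields, and token are placeholders.
import json
import urllib.request

def push_insight(record_id, action, score, token):
    payload = {"record_id": record_id, "next_best_action": action, "score": score}
    req = urllib.request.Request(
        f"https://crm.example.com/api/v1/records/{record_id}/insights",  # placeholder
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example call (not executed here, since the endpoint is fictional):
# push_insight("ACC-1042", "offer_renewal_discount", 0.87, "<api-token>")
```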
Zylotech, another Glasswing company, applies this principle to its self-learning B2B customer data platform. The company integrates client data across existing platforms; enriches it with a proprietary data set about what clients have browsed and bought elsewhere; and provides intelligent insights and recommendations about next best actions for clients’ marketing, sales, data, and customer teams. It is designed specifically to directly complement clients’ existing software suites, minimizing adoption friction.
Another integration example is Verusen, an inventory optimization platform also in the Glasswing portfolio. Given the existence of large, entrenched enterprise resource planning players in the market, it was essential for the platform to integrate with such systems. It gathers existing inventory data and provides its AI-generated recommendations on how to connect disparate data and forecast future inventory needs without requiring significant user behavior change.
3. AI MVPs must exhibit evidence of domain knowledge. This relates to showing integration potential: Understanding how a solution will fit into existing vertical ecosystems and workflows is absolutely critical. For example, there are many cases in which otherwise good health care AI applications (such as diagnostic assistants) end up gathering dust on a shelf because they simply do not assimilate well into a doctor’s routine.
An MVP needs to solve a particular business or consumer problem, so it is important for the team to have domain knowledge of that problem. ClimaCell, a weather intelligence platform, is a prime example. ClimaCell’s team has drawn information from satellites, wireless signals, airplanes, street cameras, connected cars, drones, and other electronic sources to deliver street-by-street, minute-by-minute weather forecasts up to six hours in advance (and less time-specific forecasts up to six days out). Its on-demand “micro weather forecasts” have helped organizations like Uber, Ford, National Grid, and the New England Patriots football team improve their own readiness and provide better information and service to customers.
4. AI MVPs need to provide Day Zero value. AI applications often improve over time with additional data. However, when developing an AI MVP, it’s important to think about that first customer and how to deliver value from Day Zero.
This may require focusing initially on cleaning customer data to build a data set that can feed the AI product, training early models with public data sets, adopting a human-in-the-loop approach in which people validate low-confidence responses, or falling back on rules-based technology. MVP developers need to ensure that initial customers will become the company’s biggest champions.
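One concrete (and hypothetical) version of that human-in-the-loop fallback: use the model’s answer only when its confidence clears a threshold, and queue everything else for a person to validate. The stub model, sample answer, and 0.8 threshold below are illustrative assumptions.

```python
# A minimal human-in-the-loop fallback. The stub model and threshold are assumptions.
CONFIDENCE_THRESHOLD = 0.8

def model_answer(question):
    """Stand-in for an early model trained on limited or public data."""
    return "Your order ships in 3-5 days.", 0.62   # (answer, confidence)

def handle(question, review_queue):
    answer, confidence = model_answer(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer                                # confident: answer automatically
    review_queue.append((question, answer))          # otherwise a person validates
    return "A specialist will follow up shortly."

queue = []
print(handle("When will my order arrive?", queue))
print(queue)   # the deferred question, plus the model's draft answer for review
```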
A Minimum Viable Product Requires Minimum Viable Performance
It is important to also take into account another MVP — minimum viable performance. Given the target task, how well does the product have to perform in order to be useful? The answer is problem-specific, both in terms of the relevant business metric and the required performance level. In some applications, being 80% successful on Day Zero might represent a large and valuable improvement in productivity or cost savings. But in other applications, 80% on Day Zero might be entirely inadequate, such as for a speech recognition system.
The goal is to beat the baseline, not the world. A good standard may be to simply ask, “How can a minimum viable AI product improve upon the status quo?” Even large software companies need to ask this question. At Salesforce.com, sales propensity models that predict which customers and leads are likely to respond to various sales activities were among the first tools developed with Salesforce’s AI product, Einstein. This tool was an easy addition because all the data was already in the Salesforce cloud, and the predictive machine learning models were a familiar technology to the sales staff that would use the information. Even an imperfect ranking of customers to call on is probably better than a salesperson’s unaided intuition.
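In code, that test is almost trivially simple. The sketch below compares a model against the most naive credible baseline (always predicting the most common outcome); the toy labels and predictions are invented for illustration.

```python
# Does the model beat the simplest baseline? Toy data, invented for illustration.
from collections import Counter

actual    = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # did the lead convert?
predicted = [1, 0, 0, 1, 0, 1, 0, 0, 0, 0]   # the model's yes/no calls

majority = Counter(actual).most_common(1)[0][0]
baseline_accuracy = sum(a == majority for a in actual) / len(actual)
model_accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)

print(f"baseline: {baseline_accuracy:.0%}")   # 70% by always guessing "no"
print(f"model:    {model_accuracy:.0%}")      # 80%: modest, but better than the status quo
```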
It’s also a good idea for an AI MVP to support a “low-hanging fruit” business process. In the case of Verusen, the company focused its tool on parts inventory management, which is typically conducted in an ad hoc way. By structuring and improving that process, Verusen was able to show millions of dollars in savings to each of its early customers.
MVP-oriented thinking is important with any type of system, and AI is no exception — no matter how exciting the technology seems all on its own. Users can adopt a minimally viable AI product without large expenditures of time or money, and it can be improved with feedback from early clients. With that type of thinking, products and internal applications can proceed smoothly from useful-but-basic capabilities to transformational offerings.