The Building Blocks of an AI Strategy
Organizations need to transition from opportunistic and tactical AI decision-making to a more strategic orientation.
As the popularity of artificial intelligence waxes and wanes, it feels like we are at a peak. Hardly a day goes by without an organization announcing “a pivot toward AI” or an aspiration to “become AI-driven.” Banks and fintechs are using facial recognition to support know-your-customer guidelines; marketing companies are deploying unsupervised learning to capture new consumer insights; and retailers are experimenting with AI-fueled sentiment analysis, natural language processing, and gamification.
A close examination of the activities undertaken by these organizations reveals that AI is mainly being used for tactical rather than strategic purposes — in fact, finding a cohesive long-term AI strategic vision is rare. Even in well-funded companies, AI capabilities are mostly siloed or unevenly distributed.
To move beyond opportunistic, tactical AI decision-making toward a genuinely strategic orientation, we propose an AI strategy built upon three pillars.
1. AI needs a robust and reliable technology infrastructure. Given AI’s popularity, it is easy to forget that it is not a self-contained technology. Without the support of well-functioning data and infrastructure, it is useless. Stripped of the marketing hype, artificial intelligence is little more than an amalgamation of mathematical, statistical, and computer science techniques that rely heavily on a stable infrastructure and usable data.
This infrastructure must include support for the entire data value chain — from data capture to cleaning, storage, governance, security, analysis, and dissemination of results — all in close to real time. It is not surprising, then, that the AI infrastructure market is expected to grow from $14.6 billion in 2019 to $50.6 billion by 2025.
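To make that chain concrete, here is a minimal sketch of the data value chain as a sequence of pipeline stages. The stage names mirror the chain described above; the bodies are illustrative placeholders, not any particular vendor's tooling.

```python
# A minimal sketch of the data value chain as a pipeline of stages.
# Each stage is a function from record to record; the bodies are
# placeholders standing in for real infrastructure components.

def capture(record):
    record["stage_log"] = ["captured"]
    return record

def clean(record):
    # Real cleaning would validate types, dedupe, and handle missing values.
    record = {k: v for k, v in record.items() if v is not None}
    record["stage_log"].append("cleaned")
    return record

def store(record):
    record["stage_log"].append("stored")        # e.g., write to a warehouse
    return record

def govern(record):
    record["stage_log"].append("governed")      # e.g., apply access policies
    return record

def secure(record):
    record["stage_log"].append("secured")       # e.g., mask sensitive fields
    return record

def analyze(record):
    record["stage_log"].append("analyzed")      # e.g., score with a model
    return record

def disseminate(record):
    record["stage_log"].append("disseminated")  # e.g., push to a dashboard
    return record

PIPELINE = [capture, clean, store, govern, secure, analyze, disseminate]

def run(record):
    for stage in PIPELINE:
        record = stage(record)
    return record

print(run({"user_id": 42, "event": "ticket_purchase", "price": None}))
```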
A good infrastructure allows for the establishment of feedback loops, whereby successes and failures can be quickly flagged, analyzed, and acted upon. For instance, when Ticketmaster wanted to tackle the growing problem of opportunists — people who buy event tickets ahead of genuine customers, only to resell them at a premium — it turned to machine learning algorithms. The company created a system that incorporated real-time ticket sales data along with a holistic view of buyer activity to reward legitimate customers with a smoother process and block out resellers. As the company soon realized, resellers adapted their strategies and tools in response to the new system. Ticketmaster then modified its infrastructure to include feedback loops, allowing its algorithms to keep up with the resellers’ evolving techniques.
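The mechanics of such a feedback loop can be sketched in a few lines. The following is a hypothetical illustration, not Ticketmaster's actual system: confirmed outcomes (a blocked buyer later verified as a reseller, or not) are logged as fresh labels, and the model is periodically refit. The features and data are invented.

```python
# A hedged sketch of a feedback loop around a reseller-detection model.
# Features ([tickets per minute, total spend]) and labels are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

X_hist = np.array([[1, 200], [40, 5000], [2, 150], [55, 8000]])
y_hist = np.array([0, 1, 0, 1])  # 1 = confirmed reseller
model = LogisticRegression()
model.fit(X_hist, y_hist)

feedback_X, feedback_y = [], []

def score_and_log(features, confirmed_label=None):
    """Score a purchase attempt; log confirmed outcomes for retraining."""
    pred = model.predict([features])[0]
    if confirmed_label is not None:  # outcome verified downstream
        feedback_X.append(features)
        feedback_y.append(confirmed_label)
    return pred

def retrain():
    """Fold confirmed feedback into the training set and refit the model."""
    global X_hist, y_hist
    if feedback_X:
        X_hist = np.vstack([X_hist, feedback_X])
        y_hist = np.concatenate([y_hist, feedback_y])
        model.fit(X_hist, y_hist)
        feedback_X.clear()
        feedback_y.clear()
```

In practice, the retraining cadence and the reliability of the confirmation signal are the hard parts; the loop itself is simple.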
2. New business models will bring the largest AI benefits. AI has the potential to offer new sources of revenue and profit, either through massive improvements over the current way of doing things or by enabling new processes that were not previously possible. But incremental thinking about how AI can be used will most likely lead to modest results. Significant benefits are unlikely to be achieved without a new business model mindset, or a so-called intelligence transformation.
AI allows for improvements that far surpass human capabilities. For example, OrangeShark, a Singapore-based digital marketing startup, uses machine learning for programmatic advertising, automating media selection, ad placement, monitoring of click-throughs and conversions, and even minor ad copy changes. Because of the efficiency of its system, OrangeShark can offer a pay-for-performance business model, whereby clients pay only a percentage of the difference between customer acquisition costs under a standard advertising model and under the OrangeShark model. By completely automating a previously semi-automated task, the company has created a new business model that monetizes massive efficiency gains.
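The economics of such a pay-for-performance model reduce to simple arithmetic. The sketch below is a hypothetical illustration; the 30% fee share and all dollar figures are assumptions, not OrangeShark's actual terms.

```python
# A back-of-the-envelope sketch of a pay-for-performance fee: the client
# pays a share of the savings between a baseline customer acquisition
# cost (CAC) and the achieved CAC. All numbers are hypothetical.

def performance_fee(baseline_cac, achieved_cac, customers_acquired, fee_share=0.30):
    savings_per_customer = max(baseline_cac - achieved_cac, 0)  # no fee without a gain
    return savings_per_customer * customers_acquired * fee_share

# Example: baseline CAC $50, achieved CAC $30, 1,000 new customers.
fee = performance_fee(50.0, 30.0, 1_000)
print(f"Fee: ${fee:,.0f}")                                   # Fee: $6,000
print(f"Client keeps: ${(50 - 30) * 1_000 - fee:,.0f}")      # Client keeps: $14,000
```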
At the other end of the spectrum, Affectiva, which calls itself an “emotion measurement” company, houses the world’s largest image database of sentiment-analyzed human faces. The company analyzes and classifies a range of human emotions using deep learning models that can then be made available to clients. Some applications study emotional responses to ad campaigns, while others help people relearn emotional responses after a stroke. Affectiva has built a business model based on providing intelligence as a service in an area where nonhuman intervention was previously impractical.
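From a client's perspective, intelligence as a service typically looks like an API call. The sketch below is hypothetical; the endpoint URL, authentication scheme, and response shape are invented for illustration and are not Affectiva's actual API.

```python
# A hedged sketch of "intelligence as a service" from the client side:
# post an image to an emotion-classification endpoint, get back a label
# distribution. Endpoint and response shape are hypothetical.

import requests

API_URL = "https://api.example.com/v1/emotions"  # hypothetical endpoint

def classify_emotions(image_path: str, api_key: str) -> dict:
    with open(image_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": f},
        )
    resp.raise_for_status()
    return resp.json()  # e.g., {"joy": 0.72, "surprise": 0.18, "anger": 0.02}

# scores = classify_emotions("frame_001.jpg", api_key="...")
# ad_engagement = scores.get("joy", 0) + scores.get("surprise", 0)
```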
These examples merely scratch the surface of possible AI-enabled business models. We will soon have smart cameras that facilitate franchising contracts and employee compensation schemes, and machine learning on granular data will allow products and services to be customized over time. As these and similar developments open up new sources of revenue and profit, new business models should be treated as a foundation of any AI strategy.
3. AI without ethics is a recipe for disaster. The final AI strategy pillar is ethics, which is not necessarily a common component of technology strategy. However, the use of AI raises many potentially thorny ethical issues, such as incorrect insights and inherent biases arising from poorly constructed algorithms, and rising unemployment as machine output substitutes for human labor.
Take, for example, facial recognition, one of the most common AI use cases today. While the technology has proved effective in a number of areas, such as catching criminals, finding missing people, and even monitoring blood pressure, it also raises a number of ethical concerns, such as the right to avoid surveillance and the accuracy of the algorithms used to identify individuals and groups. Most AI systems are more accurate at identifying white faces than the faces of people of other ethnicities, and at identifying men's faces than women's; indeed, some systems misidentify the gender of darker-skinned women up to 35% of the time.
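Disparities like these can be surfaced with a simple per-group error audit. The sketch below uses synthetic counts loosely patterned on the disparities described above; a real audit would run over a labeled evaluation set.

```python
# A per-group misidentification audit. Counts are synthetic, loosely
# patterned on the disparities described in the text.

audit = {  # group -> (misidentified, total)
    "lighter-skinned men":   (1, 100),
    "lighter-skinned women": (7, 100),
    "darker-skinned men":    (12, 100),
    "darker-skinned women":  (35, 100),
}

for group, (wrong, total) in audit.items():
    print(f"{group:>22}: {wrong / total:.0%} misidentified")

rates = [wrong / total for wrong, total in audit.values()]
print(f"Largest subgroup gap: {max(rates) - min(rates):.0%}")  # 34%
```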
In December 2018, Google announced that it would not sell its facial recognition technology, citing concerns over ethics and reliability. Google's competitors, in contrast, took an additional 18 months to reach the same decision. Only in early June 2020, in response to the Black Lives Matter movement, did IBM halt the sale of facial recognition software to police forces in the United States. Two days later, Amazon announced a one-year moratorium on sales of its facial recognition software to police, and Microsoft followed the very next day. For the latter three organizations, the reputational damage of producing systems that systematically misidentified minorities, and of selling the technology to police forces to identify criminals, had already been done. Google was proactive, while IBM, Amazon, and Microsoft were reactive, demonstrating that compliance with today's ethical standards is insufficient; organizations must also anticipate future ethical issues.
The need for a responsible approach to AI is likely to increase even further, for three reasons. First, as organizations scale up their use of AI, the ease of capturing sensitive, personal data about individuals will increase. Already, we are faced with the prospect of social networks and internet giants knowing significantly more about our day-to-day habits than our loved ones (and perhaps even we ourselves) know.
Second, as organizations transition into newer business models, the marginal value of collecting and using data will increase. Organizations will be able to assign a dollar value to each bit of data collected and accurately calculate the risk-reward ratio associated with each data point. Under these circumstances, the temptation to extract additional value from the data they have collected or purchased may push organizations to overstep ethical boundaries, such as by repackaging and selling data without consent.
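Such a calculation might look like the stylized sketch below, where the expected revenue from holding a data point is weighed against the expected cost of its misuse or breach. All figures are invented for illustration.

```python
# A stylized risk-reward calculation for a single data point: expected
# revenue from using it versus expected cost if it is misused or breached.
# All figures are invented.

def risk_reward_ratio(expected_revenue, breach_probability, breach_cost):
    expected_risk = breach_probability * breach_cost
    return expected_revenue / expected_risk if expected_risk else float("inf")

# e.g., a data point worth $0.40 in targeting uplift, with a 0.1% chance of
# contributing to a breach costing $150 per exposed record:
ratio = risk_reward_ratio(0.40, 0.001, 150.0)
print(f"Risk-reward ratio: {ratio:.2f}")  # 2.67 -- reward outweighs the risk
```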
Third, despite the importance of ethics, there is a general lack of overarching guidelines or benchmarks for responsible AI practices. Without a single established ethical arbiter, each organization and industry will have to determine its own standards and limits.
Unfortunately, the fragmented approach to AI will only exacerbate this problem. Unless organizations take a coordinated approach to AI ethics, it will be too easy for a rogue team to breach ethical guidelines. It is possible that an AI ethics office will need to be created within organizations to oversee AI activities, establish and implement ethical AI guidelines, and hold the organization accountable for its ethical practices. Companies that consider the ethics function as a branding and trust-building mechanism will come out ahead of those that deem it merely a regulatory issue. In addition to efforts within organizations to manage AI ethical practices, industry associations, governments, and multinational nongovernmental organizations can also play a role by setting out clear guidelines governing the responsible use of AI technologies.
AI is not a conventional technology, and AI strategy cannot be approached like conventional technology strategy. The power of AI to fuel the extremes of corporate performance, both positive and negative, requires a purposeful approach built on three pillars: a robust and reliable technology infrastructure, a specific focus on new business models, and a thoughtful approach to ethics. An AI strategy needs a solid foundation if it is to survive the strong winds of change.