Three People-Centered Design Principles for Deep Learning
Bad data and poorly designed AI systems can lead you to spurious conclusions and hurt customers, your products, and your brand.
Over the past decade, organizations have begun to rely on an ever-growing number of algorithms to assist in making a wide range of business decisions, from delivery logistics, airline route planning, and risk detection to financial fraud detection and image recognition. We’re seeing the end of the second wave of AI, which began several decades ago with the introduction of rule-based expert systems, and are moving into a third wave, termed perception AI. It’s in this next wave that a specific subset of AI, called deep learning, will play an even more critical role.
Like other forms of AI, deep learning tunes itself and learns by using data sets to produce outputs — which are then compared with empirical facts. As organizations begin adopting deep learning, leadership must ensure that artificial neural networks are accurate and precise because poorly tuned networks can affect business decisions and potentially hurt customers, products, and services.
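For readers who want a concrete picture of that tuning loop, the short sketch below shows the idea in code. It assumes Python with the PyTorch library and a small, made-up data set; the network, labels, and training settings are purely illustrative rather than a recommended setup.

```python
# Minimal sketch of how a deep learning model "tunes itself" (illustrative only).
import torch
import torch.nn as nn

X = torch.randn(100, 4)           # hypothetical input data
y = torch.randint(0, 2, (100,))   # the "empirical facts": known correct labels

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(20):
    outputs = model(X)            # the network produces outputs...
    loss = loss_fn(outputs, y)    # ...which are compared with the empirical facts
    optimizer.zero_grad()
    loss.backward()               # the gap between the two drives how the network re-tunes itself
    optimizer.step()
```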
The Importance of People-Centered Principles for AI
As we move into this next stage, the key question for organizations will be how to embrace deep learning to drive better business decisions while avoiding biases and potentially bad outcomes. In working with numerous clients across multiple industries, we have identified patterns that can help companies reduce error rates when implementing deep learning initiatives.
Our experiences working with organizations in these early stages of AI adoption have helped us create design principles for a people-centered approach to deep learning ethics, with a strong focus on the data employed to tune networks. A people-centered design approach helps address both short-term concerns, such as poorly trained AI networks that produce spurious solutions, and long-term concerns that machines might displace humans in business decision-making.
When we talk about people-centered design, we mean principles that benefit all individuals and communities, rather than letting a few individuals benefit at the expense of others. Our people-centered design principles support the goal of informing people with data and giving them more opportunities in their work. In our experience, there are three key principles organizations need to hold up as pillars for any AI implementation:
- Transparency. Wherever possible, make the high-level implementation details of your AI project available to everyone involved. In the case of a deep learning initiative, people should understand what deep learning is, how it works, including how data sets are used to tune algorithms, and how deep learning may affect their work. When intellectual property or other sensitive information might be exposed, an organization may want to include a panel of external stakeholders, keeping in mind that certain data sets might need to be protected from disclosure if they contain sensitive information or raise privacy concerns.
- Explainability. Employees within an organization and external stakeholders, including potential customers, should be able to understand how any deep learning system arrives at its contextual decisions. The focus here is less on explaining how the machine reached its conclusions, since deep learning often cannot be explained at that level of detail, and more on what method was used to tune the algorithm(s) involved, what data sets were employed, and how human decision makers decided to use the algorithm’s conclusions.
- Reversibility. Organizations also must be able to reverse what a deep learning effort “knows.” Think of it as the ability to unlearn specific knowledge or data, which helps protect against unwanted biases in data sets. Reversibility must be designed in from the conception of an AI effort and often will require cross-functional expertise and support. (A minimal sketch of one way to approximate it follows this list.)
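To make reversibility less abstract, here is a rough sketch of how it could be approximated in practice, assuming the organization tags every training record with its source and is willing to retrain from scratch. The function names (unlearn_source, retrain) and the pandas-based data layout are hypothetical illustrations, not an established API or a full machine-unlearning method.

```python
# A minimal sketch of reversibility: drop all records from a flagged source and retrain.
import pandas as pd

def unlearn_source(training_data: pd.DataFrame, flagged_source: str) -> pd.DataFrame:
    """Return a training set with every record from a flagged (e.g., biased) source removed."""
    return training_data[training_data["source"] != flagged_source].copy()

# Hypothetical usage: if the "vendor_x" data set is later found to be biased,
# rebuild the model without it.
# clean_data = unlearn_source(training_data, "vendor_x")
# model = retrain(clean_data)   # retrain() stands in for the organization's own training pipeline
```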
There are three methods companies can use to put these principles into action in their AI projects. These methods aim to reduce the risk of introducing poorly tuned AI systems and inaccurate or biased decision-making in pilots and implementations.
Create data advocates. To reduce the risk of poorly tuned artificial neural networks, organizations can create a data advocate or ombudsman function that brings together human stakeholders from different business units (and potentially outside stakeholders as well). Data advocates are responsible for ensuring that data sets are both appropriate for the questions being asked of any artificial neural network and sufficiently diverse for optimal tuning.
It’s well documented that AI efforts trained on bad data can pose risks to human workers and reinforce human biases. By proactively setting up a data advocate function, organizations can leverage AI while benefiting from human oversight to ensure that any errors or flaws in data sets and AI outputs are caught early.
Mindful monitoring. Another way to reduce risk is for organizations to establish a mindful monitoring system that tests data sets for biases. This technique requires identifying three pools of data sets: (1) trusted data (the “trusted pool”); (2) potentially worthwhile data (the “queued pool”); and (3) problematic or unreliable data (the “naysayer pool”). (See “The Mindful Monitoring System for AI.”) In this type of monitoring system, the outputs of a deep learning system tuned on the queued pool of data, which has yet to be fully vetted or accepted, are compared with the outputs of a system tuned on the trusted pool.
For example, a company’s trusted pool of data for deep learning training might include already classified images of street signs and the appropriate action to take at each sign. The queued pool may include additional images of street signs at different angles, in different lighting conditions, and in different weather conditions, tagged by an unvetted source external to the company. By combining human and automated review, the organization can then assess whether the queued pool of data can usefully expand the company’s existing pool of trusted data. This allows the deep learning system to improve and get smarter while guarding against inaccurate data.
At the same time, the organization would want to compare data outputs from the queued pool with those from the naysayer pool. In the same example, the naysayer pool might include images that look like street signs but aren’t. The goal of the naysayer pool is to challenge the robustness and diversity of the data used to train the deep learning system and to check whether previously approved data might now be obsolete, problematic, or unreliable.
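As a rough illustration of how the three pools could interact in code, the sketch below compares a model tuned only on the trusted pool against one tuned on trusted plus queued data, and checks the candidate against the naysayer pool before anything is promoted. The helper names (train_fn, evaluate_fn) and the thresholds are hypothetical placeholders for an organization’s own training and evaluation pipeline, not part of any published method.

```python
# A minimal sketch of mindful monitoring with trusted, queued, and naysayer pools.

def review_queued_pool(train_fn, evaluate_fn, trusted, queued, naysayer,
                       min_gain=0.0, min_naysayer_accuracy=0.95):
    baseline = train_fn(trusted)                    # model tuned only on trusted data
    candidate = train_fn(trusted + queued)          # model tuned with the queued data added

    # Does the queued data actually improve results on the trusted pool?
    gain = evaluate_fn(candidate, trusted) - evaluate_fn(baseline, trusted)

    # Does the candidate still handle the look-alike "naysayer" items correctly?
    robustness = evaluate_fn(candidate, naysayer)

    if gain >= min_gain and robustness >= min_naysayer_accuracy:
        return "promote queued data into the trusted pool"
    return "hold queued data for review by the data advocates"
```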
Bounded expectations. Organizations also should clearly specify how data sets will be used to train AI networks, and explain to external stakeholders and internal employees what the accepted norm will be for how the company relies on the data gathered with deep learning. For example, the organization may use data sets on financial transactions from the last seven years to inform which credit cards to offer customers, but it will not use its deep learning system to make credit card offers on the basis of gender or race, which would be immoral and illegal. This method of setting bounded expectations requires a clear list of what the organization can do with the data it generates or acquires, along with what it cannot do. Companies should also make clear the steps that have been taken to verify these bounds, ideally through a third party such as an outside compliance review.
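One lightweight way to make such bounds checkable, rather than merely stated, is to encode them as an explicit policy that the training pipeline must pass. The sketch below does this for the credit card example; the feature names and the validation function are hypothetical illustrations, not a prescribed standard.

```python
# A minimal sketch of bounded expectations encoded as an explicit, enforceable policy.

PERMITTED_FEATURES = {"transaction_amount", "transaction_date",
                      "merchant_category", "payment_history"}   # may inform card offers
PROHIBITED_FEATURES = {"gender", "race"}                          # must never inform offers

def validate_training_features(feature_names):
    """Fail loudly if prohibited or undeclared features reach the training pipeline."""
    used = set(feature_names)
    banned = used & PROHIBITED_FEATURES
    if banned:
        raise ValueError(f"Prohibited features in training data: {sorted(banned)}")
    undeclared = used - PERMITTED_FEATURES
    if undeclared:
        raise ValueError(f"Features outside the declared bounds: {sorted(undeclared)}")

# Hypothetical usage before each training run:
# validate_training_features(training_data.columns)
```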
These methods specifically focus on deep learning, given how much artificial neural networks depend on the data- and people-centered choices organizations make to produce optimally trained algorithms. Taken together, they can help organizations prepare to implement successful AI programs that avoid major risks.