Coming to Grips With Dangerous Algorithms
Algorithms power transformative technologies, but they also pose serious risks to users, raising the question of how potential disasters can be prevented and regulated against.
For technology users, particularly social media users, 2018 has been a year of awakening. The media began scratching the surface of the dangers of social media with the story of Russian actors influencing the U.S. election. Soon after came a slew of reports detailing how Cambridge Analytica used social media data to influence votes in both the United Kingdom and the United States. People were suddenly confronted with how easily social media, and the algorithms underpinning social platforms, can be used to influence users, and with how widespread the practice has become. Harmless, everyday actions performed by millions of users, such as taking fun surveys, had suddenly become tools for unscrupulous data miners.
The investigation into the Cambridge Analytica scandal was a high point for awareness of privacy breaches on social media, but it was hardly the first such revelation. In February 2018, Guillaume Chaslot, a former YouTube employee, went public with his study of YouTube's recommendation algorithm, which found extreme bias in relation to the 2016 election: 84% of the videos the algorithm recommended were pro-Trump, and only 16% were pro-Clinton. Meanwhile, Twitter came under attack when a Project Veritas documentary purported to show political bias in how the platform regulates its users.
The push for better regulation of how algorithms work and how user privacy is protected has already advanced: the European Union's General Data Protection Regulation (GDPR), which governs online data privacy and the use of personal data, took effect in May 2018. However, we contend that while these efforts are aimed at regulating user data, regulation must also extend to the algorithms themselves.
Algorithms Are More Than Just Social Media
The truth is, algorithms pervade our lives. For decades they have been embedded in the systems that run and regulate society, performing tasks that range from national security early warning to traffic control. More recently, algorithms have found their way into our cars and our homes, and they now take on tasks as varied as judging our suitability as job candidates and helping to identify health issues.
As with social media, while these algorithms have delivered convenience and usability, they have also failed us. In March and April 2017, Tesla was hit with two lawsuits claiming that its autopilot function was "dangerously defective." Risk-assessment algorithms used in six U.S. states to predict the likelihood that criminal offenders will reoffend have been found to be significantly racially biased, with black offenders receiving longer and harsher sentences than their white counterparts. Tay, Microsoft's machine-learning chatbot, was taken offline within 24 hours of its launch on Twitter after its conversation and language patterns became disturbingly racist.
For users of technology that relies on algorithms, the risk goes beyond privacy concerns. There is risk in the algorithms themselves: the purposes for which they are built and their error rates in fulfilling those purposes.
We contend that there is a mismatch between the rigor with which algorithms are tested for efficacy and the impact of a failure when they go wrong. For example, even if Tesla's autopilot function fails to operate properly only a tiny fraction of the time, at the scale of an entire fleet that fraction could translate into numerous injuries and deaths. An element of racial bias in algorithms used to sentence criminal offenders reinforces racial discrimination and hinders the rehabilitation of minor offenders.
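To see why a small error rate is not reassuring at scale, consider a back-of-the-envelope calculation. The sketch below is purely illustrative; the failure rate and usage volume are hypothetical figures, not estimates for any real system.

```python
# Illustrative sketch: how a tiny per-use failure rate scales with volume.
# All numbers are hypothetical, chosen only to show the arithmetic.

def expected_failures(per_use_failure_rate: float, uses: int) -> float:
    """Expected number of failures across the given number of uses."""
    return per_use_failure_rate * uses

def prob_at_least_one_failure(per_use_failure_rate: float, uses: int) -> float:
    """Probability of at least one failure, assuming independent uses."""
    return 1.0 - (1.0 - per_use_failure_rate) ** uses

rate = 1e-6             # hypothetical: one failure per million uses
daily_uses = 5_000_000  # hypothetical fleet-wide uses per day

print(f"Expected failures per day: {expected_failures(rate, daily_uses):.1f}")
print(f"P(at least one failure per day): {prob_at_least_one_failure(rate, daily_uses):.3f}")
```

With these assumed numbers, a one-in-a-million failure rate still produces several expected failures every day, which is exactly the gap between testing rigor and real-world impact described above.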
What Do We Do Now?
It's clear that the risks of relying on algorithms have become too large to ignore. Politicians and journalists have already begun calling for regulation of social media algorithms, even as the platforms continue to mine user data and defend their algorithms. As of May 2018, 33 U.S. states had introduced regulations relating to self-driving cars, but not to their algorithms.
But how do we even begin to approach the regulation of a technology that is so pervasive and so widely used? Are all algorithms equally dangerous? Adding to the complexity of regulation is the fact that algorithms are typically considered proprietary technology, and different algorithms with different uses are governed by different agencies. For example, self-driving cars are governed by transportation authorities, while medical algorithms for disease detection or risk assessment are governed by the Food & Drug Administration. Some tech organizations don't believe it's possible to fully regulate algorithms and have argued that such regulation is beyond the capabilities of any government.
An Approach for Assessing and Managing Algorithmic Risks
We propose that a combination of government regulation and self-regulation be introduced as a balanced approach that both protects proprietary assets and helps manage the impact of algorithms on our lives. This approach will also allow for some level of transparency and help build trust with the global community.
Government regulation. While algorithms remain valuable proprietary assets for technology companies, there is an inherent need for greater transparency and understanding of how those algorithms work. In his book The Black Box Society, Frank Pasquale recommends allowing a greater role for regulators such as the Federal Trade Commission (FTC) to test algorithms for ethical values such as fairness, social bias, and anti-competitiveness. This should be supported by the requisite funding and would require the ability and willingness to prosecute for ethical violations in the same way that the financial system is regulated. In fact, starting with social media, the FTC has shown a willingness to ensure that algorithms reflect accurate and fair information. For instance, in October 2009, the FTC revised its Endorsement Guides to encompass blogging and, more recently, has further flexed its regulatory muscles in the social media sphere by ensuring that social media influencers comply with the same kinds of rules.
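What might it look like, in practice, for a regulator to test an algorithm for social bias? The sketch below is a minimal, hypothetical example of one such check, a demographic parity audit over an algorithm's decisions. The data, group labels, and the 0.1 tolerance are invented for illustration and are not drawn from any actual regulatory procedure.

```python
# A minimal, hypothetical fairness audit: compare favorable-decision rates
# across groups affected by an algorithm's output.
from collections import defaultdict

def positive_rate_by_group(decisions, groups):
    """Fraction of favorable (1) decisions within each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for decision, group in zip(decisions, groups):
        counts[group][0] += decision
        counts[group][1] += 1
    return {g: fav / total for g, (fav, total) in counts.items()}

def demographic_parity_gap(decisions, groups):
    """Largest difference in favorable-decision rates between any two groups."""
    rates = positive_rate_by_group(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: 1 = favorable decision (e.g., low risk score).
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance only
    print("Gap exceeds tolerance; flag the algorithm for further review.")
```

Demographic parity is only one of several competing fairness measures; the point of the sketch is that such checks can be run against an algorithm's outputs without forcing the company to disclose the proprietary algorithm itself.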
Regulations to ensure consumer protection from the use of algorithms in technology are on the horizon and cannot come soon enough. In January 2017, the Consumer Product Safety Commission produced reports on the safety of emergent and future technologies, which highlight the state of the art in the use of algorithms to draw insights from consumers or control the behavior of robots and digital assistants. In 2018, the FTC invited public comment and began conducting a series of public hearings on issues arising from the use of digital technologies and algorithms that are likely to inform the development of policies. For example, the FTC received comments from the Information Technology & Innovation Foundation (ITIF) on "the consumer welfare implications associated with the use of algorithmic decision tools, artificial intelligence, and predictive analytics," in which the ITIF highlights the inadequacy of existing regulations and the need for regulators to protect individuals from companies using algorithms.
Self-regulation via an industry body and an ethical framework. We recommend that technology companies develop an industry-wide ethical framework for algorithms that addresses fairness, social bias, and anti-competitiveness. An industry body could be tasked with building the framework and best practices, as well as providing light enforcement by identifying companies that align with its standards. Examples of successful implementations of a similar approach in other industries are numerous, including the palm oil industry with RSPO certification, the fair trade model in agriculture, and the Made in USA standard.
This approach gives an industry a way to self-regulate its own proprietary algorithms. It also aligns with the government regulation proposed above, which requires both self-assessment on the part of the technology firm and enforcement by government regulators.
Self-regulation via a risk assessment and prevention framework. In his 2010 book The Technology Trap, L.J. Dumas highlights the need to assess the maximum credible risk of technology used on a high-volume basis. The implication is that when a technology is used at high volume, even rare failures become likely to occur.
We propose that technology companies use a similar approach to construct a risk assessment framework for their algorithms. The aim is to identify critical risks in light of an algorithm's user volume and to assess the likelihood of critical events, meaning events that would be disastrous. Companies should then use the results of such a framework to construct a comprehensive risk prevention framework that minimizes the likelihood and impact of critical events.
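A minimal sketch of what such a volume-aware risk register might look like follows. The algorithm names, per-use failure probabilities, volumes, and severity weights are all hypothetical, chosen only to illustrate how volume drives the likelihood of at least one critical event.

```python
# Hypothetical volume-aware risk register: rank algorithms by the chance of a
# critical event at their real usage volume, weighted by severity of harm.
from dataclasses import dataclass

@dataclass
class AlgorithmRisk:
    name: str
    per_use_critical_prob: float  # probability one use triggers a critical event
    annual_uses: int              # expected yearly volume
    severity: float               # relative harm of one critical event (arbitrary scale)

    def prob_any_critical_event(self) -> float:
        """Chance of at least one critical event per year, assuming independent uses."""
        return 1.0 - (1.0 - self.per_use_critical_prob) ** self.annual_uses

    def risk_score(self) -> float:
        """Likelihood of a critical year multiplied by severity."""
        return self.prob_any_critical_event() * self.severity

# Invented entries for illustration only.
portfolio = [
    AlgorithmRisk("driver-assist planner", 1e-8, 200_000_000, severity=10.0),
    AlgorithmRisk("sentencing risk score", 1e-4, 50_000, severity=8.0),
    AlgorithmRisk("content recommender", 1e-6, 5_000_000_000, severity=3.0),
]

for item in sorted(portfolio, key=lambda r: r.risk_score(), reverse=True):
    print(f"{item.name:24s} P(critical)={item.prob_any_critical_event():.3f} "
          f"score={item.risk_score():.2f}")
```

Ranking the portfolio this way makes the point of the framework concrete: an algorithm with a vanishingly small per-use failure probability can still dominate the register once its volume and the severity of a failure are taken into account, and prevention effort should follow that ranking.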
As news continues to surface of algorithms being used to exploit public opinion and of the impact of algorithmic failures, the need for regulation becomes increasingly clear. For now, consumer awareness, responsible business practices, and governmental protections are still no match for the threats algorithms pose to our privacy and social equality.
Important steps toward upholding the legal and ethical principles of democratic societies in the digital age include organizations' efforts to incorporate values and transparency into their algorithms, and government regulations that encourage innovation and accountability. Finally, it's important to delineate clear penalties for those who deploy algorithms with disregard for harm or with intent to cause it.