Pushing Back on Pervasive Technology
Innovators and leaders must consider how new technology will be monitored to protect against potentially harmful and irreversible effects.
Nothing better illustrates our need to think differently about how we manage technology than Facebook. Recent criticism in the media reflects not only a condemnation of the company’s actions but a powerful backlash against tech as a whole. Indeed, beyond the immediate issues raised around data privacy and security, deeper investigations into Facebook’s practices illustrate the dangers inherent in our uncritical acceptance of and unbridled addiction to technology.
Technology, which has made our lives incomparably better in numerous ways, now constitutes one of our biggest threats. It threatens not only our physical but also our mental well-being. It forces us to confront some of the most important questions facing the human condition: What is the extent and the depth to which we are willing to allow technology to intrude into our lives? In other words, how pervasive and invasive are we willing to let technology be? Because we can do something, does that mean we ought to do it? Such questions are profoundly ethical and moral, for they force us to confront the basic issue: “Who is to control whom?”
In brief, the supreme challenges facing us are not technical but ethical. They are embodied in the justifications we give for allowing a technology to proceed in the first place and in the ethical ends it is intended to serve. Technology is never neutral; it always reflects the biases and purposes of its creators. For example, efforts are underway to build robots that can not only read but also respond to our emotions. Cutting-edge technology and advances in human-machine interaction may mean that soon many people will feel more comfortable talking to an AI-enhanced robot about their deepest feelings and emotional states than to a fellow human being. But again, just because something can be done, is it ethical to do it? The question is: How will it affect human relationships in general?
Measuring Pervasiveness and Invasiveness in Technology
Pervasiveness and invasiveness are two of the primary dimensions for evaluating and ranking the threats posed by various technologies. Facebook, for example, scores high on both. It is pervasive in its effects on society as a whole, with the ability to spread and serve as a platform for fake news, misinformation, and interference in elections by nefarious foreign governments. It is also invasive: a growing body of studies shows that both adolescent and adult users of platforms like Facebook and YouTube suffer a wide range of harms, from loneliness and isolation to anxiety and depression, as a direct result of their use of social media.
Two other dimensions play an equally important role in our evaluation of the threats posed by technology, and thus, hopefully, in our ability to control it for our benefit: whether the potential dangers of a particular technology are preventable, and whether they are reversible. These dimensions should weigh heavily in whether organizations and individuals move forward with the development and subsequent use of a technology. An important question for leaders becomes: If we go forward and later find that a technology is harmful, are the effects reversible? Obviously, those effects that are neither preventable nor reversible are the most serious.
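To make the ranking idea concrete, here is a minimal sketch in Python, under the assumption that pervasiveness and invasiveness can be scored on a simple 1-to-5 scale. The TechThreat structure, the hand-assigned scores, and the weighting are illustrative inventions for this example, not part of any established assessment method; in practice such judgments would rest on evidence about a specific technology.

```python
from dataclasses import dataclass

@dataclass
class TechThreat:
    name: str
    pervasiveness: int   # 1-5: breadth of the technology's effects across society
    invasiveness: int    # 1-5: depth of its reach into individual lives and minds
    preventable: bool    # can the potential harm be headed off before it occurs?
    reversible: bool     # can the harm be undone after the fact?

    def severity(self) -> int:
        # Harms that can be neither prevented nor reversed weigh most heavily.
        score = self.pervasiveness + self.invasiveness
        if not self.preventable:
            score += 5
        if not self.reversible:
            score += 5
        return score

# Illustrative, hand-assigned scores -- not empirical measurements.
threats = [
    TechThreat("social media platform", 5, 4, preventable=True, reversible=True),
    TechThreat("diagnostic AI replacing physicians", 3, 4, preventable=False, reversible=True),
    TechThreat("germline gene editing", 2, 5, preventable=True, reversible=False),
]

for t in sorted(threats, key=TechThreat.severity, reverse=True):
    print(f"{t.name}: severity {t.severity()}")
```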
A Deeper Look
The frameworks depicted in this article give a more in-depth look at preventability and reversibility in our everyday lives. (See “Reversibility and Preventability of Crises” and “Reversibility and Preventability: Technology Versus Natural Disasters.”) The first gives a broad overview of the general framework: its upper-left quadrant contains those things and events over which we have control, making them avoidable. The second gives two real-world examples, one from the field of natural disasters and the other from technology.
Although preventing fires demands serious and often expensive measures, such as pruning forests, fires are theoretically preventable, and their effects are reversible, assuming that we want to, and should, rebuild in fire-prone areas. With regard to technology, users can alter the “privacy settings” on mobile devices, operating systems, and individual social applications, allowing or denying each technology permission to track their location, see their contacts, record their purchase behavior, and access other important personal data. In this sense, threats to data privacy are, in theory, both preventable and reversible.
In contrast, the upper-right quadrant indicates those things and events that are not preventable but are reversible. In the physical realm, earthquakes are a prime example. Locales such as Alaska that experience earthquakes with great regularity have learned to recover as quickly and as efficiently as possible.
With regard to technology, it is far more debatable whether humankind is prepared to reverse, and in this sense limit, those effects of AI that are increasingly becoming unpreventable. A prime example is the replacement of humans by algorithms in the complex decisions involved in medical diagnoses. This issue could fit in the upper-right quadrant or, along with climate change, in the bottom-right catastrophic quadrant.
The bottom-left quadrant also features climate change, but of a type whose factors are irreversible yet preventable. On the technology side, a Chinese scientist’s recent experiment directly altering the DNA of twin babies with CRISPR in order to produce children resistant to HIV infection is a prominent example of something that, while preventable, is irreversible once done.
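To show how the two dimensions sort cases into the four quadrants, here is a minimal sketch; the classify helper and the example labels are hypothetical, drawn only from the cases discussed above.

```python
def classify(preventable: bool, reversible: bool) -> str:
    """Map the two dimensions onto the four quadrants discussed above."""
    if preventable and reversible:
        return "preventable and reversible (e.g., wildfires, privacy settings)"
    if not preventable and reversible:
        return "unpreventable but reversible (e.g., earthquakes)"
    if preventable and not reversible:
        return "preventable but irreversible (e.g., germline gene editing)"
    return "neither preventable nor reversible (the catastrophic quadrant)"

# Illustrative cases taken from the discussion above.
examples = {
    "wildfire damage": (True, True),
    "earthquake damage": (False, True),
    "CRISPR germline edits": (True, False),
    "unchecked climate change": (False, False),
}

for name, (p, r) in examples.items():
    print(f"{name}: {classify(p, r)}")
```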
These matrices represent the factors that underlie consideration of how, and whether, a technology should be developed. Innovators and leaders must consider how new technology will be monitored to protect against potentially harmful and irreversible effects on humans.
One solution for introducing better monitoring and oversight into the development of new technology products is the use of independent government agencies. There are historical examples of this working, such as the Office of Technology Assessment, which existed from 1972 until 1995. Calls to revive the OTA speak to the current problem: While technology advances at a rapid pace, we cannot trust technologists and tech firms to self-regulate. Recent efforts to protect the privacy of users’ data, while admirable, do not go far enough. To ensure that technology acts in our best interests, at both the organizational and the societal level, we need new and better safeguards in place.