We Make the Road by Walking is the title Myles Horton and Paulo Freire adapted from a proverb by the Spanish poet Antonio Machado.
Daring to gloss a rich and multifaceted book in a few words, I’d say: the path to social justice is not at all clear. Nevertheless, we must act, and that action will light the way.
Horton and Freire lived that insight, a version of learning by doing, as well as articulated it. It applies in many domains, but it seems especially relevant to discussions about the new AI and its implications for ethical life.
I’d like to suggest three heuristics applicable to any new technology, but none more so than the new AI:
- The path isn’t there until we make it.
- Paradoxically, it’s already there.
- We need to engage.
Much of the discourse around the new AI adopts a deterministic stance:
We’re confronted, against our desires, will, or knowledge, with a new device. It acts independently of us and even of the experts who built it. All we can do is watch as it upends medical care, environmental protection, racial justice, privacy, education, military preparedness, intellectual property, and democratic life, just for starters.
This is a discourse of inevitability. It portends a world that we don’t understand and can’t control. And, most of the scenarios are catastrophic. The fact that it might make shopping easier doesn’t count for much when the world’s about to end.
But is that future inevitable? Should we hunker down, or, as many do, imagine possibilities more benign, even glorious?
Making the path: An Oppenheimer moment?
Robert Oppenheimer was the wartime head of the Los Alamos Laboratory, the home of the Manhattan Project. He’s seen as the “father” of the atomic bomb. But he’s equally famous for his realization of the potential disaster his project had wrought. He famously quoted from the Bhagavad-Gita: “Now I am become Death, the destroyer of worlds.”
Two years after the Trinity explosion, he said “the physicists have known sin; and this is a knowledge which they cannot lose.”
Are we facing a new Oppenheimer moment? Tristan Harris and Aza Raskin think so, and call for action to address the “AI dilemma.” Some things in their video are already dated; it was made 11 days ago.
Regardless of whether we adopt the stance of Cassandra or that of Pollyanna, we will follow some path. But when we take one of the extremes, we’ll find that our path is defined by some corporation’s idea of how to maximize its profit or some government’s idea of how to control the populace.
Horton and Freire would tell us that we need to engage in making that path ourselves.
It’s always-already there
Always-already is a widely used term in philosophy (Heidegger, Derrida, Althusser, etc.). Generally it means that the features of a phenomenon seem to precede any perception of it; they’re “always already” present. It’s related to the idea in hermeneutics that there’s no understanding free of presuppositions, or bias.
When we consider the impact of new technologies, such as the new AI, this always-already sense is very evident. For all the novelty of the technology and its impact, none of the disaster scenarios is entirely new.
For example, many people rightly worry about how AI chat programs based on large language models can be used to promote disinformation, including malicious attacks on individuals or groups, promotion of fascist ideologies, or incitement to war.
But disinformation has been a problem since the beginning of language, was exacerbated by writing and then the printing press, and already seems off the rails in the age of the web and social media. Could AI chat programs make that worse? Probably yes. But we always-already know much of what that could look like and much of what we could do about it, even as we often fail to act.
We could say similar things about employment, public health, democratic governance, and other arenas that the new AI may affect. We don’t know what will happen, but we can be sure that what does happen will be a product of both the technology per se and the way we as humans have responded in the past and present.
What, for example, is our response to disinformation already? Do we expand public radio and TV? Provide tools for citizens to examine claims that are made? Teach critical thinking? Promote civic discourse? Emphasize public education at all levels? Fund research?
Or do we ban books, starve libraries and schools, and treat the rants of extremists as “news events”?
Characteristics of new technologies will make a difference, but less than our response to them.
Writing about an educational innovation, Quill, Andee Rubin and I said:
When an innovation that calls for significant changes in teacher practices meets an established classroom system, “something has to give.” Often, what gives is that the innovation is simply not used. Rarely is an innovation adopted in exactly the way the developers intended… the process of re-creation of the innovation is not only unavoidable, but a vital part of the process…. [The users’] role in the innovation process is as innovators, not as recipients of completed products. (Electronic Quills, p. 293)
The re-creation process clearly applies to general prescriptions, such as “plan ahead.” But it also applies to the most solid, apparently immutable technologies.
For example, over the last century and a quarter automobiles have changed the world. We now have parking lots, suburbs, traffic laws and traffic deaths, carbon emissions, changes in sexual and family relations, and drive-in movies. But none of these were inevitable consequences of a four-wheeled vehicle with an internal combustion engine.
We could, for example, value human life more and systematically restrict vehicle speed. Or we could ban cars from urban areas, as some cities, especially in Europe, are beginning to do. We could have done many such things in the past and still could. Some would be good; some bad; some inconsequential.
The point is that how we engage is what matters in the end, not just the technology per se, if such a concept is even viable.
For the first Earth Day, in 1970, Walt Kelly made a poster pointing the finger at all of us, not just evil polluters or a few thoughtless individuals. He declared: “We have met the enemy and he is us.”
It’s useful to apply a critical view to new AI technologies. We should ask how they work and what their potential might be. But ultimately, we need to look at ourselves.
- If we’re concerned about job loss from AI chatbots, then we ought to ask how we plan to secure work with dignity for all, whether AI chatbots exist or not.
- If we’re concerned about robots controlled by opaque, unregulated software, then we ought to ask questions about the control and use of any robots, even those controlled by opaque, unregulated humans.
- If we’re frightened by the thought of nuclear war initiated by rogue AI, then we ought to work towards guaranteeing that that never happens due to rogue humans, regardless of how much they’re aided by AI.
One positive from the advent of the new AI is that people are beginning to ask questions about the camino (the path) that we’re on, questions that deserve better answers independent of the new AI. We need to realize that the path is one that we alone can make.