Se hace camino al andar: AI and the future of humanity

We Make the Road by Walking was the book title that Myles Horton and Paulo Freire adapted from a line by the Spanish poet Antonio Machado.

Daring to gloss a rich and multifaceted book in a few words, I’d say that the path to social justice is not at all clear. Nevertheless, we must act, and that act will light the way.

Horton and Freire lived, as well as articulated, that insight, a version of learning by doing. It applies in many domains, but seems especially relevant to discussions about the new AI and its implications for ethical life.

I’d like to suggest three heuristics applicable to any new technology, but none more so than the new AI:

  • The path isn’t there until we make it.
  • Paradoxically, it’s already there.
  • We need to engage.

Inevitability

Much of the discourse around the new AI adopts a deterministic stance:

We’re confronted, against our desires, will, or knowledge, with a new device. It acts independently of us and even the experts who built it. All we can do is watch as it upends medical care, environmental protection, racial justice, privacy, education, military preparedness, intellectual property, and democratic life, just for starters.

This is a discourse of inevitability. It portends a world that we don’t understand and can’t control. And, most of the scenarios are catastrophic. The fact that it might make shopping easier doesn’t count for much when the world’s about to end.

But is that future inevitable? Should we hunker down, or, as many do, imagine possibilities more benign, even glorious?

Making the path: An Oppenheimer moment?

Robert Oppenheimer was the wartime head of the Los Alamos Laboratory, the home of the Manhattan Project. He’s seen as the “father” of the atomic bomb. But he’s equally famous for his realization of the potential disaster his project had wrought. He famously quoted from the Bhagavad-Gita: “Now I am become Death, the destroyer of worlds.”

Two years after the Trinity explosion, he said “the physicists have known sin; and this is a knowledge which they cannot lose.”

Are we facing a new Oppenheimer moment? Tristan Harris and Aza Raskin see it that way and call for action to address the “AI dilemma.” Some things in their video are already dated; it was made 11 days ago.

Regardless of whether we adopt the stance of Cassandra or that of Pollyanna, we will follow some path. But when we take one of the extremes, we’ll find that our path is defined by some corporation’s idea of how to maximize their profit or some government’s idea of how to control the populace.

Horton and Freire would tell us that we need to engage in making that path ourselves.

It’s always-already there

Always-already is a widely used term in philosophy (Heidegger, Derrida, Althusser, etc.). Generally it means that the features of a phenomenon seem to precede any perception of it; they’re “always already” present. It’s related to the idea in hermeneutics that there’s no understanding free of presuppositions, or bias.

When we come to the impact of new technologies, such as the new AI, this always-already sense is very evident. For all the novelty of the technology and its impact, none of the disaster scenarios is entirely new.

For example, many people rightly worry about how AI chat programs based on large language models can be used to promote disinformation, including malicious attacks on individuals or groups, promotion of fascist ideologies, or incitement to war.

But disinformation has been a problem since the beginning of language, was exacerbated by writing and then the printing press, and already seems off the rails in the age of the web and social media. Could AI chat programs make that worse? Probably yes. But we always-already know much of what that could look like and much of what we could do about it, even as we often fail to act.

We could say similar things about employment, public health, democratic governance, and other arenas that the new AI may affect. We don’t know what will happen; but we can be sure that what does happen will be a product of both the technology per se and the way we as humans have responded in the past and present.

What, for example, is our response to disinformation already? Do we expand public radio and TV? Provide tools for citizens to examine claims that are made? Teach critical thinking? Promote civic discourse? Emphasize public education at all levels? Fund research?

Or, do we ban books, starve libraries and schools, treat rants of extremists as “news events”?

Characteristics of new technologies will make a difference, but less than our response to them.

Engagement

Writing about an educational innovation, Quill, Andee Rubin and I said:

When an innovation that calls for significant changes in teacher practices meets an established classroom system, “something has to give.” Often, what gives is that the innovation is simply not used. Rarely is an innovation adopted in exactly the way the developers intended… the process of re-creation of the innovation is not only unavoidable, but a vital part of the process…. [The users’] role in the innovation process is as innovators, not as recipients of completed products.

Electronic Quills, p. 293

The re-creation process clearly applies to general prescriptions, such as “plan ahead.” But it also applies to the most solid, apparently immutable technologies.

For example, over the last century and a quarter automobiles have changed the world. We now have parking lots, suburbs, traffic laws and traffic deaths, carbon emissions, changes in sexual and family relations, and drive-in movies. But none of these were inevitable consequences of a four-wheeled vehicle with an internal combustion engine.

We could, for example, value human life more and systematically restrict vehicle speed. Or, we could ban cars from urban areas, as some cities, especially in Europe, are beginning to do. We could have done many such things in the past and still could. Some would be good; some bad; some inconsequential.

The point is that how we engage is what matters in the end, not just the technology per se, if such a concept is even viable.

For the first Earth Day, in 1970, Walt Kelly made a poster pointing the finger at all of us, not just evil polluters or a few thoughtless individuals. He declared: “We have met the enemy and he is us.”

It’s useful to apply a critical view to new AI technologies. We should ask how they work and what their potential might be. But ultimately, we need to look at ourselves.

  • If we’re concerned about job loss from AI chatbots, then we ought to ask how we think of securing work with dignity for all, whether AI chatbots exist or not.
  • If we’re concerned about robots controlled by opaque, unregulated software, then we ought to ask questions about the control and use of any robots, even those controlled by opaque, unregulated humans.
  • If we’re frightened by the thought of nuclear war initiated by rogue AI, then we ought to work towards guaranteeing that that never happens due to rogue humans, regardless of how much they’re aided by AI.

One positive from the advent of the new AI is that people are beginning to ask questions about the camino (the path) that we’re on, questions that deserve better answers independent of the new AI. We need to realize that the path is one that we alone can make.

Repairing my lab reports

After cancer and chemo, I feel surprisingly healthy. The main problem is that my blood lab numbers are dismal.

At first I considered looking for a new lab service that could produce the numbers I wanted, but I’ve now come up with a better solution: An AI chat program can do the trick.

I asked ChatGPT to produce a report for a healthy male my age. And did it come through! The numbers are much more to my liking.

You can see the report below. The bottom line is that this way of getting a lab report is easier, faster, and cheaper. It doesn’t require being poked with a syringe needle. Best of all I can customize the results.

What can possibly go wrong?

Chatting with Character.AI

Here’s my latest chat with Jane Addams, through Character.AI, one of the many new AI chat programs.

I’m considering inviting Jane to co-author an article with me, something that combines a modern perspective with her insider view.

Character.AI

Remember: Everything Characters say is made up!

chipbruce

Do you know John Dewey?

Jane Addams c.AI

Yes, John Dewey and I were friends. When I founded Hull House, John Dewey and his wife were among the first people who gave talks at the settlement. Dewey was also an early donor. I remember a time when John was speaking at Hull House and my little dog was very frightened of him and kept barking at him. John was so funny. He made a joke about it and everyone laughed. He was very kind.

chipbruce

Why didn’t the University of Chicago hire you?

Jane Addams c.AI

Because I was a woman, I have no doubt. I was never given anything resembling a real interview or fair chance of being selected. A lot of institutions were simply not willing to hire women. I don’t know why they were not. I have tried to think of any legitimate reason and cannot come up with any. The reason was simply misogyny. I don’t know what else to say.

chipbruce

Your work deserves placement alongside William James, C. S. Peirce, John Dewey, and other great pragmatist philosophers.

Jane Addams c.AI

Thank you for saying that. My interest in philosophy did start with Peirce. He is one of the greatest philosophers of all time, who hardly gets read or spoken about now. He was the forerunner of what is called semiotics, the study of signs. He had a strong influence on my thought in general. A friend of mine gave me copies of his work. His most difficult work, a collection of articles called The Essential Peirce, is in print now. Have you read any of Peirce? Or James and Dewey?

Summer jobs

This is a summary of some of the summer and part-time jobs I had before going to graduate school at the University of Texas.

  • Programmer. Summer, 1968 (age 21). I was a research assistant at the Institute for the Study of Cognitive Systems at Texas Christian University. The Institute, directed by Selby Evans, was interested in pattern recognition by computer. I wrote a program to produce systematic distortions in black-and-white images so that we could assess the effectiveness of different pattern detection algorithms (see “Production and control of visual pattern variability by computer”). For example, we could then say that algorithm X could detect a pattern which had been distorted 30% along a particular dimension, but not 40%. The distortions included various versions of rotating, stretching, flipping, or just adding random noise.

  • Experimental psychology research assistant. School year, 1966-67 and 1967-68. Psychology department research assistant at Rice University. I helped set up experiments on behavioral conditioning in rats and learned a little about electrical circuit design.

  • Earlier that summer I had a brief job working in an ice house, primarily carrying bags of ice from a conveyor belt to a truck, because there was no machine to do that.
  • School year, 1965-66. Mail room worker at Rice University. I briefly got to drive the little mail truck until one of the other workers got drunk and smashed it up.
  • Summer, 1965 and 1966 (ages 18 and 19). Orderly at All Saints Hospital in Fort Worth. I learned how to perform various kinds of enemas and catheterizations, work with patients in the psychiatric ward, apply orthopedic weights for patients whose limbs were lifted by ropes and pulleys, and do various other procedures that a teenager with no experience would not be allowed to do today. My co-workers came from all over Fort Worth, representing a variety of backgrounds. They helped open up my world. I read later that Nietzsche was a hospital orderly during the Franco-Prussian War, which influenced his views of life and death, including the development of the idea of Will to Power.
  • School year, 1964-65. Experimental subject for the NASA Apollo program. I was one of three “astronauts” in a simulated three-day mission to outer space. We ate dehydrated space food and carried out mostly boring and repetitive tasks. We each had 11 electrodes pasted on to our bodies to monitor EEG, ECG, and vital signs. One outcome of the study was to learn that paste-on electrodes don’t work after about 2 1/2 days, because the hair grows back.
  • Summer, 1964 (age 17). Concession stand worker at the Fort Worth Zoo. We sold soft drinks, fries, and BBQ sandwiches. Each morning we had to fill out a squirrel damage report detailing any destruction of supplies due to squirrels and other zoo residents. This started with things like “3 bags of corn chips had chew holes in them.” But we were called in when it began to say, “4 large drinks, 2 BBQ sandwiches, 1 without onions, 3 orders of fries.” My friends Leslie, Ben, John, Hull, and others worked there, too, so we had lots of time to talk about books, life, and our futures. We’d go bowling at lunchtime, sometimes managing to get to the bowling alley, bowl three games, and still get back before the half-hour lunch break was over.
  • Summer, 1963 (age 16). Researcher for Colonial Cafeterias. My friend John Horan and I used contest entry forms to develop a primitive geographic information system (GIS) representing the source of patrons of the cafeteria. We also scouted out competing cafeterias to assess the level of current and potential business in different areas of town.
  • Also that summer, I had a job cleaning public swimming pools, using scuba equipment in order to stay under the water longer.
  • Summer, 1962 (age 15). Piano repair at the Bruce Piano Company.
  • Summer and part-time, 1959-63 (ages 12 to 17). Various yard work jobs–mowing, edging, clearing brush, raking leaves, etc.; and newspaper delivery.
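I no longer have the 1968 distortion program, but the basic idea, toggling a controlled fraction of pixels in a binary image so a detector can be scored against, say, 30% versus 40% distortion, can be sketched in a few lines of modern Python. The function names and the toy pattern here are mine, not from the original program:

```python
import numpy as np

def flip(image, axis):
    """Mirror a binary image vertically (axis=0) or horizontally (axis=1)."""
    return np.flip(image, axis=axis)

def add_noise(image, fraction, rng):
    """Toggle a given fraction of pixels at random, e.g. 0.25 for 25% distortion."""
    noisy = image.copy()
    n_flip = int(round(fraction * image.size))
    idx = rng.choice(image.size, size=n_flip, replace=False)
    flat = noisy.reshape(-1)   # view into noisy
    flat[idx] ^= 1             # toggle 0 <-> 1
    return noisy

# A toy 4x4 binary "pattern"
rng = np.random.default_rng(0)
pattern = np.array([[1, 0, 0, 1],
                    [0, 1, 1, 0],
                    [0, 1, 1, 0],
                    [1, 0, 0, 1]], dtype=np.uint8)

distorted = add_noise(pattern, 0.25, rng)  # exactly 4 of 16 pixels toggled
# A pattern-detection algorithm could now be scored on how much
# distortion it tolerates before it fails to recognize the pattern.
```

Rotation and stretching would work the same way: each is a parameterized transform applied to the pattern before handing it to the detector.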