As a computer scientist, I don’t like the term “A.I.” In fact, I think it’s misleading—maybe even a little dangerous. Everybody’s already using the term, and it might seem a little late in the day to be arguing about it. But we’re at the beginning of a new technological era—and the easiest way to mismanage a technology is to misunderstand it.

The term “artificial intelligence” has a long history—it was coined in the nineteen-fifties, in the early days of computers. In the decades since, computer scientists have grown up on movies like “The Terminator” and “The Matrix,” and on characters like Commander Data, from “Star Trek: The Next Generation.” These cultural touchstones have become an almost religious mythology in tech culture. It’s only natural that computer scientists long to create A.I. and realize a long-held dream.

What’s striking, though, is that many of the people who are pursuing the A.I. dream also worry that it might mean doomsday for mankind. It is widely stated, even by scientists at the very center of today’s efforts, that what A.I. researchers are doing could result in the annihilation of our species, or at least in great harm to humanity, and soon. In a recent poll, half of A.I. scientists agreed that there was at least a ten-per-cent chance that the human race would be destroyed by A.I. Even my colleague and friend Sam Altman, who runs OpenAI, has made similar comments. Step into any Silicon Valley coffee shop and you can hear the same debate unfold: one person says that the new code is just code and that people are in charge, but another argues that anyone with this opinion just doesn’t get how profound the new tech is. The arguments aren’t entirely rational: when I ask my most fearful scientist friends to spell out how an A.I. apocalypse might happen, they often seize up from the paralysis that overtakes someone trying to conceive of infinity. They say things like “Accelerating progress will fly past us and we will not be able to conceive of what is happening.”

I don’t agree with this way of talking. Many of my friends and colleagues are deeply impressed by their experiences with the latest big models, like GPT-4, and are practically holding vigils to await the appearance of a deeper intelligence. My position is not that they are wrong but that we can’t be sure; we retain the option of classifying the software in different ways.

The most pragmatic position is to think of A.I. as a tool, not a creature. My attitude doesn’t eliminate the possibility of peril: however we think about it, we can still design and operate our new tech badly, in ways that can hurt us or even lead to our extinction. Mythologizing the technology only makes it more likely that we’ll fail to operate it well—and this kind of thinking limits our imaginations, tying them to yesterday’s dreams. We can work better under the assumption that there is no such thing as A.I. The sooner we understand this, the sooner we’ll start managing our new technology intelligently.

If the new tech isn’t true artificial intelligence, then what is it? In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration.

A program like OpenAI’s GPT-4, which can write sentences to order, is something like a version of Wikipedia that includes much more data, mashed together using statistics. Programs that create images to order are something like a version of online image search, but with a system for combining the pictures. In both cases, it’s people who have written the text and furnished the images. The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating—but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind.

As far as I can tell, my view flatters the technology. After all, what is civilization but social collaboration? Seeing A.I. as a way of working together, rather than as a technology for creating independent, intelligent beings, may make it less mysterious—less like HAL 9000 or Commander Data. But that’s good, because mystery only makes mismanagement more likely.

It’s easy to attribute intelligence to the new systems; they have a flexibility and unpredictability that we don’t usually associate with computer technology. But this flexibility arises from simple mathematics. A large language model like GPT-4 contains a cumulative record of how particular words coincide in the vast amounts of text that the program has processed. This gargantuan tabulation causes the system to intrinsically approximate many grammar patterns, along with aspects of what might be called authorial style. When you enter a query consisting of certain words in a certain order, your entry is correlated with what’s in the model; the results can come out a little differently each time, because of the complexity of correlating billions of entries.
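To make the idea concrete, here is a deliberately crude sketch of that kind of tabulation: a toy bigram model, nothing like the architecture of GPT-4, with an invented six-word corpus standing in for the vast amounts of text a real model processes. It records which words follow which, then samples from those counts, so different random seeds yield different but statistically similar strings.

```python
import random
from collections import defaultdict

# A toy corpus standing in for the vast amounts of text a real model processes.
corpus = "the cat sat on the mat and the cat saw the dog on the mat".split()

# Tabulate how often each word is followed by each other word (bigram counts).
# A real large language model keeps a vastly richer record of co-occurrence.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(word, rng):
    """Pick a following word with probability proportional to its count."""
    followers = counts[word]
    words = list(followers)
    weights = [followers[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, length, seed):
    """Chain sampled words together; the seed controls the randomness."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        out.append(sample_next(out[-1], rng))
    return " ".join(out)

# Different seeds produce different continuations of the same prompt.
print(generate("the", 5, seed=1))
print(generate("the", 5, seed=2))
```

The point of the sketch is the shape of the mechanism, not its scale: the output varies from run to run because sampling from the tabulated counts is probabilistic, which is the small-scale analogue of why a model's results "can come out a little differently each time."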

The non-repeating nature of this process can make it feel lively. And there’s a sense in which it can make the new systems more human-centered. When you synthesize a new image with an A.I. tool, you may get a bunch of similar options and then have to choose from them; if you’re a student who uses an L.L.M. to cheat on an essay assignment, you might read options generated by the model and select one. A little human choice is demanded by a technology that is non-repeating.

Many of the uses of A.I. that I like rest on advantages we gain when computers get less rigid. Digital stuff as we have known it has a brittle quality that forces people to conform to it, rather than assess it. We’ve all endured the agony of watching some poor soul at a doctor’s office struggle to do the expected thing on a front-desk screen. The face contorts; humanity is undermined. The need to conform to digital designs has created an ambient expectation of human subservience. A positive spin on A.I. is that it might spell the end of this torture, if we use it well. We can now imagine a Web site that reformulates itself on the fly for someone who is color-blind, say, or a site that tailors itself to someone’s particular cognitive abilities and styles. A humanist like me wants people to have more control, rather than be overly influenced or guided by technology. Flexibility may give us back some agency.

Still, despite these possible upsides, it’s more than reasonable to worry that the new technology will push us around in ways we don’t like or understand. Recently, some friends of mine circulated a petition asking for a pause on the most ambitious A.I. development. The idea was that we’d work on policy during the pause. The petition was signed by some in our community but not others. I found the notion too hazy—what level of progress would mean that the pause could end? Every week, I receive new but always vague mission statements from organizations seeking to initiate processes to set A.I. policy.

These efforts are well intentioned, but they seem hopeless to me. For years, I worked on the E.U.’s privacy policies, and I came to realize that we don’t know what privacy is. It’s a term we use every day, and it can make sense in context, but we can’t nail it down well enough to generalize. The closest we have come to a definition of privacy is probably “the right to be left alone,” but that seems quaint in an age when we are constantly dependent on digital services. In the context of A.I., “the right to not be manipulated by computation” seems almost correct, but doesn’t quite say everything we’d like it to.

A.I.-policy conversations are dominated by terms like “alignment” (is what an A.I. “wants” aligned with what humans want?), “safety” (can we foresee guardrails that will foil a bad A.I.?), and “fairness” (can we forestall all the ways a program might treat certain people with disfavor?). The community has certainly accomplished much good by pursuing these ideas, but that hasn’t quelled our fears. We end up motivating people to try to circumvent the vague protections we set up. Even though the protections do help, the whole thing becomes a game—like trying to outwit a sneaky genie. The result is that the A.I.-research community communicates the warning that their creations might still kill all of humanity soon, while proposing ever more urgent, but turgid, deliberative processes.

Recently, I tried an informal experiment, calling colleagues and asking them if there’s anything specific on which we can all agree. I’ve found that there is a foundation of agreement. We all seem to agree that deepfakes—false but real-seeming images, videos, and so on—should be labelled as such by the programs that create them. Communications coming from artificial people, and automated interactions that are designed to manipulate the thinking or actions of a human being, should be labelled as well. We also agree that these labels should come with actions that can be taken. People should be able to understand what they’re seeing, and should have reasonable choices in return.
