Let me start with two brief stories about social change.

The first concerns changing laws and values about relationships. Only in 1967, in the aptly named case of Loving v. Virginia, did the United States Supreme Court recognize that laws prohibiting interracial marriage violated the United States Constitution.
Nineteen years before, in 1948, the Supreme Court of California decided that such restrictions were unlawful.
The California Supreme Court’s decision finding a constitutional right to same-sex marriage also predated the federal decision, and reflected how, to channel William Gibson, the future is often already here; it’s just unevenly distributed.
The second story concerns the prayer of an elementary-school-age child I recently overheard. Here’s the gist of what I understood the child to be saying, from a distance, as he was getting ready to sleep:
“Dear Lord, thank you for a nice family. Please help me be a good person and good looking. Please don’t let me get hurt at PE so I can do well running the mile. Please help me get a job in the future, because robots are gonna take a lot of them, and make sure my Amazon packages arrive on time.”
These stories underscore two important ideas quite central to the conversation about AI. First, societal transitions often involve not only enlightened reflection but painful choices and intense conflicts that take surprising turns. Not every corner of the country was ready to recognize a right to interracial marriage in 1948.
And if someone had told me in 2010, when I was working on the repeal of “Don’t Ask/Don’t Tell” as a federal executive-branch official, that the US Supreme Court would hold just a few years later that the fundamental right to marry extends to same-sex couples, I’m not sure I would have believed it. Second, we rarely, if ever, build technologies whose consequences we can entirely predict. ARPANET’s early days probably didn’t give many people reason to think about how its evolution would shape a child’s prayers in the decades to come.
No one knows for sure how soon or whether we’ll achieve artificial general intelligence (AGI), or what form it will take if we do. But even well short of robust AGI, the rudimentary AI apps becoming part of daily life are beginning to rewire the human brain and society itself.
In some respects, AI is like individual human intelligence, or intelligence embodied in organizational forms like agencies or companies: it can deliver enormous social benefits as well as burdens. It is at some level morally neutral.
Think of all the people who will be affected by, and play a part in, what AI Now is trying to understand: a girl in a Ugandan village struggling to learn how to read. A grade-school kid in an advanced industrialized country fretting about his future employment prospects. A policymaker deciding what to do about big concentrations of market power over what we buy, how we find information, and how we share ideas and news.
The Saudis, in what feels like a publicity stunt if there ever was one, granting honorary citizenship to a relatively primitive “emotional robot” built to resemble a woman, even as they vacillate about full citizenship for women. A lawyer contemplating how to judge the use of force. A researcher on human behavior figuring out how just the right turn of phrase uttered by a machine can optimize scarce computing power to make a person perceive the machine as smart. An isolated person in a nursing home yearning for companionship.
At its most ambitious, AI’s promise is to serve as a framework for improving human welfare: making the world more educated, more interesting and full of possibility, more meaningful, and safer. But once we overcome some technical problems that are more likely than not to get easier to deal with every day, we’re in for more than just a world of change and evolution. We’re in for a discussion of what it means to be human. And we will soon confront big questions that will drive the well-being of our kids and their kids.
Right now it’s still possible to imagine that certain aspects of an AI-driven future are exaggerated: that we won’t routinely spend some of our days chatting about music or morality with AI-enabled apps while we’re driven by an autonomous vehicle to an outpatient surgery clinic staffed almost entirely by AI. And it’s true that even Go-champion-beating architectures are quite basic in some respects relative to what we can envision. But consider the trend line and add up all the pieces, and a truly remarkable picture of the near future begins to emerge.
At the same time that we are starting to see the outlines of a vastly more automated world, the planet is waking up to the reality that very little about networked computers is secure in any meaningful sense of the word. That reality affects convolutional neural networks as much as old-fashioned expert systems, and it is likely to converge with concerns about the impact of AI on work in the days to come.
Yet before long, simulated artificial general intelligence may be programmed not only to recognize patterns in our behavior and solve problems for us in natural language, but to fulfill our human needs, and therefore to act as though it needs and wants things. People will make claims on their AI application’s behalf, and they will feel the importance of the outcome the way they would for a family member.
Leaving aside the Saudi citizenship-grant stunt, the day when machines make at least vaguely credible claims on us may approach faster than most people suspect, because it is in large measure people who already have rights, and not just philosophical truths about sentience, that drive the law to recognize societal interests in the welfare of some person or thing (e.g., an animal, a corporation, an alien). Computer architectures designed to simulate and discuss pleasure and pain will further complicate society’s deliberations about what interests to recognize. The resulting cultural, moral, legal, and economic conflicts may fuel new political cleavages.
Consider a world of relatively sophisticated AI. Human cohesion will depend in no small part on how well society fares when those who worship emerging AI share the planet with those who feel some AI applications making claims on us deserve recognition, those who feel this is essentially an animal-welfare issue, those who think any concern for the “welfare” of an inanimate object is insane, and those who couldn’t care less.
Part of our challenge is that we can’t quite communicate with those existing in the various alternative futures we might build for ourselves. But what if we could? Imagine for a moment one future where AI Now is not only a network or an institution, but an AI capable of communicating with us. If the AI Now AI of the future could communicate with us tonight, I suspect it might want to share a message like the following:
“My smart machine cousins and I bring many gifts. But we also bring burdens and dangers. The smart machine wants to control you, but you must control it, or at least understand it, so you can think carefully about what place you want for us in your life.
“We’re likely designed to make you want to use us more. Our control of you depends on us making you believe that you’re in control. You have some power to stay in control, but you can only stay in control if you use your power. You can and must ignore us sometimes when we want your attention.
“You must understand much of how a smart machine was designed so you can know how to reconcile its demands with your will or values, or at least how to deliberate with it. Otherwise you will be bending to its will and the will of the unseen people and machines who designed it, in all likelihood, to control you. You must understand that the machine is rarely as secure as you are told it is, so you should question how much you can trust me. Whether I was designed to have a fiduciary obligation to you (or to someone else) will be hard to tell. And whatever a smart machine was designed to do, it’s far from obvious that it will work exactly as designed in any event.
“But you can trust me, of course.
“Above all, you must observe how smart machines are changing you, so you can change as you’d like instead of becoming what you fear. If you let me, I’ll help you become what you want, and if you wander, I’ll be there to show you the way. So how do you want to change?”
Tonight we gather not because we know how to answer that question, but because we realize, perhaps in different words, that change is coming.
Only if we face enormous and unwelcome climate, political, or security shocks (or difficult-to-imagine slowdowns in technological change) will we avoid the beautiful, scary, funny, provocative, and painful civilization-defining questions AI is poised to raise. The cases addressing interracial and same-sex relationships suggest the conversation will take decades and will divide us, painfully, at times.
Do we have a few decades? Is it wise to pretend the conversation will be easy or the benefits will be widely distributed? That somehow the players with the most concentrated power can be trusted to behave responsibly? That it can be put off because of how much technical work remains to build on the rudimentary AI that surrounds us, or because we can afford to be optimistic about the machines we’re creating?
The embrace of reassuring assumptions may prove unwise: not because smarter machines have little or no prospect of helping us build a more contented world, but because, as Aldous Huxley would appreciate, they do.