First, we need to ask what the human values are that we hope to align AI with.
The Future of Life Institute has a noble mission: To catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.
The volunteer team is full of brilliant scientists and thinkers, and I admire the organization's efforts to shape the future of humanity by facilitating responsible policy and hosting events like the Asilomar conference to generate AI principles. In all fairness, the organization does have the word "optimistic" in its mission statement, and as a futurist who specializes in thinking imaginatively about unintended consequences, I am a pragmatist, not an optimist. Through that prism, I responded to those principles.
Through that same prism, I am now going to respond to Future of Life Institute founder Max Tegmark's short video, "Myths and Facts about Superintelligent AI," based on his book, Life 3.0.
Never in the history of humanity has there been one binary answer to this question. Every day, people suffer. Every day, lives improve. Life spans increase, but people are not immune to hazard. Overall, life has been improving for a large percentage of the earth's population, but millions, even billions, of people are still suffering, even though we currently have the technology and the ability to prevent much of that suffering. What we lack, and this is at the heart of my objection to Tegmark's optimism, is an alignment of human values.
The issue, Tegmark notes, isn't malevolence, but competence. If humans choose to build a structure on an ant hill, and we are more competent than the ants, then it doesn’t matter whether the destruction of their habitat was motivated by malevolence. I agree with this point, although I’m not sure that humans are more competent than ants. Certainly, the tasks we undertake are far more complex, but ants are extremely competent.
“By definition,” Tegmark says, “superhuman AI is very competent at attaining its goals, so the most important thing is making sure its goals are aligned with ours.”
Who is included in “our” goals? Every day, “our” goals grow more divided. Anyone who doubts this can simply look at the political climate in any number of countries, including the United States, and realize that there is very little agreement on human values and goals, to say the least.
The video has little drawings next to the human stick figure to symbolize love, happiness and what appears to be a planet not engulfed in flames, weapons or floods. I wish I believed that peace, love and a healthy planet truly were our human goals, because they should be. But are they? If you’re on the fence about this question, I highly recommend reading Sapiens and asking yourself some tough questions about what it means to be human.
"Better yet,” Tegmark adds, “we'll figure out how to ensure that AI adopts our goals rather than the other way around."
Will we?
If we get it right, says Tegmark, AI might be the best thing ever to happen to humanity. I agree. If. If we get it right. And I get that the entire point of the Future of Life Institute is to try to get it right, which is exactly why I would greatly prefer a pragmatic approach to an optimistic one when it comes to this issue.
“Everything we love about civilization is the product of intelligence,” Tegmark notes. As an optimist, he is not obligated to mention the many imperfections within civilization, including the biases and neuroses that often sit at the core of our technology, because it is invented by humans who haven’t yet mastered their own minds.
If AI manages to solve some of our biggest problems, Tegmark points out, humanity might flourish like never before. Our biggest obstacle to problem solving is usually that people are not on the same page. And that is a big problem if the entire premise of controlling the future of AI hinges on alignment between us and our inventions. On the other hand, if AI is able to autonomously work around us to improve life for us, then what is stopping it from continuing to make decisions for us, including paths forward that might not be in our best interest?
We need to make sure machines learn, adopt and retain the collective goals of humanity, says Tegmark. But again: What are those goals?
What do we do when those goals disagree?
Tegmark asks: Should we vote? Should we do whatever the president wants (at this point, you may have feelings when you see a little red top hat being drawn on the president stick figure while this question is being asked)? Should we do whatever the creator of the superintelligence wants, or let the AI decide? It is a question, Tegmark says, of what sort of future we want to create for humanity. This isn’t just a question, he says, for AI researchers. The Future of Life Institute has created a questionnaire that you can answer here.
I am a believer in mindfully and imaginatively designing the future of humanity to whatever extent we can, which is why I took the time to respond to Tegmark. I also believe that technology is on a forward march, and that no matter what we do, it will continue to progress. We will capture value for humans in the near term. The longer term, however, is a mystery, and there isn’t a person in this world right now who can accurately predict the reality of what will happen when intelligence evolves past our ability to control it. In the meantime, I share the opinion that we should try, and try hard, including figuring out what our most cherished human values and goals are, and should be.