For decades, humans have fantasized about the idea of an artificial intelligence (AI) takeover. Science fiction movies like “2001: A Space Odyssey,” “The Terminator” and “I, Robot” enthralled viewers for years, mostly because they teased the chance of an AI-driven, dystopian future.
But today, that dystopian future may be too close for comfort.
Before his death, Stephen Hawking famously warned: “Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever increasing rate.” He even suggested it could spell the end of the human race.
Since then, many AI experts have continued to preach such warnings. Elon Musk, the CEO of Tesla, SpaceX and Neuralink, has even gone as far as to say that AI is more dangerous than nuclear weapons.
But many others dispute such views. John Giannandrea, Apple’s senior vice president of Machine Learning and AI Strategy, has described such doomsday rhetoric as fear-mongering. And Facebook CEO Mark Zuckerberg has called it “pretty irresponsible.”
Where AI currently stands
Before diving into the likelihood of an AI apocalypse and the methods to prevent one, it’s important to differentiate the types of AI.
What the public is most familiar with is called “artificial narrow intelligence” (ANI). This is the technology that steers self-driving cars and powers voice assistants like Apple’s Siri and Amazon’s Alexa. ANI is powerful, but it works only in a very limited context.
“I am not really all that worried about the short-term stuff,” Musk said at a 2018 South by Southwest (SXSW) tech conference. “Narrow AI is not a species-level risk. It will result in dislocation, in lost jobs, and better weaponry and that kind of thing, but it is not a fundamental species level risk.”
Instead, Musk and others are most worried about the development of “artificial general intelligence” (AGI). Presumably, if AGI is fully achieved, machines will be capable of understanding or learning any task that a human can.
Already, AI can detect some cancers more accurately than human doctors, beat the world’s best chess and Go players, build its own algorithms and analyze pictures on social media sites to detect people’s expressions and clothing, among other things.
However, these are all examples of AI matching specific human abilities. Whether and when full AGI becomes a reality remains uncertain.
In a phone interview, Nisheeth K. Vishnoi, a computer science professor at Yale University, said that he doesn’t believe we will achieve AGI. To Vishnoi, AGI remains speculation without supporting evidence.
Yet nations will likely try. At just over 60 years old, AI is still a very young field, and its rapid advancement is both unpredictable and seemingly inevitable.
The race to AI dominance
Nations are now racing toward AI dominance, fueled by the desire to become the global economic and military leader.
In 2017, Russian President Vladimir Putin said in a conversation with students: “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”
Currently, the United States and China — not Russia — are leading that race, and neither country is showing any sign of slowing down. In the summer of 2017, China pledged to become the world’s primary AI innovation center by 2030. And earlier this year, President Donald Trump issued an executive order to maintain American leadership in AI.
Already, in July, researchers from China’s Tsinghua University created a new kind of computer chip that they believe will “stimulate AGI development.” Yet, it is just the first of many steps.
The future of AI
Up to this point, AI has mostly benefited society. It has made medical diagnoses quicker and more accurate, reduced the number of tedious tasks workers have to do each day and assisted in disaster relief and preparation.
Most of the negative impacts of AI as it currently stands, as Musk said, are not species-level risks. Job losses, data privacy issues and political campaign influence, among other things, are worthy of serious concern, but they do not necessarily spell the end of humanity.
What’s most concerning, however, is that AI advancement will only speed up, and society has little idea where it is headed.
“The rate of change of technology is incredibly fast,” Musk said at the World Artificial Intelligence Conference (WAIC). “It’s outpacing our ability to understand it.”
Already, AI experts suggest that “lethal autonomous weapons” are around the corner. Such weapons could independently decide when and where to fire their guns and missiles. They are — quite literally — something out of a science fiction movie.
Musk and others worry that humans are quickly losing their grip on AI technology and giving it power, intelligence and abilities that humans might not be able to control or reclaim.
“I think, generally, people underestimate the capability of AI,” Musk said at WAIC. “They sort of think like it’s a smart human. But, it’s going to be much more than that. It will be much smarter than the smartest human.”
“The biggest mistake that I see artificial intelligence researchers making is assuming that they’re intelligent,” Musk continued. “They’re not, compared to AI. A lot of them can’t imagine something smarter than themselves, but AI will be vastly smarter.”
Although some still suggest that AI will never achieve or surpass human intelligence, most would agree that it’s better to be safe than sorry. Therefore, AI experts, universities, think tanks and others have developed suggestions for how to manage the development of AI.
Avoiding an AI apocalypse
One of the most intriguing ideas comes from none other than Musk. Through his Neuralink device, Musk takes an “if you can’t beat them, join them” approach to dealing with AI.
Neuralink is a brain-machine interface that would allow humans to connect to and control their smartphones, laptops and other devices directly with their brains, bypassing the comparatively slow input of typing and tapping.
“It’s like a tiny straw of information flow between your biological self and your digital self,” Musk said on the Joe Rogan podcast. “We need to make that tiny straw like a giant river, a huge, high-bandwidth interface.”
“It will enable anyone who wants to have superhuman cognition,” Musk added. “How much smarter are you with a phone or computer or without? Your phone is already an extension of you. You’re already a cyborg.”
A more digestible idea, shared by Musk and others, is the call for government regulatory oversight of AI development.
“It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely,” Musk said at the 2018 SXSW tech conference. “This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads by a lot and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane.”
Meanwhile, colleges and universities have mainly focused on teaching and developing a new generation of technologists and developers who weigh ethics and the public good as heavily as innovation. Their goal is not necessarily to limit the advancement of AI and other technologies, but to make sure they are created ethically and responsibly.
Virginia Tech is one of the schools leading this charge. Through its new Tech for Humanity initiative, technologists in the College of Engineering will be taught to take a “human-centered approach” to developing and advancing technology, and students in the humanities will learn more about technology policy and AI ethics, among other things.
“Innovation is increasing the gap between what is technologically possible and what may or may not be in the best interest of human society and a viable future. We need human-centered approaches to guide the study, development, and use of technologies,” Sylvester Johnson, the founding director of the VT Center for Humanities, said in a statement.
Today, technology such as AI is inescapable. It has seeped into politics, law and all of the humanities. VT’s initiative aims to develop well-rounded professionals in both the humanities and technology fields who possess the intersectional thinking skills needed to make difficult decisions regarding the future of technology.
“Should we weaponize AI and continue to develop weapons in an attempt to have something close to human-level intelligence in a weapon system or machine system?” Johnson said. “That’s a very complicated question that needs to be studied, examined, debated and engaged by a range of people. And I think our universities have a very important role to play.”
VT isn’t the only school taking action.
Earlier, in March, the Public Interest Technology University Network, a coalition of 21 universities, was launched with the support of the Ford Foundation, New America and the Hewlett Foundation to pursue a similar goal. Together, the member schools are working to make sure public interest and social good are prioritized in technology and computer science to the same degree as they are in law and the humanities.
Conclusion
Currently, AGI, the prospect that so worries Musk and others, remains shrouded in uncertainty. Even if it is created, most experts expect that to be at least 50 to 100 years down the road.
Regardless, existing AI and other technologies have already raised a host of ethical questions, and one can only imagine what the future has in store. So, in preparation, universities, policymakers, activists and others have a responsibility to step in.