Henry Michalak
CSCL 3334 (We Robot)
6 May 2021
Elon Musk and His View on Artificial Intelligence
We’ve all heard snippets about Artificial Intelligence (AI) being the downfall of humanity. We tend to dismiss many of these warnings, labeling the people who voice them conspiracists, doomsday preppers, and alarmists, as we saw with Y2K. One man, however, has brought some sanity to the whole discussion about AI taking over: Elon Musk. Though known mostly for his prominent positions at Tesla and SpaceX, Elon is also a founding member of both OpenAI, a company devoted to artificial intelligence and machine learning, and Neuralink, a neurotechnology company developing implantable brain–machine interfaces. So, in an effort to learn whether Elon’s warnings about AI are overreactions or not, I listened to a number of podcasts he appeared on and gained a lot of insight as a result.
Elon’s first point, one he continually repeated, was that AI is under human control for now, but that won’t always be the case. He claims that as machines learn and make more of their own decisions, they will eventually no longer need human input. Elon did say this isn’t necessarily a bad thing, especially if appropriate measures are taken to regulate AI development. Ideally, he says, humans can live symbiotically with machines, and that will be achieved by merging humans with technology, making us cyborgs (Nolan). Essentially, if you can’t beat them, join them. Elon also claims this is already taking place, pointing to medical implants like pacemakers and to developments by Neuralink. He then makes a phenomenal point I hadn’t thought of: every search query Google receives, of which there are 3.8 million every minute, is processed by Google and used to develop and improve its AI. As he puts it, “Google and all the humans that connect to it are part of one giant cybernetic collective” (Crane). You can almost see this working in real time with Google’s suggested searches.
So what’s the danger? The danger is that AI has the potential for superintelligence. For example, AlphaGo, an AI developed to play and win at the game of Go, took 40 days to become the best in the world at a game humans have been playing for 4,000 years, beating all of the top players… simultaneously. The same company, DeepMind, then made AlphaZero, an AI that taught itself in a mere 36 hours to outperform AlphaGo, beating it 100 games to 0. DeepMind develops all kinds of AI and has administrator access to all of Google’s servers. It’s not a stretch of the imagination to think they could accidentally build a superintelligent AI that gains control of Google’s servers and becomes extremely powerful, possessing more knowledge than every human combined. We would then be left with an infinitely brilliant, powerful dictator that is immortal. So what has Elon been doing to ensure this won’t happen? First, while Obama was president, Elon met with him strictly to discuss the dangers of AI. He also says he organized a meeting with about 50 governors to warn them as well. Lastly, he claims to have pleaded with every major AI developer to put limitations on their AI and restrict it enough that it can’t escape human control (Rogan).
Clearly, harmful AI has migrated from Sci-Fi to the real world. Hopefully we can learn to heed Elon’s warnings and adjust our course so that we can live in harmony with AI.
Works Cited
Crane, Rachel, and Elon Musk. “Elon Musk Warns against Artificial Intelligence.” CNN Business, 26 Oct. 2014.
Nolan, Jonathan, and Elon Musk. SXSW Interview, 12 Mar. 2018.
Rogan, Joe, and Elon Musk. “Joe Rogan Experience #1609.” Audio blog post, 6 Sept. 2018. Accessed 6 May 2021.