How AI (Artificial Intelligence) Can Get Out of Control and Take Over the World

 


7 Tech Experts Reveal Their Biggest AI Fears




Could robots one day rule the world? Fears about what artificial intelligence might do to humanity have long been a theme in fiction, in books like "I, Robot", movies like "The Terminator" and series like "Westworld" and "Black Mirror". The prospect of an AI superior to humans raises many doubts and anxieties, from the possibility of losing jobs to the fear that machines will exterminate humanity.

And what do the people who work with AI every day have to tell us? What do the scientists, entrepreneurs and investors with access to the latest technology think? Remarkably, many of them share laypeople's fears. Even while acknowledging the countless benefits of the technology, which has drastically changed the way we live, think and work, they believe rules, processes and laws are needed so that AI works for us and not against us.



Below are seven personalities' opinions on the dangers of AI and how to keep it from destroying humanity, compiled by CBInsights.


Elon Musk: “It's Essential to Make Rules”




Few names in tech have spoken as much about the dangers of AI as Elon Musk, CEO of Tesla. His Twitter posts tend toward the alarmist, and in interviews he doesn't hide his fear of a technology he considers more dangerous than North Korea. “I think we need to be very careful with AI. If I had to say what the biggest threat to humanity is today, I would say it's probably this technology. I'm increasingly inclined to think there should be some kind of regulatory oversight, nationally and internationally, to ensure that we don't do anything stupid. With AI, we're summoning the demon without realizing that we have no dominion over it.”


Tim Cook: “Artificial intelligence must respect human values”



One of the less-discussed questions about AI among experts is whether it can learn moral and ethical concepts. For Tim Cook, great responsibility is required when dealing with systems that may or may not recognize human principles. “For artificial intelligence to really be intelligent, it must respect human values, including privacy. If we can't get it to do that, the dangers are enormous. It is our responsibility to ensure the highest privacy standards. In the pursuit of artificial intelligence, we should not sacrifice the humanity, creativity and ingenuity that define human intelligence.”


Satya Nadella: "We shouldn't pass our prejudices on to the AI"




Microsoft's CEO believes AI and machine learning will change every aspect of modern life. “Digital technology is being integrated into all areas of life: all things, all people, all journeys are being fundamentally shaped by it. It's amazing to imagine the world as if it were a computer; I think it's the perfect metaphor for us to use going forward.” But he also warns of the risk of building our prejudices into the technology. “Every choice we make about AI must be grounded in principles and ethics. Only then will we guarantee the future we seek.”


Steve Wozniak: "Technology Can Completely Replace Humans"




Apple's co-founder, like many Silicon Valley pioneers, tends to be especially wary of AI's potential. For him, the dangers surrounding humanity are many. “I agree with Stephen Hawking and Elon Musk: the future is scary and very bad for human beings. If we build these devices to take care of everything, eventually they'll think faster than we do. And then they'll get rid of humans completely so they can run companies more efficiently.”


Brian Chesky: “Automation with AI offers benefits for some, risks for others”




Airbnb's CEO says he is concerned about technology's effects on workers. “The concept of automation worries me. Many tasks will be automated. This will bring benefits to people, but also enormous costs. I worry about the day ‘Made in the United States’ becomes ‘Made by robots in the United States.’”


Reed Hastings: "I don't know if I'm going to entertain humans or robots"



The entertainment world has always been fascinated by automatons. Yet Reed Hastings, CEO of Netflix, fears that technology will change the way people have fun, and not for the better. “I don't know what will happen in 20 or 50 years. I don't know whether we'll even be able to talk about entertainment, or what kind of fun will be common. I don't know if I'll be making programs for humans or for robots.”


Stephen Hawking: “It may be impossible to control AI technology”




The renowned astrophysicist (1942–2018) and author of "A Brief History of Time" believed that, in the long run, it would no longer be possible to control AI, and that, given the chance, the technology would control humanity. “It is possible to imagine this technology outsmarting financial markets, out-inventing human researchers, manipulating leaders and developing weapons we are not even capable of understanding. In the short term, the impact depends on who controls the AI. In the long run, the question will be whether we can keep any kind of control over it at all.”
