Tesla Motors Inc CEO: Artificial Intelligence Is Like Demons

Tesla Motors Inc (NASDAQ:TSLA) CEO Elon Musk has singled out artificial intelligence as the biggest threat to humanity. On Friday, Musk, who is also the founder of SpaceX, spoke at the MIT Aeronautics department’s Centennial Symposium for about an hour and suggested international oversight to “make sure we don’t do something very foolish.”

Tesla CEO warns again

Tesla’s CEO did not point to any specific threat, but he made his case forcefully. He said that with artificial intelligence, human beings are summoning a demon, adding, “In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

Artificial intelligence uses computers to complete tasks that would otherwise require human intelligence, such as speech recognition or language translation. Major tech companies are excited about artificial intelligence and are betting on the usefulness of the technology when harnessed correctly. Companies such as Google Inc (NASDAQ:GOOGL) (NASDAQ:GOOG) and Facebook Inc (NASDAQ:FB) are keen on developing systems that work like the human brain.
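
To make that concrete, here is a minimal, illustrative sketch of one such task, machine translation. The use of the open-source Hugging Face transformers library and its default English-to-French pipeline is an assumption for illustration, not something described in the article.

```python
# Minimal sketch of a machine-translation task (illustrative assumption:
# the Hugging Face "transformers" library and a PyTorch backend are installed).
from transformers import pipeline

# Load a pretrained English-to-French translation pipeline;
# the default model is downloaded on first use.
translator = pipeline("translation_en_to_fr")

# Translate a short English sentence into French.
result = translator("Artificial intelligence can translate language.")
print(result[0]["translation_text"])
```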

In August, Tesla’s CEO tweeted: “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.”

According to a report from The Washington Post, Tesla’s CEO was so engrossed in the subject of artificial intelligence that he missed a question from someone in the audience and had to ask for it to be repeated.

Big tech firms developing AI

Back in January, Google acquired British startup DeepMind, an artificial intelligence company founded by a 37-year-old former chess prodigy and computer game designer. Google’s London office confirmed that the deal had been concluded but denied paying $500 million. The startup was founded by researcher Demis Hassabis, together with Shane Legg and Mustafa Suleyman. Hassabis has done research on the processes that underlie human memory.

Dr. Stuart Armstrong of the Future of Humanity Institute at Oxford University has already cautioned that the increasing use of artificial intelligence will result in massive unemployment as machines replace manpower. Armstrong has also talked about the effects of uncontrolled mass surveillance if computers learn to recognize human faces. Musk’s warning cannot be ignored, considering his position in the tech industry.

COMMENTS

  1. There are too many things we still don’t understand, like exactly how the brain works, and yet we are talking about AI. Have you wondered why there are so many issues in the world? We all want simple solutions, but we generally lack understanding while searching for the simplest answers. We keep making everything more complicated while also trying to do it as fast and as cost-effectively as possible. We should step back, understand the bigger picture, and come back with simple solutions that deal with the root cause. I would like to hear from the AI community their deepest understanding of the relationship between human intelligence and machines, when we don’t even fully understand ourselves!

  2. Musk is probably right. Companies are all excited about AI because it is another means of saving on labor costs in the future and making businesses more efficient through automation. A similar debate arose a few decades ago when we were looking at the future of computers. People said computers would create unemployment, and in a way they did, although they also created employment in other ways. However, when we look at the big picture, did computers create more jobs than they took away? I tend to think computers created more unemployment in the long run. The same goes for AI: in the long term it will eliminate more jobs than it creates, and this will deeply affect our economy and our quality of life. Technology is advancing at a rapid pace, but we are still working under an old economic ideology that no longer meets the needs and expectations of the majority of the world’s population. Instead, the current economic system meets the needs of only a small group of people, and this group is becoming even smaller as time goes by. So unless we modernize our economic system to improve everyone’s lives while keeping up with the changes in technology, I would agree that massive unemployment and a breakdown in world society would be our future.

  3. After WW2, Karl Barth (the theologian) said that the two greatest threats to humanity were nuclear bombs and massive unemployment. Of the two, he said, the greater danger was massive unemployment.

    Why do you think this might be? [Hitler?]
    Human behavior becomes very chaotic as people become very desperate.
    When that happens, everything that holds society and civilization together breaks down.

    If AI could factor human happiness down to the individual level, there might be a chance.

    Ancient Egyptians had a very stable culture founded on the principle of “Hotep”
    one word that combines two goals — Food & Peace.

    Unfortunately, we are too ingrained with the notion of money,
    that possession of it can buy happiness (for whom?).
    Money is a means of managing motivation. [Not the only one.]
    Motivation to do what?

    AI is like a faster sports car. It can get you someplace very quickly
    but do we really know where we want to go?

    Without AI, we are forced to draw feedback from all participants in society.
    With AI, the computer will reign and we will ultimately draw from none.
    What are we measuring to maintain control?
    Does a computer understand happiness, dignity, self-worth?
    Perhaps it might see only itself as worthy of those benefits.
    Then what?