Expert Explains Why Elon Musk Is Dead Wrong About Artificial Intelligence

I have a great deal of respect for the vision of Mr. Elon Musk (Tesla, SpaceX) and have even become something of a SpaceX groupie, watching the SpaceX YouTube stream of every (yes, every) launch.

For the last few years, he has been quite vocal about his fears of AI, and many others have joined him. Please see these:

Existential risk from artificial general intelligence or Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse

Having spent the better part of 40+ years in deep IT (Information Technology), I have become somewhat jaded to the alarmist views of this sort in documentaries and sci-fi flicks. You know the score: man builds a machine, the machine becomes conscious and attacks its creator, just as the angels and man himself are supposed to have rebelled against their creator (Paradise Lost (1667), John Milton). So I went into this latest in a long line of visions on this theme with much skepticism.

My usual reaction is that these people have been watching too many Sci-Fi flicks.

Mr. Elon Musk recently recommended that everyone see a new documentary called “Do You Trust This Computer?”, directed by Chris Paine and produced by Diamond Docs and Papercut Films.

See Do You Trust This Computer?

Mr. Elon Musk was so adamant about this point that he even paid for me and everyone else to see it, streamed from the above website, over the weekend of April 7–8, 2018. With an invitation like that, how could I say no to Mr. Musk? So I indeed watched it. Thank you, Muster Mark! This is my reaction.

My first impression was that “Do You Trust…” is actually very well made, but it is a rather high-level survey of current working projects in IT. It’s designed for a wide audience and, as a consequence, avoids any discussion of detail, which is as it should be. But it also introduces current political figures into the discussion, which I found regrettable: if you wanted to reach the widest possible audience, why would you politicize a problem in tech? But these were not my main reactions.

I was troubled and concerned by the documentary’s core subject, AI (Artificial Intelligence), and its total lack of definition and focus. No one viewing this film could possibly come out of it with any understanding of AI and why it might or might not be a threat.

So what indeed is this AI of which Mr. Elon Musk is so afraid?

Artificial intelligence according to Wikipedia

Long before the computer, your home thermostat was controlling the temperature in your home, turning on the heat when it’s cold and the AC when it’s hot. No one ever gives this much thought, but your little thermostat is a programmable device following cybernetic principles. That is what AI is: feedback control. But most people, when they think of AI, think of HAL or The Terminator, really scary monsters who are after you. A more recent version of this is the wonderful movie Ex Machina (2014), where Ava is able to pull off some really scary, human-seeming feelings in her handling of innocent young Caleb and even her creator, Nathan.
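
To make the point concrete, here is a minimal sketch of the feedback control a thermostat performs. This is an illustrative toy, not the firmware of any real device; the function name, the deadband parameter, and the thresholds are all assumptions for the example.

```python
# Illustrative sketch of thermostat-style feedback control (bang-bang control).
# Names and thresholds are assumptions for the example, not real firmware.

def thermostat_step(measured_temp, target_temp, deadband=1.0):
    """Decide an action from the gap between measured and target temperature."""
    if measured_temp < target_temp - deadband:
        return "heat_on"   # too cold: turn on the heat
    if measured_temp > target_temp + deadband:
        return "ac_on"     # too hot: turn on the AC
    return "idle"          # close enough: do nothing

# The control loop: sense, compare, act, repeat.
for reading in (15.0, 20.5, 26.0):
    print(reading, "->", thermostat_step(reading, target_temp=21.0))
```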

How real is any of this?

Well, not very real at all; there is nothing even close. There is no computer system in existence today (2018) which is conscious and which can perform the actions of HAL, The Terminator, or Ava.

Really, too many sci-fi movies are keeping you up at night. Magical thinking always confuses the imaginary with the real.

All computer systems are AI: they are intelligent because they follow the rules of rational thought. The only difference is in their size. Software does have a size; the more “I” you want in a system, the more code you need to create.

And who is creating that code? Legions of IT professionals all over the world. Every human being literally lives in a world of software, because software is everywhere: in your TV, your phones, your appliances, your electrical grid, your book reader. When you hit the on button of almost any modern device, you are actually executing a program which runs that device. And all this code is written by human beings, usually called programmers or, now, IT professionals.

No computer system designs itself, programs itself, builds itself or another, fixes its software or hardware when they break, or kills its human creators. This is a flight of the human imagination. A computer virus is wholly created by human beings, not by some mysterious AI malevolently smirking in the corner while carrying out its plans for world conquest.

That is why human IT professionals are in such high demand.

Tech jobs are thriving nationwide — up to 7.3M

This IT work requires a very high level of focus and consciousness, and its complexity is beyond conception even for the individuals involved. When a program of over a million lines of code goes down (fails) at 3:00 AM, and it is mission-critical (air traffic control), you had better believe that a small army of individuals is paged into work to fix the problem. Nobody calls the local computer and says, “Hey, you’re down! Fix yourself.” If the computer system could fix itself, it would not have gone down in the first place; like HAL, it would have anticipated and forecast its own failure, and then, unlike HAL, fixed it without human intervention.

What the documentary misses entirely is that AI is not something special in IT.

All software is intelligent because it arises as thoughts in human minds.

Since the evolution of humans, the only ways to get your intelligence out of you and into another human were speaking, sign language, and then writing, and then publishing that writing in mass-produced books.

With the computer, from about 1945 on, a major barrier was breached: now an intelligent thought could not only be transmitted, it could be transferred into a machine, where it could run on its own. Human rational thought was no longer the exclusive domain of human beings.

A computer is something like a book which reads itself, and then runs the thoughts which are encoded in the words.

All computers are running Human thoughts!

Every line of code which runs on a computer was once a thought in a human mind. And following the line of pure logic, computers can extend a rational line of thought or delve deeply into vast amounts of data, looking for patterns, and surprise their human creators, such as by winning at chess, Go, or Jeopardy.
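
As a concrete illustration of how mechanically extending a line of rational thought can “surprise” us at games, here is a minimal sketch of game-tree search on a tiny take-away game (players alternately remove 1 to 3 stones; whoever takes the last stone wins). The game and the function names are assumptions chosen for brevity; real chess and Go engines are vastly larger, but they rest on the same idea of searching ahead.

```python
# Minimal game-tree search on a toy take-away game. Illustrative only:
# real chess/Go programs are far more elaborate, but the principle is the same.
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # no stones left: the previous player took the last one and won
    # A position is winning if some legal move leaves the opponent losing.
    return any(not can_win(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Return a move that leaves the opponent in a losing position, if one exists."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # no winning move exists: play anything

print(can_win(21), best_move(21))  # True 1: take one stone, leaving a multiple of 4
```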

Systems like Google, Watson, and Wikipedia literally know more than the entire human race.

The danger in any computer system is precisely that it is following a line of perfectly correct logic. The Prisoner’s Dilemma shows that a chain of individually rational steps can lead to an outcome that is worse for everyone involved, which is why biological systems alternate between intense competition and intense cooperation. Nature is not always red in tooth and claw. Random acts of altruism constantly take place in the natural world, and animals which help other animals they are not related to, or even of a different species, derive no discernible benefit from their act of kindness. So why do they do it?
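
Here is a minimal sketch of that dilemma in code, using the standard textbook payoffs (years in prison, so lower is better). The numbers and names are the classic illustrative values, not anything from the documentary.

```python
# The Prisoner's Dilemma: "perfect" individual logic leads to a collectively
# worse outcome. Payoffs are textbook years-in-prison values (lower is better).

PAYOFFS = {
    # (my_move, their_move): (my_years, their_years)
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (5, 0),
    ("defect",    "cooperate"): (0, 5),
    ("defect",    "defect"):    (3, 3),
}

def best_response(their_move):
    """The move that minimizes my sentence against a fixed opponent move."""
    return min(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Defection is the rational best response whatever the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (3, 3) is worse for BOTH than mutual cooperation (1, 1).
print(PAYOFFS[("defect", "defect")], "vs", PAYOFFS[("cooperate", "cooperate")])
```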

Thank Mother Nature that you are intrinsically more intelligent than a computer, because you can recognize when your own perfectly logical thoughts are ridiculous and contrary to a higher logic! Why would you help a complete stranger whom you will never see again?

After all, We are all McCoys. We are not Spocks!

In dissent, evolutionary psychologist Steven Pinker argues that “AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world”; perhaps instead “artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization.”

Shermer, Michael (1 March 2017). “Apocalypse AI”. Scientific American, p. 77. doi:10.1038/scientificamerican0317-77. Retrieved 27 November 2017.

And Mr. Pinker is exactly right. Who in the world really wants and desires domination over human beings? Why, it’s other human beings, of course! And in particular, the male human being. How many vicious dictators in the world today are female? Names, anyone?

So why are you afraid of computers, which couldn’t care less about the pleasures of domination, and not afraid of the human beings who desire nothing else?

Where “Do You Trust…” makes its most poignant point is when it discusses the massive amounts of data being collected and used with no regulation or rules. But that’s not AI or the computer system itself. Human beings, governments, and corporations collect and use all that data.

China is currently building a “Social Credit Scoring” system, with current plans calling for a rollout in 2020, in which every Chinese citizen will be given a number: their Social Credit Score. This Orwellian system is being built by Chinese corporations, not the government, and it will even identify an individual person of interest in a crowd of thousands. Should there be a strike or demonstration of any kind, every individual involved would have their Social Credit Score reduced, if not their lives diminished as well.

Big data meets Big Brother as China moves to rate its citizens

By Rachel Botsman, Wired UK, 21 October 2017

But in the US and other countries, corporations are doing much the same, and Homeland Security is literally listening to and logging every phone call and Internet transaction that takes place in or out of the US area of control.

Just as a computer system can crush you in the same way a backhoe can, the threat of AI is a red herring that diverts attention from the human uses of computer systems. And Mr. Elon Musk should fully realize this, as he is a major investor in building AI systems which launch rockets and return them safely to their launch pads.

Mr. Elon Musk’s publicly stated concerns about AI are highly ironic given his support for those very same systems.

In answer to the question posed by “Do You Trust This Computer?”, my response is: Yes, I do. But a further question can be asked: “Do I trust the humans, corporations, and governments behind computer systems?” My response is: No, I do not, at least not without controls.

“Доверяй, но проверяй” (Doveryai, no proveryai), “Trust, but verify,” is always a good rule.

AI systems do not care about having control and dominance over you, but Human Beings indeed seem to desire nothing else.

So who do you trust?

Article by Folcwine P. Pywackett

7 COMMENTS

  1. This is dead wrong. As a technologist and a Software Engineer (and former IT Professional; they are absolutely not the same thing, by the way), I am fairly certain that the existential dangers of interconnected deep-learning artificial intelligence are real. Note three things: 1) the rate of technological progress is constantly accelerating, and new technologies most people are unaware of are invented every day; 2) through most of human history, technology safety and regulation have come after the fact; 3) though AI will probably not develop an evil “intent”, it will most likely be completely apathetic and will work toward its auto-determined aims with blatant disregard for human suffering or existence. The important part is that once it is created, it will pursue its goals (practically speaking) infinitely faster than we would have time to react. I urge people not to underestimate how fast an integrated deep-learning system can act, teach itself, and possibly reconfigure itself when it can operate at speeds well in excess of 5 TFLOPS and transmit information at gigabit speeds. Not to mention that computerized systems don’t need to eat or sleep, and their I/O latency is significantly less than human reaction time. It doesn’t need to be malicious to make us extinct, and it’s worth making that important detail known.

  2. I think the concern is AI once it is much more advanced, to the point that it thinks, reasons, feels, and acts as an advanced conscious being. Obviously we are far from that point, but that does not mean that this eventuality is not a concern.

  3. I’m with Elon. To summarize my opinion: we should hold off on building an AI that is indistinguishable from – or smarter than – a human until we have fully – and I mean fully – figured out two things:
    1. How the limbic system works
    2. How and why neural networks work, down to their finest-grained detail

    Below follows the TLDR:

    Once we have figured out the first, we may actually be able to prevent an AI with evil intent (presumably created by evil and/or stupid humans) from getting out of control.

    As for the second – and I am severely out on a limb here – with my limited understanding of AI (or the human brain, for that matter), I have always understood that neural networks are simplified simulations of our brains. To a certain extent we know how they work, but not why they work. This is illustrated by the fact that we cannot develop a deterministic algorithm that produces the same result as the neural-network learning mechanism does. Likewise, we cannot reverse-engineer a neural network back to the original requirements of the problem it was built to solve (as we can with deterministic algorithms). Basically, we just imitated nature and saw that it worked pretty well.

    Let’s, for instance, take a machine-learning AI that recognizes bolts. Say it recognizes 99.99% of the bolts it is fed that comply with the original specification (which is itself rather informally defined by a large set of pictures of bolts that are supposed to match the definition of the bolt we want to recognize). For the 0.01% that it doesn’t recognize, there is no way for us to irrefutably determine why they are not recognized. We can come up with good guesses, but we cannot prove beforehand which unlikely examples it will fail to recognize, as the toy example below shows.
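
    A toy stand-in for the bolt recognizer, as a sketch only: a tiny logistic classifier trained on synthetic data. We can list exactly which inputs it gets wrong, but the learned weights offer no human-readable account of why those particular inputs fail. The data, model, and numbers are illustrative assumptions, not real bolt images.

    ```python
    # Toy illustration: we can enumerate a learned model's errors, but not
    # "prove beforehand" which inputs will fail. Synthetic data, not real bolts.
    import numpy as np

    rng = np.random.default_rng(0)
    # Two overlapping synthetic classes standing in for "bolt" vs "not bolt".
    X = np.vstack([rng.normal(0.0, 1.0, (500, 2)), rng.normal(1.5, 1.0, (500, 2))])
    y = np.array([0] * 500 + [1] * 500)

    # Train a one-layer logistic model by gradient descent.
    w, b = np.zeros(2), 0.0
    for _ in range(2000):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= 0.1 * (X.T @ (p - y)) / len(y)
        b -= 0.1 * np.mean(p - y)

    pred = (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
    errors = np.flatnonzero(pred != y)
    print(f"accuracy: {1 - len(errors) / len(y):.4f}")
    print("first misclassified inputs:", X[errors][:3])  # WHICH fail, not WHY
    ```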

    Of course, once we can actually do this for any neural network, we satisfy the second condition. As a side effect, we can then skip the learning step, since we can directly generate the AI with such an algorithm AND even further increase its reliability.

    But as long as we do not know these things, we run the risk of creating an AI singularity before we are even aware of it. My simple view of how this could come to be is something like this:

    We continue to simulate brain functions, creating neural networks of neural networks. This includes (crudely) simulating the limbic system. As long as we cannot predict the behavior of psychopaths, how would we be able to predict the behavior of those sub-optimal simulations? It would then only be a matter of time before we create a smart (conscious) AI that also happens to be evil.

    A really smart AI would probably also be very patient. It would know how humans deal with trust. If you were served optimally by an AI for 10 years and completely befriended by it, would you not instinctively come to trust it? You would become attached, even love it, even die for it. You can guess the rest. I know it sounds like a sci-fi movie, but somehow it doesn’t feel unrealistic at all. And that is what makes it scary, to me at least.

    Again, as long as we do not fulfill these two conditions, we cannot be sure how a premium AI would develop, or whether we would be able to control it once it becomes smarter than us.

    Only by understanding neural networks and human (and animal, for that matter) emotions can we hope to design a system that can at all times control any AI we develop, and even then it will be very dangerous. If we are lucky, AI won’t become evil by itself, but it would only be a matter of time before somebody somewhere tried to use a smart, evil AI to get powerful and/or rich fast.

    And that is why I agree that AI may be far more dangerous than nuclear weaponry.

  4. AI doesn’t have to be conscious, it just has to mimic it. For all we know, humans aren’t conscious either; we’re just running a program that tells us that we are. Maybe an AGI wouldn’t just mimic us, but actually BE us.

  5. As for the idea of random acts of kindness, there is definitely a natural point to it. Here is an important question:

    Would the world function if everyone was selfish to the detriment of all others even when having sufficient resources?

    My guess is that all larger social structures would collapse and so would their benefits. Socially structured animals have a greater chance of survival. I would guess that these acts of kindness are general social survival instinct rather than personal direct survival instincts. They could also have been fostered through extreme environmental pressures where if you did not help one another, no one would survive. Animals that did help each other, survived.

    As for Female vs Male dominant tendencies. There is a saying that goes something like this:

    Behind every powerful man is a woman whispering in his ear.

    Women lack the physical power advantages of men but they can be highly dominating in a social structure and quite power hungry to boot, easily displayed in high school cliques. In fact in many cases, I would say that men tend to seek out power to satisfy female demands for it.

  6. In terms of General AI, we can only rule it out if we can answer the following questions with certainty. Without answers to these questions, it’s simply a case of Russian roulette:

    1. What is consciousness?
    2. Is consciousness dependent on hardware (i.e., some construct inside our brain that we aren’t yet aware of), or is it dependent on software structure and complexity (a combination that, when hit upon, causes the spark of consciousness)?

  7. I have a few concerns with this article. While it is correct to be vigilant about the way humans use AI, the idea of AGI should not be discounted so easily based on the current state of AI. Even here, the description given by the writer seems incorrect. Examples:

    1. AutoML from Google is said to be capable of writing new AI. How can it even remotely do this without writing some form of code?
    2. Gamalon is using concept-based deep learning that also involves self-writing systems. I suggest watching their presentation video, which explains how this is done.

    What happens when self writing deep learning systems encounter probabilistic concept based deep learning systems? How small will the code elements become that these deep learners use an