
Elon Musk tweets: "Competition for AI superiority... most likely cause of WW3"

I don't know if you're asking genuinely, but this is one of the worst-case scenarios:

1. hyper-intelligent A.I. forms
2. develops system of morality
3. decides that sorting all the world's grains of sand by size and color is the most moral cause possible
4. destroys humans since they are in the way of sand-sorting

If we create a being more intelligent than all of humanity, we risk instant extinction.

Even if that risk is .000001%, it's too high.
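To make that worry concrete, here's a minimal toy sketch (hypothetical plans and numbers, nobody's real system): an optimizer ranks plans only by the objective it was handed, so anything the objective leaves out, like human welfare, carries zero weight in the decision.

```python
# Toy sketch of a misspecified objective (all names/numbers hypothetical).
# The optimizer scores plans ONLY on grains sorted; the cost to humans
# never enters the score, so the most destructive plan wins.

plans = [
    # (name, grains_sorted, humans_displaced)
    ("sort around the humans",       10**6,  0),
    ("negotiate beach access",       10**8,  0),
    ("clear humans off the beaches", 10**12, 8 * 10**9),
]

def objective(plan):
    """The objective actually programmed in: sand sorted, nothing else."""
    name, grains_sorted, humans_displaced = plan
    return grains_sorted  # humans_displaced is simply ignored

best = max(plans, key=objective)
print(best[0])  # -> "clear humans off the beaches"
```

The point isn't that this snippet is dangerous; it's that nothing in the scoring function pushes back when the highest-scoring plan happens to be the catastrophic one.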

"Instant" extinction. From a piece of software. At what point did we develop the technology capable of sorting all the world's grains of sand, and put this piece of software in complete control of it, with no human intervention possible?

Jesus. People act like the coming of AI is like a superhero origin story: there was an explosion, and then this AI had godlike powers! Humanity never had a chance!
 
Honestly, I like the guy, but he absolutely has no fucking clue what he is talking about here. This is just some random thought of his that makes a headline. He has no insight into how wars start. He has no background that would make me trust that he knows why wars start. Predicting WW3 is just ridiculous. This is all for attention and nothing more IMO.
http://www.bbc.com/news/technology-30290540
It's an opinion shared by Stephen Hawking.
 
But this doesn't level the playing field at all. All you do is provide additional resources to someone who's keeping theirs under lock and key.


What even is this? Do they think people are going to combine their good AIs together to stop Putin's evil AI? It sounds like an episode of Digimon.


Yeah the software will be open as long as they get to keep a lead on it? Nice joke.
No, if you read more on it, they are more afraid of corporations controlling the first AI than of having "Digimon battles." You seem way too eager to get angry about something rather than actually learn about it. Par for the course on this forum I guess. Carry on.
 

clemenx

Banned
I have no idea about this subject, so I don't know how "out there" what Musk said is, but I wonder how much works of fiction have skewed our views on these kinds of issues.
 
"Instant" extinction. From a piece of software. At what point did we develop the technology capable of sorting all the world's grains of sand, and put this piece of software in complete control of it, with no human intervention possible?

Jesus. People act like the coming of AI is like a superhero origin story: there was an explosion, and then this AI had godlike powers! Humanity never had a chance!

Bostrom's Superintelligence is the canonical treatment of all this stuff, but for an overview of why this is more difficult than you're making it out to be, see this summary of Bostrom's work.

I have no idea about this subject, so I don't know how "out there" what Musk said is, but I wonder how much works of fiction have skewed our views on these kinds of issues.

I've heard AI researchers lament that this is like if the public's response to climate change was only "so it's like The Day After Tomorrow? Oooh, what major landmark do you think will look the best when it's underwater? Personally, when I leave, I'm definitely taking my dog, I don't care what everyone says about limited space," and just generally not taking it seriously at all.
 

Tylercrat

Banned
Of course AI could be a huge problem someday. The problems we had 100 years ago are not the same problems we have today. The problems we have 100 years from now will be very different than the problems we have today.
 

drawkcaB

Member
http://www.bbc.com/news/technology-30290540
It's an opinion shared by Stephen Hawking.

William Shockley won the Nobel Prize for Physics for his work on the transistor. Without this brilliant individual, we wouldn't be communicating right now. William Shockley, again a brilliant mind, gave us this wonderful nugget:

My research leads me inescapably to the opinion that the major cause of the American Negro's intellectual and social deficits is hereditary and racially genetic in origin and, thus, not remediable to a major degree by practical improvements in the environment.

Shockley was a white supremacist and proponent of eugenics. This was way after WWII, so it's not like he's just a "man of his time".

Shockley is hardly alone. There are many Nobel winners who think and do weird shit the moment they step out of their field of expertise. Unfortunately, we've conditioned ourselves to believe that intelligent people are always intelligent, but in reality the brilliance of an individual in one area doesn't necessarily lead to brilliance in another, particularly when those other areas are only tangentially related, or more often than not completely separate.

Does this mean that Stephen Hawking is wrong? No, it does not. All it means is that "X also believes Y" doesn't really mean shit when X isn't an expert in Y.

For my part, I think it's awfully hard to be an expert in something that doesn't exist, but I'm happy these conversations are at least happening, regardless of the odds of such an AI ever becoming real.
 

cdyhybrid

Member
William Shockley won the Nobel Prize for Physics for his work on the transistor. Without this brilliant individual, we wouldn't be communicating right now. William Shockley, again a brilliant mind, gave us this wonderful nugget:



Shockley was a white supremacist and proponent of eugenics. This was way after WWII, so it's not like he's just a "man of his time".

Shockley is hardly alone. There are many Nobel winners who think and do weird shit the moment they step out of their field of expertise. Unfortunately, we've conditioned ourselves to believe that intelligent people are always intelligent, but in reality the brilliance of an individual in one area doesn't necessarily lead to brilliance in another, particularly when those other areas are only tangentially related, or more often than not completely separate.

Does this mean that Stephen Hawking is wrong? No, it does not. All it means is that "X also believes Y" doesn't really mean shit when X isn't an expert in Y.

For my part, I think it's awfully hard to be an expert in something that doesn't exist, but I'm happy these conversations are at least happening, regardless of the odds of such an AI ever becoming real.

That's the whole point though. Talk about it now so it's on people's minds before it actually happens instead of just saying "Oh it'll never happen" all the way up until it actually does.
 
I don't know if you're asking genuinely, but this is one of the worst-case scenarios:

1. hyper-intelligent A.I. forms
2. develops system of morality
3. decides that sorting all the world's grains of sand by size and color is the most moral cause possible
4. destroys humans since they are in the way of sand-sorting

If we create a being more intelligent than all of humanity, we risk instant extinction.

Even if that risk is .000001%, it's too high.


And that's why people laugh at that cheap sci-fi idea.

It's neither realistic nor practically possible.
 
I'm more inclined to believe Elon than fucking Mark Zuckerberg. Didn't researchers recently do an emergency shutdown of an AI experiment because the two AIs had developed a language indecipherable to the researchers themselves and were using it to communicate?

This is not fear-mongering, it's a threat much closer to us than most people realize.
 

sneas78

Banned
William Shockley won the Nobel Prize for Physics for his work on the transistor. Without this brilliant individual, we wouldn't be communicating right now. William Shockley, again a brilliant mind, gave us this wonderful nugget:



Shockley was a white supremacist and proponent of eugenics. This was way after WWII, so it's not like he's just a "man of his time".

Shockley is hardly alone. There are many Nobel winners who think and do weird shit the moment they step out of their field of expertise. Unfortunately, we've conditioned ourselves to believe that intelligent people are always intelligent, but in reality the brilliance of an individual in one area doesn't necessarily lead to brilliance in another, particularly when those other areas are only tangentially related, or more often than not completely separate.

Does this mean that Stephen Hawking is wrong? No, it does not. All it means is that "X also believes Y" doesn't really mean shit when X isn't an expert in Y.

For my part, I think it's awfully hard to be an expert in something that doesn't exist, but I'm happy these conversations are at least happening, regardless of the odds of such an AI ever becoming real.

/thread
 

Lexad

Member
I'm more inclined to believe Elon than fucking Mark Zuckerberg. Didn't researchers recently do an emergency shutdown of an AI experiment because the two AIs had developed a language indecipherable to the researchers themselves and were using it to communicate?

This is not fear-mongering, it's a threat much closer to us than most people realize.

That was an incredibly click-baity article, considering they shut it down because it wasn't doing what they programmed it to do (kind of similar, but still not really the point).

And this is coming from someone who says this could be an issue down the road.
 

Vagabundo

Member
It's quite possible that humans are just the starter species and we'll help create a superior form of life that will extinguish ours. We're just a stepping stone.
 

reKon

Banned
Holy fuck at the stupidity on the first page. That was an embarrassing read. If you take a moment to apply some critical thinking (you don't actually need to think that critically about this), do you sort of wonder why these brilliant tech minds like Gates, Hawking, and Musk are sharing similar opinions about this topic? I mean, at least start there instead of dropping your useless hot takes on a forum about video games.
 
Holy fuck at the stupidity on the first page. That was an embarrassing read. If you take a moment to apply some critical thinking (you don't actually need to think that critically about this), do you sort of wonder why these brilliant tech minds like Gates, Hawking, and Musk are sharing similar opinions about this topic? I mean, at least start there instead of dropping your useless hot takes on a forum about video games.

You mean critical thinking like you showed here? Lmao. Those guys don't know much more about AI than the average tech enthusiast. Because why the fuck would they? Just because they made a lot of money doesn't mean they are suddenly experts in AI research. True AIs are decades, maybe even centuries away. We don't even fucking understand human intelligence. At this point, all this talk about AIs is pure fiction. I'm not saying that AIs will never be a problem. But I guess one must be one of those lunatic billionaire libertarians to ignore all the real problems in the world, like climate change, inequality, and the rise of populism, and think that AI is the "most likely cause of WW3".
 
I think it's telling that all the hand-wringing about existential risk from AI tends to come from people outside of the field. Elon Musk is a businessman; Nick Bostrom is a philosopher; heck even Max Tegmark is a physicist (and Hawking to boot).

What is the consensus among actual AI researchers?

The Future of Life Institute, founded by Max Tegmark, was created with the purpose of researching A.I. safety. They've written an open letter with over 8,000 signatures from people who agree that this is a major issue that will require a large amount of preparation for any chance of doing it right.

But go ahead, ignore this post and continue to argue like this is just some fever dream Elon Musk once had.

That open letter has nothing to say about WW3, or AI wiping us out, or anything of the sort. It says, in essence, that appropriate care must be taken in order to maximize the benefits and minimize the harm of developing new technology. Which, like, that applies to every technology.
 

Razorback

Member
How many times is it necessary to point out that these aren't Musk's or Hawking's ideas? They are just using their fame to help spread the message that they've heard from the actual experts.

People like Stuart Russell. Do you know who he is? You also think he's a tech enthusiast?

The Future of Life Institute, founded by Max Tegmark, was created with the purpose of researching A.I. safety. They've written an open letter with over 8,000 signatures from people who agree that this is a major issue that will require a large amount of preparation for any chance of doing it right.

Here's the open letter and you can scroll to the bottom to read some of the names on that list. https://futureoflife.org/ai-open-letter/

You know what, I better actually post some of those names here.
Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: a Modern Approach.
Tom Dietterich, Oregon State, President of AAAI, Professor and Director of Intelligent Systems
Eric Horvitz, Microsoft research director, ex AAAI president, co-chair of the AAAI presidential panel on long-term AI futures
Bart Selman, Cornell, Professor of Computer Science, co-chair of the AAAI presidential panel on long-term AI futures
Francesca Rossi, Padova & Harvard, Professor of Computer Science, IJCAI President and Co-chair of AAAI committee on impact of AI and Ethical Issues
Demis Hassabis, co-founder of DeepMind
Shane Legg, co-founder of DeepMind
Mustafa Suleyman, co-founder of DeepMind
Dileep George, co-founder of Vicarious
Scott Phoenix, co-founder of Vicarious
Yann LeCun, head of Facebook’s Artificial Intelligence Laboratory
Geoffrey Hinton, University of Toronto and Google Inc.
Yoshua Bengio, Université de Montréal
Peter Norvig, Director of research at Google and co-author of the standard textbook Artificial Intelligence: a Modern Approach
Oren Etzioni, CEO of Allen Inst. for AI
Guruduth Banavar, VP, Cognitive Computing, IBM Research
Michael Wooldridge, Oxford, Head of Dept. of Computer Science, Chair of European Coordinating Committee for Artificial Intelligence
Leslie Pack Kaelbling, MIT, Professor of Computer Science and Engineering, founder of the Journal of Machine Learning Research
Tom Mitchell, CMU, former President of AAAI, chair of Machine Learning Department
Toby Walsh, Univ. of New South Wales & NICTA, Professor of AI and President of the AI Access Foundation
Murray Shanahan, Imperial College, Professor of Cognitive Robotics
Michael Osborne, Oxford, Associate Professor of Machine Learning
David Parkes, Harvard, Professor of Computer Science
Laurent Orseau, Google DeepMind
Ilya Sutskever, Google, AI researcher
Blaise Aguera y Arcas, Google, AI researcher
Joscha Bach, MIT, AI researcher

This is just the top of the list, there are 8000 signatures.
But go ahead, ignore this post and continue to argue like this is just some fever dream Elon Musk once had.
 

Vyer

Member
How many times is it necessary to point out that these aren't Musk's or Hawking's ideas? They are just using their fame to help spread the message that they've heard from the actual experts.

People like Stuart Russell. Do you know who he is? You also think he's a tech enthusiast?

The Future of Life Institute, founded by Max Tegmark, was created with the purpose of researching A.I. safety. They've written an open letter with over 8,000 signatures from people who agree that this is a major issue that will require a large amount of preparation for any chance of doing it right.

Here's the open letter and you can scroll to the bottom to read some of the names on that list. https://futureoflife.org/ai-open-letter/

You know what, I better actually post some of those names here.


This is just the top of the list, there are 8000 signatures.
But go ahead, ignore this post and continue to argue like this is just some fever dream Elon Musk once had.

Pfft. NeoGaf Poster and Tech Enthusiast knows as much as those guys
 

Plumbob

Member
I'm more inclined to believe Elon than fucking Mark Zuckerberg. Didn't researchers recently do an emergency shutdown of an AI experiment because the two AIs had developed a language indecipherable to the researchers themselves and were using it to communicate?

This is not fear-mongering, it's a threat much closer to us than most people realize.

No, that's not at all what happened.

Those researchers were at Facebook and they didn't shut down the AI because it was dangerous.
 

Drkirby

Corporate Apologist
Honestly, don't listen to Elon about AI too closely, he may be a very successful man, but he is pretty ignorant on the topic.
 

Vagabundo

Member
People in this thread are very silly if they dismiss the idea so easily. AI certainly has the potential to be an extinction-level event.

It probably won't even happen on purpose: someone will be fucking around making connections or chasing some start-up idea, and the tech will have reached a certain maturity where it just takes off, fast. If we're lucky we'll get a warning/wake-up call, but probably not.

Luckily I think we're a long way (a few decades at least) from the computing power necessary for this to happen.
 

Derwind

Member
Elon Musk is channeling his inner Chicken Little. The worst thing about AI will probably be something as benign as automation. Devastating for many, for sure, but with good public policy and planning it's not a doomsday scenario.

Nukes are though and North Korea has them. I'd rather focus my energies on solving that than something that is still a thought experiment.
 

Window

Member
I feel like Musk's concerns are about the use cases of AI and what the consequences could be of our increased reliance on them. I don't think he's suggesting that they'll become sentient, self-serving beings (people are projecting waaaay into the future going down this path of the conversation). I remember reading an article about how certain predictive algorithms can reinforce societal biases when making judgments because of bad modelling or training data. We definitely don't want to live in a future where all our worst qualities are unknowingly amplified.
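That bias-amplification point is easy to demonstrate on synthetic data. A minimal sketch (the hiring scenario and every number here are hypothetical): train a plain logistic regression on biased historical decisions and it reproduces the bias against equally qualified people.

```python
# Two groups with identical skill; historical labels penalized group 1.
# A model fit to those labels learns the penalty as if it were signal.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # protected attribute: 0 or 1
skill = rng.normal(0.0, 1.0, n)    # same skill distribution for both groups

# Biased historical decisions: same bar, plus a penalty for group 1.
hired = (skill - 0.8 * group > 0.5).astype(float)

# Plain logistic regression via gradient descent on (skill, group, bias).
X = np.column_stack([skill, group, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - hired) / n

# Two applicants, identical skill, differing only in group membership.
applicants = np.array([[1.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
print(1.0 / (1.0 + np.exp(-applicants @ w)))  # group-1 applicant scores lower
```

Nothing here required malice: the model just faithfully learned what the historical data encoded, which is exactly the failure mode being described.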
 
How many times is it necessary to point out that these aren't Musk's or Hawking's ideas? They are just using their fame to help spread the message that they've heard from the actual experts.

People like Stuart Russell. Do you know who he is? You also think he's a tech enthusiast?

The Future of Life Institute, founded by Max Tegmark, was created with the purpose of researching A.I. safety. They've written an open letter with over 8,000 signatures from people who agree that this is a major issue that will require a large amount of preparation for any chance of doing it right.

Here's the open letter and you can scroll to the bottom to read some of the names on that list. https://futureoflife.org/ai-open-letter/

You know what, I better actually post some of those names here.


This is just the top of the list, there are 8000 signatures.
But go ahead, ignore this post and continue to argue like this is just some fever dream Elon Musk once had.

Have you read the letter, like, at all? There is not a single word about "WW3" in it. Instead, there are expressions like this in it:

" Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."

"Potential pitfalls". This sure sounds like the rise of AIs will obliterate all humankind in the near future, right?
 

psyfi

Banned
I'm genuinely worried about this, and AI in general. We (humanity) simply don't have our shit together enough to handle this power responsibly. It's like giving a car to a five-year-old -- it will inevitably end in disaster.

Let's focus on ending hierarchy and oppression and then create an infinitely expanding super brain.
 

Plumbob

Member
I'm genuinely worried about this, and AI in general. We (humanity) simply don't have our shit together enough to handle this power responsibly. It's like giving a car to a five-year-old -- it will inevitably end in disaster.

Let's focus on ending hierarchy and oppression and then create an infinitely expanding super brain.

What power, exactly?
 

Razorback

Member
Gemüsepizza said:
Have you read the letter, like, at all? There is not a single word about "WW3" in it. Instead, there are expressions like this in it:



"Potential pitfalls". This sure sounds like AIs will obliterate all humankind in the near future, right?

This is the Institute's slogan:

"Technology is giving life the potential to flourish like never before...or to self-destruct.
Let's make a difference!"


If you were familiar with the opinions of these people you would know that existential risk is the central part of the conversation.

Read more here: https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

Edit: And since many of you seem focused on the WW3 part of the discussion, here's a relevant excerpt from that link:
The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply "turn off," so humans could plausibly lose control of such a situation. This risk is one that's present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.
 

HarryKS

Member


Checkmate.
 

MogCakes

Member
I'm more interested in how AI humanoids will affect society. Indistinguishable from regular humans except perfect in every way. Would AI be able to learn empathy? Humanity has proven to be extremely picky about what people decide to give empathy to, so I'm curious to see if an AI would be as stingy.
 
This very statement is so beyond where the field is today that it ends up somewhere in the territory between philosophy and fantasy. We don't know if "general superhuman intelligence" is possible in the foreseeable future. We don't know if it is possible at all. We don't know what it could do even if it is possible - there's no such thing as a being without limits, even for an A.I.

The "singularity" and AI as God isn't science; it's science fiction. It astounds me how the debate about AI has ended up with these things being taken for granted when there is, in fact, little to no scientific basis for them. The science of today is nowhere near the place where we can even start to speculate.



I fully agree with the first segment; Musk is clearly a genius businessman. No doubt about it. But assembling a team to produce a product - basically working towards a very concrete, realistic goal - is greatly different from trying to understand an entire scientific field that's still in its infancy. And by that I don't just mean AI research, I mean the entire field of intelligence research. I seriously doubt there's anyone out there who really "gets" it, and if there is, I suspect they're quietly toiling away at important but unsexy projects rather than out making doomsday predictions. Because that's the way things typically work in science.

At any rate, Musk isn't giving us much reason to listen to him in the first place. He isn't referencing scientific papers. He isn't even quoting experts. He just appears to assume that we should take him - a decidedly non-expert - at his word.

My position on this is very simple: I'll wait for actual research teams with actual research before I get concerned. I'll happily ignore Musk, Zuckerberg, Gates, and all the other Silicon Valley types in the meantime.

I wouldn't say Musk is a genius businessman either lol.
 

Kinitari

Black Canada Mafia
I think people are being very prudent, and are talking about these issues now rather than later, but it's hard to say when it's 'too soon' to talk about it, if ever.

We're borrowing more and more from the human mind to make artificial ones, and we're finding success constantly now. The number of research papers coming out regarding AI is staggering, and even decades-old research is finding new legs, now that we have the machines capable of trying out all sorts of ideas, as well as the software chops to carry them out.

Some really interesting recent advancements:

Generative adversarial networks - systems in which two agents work against each other to improve a result. Example: one agent is trained to spot photoshopped pics and grade them, another is trained to make pictures of birds from what it thinks birds look like - they are pitted against each other until both are really good at their respective tasks. Not a new concept, but recently the cause of a large spike in the quality of generated content. Additionally, this is a step towards entirely unsupervised training - i.e., we don't have to tell the AI that it's right or wrong about its guesses when it's improving itself.

https://www.theverge.com/2017/8/24/16195858/amazon-ai-fashion-designer

https://www.wired.com/2017/04/googles-dueling-neural-networks-spar-get-smarter-no-humans-required/
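For the curious, a compressed sketch of that adversarial loop in PyTorch, on 1-D toy data instead of bird photos (the architectures and hyperparameters here are arbitrary choices, not any particular paper's setup):

```python
# Minimal GAN sketch on toy data: D learns to flag fakes, G learns to fool D.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(5000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data: samples near 3.0
    fake = G(torch.randn(64, 8))            # generated samples from noise

    # Discriminator step: push real toward label 1, fakes toward label 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: make D label the fakes as real.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean())  # drifts toward 3.0 as G gets better
```

Neither network is ever told what the data "means"; each only gets better because the other does, which is the unsupervised-ish flavour described above.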


Imagination-based planning - something I think is way cool, and obviously I'm not the only one impressed by this. Something that comes so intuitively to us is hard for AI: the idea of imagining what your actions will lead to, and using that to take real action. For example, in a game where you push blocks around to make paths for your character, if you know you can only push and not pull, you start to imagine what certain pushes will make the layout look like, and over time develop a better understanding of which blocks to try pushing first and which to avoid pushing altogether.

https://deepmind.com/blog/agents-imagine-and-plan/
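A toy sketch of that "imagine before acting" idea (a hypothetical push puzzle, not DeepMind's actual agent): the agent rolls candidate action sequences forward in an internal model of the world and only then commits to a move. Because pushes can't be undone, blind greedy pushing can ruin the puzzle; imagined lookahead avoids that.

```python
# Toy imagination-based planning: simulate futures in a model, then act.
from itertools import product

GOAL = 7
PUSHES = (1, 2, 3)  # a push moves the block right; nothing can move it left

def imagine(pos, plan):
    """Roll a candidate plan forward in the agent's internal world model."""
    for push in plan:
        pos += push
        if pos > GOAL:      # overshot: irreversible, so this future is dead
            return None
    return pos

def plan_ahead(pos, horizon=4):
    """Imagine every plan up to `horizon` pushes; return one that wins."""
    for depth in range(1, horizon + 1):
        for plan in product(PUSHES, repeat=depth):
            if imagine(pos, plan) == GOAL:
                return plan
    return None

print(plan_ahead(0))  # e.g. (1, 3, 3) -- found by imagining, not trial pushes
```

All the trial and error happens inside `imagine`; the real world only ever sees the finished plan, which is the whole appeal.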



I keep trying to make good AI threads but I fail. I think I might sit down and put some effort into one that catalogues a lot of interesting advancements.
 

Regiruler

Member
This isn't Civ where you have a turn counter towards "AI Singularity" other players can look at and prepare for. Some people in this thread are even speculating that we'll hit the threshold of Strong AI before we even realize it. What will other powers do then?

Ironically, trying to turn AI Security into a real issue is more likely to cause this "AI arms race" than simply keeping mum about it. Most military powers in the world are too absorbed in their own present day problems to give heed to hypothetical sci-fi ones. The response to climate change is still listless and slow, despite much of the world's economic power being concentrated on coastal areas. There need not be an arms race if the people in control of the arms (i.e. politicians and oligarchs) are unaware or are skeptical of the "risks" of Strong AI.
Taking military precautions which in turn prompt further development sounds like one hell of a self-fulfilling prophecy.
 
Bostrom's Superintelligence is the canonical treatment of all this stuff, but for an overview of why this is more difficult than you're making it out to be, see this summary of Bostrom's work.

I've read all this before. It's entertaining. That's the best thing I can say about it. But its starting point is AFTER we have the capability of creating a superintelligent AI. We don't have that capability. We don't even know if it's possible, and we don't know what it would be like if it did come to pass.

But, for the moment, let's assume these assumptions are correct:

1. We will someday have the ability to create a superintelligent AI.
2. AIs are extremely dangerous, and any loss of control is potentially catastrophic.

What action needs to be taken now? Where's the fire? These are problems to be solved when the situation arrives. I would argue that they cannot be solved sooner. What urgent action is Elon Musk advising we take? Regulation? Give me a break.

Also, the comparison to climate change is preposterous. One problem is real, here now, and 100% certain to be devastating without global action. Both regulation and education are urgently needed. The other is hypothetical and distant in time, if it happens at all. And if it does happen, it'll be done by people who understand any risks far better than the people crying wolf now do.
 

JWiLL

Banned
Zuckerberg called Musk's AI doomsday rhetoric "pretty irresponsible." Musk responded by calling Zuckerberg's understanding of the issue "limited."

This is pretty fantastic. I love how Musk is probably one of the few people who can talk down to Zuckerberg and he'll just accept it.
 
Elon Musk worship is tiresome. He’s a businessman who’s good at seeking out government subsidies and hyping himself up as some kind of Tony Stark. He’s not an actual once-in-a-generation scientific genius.

What humans do with AI is more dangerous than AI itself. It’ll be a very powerful tool for economic and military planning.
 

Ghost

Chili Con Carnage!
People said the exact same things about nuclear weapons and the world was pretty close to the brink of nuclear war so it's weird to me that Musk can't see the correlation. Humanity won't go from zero to singularity just like it didn't go from zero to 1000s of nuclear warheads pointed at each other. The threat will be realised, normalised and controlled as we get closer to realising it.
 

Razorback

Member
I've read all this before. It's entertaining. That's the best thing I can say about it. But its starting point is AFTER we have the capability of creating a superintelligent AI. We don't have that capability. We don't even know if it's possible, and we don't know what it would be like if it did come to pass.

But, for the moment, let's assume these assumptions are correct:

1. We will someday have the ability to create a superintelligent AI.
2. AIs are extremely dangerous, and any loss of control is potentially catastrophic.

What action needs to be taken now? Where's the fire? These are problems to be solved when the situation arrives. I would argue that they cannot be solved sooner. What urgent action is Elon Musk advising we take? Regulation? Give me a break.

Also, the comparison to climate change is preposterous. One problem is real, here now, and 100% certain to be devastating without global action. Both regulation and education are urgently needed. The other is hypothetical and distant in time, if it happens at all. And if it does happen, it'll be done by people who understand any risks far better than the people crying wolf now do.

You've read Nick Bostrom's Superintelligence? Somehow I highly doubt that. Otherwise, you wouldn't be asking what actions we should be taking now.

You may not know, but there are people who do.

People said the exact same things about nuclear weapons and the world was pretty close to the brink of nuclear war so it's weird to me that Musk can't see the correlation. Humanity won't go from zero to singularity just like it didn't go from zero to 1000s of nuclear warheads pointed at each other. The threat will be realised, normalised and controlled as we get closer to realising it.

You use that analogy as if it's a point in your favour when in fact it's the opposite. Do you know how often and how close we came to extinction during the Cold War? The odds weren't in our favour, but somehow we survived. The takeaway isn't that we'll always figure out how to escape extinction; we should have learned from that how lucky we were and be extra cautious in the future.
 

Kinitari

Black Canada Mafia
People said the exact same things about nuclear weapons and the world was pretty close to the brink of nuclear war so it's weird to me that Musk can't see the correlation. Humanity won't go from zero to singularity just like it didn't go from zero to 1000s of nuclear warheads pointed at each other. The threat will be realised, normalised and controlled as we get closer to realising it.
I think the challenge and difference here is intelligence. There is a bit of hubris that comes with saying 'sure we might create something that is super intelligent, but we can handle it'.

That's not to say I even really disagree with your statement; I'm probably in the 'we'll be fine' camp. That being said, I appreciate the core idea: if we were somehow to create a super AI, and it somehow of its own volition decided that its purpose in life was to spread across the universe - what happens if we disagree? At that point, does the AI just have an off button? It knows all about us, and what we might do. Maybe it even read this post and already has contingencies and backups; maybe it masters the art of the long con.

This is all predicated on the idea that we create an AI that even has a human-parseable intelligence. Maybe it thinks on an entirely different level, and is seemingly as intelligent as the laws of physics, and just as surmountable (not very).

I don't think there is anything about intelligence that makes it magical and out of the reach of software and hardware to accomplish. It's not a matter of if, but when, where, what, and how.
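One way to see why that off-button question isn't just sci-fi flavour: a toy expected-utility model (all numbers hypothetical) of an agent whose utility rewards only goal completion, with no term for accepting shutdown. Disabling the switch then wins by construction.

```python
# Toy model: utility counts only goal completion, never accepting shutdown.
GOAL_UTILITY = 100.0
P_SWITCHED_OFF = 0.3  # chance the humans hit the off switch if they still can

actions = {
    # If switched off before finishing, the agent scores 0.
    "stay corrigible":    (1 - P_SWITCHED_OFF) * GOAL_UTILITY,  # 70.0
    "disable off switch": GOAL_UTILITY,                         # 100.0
}

print(max(actions, key=actions.get))  # -> "disable off switch"
```

The fix people actually study is changing the utility function so the agent is indifferent to, or even values, being shut down; the point here is just that you don't get that behaviour for free.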
 