
Elon Musk tweets: "Competition for AI superiority... most likely cause of WW3"

I don't know if you're asking genuinely, but this is one of the worst-case scenarios:

1. hyper-intelligent A.I. forms
2. develops system of morality
3. decides that sorting all the world's grains of sand by size and color is the most moral cause possible
4. destroys humans since they are in the way of sand-sorting

If we create a being more intelligent than all of humanity, we risk instant extinction.

Even if that risk is .000001%, it's too high.

"Instant" extinction. From a piece of software. At what point did we develop the technology capable of sorting all the world's grains of sand, and put this piece of software in complete control of it, with no human intervention possible?

Jesus. People act like the coming of AI is like a superhero origin story: there was an explosion, and then this AI had godlike powers! Humanity never had a chance!

And that's the reason people laugh at that cheap sci-fi idea.

It's neither realistic nor practically possible.
AI would have godlike powers. Look how long it took us to go from thinking of a rock as a tool to thinking about an atomic bomb. Hundreds of thousands, if not millions, of years, right? We're human. We go slow.

AI could perform that same evolution in seconds. What will happen in minutes? What will happen in a day? The point is we can't imagine. It won't be as simple as "we have it isolated in this box so let's think about what we'll do for a while" because this being might figure out ways to jump the box that we can't possibly prepare for.

Instant extinction is a possibility. Again, maybe that possibility is one in a thousand or one in a million. But the consequence is so dire that we have to consider it.
 
Perhaps you should spend more time learning about the human body, then. We're all constantly discovering more and more layers to it. Functional medicine is a very interesting field of study; I highly recommend it.

Maybe intern at a biotech firm or a think tank, and learn a little bit about the industry and the research that's been done.

I'm pretty sure "biotech" firms don't practice pseudoscience.
 
If someone like Stephen Hawking says "let's be careful," I think we should just shut up and be careful.

http://www.bbc.com/news/technology-30290540

Everyone with any understanding of the matter says we should be careful, but no, it's the forum posters who know better, and the science dudes need to get a grip.

This thread is like a bad apocalyptic movie where nobody listens to the "crazy scientist" about imminent danger, and then half the world's population gets wiped out. Or like electing a bad businessman as President.
 

Doikor

Member
I'm with Elon on this. I'm not really that afraid of a "true" general AI going out of control and destroying us (just don't give it direct, free access to the physical world). But a general AI would be pretty much godlike for advancing one's technology, and something I could see countries fighting over, as in the US/Russia/China going, "No, we won't allow country X to rule the world through being the first to have an actual general AI working." (The first one to achieve a general AI and not share it will have an unimaginable advantage on the world stage.)
 