
Elon Musk tweets: "Competition for AI superiority... most likely cause of WW3"

Lol, the arrogance here sometimes is funny.

Many of the comments here read like a lot of the global warming comments. Obviously global warming has way more research behind it, but there are similarities here all the same.

Like, ignoring Musk, there are a number of other experts saying the same.
 
One of my favorite explanations of how 'super AI' will appear is in AI software developed to participate in the Stock Exchanges and other financial systems.
At some point it will be only AI vs AI in that field, doing millions of operations per second and trying to predict the other AIs. As time passes, those systems will become more and more connected to the real world, trying to make sense of it so they can understand the political and economic news that affect the stock exchange and win more profits.

Because greed is a damn good motivator for progress.


This is a similar explanation, so I dig it.

progress is a myth
 
This guy acts so stupid. Does he realize Terminator is just a movie? So what would we do if an AI started doing some stupid shit? We switch it off. The guy isn't even a scientist and he is screaming OH GOD THE AI WILL KILL US ALL when it's in its earliest form. It's just stupid to condemn the most important new technology of our generation. What we should actually think about is what people should do when their jobs get automated, and whether we should do this universal basic income thing.
 
google maps or maybe specifically "routing" is still narrow intelligence by definition. That's the terminology used by researchers. Essentially it's a machine that is very intelligent about a very narrow set of variables. Has a very narrow perspective. But general intelligence is closer to us - we can learn Italian starting today if we wanted to. We can reason our way out of situations we've never seen prior.

You're right though that we are very much in the realm of educated speculation. I don't begrudge anybody's skepticism. What I do begrudge is hand waving and scoffing at the mere thought of giving this topic any attention.

It can be argued that humans are actually really good at one specific thing that has a wide variety of applications

(Say we're really good at visualization, which powers our imagination, which is what makes us highly adaptive in the first place)
 

Chairman Yang

if he talks about books, you better damn well listen
This guy acts so stupid. Does he realize Terminator is just a movie? So what would we do if an AI started doing some stupid shit? We switch it off. The guy isn't even a scientist and he is screaming OH GOD THE AI WILL KILL US ALL when it's in its earliest form. It's just stupid to condemn the most important new technology of our generation. What we should actually think about is what people should do when their jobs get automated, and whether we should do this universal basic income thing.
Possibility one: Elon Musk and the smart people who agree with him were just too dumb to consider the "just unplug it lol" solution.

Possibility two: NeoGAF poster maniac-kun may not have considered that his kneejerk objections have already been thought about and addressed.

Which one seems more likely to you?
 
You've read Nick Bostrom's Superintelligence? Somehow I highly doubt that. Otherwise, you wouldn't be asking what actions we should be taking now.

You may not know, but there are people who do.

No, I've read the link you posted before. Articles about how AI is going to kill us all are great, I love them.

And my "what action should we take" question was rhetorical. Because the answer is "none". Not for you or I. The problems related to containing a hypothetical AI will be addressed by whomever creates it, not by the general population, and certainly not by politicians.
 
Are they? Do you really think this, or do you just wish it was true? Show me a concrete example of how AI research is helping to "solve famine". There was a lot of hullabaloo about GMO crops "solving famine" as well, and I don't know where that ended up, but at least it's in the public consciousness. I've never heard of any AI solutions applied to the problem of global food distribution, which, from my limited perspective, is mostly a political problem rather than a technological one.

That sounds like kool-aid but I'll humor you. What has Musk done, in terms of influencing public policy, to ensure that automation doesn't destroy the foundations of our society? The thing with "long-term" thinking is that it has a tendency to overlook the short term. Things are about to get bad really quick, much faster than the time it'll take for Strong AI to be an existential threat. Does Musk care at all about this or does he think we'll just get through it magically?

Eh, GMOs have contributed to the problem of overfed and undernourished people in developing countries. This is by design, because people decided getting rich and famous was more important than their health.

All the modifications to food in the world don't matter if your body is incapable of processing it. It's like fish who eat plastic lol

Musk isn't a politician. Doubt he's concerned about those things
 

jillytot

Member
Isn't most of this discussion taking his statement out of context?

Let's break it down.

He is not saying WWIII is going to happen because of AI.

He said IF there is a WWIII, it will likely be the result of a global cold war to become the dominant power in this arena.

This is much like how the advent of the nuclear bomb and its clear power potential became obvious to everyone around the world, and nations knew this was going to be the new key to global power in the future.

Now imagine that we are in the 1930s, pre-WWII.
Imagine that the greatest destructive power anyone had seen to that point was conventional bombs and munitions.

Imagine someone came along and said that within the next 10 to 20 years, humanity would attain so much destructive power that a single nation would have the ability to wipe out an entire city with a single bomb, and in 20 to 30 more years, could wipe out all the life on earth a thousand times over.

This is the current state of nuclear weapon technology.

The nuclear arms race that followed WWII could indeed have caused WWIII, and at some points almost did.

Atomic bombs not only increased in power by thousands of times over the next few decades (evolving into H-bombs), but our capacity to use them in increasingly effective ways also grew just as drastically.

Before the first atomic bomb exploded, no one knew what was going to happen. There was even a crazy theory that the bomb could cause a chain reaction and ignite the atmosphere. Even after that, no one knew how the world could continue to operate once these bombs proliferated to other nations. It was impossible to predict what would result from world superpowers building thousands of bombs that could be delivered anywhere in the world in minutes, each equipped with multiple warheads that can destroy multiple cities with blast zones large enough to flatten hundreds of square miles.

There is little doubt from anyone in the field that AI has a chance to be equally disruptive to human civilization as the advent of the atomic bomb.

Unlike the race to build the first atomic bomb, the race to the first real AGI is being developed by multiple competing world powers.

Like atomic bombs, the capacity for AI to be exponentially more disruptive over short timelines is very real.

Unlike atomic weapons, AI may no longer require human input to rapidly develop and proliferate globally. These systems even in their most benign forms will create new power dynamics and change the global political landscape.

Just as with atomic weapons, it was all but impossible to see this coming until the first atomic bomb was exploded. Once the principle of fission was demonstrated, the development and spread of the atomic bomb happened much faster than anyone anticipated.

The point is this will be globally disruptive, and it's safe to say that IF there is ever a reason for WWIII to happen, this seems likely to play a strong role.

I do not understand how this statement sounds so crazy once you understand the basic components.
 

Kyzer

Banned
This guy acts so stupid. Does he realize Terminator is just a movie? So what would we do if an AI started doing some stupid shit? We switch it off. The guy isn't even a scientist and he is screaming OH GOD THE AI WILL KILL US ALL when it's in its earliest form. It's just stupid to condemn the most important new technology of our generation. What we should actually think about is what people should do when their jobs get automated, and whether we should do this universal basic income thing.

This has to be some of the dumbest shit I've ever read
 
And once again, the sensible "this is already happening" post gets ignored in a thread on 'AI', when in reality modern AI really just means 'machine learning + models + vast amounts of data'. We've had models for a long time; we've also had machine learning for quite some time before it tipped over into the main computing paradigm. What nobody had before was vast amounts of data. That is what Facebook (and others) have wrought upon human history (if it is still 'human' history, that is): an explosion of data on human behavior that was previously hard to get. Now that we have it, we don't need quantum computers to extract patterns from the flimsiest trace of data, because with this amount, even a modest model can get an accurate answer over a wider range than any human could ever produce.
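To make that last point concrete, here's a toy Python sketch (the 0.37 "click-through rate" and the scenario are made up for illustration): the "modest model" is nothing more than a sample average, and its estimate of a hidden behavioral rate tightens as the number of observations grows.

```python
import random

random.seed(0)

TRUE_RATE = 0.37  # hypothetical click-through rate we want to recover

def estimate(n_samples):
    """A 'modest model': just the sample average of n noisy observations."""
    hits = sum(1 for _ in range(n_samples) if random.random() < TRUE_RATE)
    return hits / n_samples

# the same trivial model, fed ever more data, converges on the truth
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} samples -> estimate {estimate(n):.4f}")
```

No cleverness in the model at all; the accuracy comes entirely from the volume of data.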

I get that the philosophy of science isn't everybody's jam, but considering the ideas of accuracy, bandwidth, and data are not new, you should all make an effort to wrap your heads around these things. The world isn't linear, it's complex. It is vital that you understand the difference and how that affects us.


Also, there is a stunning denial in this thread about all the astroturfing operations that got us to the Trump presidency and then going "well it's not happening today...". Some of you are really, well, dense in some ways. Sorry not sorry. Who do you think would start WW3 anyway?
And while we're on this: why do you think the alt-right loves tech so much? (and will suddenly bring up 'philosophy' when declaring said term to be bullshit when talking to people who won't accept message A --hate-- but will accept message B -- popular thing is totally bullshit, man--. Think about it, that's all I ask. )

Quoting for new page (btw what are you suggesting in your last paragraph there?)

I mean people on Neogaf act like Bernie Bros were a real thing rather than bullshit peddled by the media lol.
 

KHarvey16

Member
Eh, GMOs have contributed to the problem of overfed and undernourished people in developing countries. This is by design, because people decided getting rich and famous was more important than their health.

All the modifications to food in the world don't matter if your body is incapable of processing it. It's like fish who eat plastic lol

Musk isn't a politician. Doubt he's concerned about those things

I can see scientific literacy is important to you.
 
This has to be some of the dumbest shit I've ever read

The guy is talking about Skynet all the time. And real scientists in the field mostly don't share his opinion and are not worried about anything like this happening within a few decades or a century. There are also AI inspection tools being developed at the moment so that you can see why an AI made a specific decision. The technology is in its infancy and he talks about stuff that is far, far off. If you actually know how the current stuff works, you find what he is suggesting laughable. And he goes from one extreme to the next with his fearmongering.
 
It's also a bubble where he has dedicated his life to saving our species even though most are as ignorant as some of the people posting in this thread.

I'm sure the guy exploiting his workers has all our best interests at heart with stories about the future of AI that most scientists in the field would laugh at.
 

capslock

Is jealous of Matlock's emoticon
Maybe if the entire world got off Elon's jock for a minute he'd stop spouting this nonsense.
 
The guy is talking about Skynet all the time. And real scientists in the field mostly don't share his opinion and are not worried about anything like this happening within a few decades or a century. There are also AI inspection tools being developed at the moment so that you can see why an AI made a specific decision. The technology is in its infancy and he talks about stuff that is far, far off. If you actually know how the current stuff works, you find what he is suggesting laughable. And he goes from one extreme to the next with his fearmongering.

Computers are still doing only dumb calculations, just way faster than 20 years ago. All the deep learning and neural networks aren't changing that. It's like expecting your calculator to start calculating stuff on its own.

In fact, we are still as far away from the AI Musk is talking about as we were 20 years ago. We are just getting better at using computers for dumb calculations.
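For what it's worth, the "dumb calculations" claim is literally true at the mechanical level: a neural network's forward pass is nothing but weighted sums and a max(). Here's a toy two-layer network in plain Python, with weights hand-picked to compute XOR (the classic example of something a single layer can't do):

```python
def relu(x):
    # the "activation function" is just max(0, x)
    return max(0.0, x)

def forward(inputs, w1, b1, w2, b2):
    # hidden layer: one weighted sum per neuron, passed through relu
    hidden = [relu(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(w1, b1)]
    # output layer: one more weighted sum
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# hand-picked weights that make this tiny network compute XOR
W1, B1 = [[1.0, 1.0], [1.0, 1.0]], [0.0, -1.0]
W2, B2 = [1.0, -2.0], 0.0

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", forward([float(a), float(b)], W1, B1, W2, B2))
```

Training only adjusts those numbers; the arithmetic itself never changes. Whether "just arithmetic" puts a ceiling on what such systems can do is, of course, exactly what this thread is arguing about.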
 
I can see scientific literacy is important to you.

While GMOs are considered safe for consumption, there is a point where our food mutates faster than humans do. Getting nutrients into the cell is imperative here.

Where do you think all these food allergies came from? Recent genetic mutations.

Imagine the following: what would the world look like if everyone was lactose intolerant and, due to inequality, only the 1% could eat food that wasn't modified to create lactose?

We are still learning a lot more about DNA than we did before. Modifying DNA in the fashion we are is uncharted territory, and we'll learn a lot more as we continue diving in and doing the basic research necessary here.

I mean, even Steve Jobs died without getting what he was looking for in life. As rich as he was, he really wasn't informed when it came to his health anyhow.

The human body needs to remain balanced to function optimally. Our environment isn't the best at helping us do that anymore since we've done messed it all up.
 
Why is it nonsense...

Wtf is with all the hate on Musk?

Because AI is so far away from what Musk is insinuating that it's laughable.

Computer science students like me are literally learning methods of AI programming that are decades old and never got replaced, only got more efficient. Doesn't matter how big you make your neural network.
 

Razorback

Member
Because AI is so far away from what Musk is insinuating that it's laughable.

Computer science students like me are literally learning methods of AI programming that are decades old and never got replaced, only got more efficient. Doesn't matter how big you make your neural network.

How is this an argument against A.I.? Neural networks work. What the fuck do you think your brain is?
 

KHarvey16

Member
I'm sure the guy exploiting his workers has all our best interests at heart with stories about the future of AI that most scientists in the field would laugh at.

Ignorant drive-bys are a real hallmark of these kinds of threads.

While GMOs are considered safe for consumption, there is a point where our food mutates faster than humans do. Getting nutrients into the cell is imperative here.

Where do you think all these food allergies came from? Recent genetic mutations.

Imagine the following: what would the world look like if everyone was lactose intolerant and, due to inequality, only the 1% could eat food that wasn't modified to create lactose?

We are still learning a lot more about DNA than we did before. Modifying DNA in the fashion we are is uncharted territory, and we'll learn a lot more as we continue diving in and doing the basic research necessary here.

I mean, even Steve Jobs died without getting what he was looking for in life. As rich as he was, he really wasn't informed when it came to his health anyhow.

The human body needs to remain balanced to function optimally. Our environment isn't the best at helping us do that anymore since we've done messed it all up.

Lol, what is this nonsense?

Where do you folks come from? Yikes.
 
How is this an argument against A.I.? Neural networks work. What the fuck do you think your brain is?

You tell me, as far as I know neuroscientists don't know exactly how the brain forms self-awareness to this day ;)

The definition of self-awareness itself is not set in stone. What it encompasses etc.

Ignorant drive-bys are a real hallmark of these kinds of threads.

The post I answered was pretty ignorant, yeah.
 

Chairman Yang

if he talks about books, you better damn well listen
The guy is talking about Skynet all the time. And real scientists in the field mostly don't share his opinion and are not worried about anything like this happening within a few decades or a century. There are also AI inspection tools being developed at the moment so that you can see why an AI made a specific decision. The technology is in its infancy and he talks about stuff that is far, far off. If you actually know how the current stuff works, you find what he is suggesting laughable. And he goes from one extreme to the next with his fearmongering.
I'm not sure where you're getting this data about what "real scientists in the field" think.
In a survey published this year, 31% of the AI researchers surveyed thought the AI risk issue was "a moderately important problem". 39% thought it was even more important than that.
 

reckless

Member
Because AI is so far away from what Musk is insinuating that it's laughable.

Computer science students like me are literally learning methods of AI programming that are decades old and never got replaced, only got more efficient. Doesn't matter how big you make your neural network.

I wouldn't say 30 years is laughable when talking about a problem of this magnitude.
https://nickbostrom.com/papers/survey.pdf
The median estimate of when there will be 50% chance of human–level machine intelligence was 2050.
Getting more efficient is a pretty important part.
 

Mr.Mike

Member

There are plenty of risks with AI. AI reinforcing discrimination, AI being attacked. The sci-fi human extinction fears are silly.

From the very paper you cite.

2. Explosive progress in AI after HLMI is seen as possible but improbable. Some authors have argued that once HLMI is achieved, AI systems will quickly become vastly superior to humans in all tasks [3, 12]. This acceleration has been called the "intelligence explosion." We asked respondents for the probability that AI would perform vastly better than humans in all tasks two years after HLMI is achieved. The median probability was 10% (interquartile range: 1–25%). We also asked respondents for the probability of explosive global technological improvement two years after HLMI. Here the median probability was 20% (interquartile range 5–50%).

3. HLMI is seen as likely to have positive outcomes but catastrophic risks are possible. Respondents were asked whether HLMI would have a positive or negative impact on humanity over the long run. They assigned probabilities to outcomes on a five-point scale. The median probability was 25% for a "good" outcome and 20% for an "extremely good" outcome. By contrast, the probability was 10% for a bad outcome and 5% for an outcome described as "Extremely Bad (e.g., human extinction)."
 
I wouldn't say 30 years is laughable when talking about a problem of this magnitude.
https://nickbostrom.com/papers/survey.pdf

Getting more efficient is a pretty important part.

A machine responding to information with the intelligence level of a human does not automatically result in self-awareness. Just because a machine can tell me exactly what it sees on a picture I show it with human literacy doesn't mean it will suddenly become self-aware.

You should read more.

Sure, Jan.
 
A machine responding to information with the intelligence level of a human does not automatically result in self-awareness. Just because a machine can tell me exactly what it sees on a picture I show it with human literacy doesn't mean it will suddenly become self-aware.

The result is that we have dumb machines which are just pretty good at things. A self-driving car would never decide to drive somewhere for its own reasons just because it can drive better than the best race drivers.
 

reckless

Member
A machine responding to information with the intelligence level of a human does not automatically result in self-awareness. Just because a machine can tell me exactly what it sees on a picture I show it with human literacy doesn't mean it will suddenly become self-aware.

I mean you don't need AI to be self-aware in the case that Musk is talking about here. AI doesn't need to be self-aware to be a weapon.
 

Chairman Yang

if he talks about books, you better damn well listen
There are plenty of risks with AI. AI reinforcing discrimination, AI being attacked. The sci-fi human extinction fears are silly.

From the very paper you cite.
I'm not sure of the relevance of the quotes you posted. The first talks about the probability of a 2-year intelligence explosion, which isn't a necessary condition for devastating AI. The second alludes to the fact that a lot of AI researchers believe AI research is probably worth the risk. That doesn't seem surprising to me.

There's no need to get at the question of existential AI risk indirectly. It's asked directly right there in the paper. You may think it's silly, but 70% of people in the field disagree with you.
 

mugwhump

Member
People just shooting from the hip with their sarcastic responses here about lolol shut up about Terminator: do you actually know anything about this topic? From all the reading I've done of experts in the field, it has become clear that there is a real danger that upcoming AI will amount to creating gods. It's an incredibly serious topic and there's a reason why nearly everybody close to it is on alert. In fact, many believe we need to move on AI & digital security in a big way or face outright extinction. Sounds hyperbolic, but the more you research, the more you realize this is totally feasible.
It's been about 5 years since I worked in ML myself, but back then we were not remotely close to creating any kind of generalized, human-like intelligence. At this point a technological singularity remains firmly in the realm of science fiction, as far as I'm concerned.
 

Mr.Mike

Member
I'm not sure of the relevance of the quotes you posted. The first talks about the probability of a 2-year intelligence explosion, which isn't a necessary condition for devastating AI. The second alludes to the fact that a lot of AI researchers believe AI research is probably worth the risk. That doesn't seem surprising to me.

There's no need to get at the question of existential AI risk indirectly. It's asked directly right there in the paper. You may think it's silly, but 70% of people in the field disagree with you.

The hysteria is utterly unjustified.

 

MogCakes

Member
I suppose that, graphed, our technology-over-time curve is heading towards a vertical asymptote. So what happens once a sentient super AI comes into existence? Everything doesn't immediately go down the gutter, does it? I'm having trouble imagining how it would go down.
 

nomis

Member
I'm sure the guy exploiting his workers has all our best interests at heart with stories about the future of AI that most scientists in the field would laugh at.

yes, he’s a fucking asshole about the unionizing shit

still, he banked his entire fortune on two of the highest risk industries in the fucking world because he knew for instance that reusable rocketry was an inflection point on humanity becoming an interplanetary species

he’s ONE person. i’m pretty sure the other 7 billion can worry about social issues and poverty and human rights while one eccentric cunt myopically focuses on ensuring the propagation of our species.

people ITT think he’s just some delusional rich guy ranting about skynet or that shitty rogue AI movie with shia labeouf becoming real life next week. does he overlook other more pressing issues for our species like systemic inequality? sure he does. he’s also one fucking guy.
 
Ignorant drive-bys are a real hallmark of these kinds of threads.



Lol, what is this nonsense?

Where do you folks come from? Yikes.

Perhaps you should spend more time learning about the human body then. As we all are; we are constantly discovering more and more layers to it. Functional medicine is a very interesting field of study; highly recommend it.

Maybe intern at a biotech firm or a think tank, and learn a little bit about the industry and the research that's been done.
 