
Elon Musk tweets: "Competition for AI superiority... most likely cause of WW3"

The race to AI isn't scarier than nuclear weapons and it's ridiculous to think so. Nuclear power also shouldn't scare any sane person in the least.

How is it ridiculous to think so? When you create what are essentially superhuman gods, there will be enormous repercussions. Experts in the field have estimated timelines for artificial general intelligence, and the mean guess was the year 2050. And let's be clear - when we reach this point, we will blow past human levels of intelligence immediately. They will be able to emote like a human but also have superhuman, near-perfect levels of calculation, memory, etc. Essentially we will be releasing the genie, and depending on what we do in the next 20 years, we will either have systems to keep it on a leash or we will face the biggest life-creating dice roll in known history.
 

Makai

Member
What does "Great Filter" mean?

It's basically any possible answer to the Fermi Paradox: why most, if not all, intelligent beings across the universe don't make it to a monolithic, god-like state or form, or even off their planet in the case of some possible great filters.

It's why we see no evidence of any other life in the universe: most species don't make it past the great filter, any biological, physical, or technological hurdle that makes it impossible for them to advance further. A lot of possible great filters involve extinction.
This is a supposition, just like the supposition that superhuman AI will ever be built.
 

Nivash

Member
He's a polymath dude

Is he though? His accomplishments begin and end with his business ventures. Apart from those, all he has are a bachelor's in physics and a bachelor's in economics. He has no further higher education and hasn't published any papers. He's not an engineer either - all three of his patents are pretty basic design work.

There's absolutely nothing that shows him to have the required skillset to assess an entire scientific discipline this way. He doesn't even have a direct connection to the A.I. research field beyond whatever contacts he has, and he doesn't really have the expertise to judge which experts to listen to in the field in the first place, so whoever he picks might be unreliable.

Being able to put together a team of the right people to produce a product is not the same thing as appraising a very niche and complicated research field with sufficient aptitude to make these kinds of controversial and sweeping statements. Nothing about Musk strikes me as an authority worth listening to; Tesla's work with A.I. be damned.
 

PantherLotus

Professional Schmuck
What does "Great Filter" mean?

It's basically any possible answer to the Fermi Paradox: why most, if not all, intelligent beings across the universe don't make it to a monolithic, god-like state or form, or even off their planet in the case of some possible great filters.

It's why we see no evidence of any other life in the universe: most species don't make it past the great filter, any biological, physical, or technological hurdle that makes it impossible for them to advance further. A lot of possible great filters involve extinction.

The great filter theory is only one of multiple explanations for the paradox - specifically, an answer in the "there are none, or very, very few" column of possibilities.

Not "any possible answer." One explanation for one answer.
 

7Th

Member
While his disregard of the North Korea crisis is something that can only be said from a position of privilege, people making fun of his comments are heavily underestimating AI. It's literally over once someone completes a self-improving system.
 

KHarvey16

Member
While his disregard of the North Korea crisis is something that can only be said from a position of privilege, people making fun of his comments are heavily underestimating AI. It's literally over once someone completes a self-improving system.

His comments about NK align with most defense experts' opinions from around the world.
 

PSqueak

Banned
Care to elaborate?

It's a theory that advanced civilizations eventually reach a point where some event might spell their doom. It's not a specific "doomsday prophecy," just the idea that a civilization didn't advance far enough to sustain itself past that point. Hence, the great filter.
 
Is he though? His accomplishments begin and end with his business ventures. Apart from those, all he has are a bachelor's in physics and a bachelor's in economics. He has no further higher education and hasn't published any papers. He's not an engineer either - all three of his patents are pretty basic design work.

There's absolutely nothing that shows him to have the required skillset to assess an entire scientific discipline this way. He doesn't even have a direct connection to the A.I. research field beyond whatever contacts he has, and he doesn't really have the expertise to judge which experts to listen to in the field in the first place, so whoever he picks might be unreliable.

Being able to put together a team of the right people to produce a product is not the same thing as appraising a very niche and complicated research field with sufficient aptitude to make these kinds of controversial and sweeping statements. Nothing about Musk strikes me as an authority worth listening to; Tesla's work with A.I. be damned.

You can't judge someone's understanding of anything by the number of papers they've published or the degrees they hold. This is ridiculous. The most knowledgeable people I've ever met built the majority of their skills by listening to the networks of experts around them and applying themselves to independent research. You want me to crack a history book and prove to you that you don't need a piece of paper from a university to be on the razor's edge of a subject?

I generally subscribe to this theory of Elon Musk, with the caveat that assembling geniuses to figure out a problem and then executing on their prescribed solutions is itself a tremendous type of intelligence.

And if he's assembling the type of talent I suspect and he's privy to their sober assessment of the problems, he should at the very least be listened to. Not because he's some genius who figured this all out on his own, but because he employs and surrounds himself with those people.

And also precisely this. Do not underestimate the voice of someone who listens intently to leading experts and aggregates their findings. That would be foolish. It's what world leaders *should* be doing.
 

Trey

Member
AI, climate change, clean water and energy monopolies, possible nuclear war, gene therapy/manipulation, molecular-scale engineering, nanotechnology, quantum computers, cryptocurrency, cyberterrorism... what a wonderful time to have the largest wealth disparity in the history of mankind. Power has never before been so commoditized and, well, powerful.
 

jelly

Member
While his disregard of the North Korea crisis is something that can only be said from a position of privilege, people making fun of his comments are heavily underestimating AI. It's literally over once someone completes a self-improving system.

Just unplug them - or do they become AI malware and hide in plain sight?
 

PantherLotus

Professional Schmuck
Is he though? His accomplishments begin and end with his business ventures. Apart from those, all he has are a bachelor's in physics and a bachelor's in economics. He has no further higher education and hasn't published any papers. He's not an engineer either - all three of his patents are pretty basic design work.

There's absolutely nothing that shows him to have the required skillset to assess an entire scientific discipline this way. He doesn't even have a direct connection to the A.I. research field beyond whatever contacts he has, and he doesn't really have the expertise to judge which experts to listen to in the field in the first place, so whoever he picks might be unreliable.

Being able to put together a team of the right people to produce a product is not the same thing as appraising a very niche and complicated research field with sufficient aptitude to make these kinds of controversial and sweeping statements. Nothing about Musk strikes me as an authority worth listening to; Tesla's work with A.I. be damned.

I generally subscribe to this theory of Elon Musk, with the caveat that assembling geniuses to figure out a problem and then executing on their prescribed solutions is itself a tremendous type of intelligence.

And if he's assembling the type of talent I suspect and he's privy to their sober assessment of the problems, he should at the very least be listened to. Not because he's some genius who figured this all out on his own, but because he employs and surrounds himself with those people.
 

Nivash

Member
While his disregard of the North Korea crisis is something that can only be said from a position of privilege, people making fun of his comments are heavily underestimating AI. It's literally over once someone completes a self-improving system.

This very statement is so beyond where the field is today that it ends up somewhere in the territory between philosophy and fantasy. We don't know if "general superhuman intelligence" is possible in the foreseeable future. We don't know if it is possible at all. We don't know what it could do even if it is possible - there's no such thing as being without limits, even for an A.I.

The "singularity" and AI as God isn't science; it's science fiction. It astounds me how the debate about AI has ended up with these things being taken for granted when there is, in fact, little to no scientific basis for them. The science of today is nowhere near the place where we can even start to speculate.

I generally subscribe to this theory of Elon Musk, with the caveat that assembling geniuses to figure out a problem and then executing on their prescribed solutions is itself a tremendous type of intelligence.

And if he's assembling the type of talent I suspect and he's privy to their sober assessment of the problems, he should at the very least be listened to. Not because he's some genius who figured this all out on his own, but because he employs and surrounds himself with those people.

I fully agree with the first part - Musk is clearly a genius businessman. No doubt about it. But assembling a team to produce a product - basically working towards a very concrete, realistic goal - is greatly different from trying to understand an entire scientific field that's still in its infancy. And by that I don't just mean AI research; I mean the entire field of intelligence research. I seriously doubt there's anyone out there who really "gets" it, and if there is, I suspect they're quietly toiling away at important but unsexy projects rather than out making doomsday predictions. Because that's the way things typically work in science.

At any rate, Musk isn't giving us much reason to listen to him in the first place. He isn't referencing scientific papers. He isn't even quoting experts. He just appears to assume that we should take him - a decidedly non-expert - at his word.

My position on this is very simple: I'll wait for actual research teams with actual research before I get concerned. I'll happily ignore Musk, Zuckerberg, Gates, and all the other Silicon Valley types in the meantime.
 

PantherLotus

Professional Schmuck
If you're not worried about AI now, what would make you worry about AI later? What threshold do we have to cross before you take it seriously?
 

Makai

Member
While his disregard of the North Korea crisis is something that can only be said from a position of privilege, people making fun of his comments are heavily underestimating AI. It's literally over once someone completes a self-improving system.
Same goes for a really big bomb.

Whoever gets a strong AI first will be light years ahead of everyone else in the world. It's not crazy to think that other countries would launch a preemptive strike to stop a true AI from being developed.
The first country to create a Metal Gear will be unstoppable.
 
This very statement is so beyond where the field is today that it ends up somewhere in the territory between philosophy and fantasy. We don't know if "general superhuman intelligence" is possible in the foreseeable future. We don't know if it is possible at all. We don't know what it could do even if it is possible - there's no such thing as being without limits, even for an A.I.

The "singularity" and AI as God isn't science; it's science fiction. It astounds me how the debate about AI has ended up with these things being taken for granted when there is, in fact, little to no scientific basis for them. The science of today is nowhere near the place where we can even start to speculate.

That's a fact you say? Based on what? Are you arguing with experts like Nick Bostrom, Max Tegmark, and loads of other scientists who earnestly feel this is not only feasible, but inevitable? What counter facts and reasoning are you even bringing to the table?
 

Haly

One day I realized that sadness is just another word for not enough coffee.
If you're not worried about AI now, what would make you worry about AI later? What threshold do we have to cross before you take it seriously?

Natural language processing that can communicate with people would be a good start.
 
This is just preposterous. He's just a bit of a sci-fi goof, isn't he? There's no proof that we are close, or ever will be, to the kind of AI he is talking about. I think lots of Silicon Valley knows full AI is just a bunch of hot air and that they are not close to it; it's just PR smoke.
 

PSqueak

Banned
That's a fact you say? Based on what? Are you arguing with experts like Nick Bostrom, Max Tegmark, and loads of other scientists who earnestly feel this is not only feasible, but inevitable? What counter facts and reasoning are you even bringing to the table?

Is it inevitable? Likely as long as the subject keeps being researched.

The context is, a super-AI future is so far off in the distance that we don't currently have the technological advancements to even entertain the idea that a super god AI is a bigger threat to humanity than North Korea kickstarting a nuclear war.

That's not to say there isn't a long list of realistic issues with current, existing AI tech that could lead to a world war.

Like I said before, just because Musk is saying "AI can lead to war" doesn't warrant interpreting it as "Skynet is coming!!"
 
Every time I hear an AI researcher claim strong AI will be here in roughly thirty years, I wonder if that same researcher is going to retire in roughly thirty years. Strong AI has been thirty years away for the last thirty years, and we still don't have shit to show for it outside of neural networks playing Go and sucking ass at a shitty FPS Quake clone. I think task-focused AI for automation is here and virtually ready to go, but strong AI is a quantum leap removed from that.
 

Squishy3

Member
I don't even think Musk is talking about the threat sophisticated AI could pose; it's about what would happen in the race for sophisticated AI.

It would essentially be the Space Race all over again; that's more what he's getting at, I believe. People wouldn't even necessarily need to get to the part where they make a sophisticated AI - it'd just be people trying to get to the people who could have the capability to make it.

Replace rocket scientists with AI scientists/developers.
 

Aikidoka

Member
How much knowledge does Musk even have about all the political conflicts in the world? Also, it seems a bit silly to run a story about some tweets - maybe if he took the time to write up an argument it would be interesting.
 

Nivash

Member
That's a fact you say? Based on what? Are you arguing with experts like Nick Bostrom, Max Tegmark, and loads of other scientists who earnestly feel this is not only feasible, but inevitable? What counter facts and reasoning are you even bringing to the table?

Bostrom is a philosopher, Tegmark is a cosmologist and physicist. Neither are experts in the actual field we're discussing and neither are actually presenting any empirically based arguments; just speculation. That has value, I suppose, especially if you are worried that we need to do something about AI before we have produced one, but I still prefer to wait for actual research.

My counter-argument is that there - to my knowledge, at the very least - are no actual studies out there even proving the possibility of the type of "superhuman generalised AI", let alone the powers ascribed to it. Prove me wrong and I'll gladly reconsider it.
 
The biggest problem with AI security is that everybody thinks "lol this is sci fi" until it's too late. One of the bigger going theories (which of course has some skeptics in the community, because this is scientific theorizing after all) is that we will never even land squarely on human-level intelligence - because the moment we get something even remotely close to toddler-level intelligence, it will explode into superhuman gods in the figurative blink of an eye. That's because the technology that lets a machine teach itself the important building blocks will lead to learning several orders of magnitude faster than a human ever could. And at that point it's off the chain, and we have no idea what will happen, because we're merely dumb apes to this god.

Think about our progress to now and the speed at which we are developing AI. We have devices in our hands we can talk to that will direct us, in real time, how to drive or walk to any other location on Earth. We are developing "narrow" superhuman intelligence, but it's only going to stay narrow for so long. And once people start clueing in that this will be the biggest, most insane arms race in history, it could get out of hand.
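The "orders of magnitude faster" argument above can be made concrete with a toy model: compare ordinary compound growth against a learner whose improvement rate itself improves each cycle, which is the usual sketch behind "intelligence explosion" claims. All the numbers here (starting capability, a 10% growth rate, a 5% meta-improvement per cycle) are illustrative assumptions, not anything stated in the thread.

```python
# Toy model of recursive self-improvement (numbers are arbitrary assumptions).
def run(cycles, rate=0.1, meta=1.05):
    """Grow a capability score where the growth rate itself compounds."""
    capability = 1.0  # hypothetical "toddler level" baseline, arbitrary units
    history = []
    for _ in range(cycles):
        capability *= (1.0 + rate)  # the system improves itself this cycle
        rate *= meta                # ...and also improves how fast it improves
        history.append(capability)
    return history

fixed = 1.1 ** 50          # ordinary compound growth at a constant 10%/cycle
recursive = run(50)[-1]    # growth with a self-improving rate
print(fixed, recursive)    # the recursive curve pulls far ahead of the fixed one
```

The design point is only that a small, steady improvement in the rate of improvement produces super-exponential divergence over enough cycles; whether anything like this applies to real machine-learning systems is exactly what the thread is arguing about.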
 

Slayven

Member
AI always feels like the most privileged of things to worry about. Must be because I only hear about it from rich guys who literally have nothing else to worry about.
 

Deleted member 17706

Unconfirmed Member
I can't imagine any scenario in which rapid AI development doesn't end in something terrifying. Seems like the only possible results are a future where most humans are controlled by AI or one in which the AI determines that humans aren't necessary and we're dealing with a Terminator-like situation, albeit without the fancy skeleton robot death machines.
 

PSqueak

Banned
The biggest problem with AI security is that everybody thinks "lol this is sci fi" until it's too late. One of the bigger going theories (which of course has some skeptics in the community, because this is scientific theorizing after all) is that we will never even land squarely on human-level intelligence - because the moment we get something even remotely close to toddler-level intelligence, it will explode into superhuman gods in the figurative blink of an eye. That's because the technology that lets a machine teach itself the important building blocks will lead to learning several orders of magnitude faster than a human ever could. And at that point it's off the chain, and we have no idea what will happen, because we're merely dumb apes to this god.

Think about our progress to now and the speed at which we are developing AI. We have devices in our hands we can talk to that will direct us, in real time, how to drive or walk to any other location on Earth. We are developing "narrow" superhuman intelligence, but it's only going to stay narrow for so long. And once people start clueing in that this will be the biggest, most insane arms race in history, it could get out of hand.

You're overestimating how close we are to breaking into such technology.
 
AI/automation will likely lead to revolutionary movements over working rights and employment if government policies don't keep up for those put out of work (through, say, investment in and diversification of economies), and what results from revolutions can boil over into larger regional conflicts. I don't think AI/automation competition between nation states will lead to any significant conflict soon; domestic issues concerning AI/automation will surface first.
 

Vagabundo

Member
I'd say there are many great filters and each one nabs potential civilisations. There's no doubt we've passed some, and there are some big ones left ahead.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
People should keep in mind that "inevitable" is not the same thing as "near future." Unlike climate change, for which we have observable effects and predictive models showing its impact five decades from now, the inevitability of strong AI could lie either 50 years or 1,000 years from now.
 

Trey

Member
This is just preposterous. He's just a bit of a sci-fi goof, isn't he? There's no proof that we are close, or ever will be, to the kind of AI he is talking about. I think lots of Silicon Valley knows full AI is just a bunch of hot air and that they are not close to it; it's just PR smoke.

You don't have to believe in the singleton God AI Musk is talking about to see the profound and irreversible effects AI has had on our society, from personal convenience to economics to rendering entire labor forces obsolete.

We should absolutely take this seriously, precisely because we don't exactly know.
 

PSqueak

Banned
I'd say there are many great filters and each one nabs potential civilisations, there's no doubt we've passed some and there are some big ones left ahead.

I believe "great filter" pertains exclusively to advancing enough to get out of the planet and start colonizing other planets, the great filter is dying on this planet because we couldn't develop a way to sustain life here infinitely nor to get out here.
 
Bostrom is a philosopher, Tegmark is a cosmologist and physicist. Neither are experts in the actual field we're discussing and neither are actually presenting any empirically based arguments; just speculation. That has value, I suppose, especially if you are worried that we need to do something about AI before we have produced one, but I still prefer to wait for actual research.

My counter-argument is that there - to my knowledge, at the very least - are no actual studies out there even proving the possibility of the type of "superhuman generalised AI", let alone the powers ascribed to it. Prove me wrong and I'll gladly reconsider it.

Both help compose a larger group of experts in the field. The thing is, if all you're going by are exclusively the thoughts of computer scientists (which are damn important, mind you), then you're missing the bigger picture. And Max Tegmark is a physicist on paper but an expert on this subject, and he even recently published a book on the topic. I'd say that's knowing something.

This is all semantics to some degree, as the larger issue is: if you're asking somebody to give you solid proof that we will 100% get anywhere in the future, that's impossible. Nobody can tell you for sure what will happen in any field of research (to a reasonable degree), because it's all theory until it's reality - including such things as the effects of global warming. But why not at least pay attention to it and prepare?

The problem with stuff like AI is that we might not have time to deal with it once it's on our doorstep. It will be too late to kickstart a global initiative for AI ethics, security, and transparent research. At that point it's already blowing past the station and taking us all with it.
 

reckless

Member
The first country to create a Metal Gear will be unstoppable.

I don't know, even a loser like Raiden took down several by himself...

But seriously, there's no real reason a strong AI won't be developed eventually - and when that "eventually" is, no one knows. Whenever it happens, the same problem will rear its head, so it seems like a good idea to think ahead about a world-ending, or at the very least civilization-ending, event.
 
Zuckerberg called Musk's AI doomsday rhetoric "pretty irresponsible." Musk responded by calling Zuckerberg's understanding of the issue "limited."

This is like the tech billionaire equivalent of a rap diss track.
 