Elon Musk tweets: "Competition for AI superiority... most likely cause of WW3"

You're overestimating how close we are to breaking through to such technology.

I already said 2050 is the mean estimate by experts. Some think it will get here even sooner. Personally, I think 20-30 years is close enough to start thinking about what we're going to do (especially with this being such an enormous challenge).
 

kess

Member
I'm more worried about the super rich using AI and genetic engineering to create a nascent nobility than a standalone AI, honestly.
 

D4Danger

Unconfirmed Member
Elon Musk is quickly falling into that "Dumb Person's Idea of a Smart Person" category. He just says the most vapid, uninteresting things about technology.
 

BizzyBum

Member
I think one of the great filters is lifelike VR, which might sap motivation for lots of people in real life. Why live in a shitty world when you can just plug into a VR utopia designed for you, or just live out whatever life/game you want?

I'm not talking about the VR we have now, I mean full-on Matrix levels of plugging in, which won't happen for quite some time.
 
For the record, I disagree with Musk about it being a cause for WW3. I'd say it's more likely something else would cause it sooner, but I do believe an AI arms race could essentially lead to the final war.
 

Trey

Member
I'm more worried about the super rich using AI and genetic engineering to create a nascent nobility than a standalone AI, honestly.

It's all under the same umbrella. This is why we need to hammer out international agreements on AI ethics, communications, research, and uses now.

There might not be any putting the genie back in the bottle. As alarmist as he is, I'm glad Musk is spreading awareness about this topic that is only going to get more difficult to solve in the future.
 
Musk has tried to address his AI anxieties through two new ventures: OpenAI, a non-profit AI research company, and Neuralink, a startup building devices to connect the human brain with computers.

So he'll be at the forefront of this competition.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
Think about our progress to now and realize the speed at which we are developing AI. We have devices in our hands that we can talk to, and they will direct us how to drive or walk to any other location on Earth in real time. We are developing "narrow" superhuman intelligence. But it's only going to be narrow for so long. And once people start clueing in that this will be the biggest, most insane arms race in history, it could get out of hand.

Specifically for this, predictive/suggestive algorithms were always possible, we just lacked the materials science and engineering sophistication to bring them to consumerhood. In the same way, the concept of an "analytical engine" was theorized before we invented digital circuitry. I wouldn't consider Google Maps "narrow intelligence" at all. It's algorithms, which are just math. You could accomplish the same feat as Google Maps with a large warehouse of people with calculators, pen, and paper.
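
To make that concrete, here's a minimal sketch (in Python, with a made-up toy road network - every name in it is hypothetical) of the kind of deterministic graph search, Dijkstra's algorithm, that route-finding boils down to. The point is just that it's bookkeeping over numbers, not understanding:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over an adjacency-list graph.

    graph maps node -> list of (neighbor, cost) pairs.
    Returns (total_cost, path), or (float('inf'), []) if unreachable.
    """
    frontier = [(0, start, [start])]  # priority queue of (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return float('inf'), []

# Toy road network: intersections A-D, edge weights are travel minutes.
roads = {
    "A": [("B", 5), ("C", 2)],
    "B": [("D", 1)],
    "C": [("B", 1), ("D", 7)],
    "D": [],
}
print(shortest_path(roads, "A", "D"))  # -> (4, ['A', 'C', 'B', 'D'])
```

A priority queue and some arithmetic - exactly the sort of thing that warehouse of people with calculators could grind through, just much slower.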

Our only observable model of intelligence is human intelligence (and perhaps some forms of animal intelligence). Our only models of self-modifying expressions are organic ones. The "god AI" exists purely in the realm of speculation.
 

Makai

Member
If you're not worried about AI now, what would make you worry about AI later? What threshold do we have to cross before you take it seriously?
What will it take for you to believe that superhuman AI is never coming?

I used to believe in this stuff because I could extrapolate ever-improving CPU power into the future - that was a massive disappointment. So we'll be limited on serialized computations, but we can improve parallelized computations ... unless GPUs plateau in 20 years, and then what? My biggest fear is that we stay at about the current level of development for the rest of my life - disruptive change is definitely rarer now. 2017 sure looks a lot like 2012. That wasn't the case for 2007-2012, and definitely wasn't the case for 2002-2007. And during this period, software has gotten ridiculously unreliable, so I'm not even holding out much hope for automotive AI, which has killed a couple of people already - company PR blames human meddling, of course. Like, at bare minimum, I need none of these to happen ever again:

Swiping my subway card or credit card multiple times for it to accept it
Pushing dollar bills in multiple times for it to accept it
Restarting my computer or phone because it entered a failstate
Closing Chrome because Messenger has a massive memory leak
My phone draining 10-15% of its battery per hour running simple applications like Spotify (plays audio files) or Slack (displays text and images)

I mean yeah, you can blame these on bad engineers, but why should I expect better results from the AI guys? They're going to make shitty decisions that lead to buggy, useless AI.
 
One of my favorite explanations of how "super AI" will appear is in AI software developed to participate in the stock exchanges and other financial systems.
At some point it will be only AI vs. AI in that field, doing millions of operations per second and trying to predict the other AIs, and, as time passes, becoming more and more connected with the real world and trying to make sense of it, so it can understand the political and economic news that affect the stock exchange and earn more profit.

Because greed is a damn good motivator for progress.


This is a similar explanation, so I dig it.
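
For what it's worth, you don't need anything resembling intelligence for that feedback loop to appear. Here's a toy sketch - entirely made up, not a real market model or anyone's actual trading system - of two naive momentum bots amplifying each other's trades:

```python
# Two hypothetical momentum bots trading against each other in a toy market.
def momentum_bot(history, aggression):
    """Buy (+) or sell (-) in proportion to the last observed price move."""
    if len(history) < 2:
        return 0.0
    return aggression * (history[-1] - history[-2])

price = 100.0
history = [price]
for step in range(10):
    # Both bots react to the same shared price history - i.e. to each other's
    # past trades, via the prices those trades produced.
    order_flow = momentum_bot(history, 1.2) + momentum_bot(history, 0.9)
    price += order_flow + (0.1 if step == 0 else 0.0)  # tiny initial nudge
    history.append(price)

print([round(p, 2) for p in history])
```

The 0.1 nudge compounds every round into a runaway move neither bot "wanted" - and that's with ten lines of dumb code, not systems actively modeling each other.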
 

DarkKyo

Member
The great filter theory is only one of multiple explanations for the paradox, specifically an answer to the "there are none or very, very few" column of possibilities.

Not "any possible answer." One explanation for one answer.

Aren't there a lot of possibilities for what the great filter could be though?
 

low-G

Member
I'll just be impressed if they can develop self-driving cars that don't kill their enthusiasts because of glaring oversights.

The only way AI is a threat for the next 40 years is if someone hooks some automated routines directly up to nuclear launch approval systems.
 
While I do hope for a positive outcome to the development of great AI, I also think it's something that could very well drastically change the world in the wrong hands.

For those who see Musk as a fearmonger, look at what Russia has supposedly done with, in comparison, simple, low-level hacking.

Imagine them having a sophisticated AI system that could be deployed to do much worse things.

Now, I'm not an expert, so maybe I have just watched too much sci-fi, but I could totally see the first country or business to develop that breakthrough AI secretly being able to effectively take control.

AI could be amazing, and I hope it's used for good, but to dismiss the potential dangers seems really stupid.
 
Specifically for this, predictive/suggestive algorithms were always possible, we just lacked the materials science and engineering sophistication to bring them to consumerhood. In the same way, the concept of an "analytical engine" was theorized before we invented digital circuitry. I wouldn't consider Google Maps "narrow intelligence" at all. It's algorithms, which are just math. You could accomplish the same feat as Google Maps with a large warehouse of people with calculators, pen, and paper.

Our only observable model of intelligence is human intelligence (and perhaps some forms of animal intelligence). Our only models of self-modifying expressions are organic ones. The "god AI" exists purely in the realm of speculation.

Google Maps, or maybe specifically "routing," is still narrow intelligence by definition. That's the terminology used by researchers. Essentially, it's a machine that is very intelligent about a very narrow set of variables - it has a very narrow perspective. But general intelligence is closer to us: we can start learning Italian today if we want to. We can reason our way out of situations we've never seen before.

You're right though that we are very much in the realm of educated speculation. I don't begrudge anybody's skepticism. What I do begrudge is hand waving and scoffing at the mere thought of giving this topic any attention.
 

Foffy

Banned
He's kinda not wrong.

AI in warfare is a literal arms race to win the world via military might.

I do think the bigger concerns of conflict come if/when it becomes a disruptive labor-force project, because that's already coming into view.
 

Yamauchi

Banned
Lol at Putin's quote. Russia ain't gonna be the leader in AI anytime soon.
The US has attempted to socially engineer Russian politics, and their greatest victory was the protests of 2011. Russia responded with social engineering, which resulted in the election of Donald J. Trump.

Who is the victor?
 

Nivash

Member
Both help compose a larger group of experts in the field. The thing is, if all you're going by are exclusively the thoughts of computer scientists (which are damn important, mind you), then you're missing the bigger picture of ideas. And Max Tegmark is a physicist on paper but an expert on this subject, and he even recently published a book on the topic. I'd say that's knowing something.

This is all semantics to some degree, as the larger issue is: if you're basically asking somebody to give you solid proof that we will 100% get anything in the future, that is impossible. Nobody can tell you for sure what happens with any field of research (to a reasonable degree), because it's all theory until it's reality - including such things as the effects of global warming. But why not at least pay attention to it and prepare?

The problem with stuff like AI is we might not have time to deal with it once it's on our doorstep. It will be too late to kickstart a global initiative for AI ethics, security, and transparent research. It's already blowing past the station at that point and taking us all with it.

I'm not just talking about the computer scientists, though. I'd be perfectly fine with Bostrom and Tegmark if they were actually conducting empirical research into the topic rather than running workshops, advocating, and writing books. Oh, and about the book part: I don't think book publishing is a sign of expertise so much as downright suspicious. Real scientists publish papers, not books, because papers are peer-reviewed and required to be worth a damn. Any random individual can publish a book provided people are willing to buy it. Guess who's a best-selling author? Deepak Chopra. Guess what Chopra isn't? A published, cited, or respected expert in a scientific field.

Books are fine for publishing pop-sci explanations for laypeople. Stephen Hawking's books are great. But they should be relegated to simply explaining established consensus, not presenting controversial conclusions. The problem is that Bostrom and Tegmark publish books (also blogs, magazine articles, videos...) doing exactly that, which raises all kinds of red flags.

Also: when climate scientists present theoretical work on the impact of climate change, they have tons of real-world, empirical data to back those theories up. And even then, they're perfectly fine with stating that they can only really say that things will change in a lot of places, but that it's difficult to say in what way. That's not what AI doomsayers are doing; in fact, they're doing the opposite - making definitive, sweeping predictions with basically no backing. Even the singularity you're arguing is the core threat of AI - and the supposed reason we can't wait to understand the issue before we act to prevent it (however that works) - is completely speculative.
 

Famassu

Member
For the record, I disagree with Musk about it being a cause for WW3. I'd say it's more likely something else would cause it sooner, but I do believe an AI arms race could essentially lead to the final war.
AI might cause issues in society (like (even) large(r)-scale poverty causing mass unrest), and maybe those causes will lead to some conflicts, but I very much doubt it'll be the "AI arms race" - countries competing over who can go the furthest with AI - that will be the reason for a large-scale worldwide war.
 
People just shooting from the hip with their sarcastic "lolol shut up about Terminator" responses here - do you actually know anything about this topic? From all the reading I've done of experts in the field, it has become clear that there is a real danger that upcoming AI will amount to creating gods. It's an incredibly serious topic, and there's a reason why nearly everybody close to it is alert. In fact, many believe we need to move on AI & digital security in a big way or face outright extinction. Sounds hyperbolic, but the more you research, the more you realize this is totally feasible.

Whatever! Those negotiating bots at Google that had to be shut down for speaking to one another in a spontaneously created language were just discussing the betterment of humanity.

The conversation continued at Facebook with their experiment that went the exact same way and had to be shut down.

We're a ways away, but the problem is that without appropriate controls, we won't know when we've reached a dangerous threshold until Pandora's box has already been open for some time.
 

Nivash

Member
As a radiologist, I am fucking scared about AI.

But, it's inevitable, so bring it on.

You should be worried. Once AI owns image analysis, you're going to spend your days doing ultrasounds until the Ultrasound-o-Bot 3000 can do it cheaper. Then what will you do? Spend time with patients? Ha!

Oh, except you'll promptly spontaneously combust when exposed to sunlight for the first time in decades, once you're forced out of your artificially lit computer-cavern. Yes, I basically think of radiologists as benevolent vampires, why do you ask?
 
It's really funny when people dismiss the obvious threat of AI. It is inevitable that issues like this will be of increasingly greater concern as time passes. I think the only people who genuinely don't understand that are those who know little of the subject. The media is partly to blame for giving a lot of people the perception that this is some flighty, ridiculous concept rather than the likely threat that it is.
 

reckless

Member
AI might cause issues in society (like (even) large(r)-scale poverty causing mass unrest), and maybe those causes will lead to some conflicts, but I very much doubt it'll be the "AI arms race" - countries competing over who can go the furthest with AI - that will be the reason for a large-scale worldwide war.

Why wouldn't there be an arms race? It would/will be the most important arms race in the history of the world, and there can only be one winner. The stakes will be power unmatched by any other country on Earth; if you ain't winning that race, it would make sense to attack before it's too late.
 

Biske

Member
If AI takes control, there won't be any room for personal assistants.




Hardly on the list of problems we should be actively focusing on. By the time this one comes around, so many other things will have already fucked us.
 

KHarvey16

Member
If AI takes control, there won't be any room for personal assistants.




Hardly on the list of problems we should be actively focusing on. By the time this one comes around, so many other things will have already fucked us.

Wouldn't that preclude us worrying about climate change too?
 
Why wouldn't there be an arms race? It would/will be the most important arms race in the history of the world, and there can only be one winner. The stakes will be power unmatched by any other country on Earth; if you ain't winning that race, it would make sense to attack before it's too late.

Yeap

People need to do more research on the subject. I'm entirely sympathetic to how outrageous a claim of this sort appears if you know little of the issue, but that doesn't change the reality. This will quite likely affect us within our lifetime.

Ok, show your math.

How old are the posters here? The jury's out, but many in the field expect something resembling general intelligence in AI by the turn of the century at the latest.
 

HStallion

Now what's the next step in your master plan?
The thing is, all this talk of an AI god is kind of funny, as it presumes an AI that instantly demands all power and uses itself to destroy humanity. For all we know, AI might just be the future of humanity, if we merge with machines at an intrinsic level. Not saying it's not something to consider and at least plan for, but for all we know, AIs could develop around us in a more "natural" way instead of enslaving or destroying us.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
It's really funny when people dismiss the obvious threat of AI. It is inevitable that issues like this will be of increasingly greater concern as time passes. I think the only people who genuinely don't understand that are those who know little of the subject. The media is partly to blame for giving a lot of people the perception that this is some flighty, ridiculous concept rather than the likely threat that it is.

It's also equally obvious that almost nobody participating in that discussion has ever engineered machine-learning-based software and has thus not experienced its rather sobering reality. (I've read Bostrom's book, it's interesting philosophy.)
 

Biske

Member
Wouldn't that preclude us worrying about climate change too?

Could argue that.

It's not like we can stop climate change anyway; our lukewarm responses to trying to mitigate it are laughable. I think humanity is proving pretty well that we are incapable of saving ourselves from these big problems.

I'm sure the countless millions currently dying from war, poverty, starvation, sex slavery, dehydration, etc., could give two shits about stopping robots.


But stopping AI is fun and zany and sexy, so of course Musk has a hard-on for it.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
Why wouldn't there be an arms race? It would/will be the most important arms race in the history of the world, and there can only be 1 winner. The stakes will be power unmatched by any other country on earth, if you ain't winning that race it would make sense to attack before its too late.

This isn't Civ where you have a turn counter towards "AI Singularity" other players can look at and prepare for. Some people in this thread are even speculating that we'll hit the threshold of Strong AI before we even realize it. What will other powers do then?

Ironically, trying to turn AI Security into a real issue is more likely to cause this "AI arms race" than simply keeping mum about it. Most military powers in the world are too absorbed in their own present day problems to give heed to hypothetical sci-fi ones. The response to climate change is still listless and slow, despite much of the world's economic power being concentrated on coastal areas. There need not be an arms race if the people in control of the arms (i.e. politicians and oligarchs) are unaware or are skeptical of the "risks" of Strong AI.
 
It's also equally obvious that almost nobody participating in that discussion has ever engineered machine-learning-based software and has thus not experienced its rather sobering reality. (I've read Bostrom's book, it's interesting philosophy.)

You don't need to be an expert in the field to listen to experts. I assume they're more than understanding of that reasoning when it comes to issues like climate change.
 

KHarvey16

Member
Could argue that.

It's not like we can stop climate change anyway; our lukewarm responses to trying to mitigate it are laughable. I think humanity is proving pretty well that we are incapable of saving ourselves from these big problems.

I'm sure the countless millions currently dying from war, poverty, starvation, sex slavery, dehydration, etc., could give two shits about stopping robots.


But stopping AI is fun and zany and sexy, so of course Musk has a hard-on for it.

False dichotomy. This is the "we spend money on space when people on Earth are starving!" fallacy. It's like being told you can either pay off your student loans or buy lunch.
 

Biske

Member
False dichotomy. This is the "we spend money on space when people on Earth are starving!" fallacy. It's like being told you can either pay off your student loans or buy lunch.

I don't think so. Obviously we can spend money on a lot of things. But it's sexy to put it toward these big, crazy things. We could work on both, but IMO these fuckers should be shouting from the rooftops about how we should make sure everyone on the planet has clean water.

I couldn't really care less about saving humanity from some future apocalypse when we can't save it from the most mundane of issues.


I think we are equally fucked either way, but we can at least get some people some water from your soapbox and work on the serious shit normally.


What are we going to do about AI anyway? Try to design it properly? Seems to me that if it's going to be a problem, our answer is to not develop it at all.

Which isn't going to happen, so it's just a matter of time until one is created, and then it does what it wants.
 