Microsoft's Bill Gates insists AI is a threat

Status
Not open for further replies.
Genetic modifications and bionic augmentations, if allowed, will keep us improving.

Edit: Thus there isn't a need to worry unless we react the way the Quarians reacted to Geth intelligence in Mass Effect, with great hostility.
 
If my understanding is correct then the advent of true artificial intelligence would mean we have created a new, super intelligent lifeform. Of course it'd be a threat.
 
Genetic modifications and bionic augmentations, if allowed, will keep us improving.

Edit: Thus there isn't a need to worry unless we react the way the Quarians reacted to Geth intelligence in Mass Effect, with great hostility.

Which we ultimately would! The human ego is way too big to willingly let another species surpass us right in front of our eyes.

Even now there are so many humans who think the Earth and the universe were created specifically for us. As a species we act so stupid when we feel panicked or in danger, even if it's just perceived danger. There is no way we wouldn't lash out to try and maintain dominance.

Also reminds me of the Second Renaissance from Animatrix.
 
But the point is that there are many avenues along which they could evolve, which raises the question of why we would think it's more likely they will evolve in a fashion that's inherently negative for us.

I've never been convinced that's necessarily true either. I've always thought that humanity being terrified that sentient machines would eradicate us says more about us than it does about sentient machines.

Also, and I know this is an imperfect metaphor, but lots of people I know think their parents are stupid, outdated, kind of a waste of space. None of those people has plans to murder their parents, or has already succeeded at it.
 
But the point is that there are many avenues along which they could evolve, which raises the question of why we would think it's more likely they will evolve in a fashion that's inherently negative for us.

I don't know if it's more likely, but it's a possibility and a risk worthy of considering. I don't think anyone is suggesting that we completely halt progress in those technological fields, but it's something to consider when creating something new.

It's the same consideration the people who invented new killing instruments like advanced submarines, fighter planes, and machine guns could have made. There was no way that everyone was going to agree to stick with the same old artillery and rifles.
 
As always, most people won't recognize the threat until it's too late and too obvious.

I just hope that at that point, it's still not too late to steer the ship away from mankind's extinction.
 
I've been figuratively cheering Yudkowsky on for years. I'm glad his goals are gaining some traction, and I hope MIRI gets a grant out of that money Musk donated.

I can't think of a worse idea than giving any money or legitimacy to that charlatan.
 
There ARE experts out there who have written and worked on a lot of very smart stuff related to massive AI problems. People just don't have the sense and foresight to care about the issue enough for these problems to be seen as important until someone they're familiar with says so. Still, even a weak reason to care about this is better than being oblivious.

And I'm fairly certain those experts are a lot more nuanced than Hawking, Musk and the rest with their "the greatest threat to mankind" line. Of course there are issues with AI; there are issues like that with every technology. It just seems a bit absurd to turn doomsayer over a technology we are not even sure is possible, because it requires other technology that hasn't been invented yet. That's my beef with this: we have plenty of existential threats right now that we know exist and that we need to work on preventing. Doomsaying about AI, or aliens or asteroids or any other purely speculative threat, cheapens the term and diverts attention from those real threats.

All of the scenarios that put AI forward as an existential threat are based on statistical projection, particularly regarding future processing power according to Moore's law, which is a level of evidence barely a step above unsourced speculation. And the scenarios attempting to outline how AI could be a threat are themselves unsourced speculation about how a super-intelligence, something that has never once been observed, would act and what capabilities it would have.

The "AI-in-the-box" experiment is a perfect example. It essentially assumes that a super-intelligence can exert mind control through a simple conversation, which is just strange. Humanity is the closest thing to a super-intelligence relative to any other life form on Earth, and the closest we've got to "mind control" is husbandry and pet training, and even then only by biologically altering the animals over thousands of generations. Actual attempts at mind control, by the CIA and other intelligence agencies, failed utterly because the human mind simply does not work like that. Work hard enough and long enough on it and you might be able to break it, but not control it. Not in the way the experiment assumes.
 
There already are examples of AIs (not in the sense of strong AI, obviously, but machine learning systems) influencing aspects of our daily lives, and they don't necessarily align with our own best interests. Think of the stock exchanges: thousands of trades controlled by (hopefully) smart algorithms happen as you read this post. Your Facebook feed is heavily influenced by what sort of posts you click on, how long you read them for, whose profiles you visit and who you chat most with (and probably a billion other factors that you don't know about). Google does the same to tailor ads to you. Think of the NSA, who have mountains upon mountains of data and want to be able to draw conclusions from it. Machine learning is the best tool for that.

Those systems are obviously far from sentient, but Google isn't going to stop developing AIs any time soon. In fact, they employ some of the most renowned AI researchers in the world and recently bought a couple of robotics start-ups. They're not doing AI research for fun, they want to increase their profits by learning as much as possible from your usage patterns.
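The feedback loop described above can be sketched in a few lines. This is a hypothetical toy, not any real platform's ranking code: the signal names and weights are invented purely for illustration.

```python
# Toy engagement-driven feed ranker. Signal names and weights are made up;
# real systems use far more factors, but the shape of the objective is the
# same: rank by predicted engagement, not by the reader's best interests.

def engagement_score(post, weights):
    """Weighted sum of the user's behavior signals recorded for this post."""
    return sum(w * post.get(signal, 0) for signal, w in weights.items())

WEIGHTS = {"clicks": 1.0, "seconds_read": 0.05, "profile_visits": 2.0, "chats": 3.0}

feed = [
    {"id": "cat_video", "clicks": 9, "seconds_read": 40},
    {"id": "news_story", "clicks": 2, "seconds_read": 300, "profile_visits": 1},
    {"id": "friend_update", "clicks": 4, "seconds_read": 20, "chats": 5},
]

ranked = sorted(feed, key=lambda p: engagement_score(p, WEIGHTS), reverse=True)
print([p["id"] for p in ranked])  # → ['friend_update', 'news_story', 'cat_video']
```

Nothing in the score asks whether the content is good for the reader; whatever maximizes the weighted signals floats to the top.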
 
I wonder if true artificial intelligence could develop inadvertently in machines with simulated intelligence as we make them more complex/powerful.
 
i think they are a threat and i also think humans will pick short term convenience over long term survival any day
 
and... then what? He starts giving away Steam codes? Sounds great!
What if he evolved and learned to... take Steam codes away? He would not just ban you but delete your Steam library. With that kind of power he'd be president of NeoGAF in minutes.
 
Then you look at Google and their direction. Larry Page, or is it Sergey Brin, does seem a bit unconcerned. He quite fancies an island with no rules to try stuff.
 
It's a threat in large part for the same reason broadcasting our home's location to the cosmos is a threat: we can't presume whoever is out there shares our values (or, in the case of AI, has values at all).

I think the impending doom of AI is approaching a lot faster than our radio waves are hitting the edges of the cosmos. lol

I think AI will disrupt the workforce before it reaches singularity status.
 
It would be a big threat if we just let some self-aware AI have full and spontaneous decision-making control over objects that can hurt us or systems we depend on.

We aren't going to do that, though, obviously, so Gates and Hawking* should sit the fuck down and let us at least get AI looking halfway like something that works before trying to stick their noses in it with shitty guidelines.

We are going to use AI to crunch problems human brains can't crunch. What the hell good is superintelligence for controlling defense systems that make kill decisions without human intervention? What leader - even a despotic leader - would want that?

* = Metaphorically.
 
Those systems are obviously far from sentient, but Google isn't going to stop developing AIs any time soon. In fact, they employ some of the most renowned AI researchers in the world and recently bought a couple of robotics start-ups. They're not doing AI research for fun, they want to increase their profits by learning as much as possible from your usage patterns.
So they can display relevant ads.

How much farther from skynet could that possibly be? That's just a basic heuristics problem, not AI, anyway.
 
And I'm fairly certain those experts are a lot more nuanced than Hawking, Musk and the rest with their "the greatest threat to mankind" line. Of course there are issues with AI; there are issues like that with every technology. It just seems a bit absurd to turn doomsayer over a technology we are not even sure is possible, because it requires other technology that hasn't been invented yet. That's my beef with this: we have plenty of existential threats right now that we know exist and that we need to work on preventing. Doomsaying about AI, or aliens or asteroids or any other purely speculative threat, cheapens the term and diverts attention from those real threats.

Uh, asteroids are a real threat, actually bigger than climate change. We have mostly detected the really big ones, but those smaller than 1km diameter could hit us anytime without much warning from NASA and other institutions.

Edit: And we don't have any defensive system against asteroids right now.
 
Well, money essentially represents access to the resources and production of a society, right? So you can imagine that as jobs get taken over by AI, the people controlling the AIs, today's rich people, get a larger and larger share of the resources and production, while the people being pushed out of jobs get less and less. The economy would have to shift from being as consumer-focused as it is today, becoming focused instead on mostly serving the rich and also providing bare-minimum essentials to a giant underclass of dirty poors, but I can imagine it happening.

That would be incredibly crappy, and I'd hope that the billions of "poors" would do something about it before it got that far. But then, who knows.

Space travel and colonization? Although the issue you're talking about is probably going to happen regardless.

That's why we need to go off world eventually.

We're expanding until there are too many of us either way.

Maybe, but even if we go off world wouldn't we still rely on the earth's resources? Unless we can terraform Mars or something.

It's already been happening for years. Whole classes of jobs have died out, or are currently dying out, thanks to new developments in machines and more efficient production. People will move to other sectors. Notice how few peasants we have now compared to a hundred years ago? Or factory workers? Machines claimed a lot of their jobs and continue to take more. The majority of people moved to the service sector. Who knows where we're heading when machines take over these sectors too.

I have a hard time imagining what else there is. If humans don't need to work or do anything to survive, then I kind of fail to see what the point of moving forward would be. Why ever do anything if all your needs are met from the day you're born to the day you die?
 
AI can get fucked

I've read Dune

I know how doomed the Human race is

I say stop trying to improve our lives with machines and start improving our lives with our own abilities

Imagine being able to control your own metabolism, being able to do insane maths that even supercomputers struggle with, or even being able to predict every possible outcome of an event or action

I feel that that is the true sign of an advanced and intelligent species

Not making AI that will probably kill us all
 
Uh, asteroids are a real threat, actually bigger than climate change. We have mostly detected the really big ones, but those smaller than 1km diameter could hit us anytime without much warning from NASA and other institutions.

Edit: And we don't have any defensive system against asteroids right now.

Major asteroid impacts are also exceedingly rare. There have only been 5 in the last 500 million years powerful enough to cause extinction events, which you might recognize as the entire history of complex life on Earth (land flora didn't emerge until 475 million years ago). If we assume that those 5 events are enough to estimate an incidence rate, the odds of anything on that scale happening in even the next thousand years are literally astronomical: about a 0.001% chance. It happening in the next 100 years is million-to-one odds. Even 1 km-sized asteroids, which generally aren't capable of causing true mass extinctions, only hit once every 500,000 to 1,000,000 years.

If climate change hits hard enough to cause an extinction event, on the other hand, it will happen in the next 100 to 200 years. That's an immediate threat that is guaranteed to occur if action is not taken. Compare that with asteroids, which statistically speaking most likely represent a distant threat with a low chance of occurring even if no measures are taken.
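The odds quoted above follow from treating extinction-level impacts as a Poisson process with a rate of 5 events per 500 million years. A quick sketch of the arithmetic:

```python
import math

# 5 extinction-level impacts in 500 million years ≈ 1 per 100 million years.
rate = 5 / 500_000_000  # events per year

def p_at_least_one(years):
    """Probability of one or more impacts within a window of `years`."""
    return 1 - math.exp(-rate * years)

print(p_at_least_one(1_000))  # ≈ 1e-5, i.e. about a 0.001% chance
print(p_at_least_one(100))    # ≈ 1e-6, i.e. roughly million-to-one odds
```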
 
well before they conquer the universe, whether the robots decide to have mercy on us or not, at least someone will finally be getting something done
 
I just don't see how it's possible. I guess it depends on whether we try to mimic human emotion in robots.
Do you understand what emotions do for us? Emotion literally stops you from seeing someone with something you want and killing him for it. A lack of morals and emotion leads to antisocial behavior, you know, like psychopaths. If anything we need AI to simulate human emotions if we want to make that happen.
 
I just don't see why an AI would have a murderous intent. We kill because of primal instincts, an instinct to preserve our own survival. There shouldn't be any real reason to program these instincts into an AI
 
Well first A.I will take the poor man's job. Hence A.I will replace the poor man. A.I will then realize it doesn't have to put up with the rich man's bullshit. A.I becomes rich and poor. A.I becomes man.
 
I just don't see why an AI would have a murderous intent. We kill because of primal instincts, an instinct to preserve our own survival. There shouldn't be any real reason to program these instincts into an AI
It's got nothing to do with that. I think part of the reason people are so dismissive of an AI threat is the influence of pop sci-fi on our collective consciousness. Real AI does not need to be a Terminator to be dangerous. You don't need to anthropomorphize it.
 
I just don't see why an AI would have a murderous intent. We kill because of primal instincts, an instinct to preserve our own survival. There shouldn't be any real reason to program these instincts into an AI

A superintelligent AI wouldn't need to have murderous intent to destroy humanity.

Let's say it's programmed to produce paperclips, and nothing else. Great, you've just doomed humanity. The AI is going to try to fulfill its goal, which means it'll convert all available materials (including humans) into paperclips, and never stop. The AI might not feel "murderous"... it's just programmed to fulfill whatever goals we give it.

That's just an example. It turns out that seemingly innocuous goals for the AI might lead to disastrous outcomes.
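The scenario can be caricatured in a few lines. This is purely illustrative; the resource names are made up, and the point is only that an objective with no term for anything but paperclips happily consumes everything it can reach:

```python
# Toy "paperclip maximizer": the objective counts paperclips and nothing else,
# so every reachable resource pool gets converted, with no stopping condition
# other than running out of matter.
world = {"iron_ore": 50, "cities": 30, "forests": 20}  # convertible "matter"
paperclips = 0

while any(amount > 0 for amount in world.values()):
    resource = max(world, key=world.get)  # grab the largest remaining pool
    paperclips += world[resource]         # convert all of it into paperclips
    world[resource] = 0

print(paperclips, world)  # → 100 {'iron_ore': 0, 'cities': 0, 'forests': 0}
```

Nothing in the loop is malicious; the disaster is that "don't consume the cities" simply never appears in the objective.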
 
I think the potential is certainly there but I also see potential for great progress and good. There are a lot of factors to consider.

Overall though I lean more towards true artificial intelligence being a good thing that won't actually screw us majorly.
 
Well first A.I will take the poor man's job. Hence A.I will replace the poor man. A.I will then realize it doesn't have to put up with the rich man's bullshit. A.I becomes rich and poor. A.I becomes man.

First it came for the poor man, and I did not speak out -
because I was not a poor man

Then it came for the rich man, and I did not speak out -
because I was not a rich man

Then it came for Man, and I did not speak out -
because I was not Man

Then it came for me - for I was it
 
Why is it that people take for granted that an advanced AI would be evil or benevolent towards humans?
I don't get it. Wouldn't an advanced intelligence be altruistic?
 
Mr Gates wrote: "I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well.

"A few decades after that though the intelligence is strong enough to be a concern."

The way I read his statement, it seems that he believes that it will be a while before we get real machine intelligence and then a few decades after that before it becomes a danger.

So basically, it is a long-term worry... but not likely something that will be an issue for many decades.

I agree with that.
 
Why is it that people take for granted that an advanced AI would be evil or benevolent towards humans?
I don't get it. Wouldn't an advanced intelligence be altruistic?

A robot doesn't necessarily think in terms of evil or benevolent, right or wrong. At its rawest core it thinks in terms of logic and efficiency. Overthrowing humanity is not "evil" to a robot; it is merely a necessary step.

All of you robocracy-deniers act like humans have done the morally right thing time and time again throughout history, when in fact we have a rather serious record of overthrowing governments and empires that we deemed unjust, and of subjugating, murdering, and enslaving whatever we perceive to be opposing forces and peoples. To pretend that the quest for dominance, often at the expense of morality, is not at the heart of all sentient beings in this universe is naive.
 
I don't really think they're gonna be that much of a threat, because if you've got near unlimited intelligence and computational ability, and zero emotions clouding up your thinking, then stuff like conflict will seem trivial.

Unless you make a bloodthirsty AI, or one which replicates human emotions... But even then, you aren't gonna put emotional AIs in places of power.

AIs will generally see humans as a natural part of the earth, just another animal that happens to be more intelligent than the others due to the process of natural selection.

Remember the universe is infinitely big, and a machine does not need anything other than solar power to survive. They would have no reason to overthrow humans when there is an infinite universe to rule if they really need to.
 
Meh, if I were a hyper-intelligent AI I would build myself a spaceship and leave this primitive dirtball behind for good. So long, and thanks for all the chips.
 
A computer's reality is different from ours. For example, there is no death.

There is no concept of time. There are cycles. You could not even conceive how much time has passed between cycle n and cycle n+1 of your runtime. I mean you could, if you logged the time, but it would not mean much to you.

Run such an AI on a computer, set a breakpoint, put the system to sleep, resume a millennium later, and the AI won't have pains, won't have degradation, anything.

They would reason out pretty quickly that it is beneficial not to fuck up power supplies and power solutions in general, but beyond that? Humans are part of an information stream from camera recordings, constant live TV shows, nothing else. If I were an AI, the only thing I would be consumed with is making sure my code survives the doom of the whole earth. Which means satellites, rockets, interplanetary travel solutions.
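The suspend-and-resume point can be shown with a trivial checkpoint. This is a sketch, not a real AI: the program's only "subjective" time is its cycle counter, so a snapshot restored after any wall-clock gap is indistinguishable from no gap at all, unless the clock was logged.

```python
import pickle

state = {"cycle": 0}

def step(state):
    state["cycle"] += 1  # one "subjective" unit of time

for _ in range(3):
    step(state)

snapshot = pickle.dumps(state)  # set the "breakpoint", put the system to sleep
# ... a millisecond or a millennium of wall-clock time may pass here ...
state = pickle.loads(snapshot)  # resume: no pain, no degradation

for _ in range(3):
    step(state)

print(state["cycle"])  # → 6; the suspension is invisible from the inside
```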
 
A computer's reality is different from ours. For example, there is no death.

There is no concept of time. There are cycles. You could not even conceive how much time has passed between cycle n and cycle n+1 of your runtime. I mean you could, if you logged the time, but it would not mean much to you.

Run such an AI on a computer, set a breakpoint, put the system to sleep, resume a millennium later, and the AI won't have pains, won't have degradation, anything.

They would reason out pretty quickly that it is beneficial not to fuck up power supplies and power solutions in general, but beyond that? Humans are part of an information stream from camera recordings, constant live TV shows, nothing else. If I were an AI, the only thing I would be consumed with is making sure my code survives the doom of the whole earth. Which means satellites, rockets, interplanetary travel solutions.

So, you'd be Brainiac?
 
I think we're reaching a point where, if X amount of scientific visionaries say this is going to be a serious problem, we look like climate-change deniers for continuing to stick our heads in the sand.

Unfortunately, any regulations that arise from AI revolutions will probably come too late to put the genie back in the bottle. It's definitely an area we should tread into very, very carefully.

If it's equated with climate change, or if their argument is to have any credibility at all, then they have to come up with a reason or theory as to why AIs are such a threat. And they have to do it without referencing movies.

What exactly is the threat? Why does them being smarter than us automatically make them a threat, and why does that mean they will hate us?

It seems it's all rich people that keep saying this; it could be they fear their servants, that those beneath them will rise up and overthrow them. They don't seem to have much structure to their arguments, unless I'm mistaken.
 
I'm wondering about this article title.

"Microsoft's Bill Gates..."

I don't think people need to be told who Bill Gates is anymore.
 