
Elon Musk tweets: "Competition for AI superiority... most likely cause of WW3"

Haly

One day I realized that sadness is just another word for not enough coffee.
You don't think Elon Musk taking a job telling Trump he's wrong about climate change, despite it making him unpopular and damaging his brand, was a humanitarian cause? I'd say it's more important than politics.

He would've done the same thing in the interest of his businesses. And didn't Musk quit the council?
 

Kyzer

Banned
He would've done the same thing in the interest of his businesses. And didn't Musk quit the council? It's collapsed like every other Trump council that isn't filled with his stooges.

Yes, he left once Trump made a final decision on climate change because that was the only reason he was even there. And even if it was only in the interest of his business, which is something we wouldn't be able to know for a fact, if that business is accelerating the world's adoption of sustainable energy, what is the effective difference? Especially when it actually made him less popular, and he stayed until we withdrew from the climate agreement anyway? Either way it benefits humanity.
 

qcf x2

Member
People are in deep shit now. What is Elon doing for that? Is he going to deliver food via Hyperloop?

My problem isn't these causes, it's us treating science like TMZ and getting a boner every time assholes like Elon tell us what we should be focusing on, while doing jack shit for any problem that is a problem now.

Wanna change shit? Go vote in a local election. Then worry about fucking AI.

Bro you sound like a climate change denier, do you realize that? "We got problems now, don't talk to me about 100 yrs from now!"

Also, any post dismissing the threat of killer AI that mentions something irrelevant about Musk (his hair, his success, etc.) is pretty embarrassing. Those details don't contribute to the topic at hand.

Edit: AI is an existential threat. Not something like poverty which has existed throughout human history. Not a president that you don't like that will be gone in 3.5 years. Something that could potentially destroy civilization.
 
The thing is, though, AI would think beyond our capacity, and who knows what it might come up with that is beyond us in understanding and application. We may not be able to wrap our heads around that happening, but it's possible AI could create something that no human application could defend against.

And what would that application work on? Would the current operating systems around the world work with it?
 

Aizo

Banned
Calm down everyone, you have nothing to worry about.


0101010110 11001 1001010 1100111101 01010101 101010010111100111111 010101 0101010101 01011
101
01011 101010
1 1010 10 10
01001001 00100000 01100100 01101111 01101110 00100111 01110100 00100000 01110010 01100101 01100001 01101100 01101100 01111001 00100000 01100111 01100101 01110100 00100000 01110111 01101000 01100001 01110100 00100000 01111001 01101111 01110101 00100111 01110010 01100101 00100000 01110100 01110010 01111001 01101001 01101110 01100111 00100000 01110100 01101111 00100000 01110011 01100001 01111001 00101110 00100000 01010111 01101000 01100001 01110100 00100000 01100100 01101001 01100001 01101100 01100101 01100011 01110100 00100000 01101001 01110011 00100000 01110100 01101000 01100001 01110100 00111111
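(For anyone who doesn't read binary: that last line is plain 8-bit ASCII, one byte per character. A throwaway Python snippet, purely for illustration, decodes it:)

```python
# Decode space-separated 8-bit binary into ASCII text.
def decode(bits: str) -> str:
    return "".join(chr(int(byte, 2)) for byte in bits.split())

# First few bytes from the post above as a demo; paste in the full
# line to get the whole punchline.
print(decode("01001001 00100000 01100100 01101111 01101110 00100111 01110100"))
# -> "I don't"
```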
 
All of Musk's companies have lost shitloads of money every year. The only reason they exist is billions in grants from the government. He seems like a bit of a loon.

Why will the leading nation in AI 'rule the planet'? The hunt for AI will lead to WWIII? That's some far-fetched paperback sci-fi shit. Musk seems to be that sort of particularly goofy nerd who believes certain sci-fi ideas are simply inevitable.

Although it does seem like the basis for a cool sci-fi movie: a country and an AI in cahoots to rule the planet with a brutal robotic fist.
 

E-Cat

Member
You think the "puzzle" is an interplanetary human civilization. I think the "puzzle" is a world where people don't need to be hungry or discriminated against for the circumstance of their birth. In your scenario, human welfare is a side effect of technological and economic expansion, and not a requirement. In mine, humanitarianism is the sole goal, everything else comes second.
Well, none of that will matter if we don't make our civilization redundant by inhabiting multiple planets before there's some inevitable extinction scenario.

I'm not saying we shouldn't invest in ending hunger and discrimination right now. In fact, we should probably allocate more money to that than to interplanetary travel -- which we are, in fact, doing.

Elon put it well:
"If we think it's worth buying life insurance on an individual level, then perhaps it's worth spending more than - spending something on life insurance for life as we know it, and arguably that expenditure should be greater than zero. Then we can just get to the question of what is an appropriate expenditure for life insurance, and if it's something like a quarter of a percent of the GDP that would be okay. I think most people would say, okay, that's not so bad. You want it to be some sort of number that is much less than what we spend on health care but more than what we spend on lipstick. Something like that, and "I like lipstick, it's not like I've got anything against it."
 

Haly

One day I realized that sadness is just another word for not enough coffee.
Yes, he left once Trump made a final decision on climate change because that was the only reason he was even there. And even if it was only in the interest of his business, which is something we wouldn't be able to know for a fact, if that business is accelerating the world's adoption of sustainable energy, what is the effective difference? Especially when it actually made him less popular, and he stayed until we withdrew from the climate agreement anyway? Either way it benefits humanity.

This was the start of this whole tangent:

Yet, I cannot even begin to understand the mindset and general philosophy of someone who comes into this thread and their first impulse is to type "fuck Elon Musk". Truly baffling.
He's a technocrat and emblematic of some of the problems currently plaguing our generation, vis-à-vis our society being overturned by the shift towards automation and a goods-as-service economy with no adequate mechanisms for a smooth transition. He doesn't seem to care about anything beyond his business ventures, and indeed this tweet itself reads like a business venture from a certain angle.

Zuckerberg gets a lot of flak for much of the same reasons, and now that he's gunning for a presidential run certain people hate him more than ever. These are not the people that should be leading public opinion, insofar as they have very little compassion or concern for societal problems outside the tech-bubble.

It's not very complicated.

And in the end my mind remains unchanged. He's interested in his businesses first and foremost; many of you even seem to support this. If it turned out his business was no longer "benefiting humanity", do you think he would change course? I'm going to lean no. I'm not interested in leaving my future to someone who thinks human welfare is a secondary concern to profit, even if for the time being our goals align.

Well, none of that will matter if we don't make our civilization redundant by inhabiting multiple planets before there's some inevitable extinction scenario.

And none of Musk's ventures will matter if social upheaval destroys the fabric of society that allows techno-libertarians like him to do their thing before they can colonize Mars. We can keep going back and forth like this.

When he says "AI superiority most likely cause for WW3", the unspoken words are "there is nothing else that's more likely to cause WW3 than AI", which I think is a bold guess. Wars over dwindling resources like water, terrorism, the rise of totalitarianism, Russia, China's bubble, civil wars caused by economic collapse due to automation -- all these things carry within themselves the potential for large-scale conflict, but Musk doesn't seem to see any of it.
 

qcf x2

Member
All of Musk's companies have lost shitloads of money every year. The only reason they exist is billions in grants from the government. He seems like a bit of a loon.

Why will the leading nation in AI 'rule the planet'? The hunt for AI will lead to WWIII? That's some far-fetched paperback sci-fi shit. Musk seems to be that sort of particularly goofy nerd who believes certain sci-fi ideas are simply inevitable.

Although it does seem like the basis for a cool sci-fi movie: a country and an AI in cahoots to rule the planet with a brutal robotic fist.

Whoever mass-produces semi-autonomous robotic soldiers first suddenly has an army capable of taking over the world; no sci-fi books necessary to deduce that. AI police and AI drone swarms are currently in the testing phase.

As for AI itself, imagine a computer that does for machines what people do now: maintains them and comes up with ways to make them faster, more capable, longer lasting, more efficient. A computer that essentially comes up with new ways to better itself, and then instructs other computers to build the newer/better version(s). If the computer becomes fully automated, even from a 1s-and-0s perspective, why would it remain beholden to the interests of humans? It would determine for itself the best course of action, and no plea to emotion would be able to stop it. That's where you're getting into "sci-fi", I suppose, but the same could be said for the microchip, the internet, etc. The IoT will soon become the IoE; that is fact. There will be no turning the machine(s) off at that point.
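To make that self-improvement loop concrete, here's a toy sketch (the numbers are invented for illustration; this models no real system) of why the process compounds so quickly:

```python
# Toy model of recursive self-improvement: each generation designs a
# successor, multiplying its own "capability" by a fixed improvement rate.
capability = 1.0         # arbitrary starting capability
improvement_rate = 1.5   # assumed: each version improves on itself by 50%

for generation in range(1, 11):
    capability *= improvement_rate  # the new version redesigns itself
    print(f"generation {generation:2d}: capability = {capability:7.1f}")

# After 10 generations the system is ~58x its starting capability.
# Oversight that reviews one generation at a time falls behind fast.
```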
 

E-Cat

Member
This was the start of this whole tangent:



And in the end my mind remains unchanged. He's interested in his businesses first and foremost, many of you even seem to support this. If it turned out his business was no longer "benefiting humanity", do you think he would change course? I'm going to lean no. I'm not interesting in leaving my future to someone who thinks human welfare is a secondary concern to profit, even if for the time being our goals align.
If Elon's main motive was profit, then why would he invest all of his net worth in startups in the two industries most likely to ruin him -- commercial space and automotive? He came incredibly close to bankruptcy, causing him unimaginable psychological pain. No one goes through that without a purpose that is bigger than mere profit.

Your goals may not align with his, but do not think for a second he's doing what he's doing solely for the money.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
It's good PR. I mean, some of you treat him like the techno-messiah, so I think it was a good call.
 
I don't think it would be WW3; it would just be the systematic extermination of our species by a superior mechanized race. The Terminator is a bit dumb because it implies we have a chance, since we are able to hijack a time machine to travel back in time and prevent the war from even occurring. No time machines exist, so in reality we would just be exterminated.

Also LOL at GAF's continuing hate-boner for Elon Musk.
 
AI always feels like the most privileged of things to worry about. Must be because I only hear about it from rich guys who literally have nothing else to worry about.

 

Neith

Banned
Honestly, I like the guy, but he absolutely has no fucking clue what he is talking about here. This is just some random thought of his that makes a headline. He has no insight into how wars start. He has no background that would make me trust that he knows why wars start. Predicting WW3 is just ridiculous. This is all for attention and nothing more, IMO.
 
I'm more worried about climate change, wealth inequality, and automation.

Shit, we can't even treat people who are a different color, gender, or sexual orientation with human decency.

I'm not hopeful for the future like I used to be.
 
You should do research on Cambridge Analytica and Palantir. Cambridge Analytica has become increasingly successful at manipulating people through social media. They were key players in Brexit and Trump's election. They also recently worked in Kenya, where the citizens are currently rioting and demanding a re-election because they can't believe the results. Sound familiar? They've also been involved in many other countries such as Russia, Latvia, Lithuania, Ukraine, Iran, and Moldova. Mercer owns Cambridge Analytica and is a significant investor in Breitbart. Bannon owns Breitbart and also holds a chief role on the board of Cambridge Analytica. Cambridge Analytica ties Mercer, Bannon, Putin, Trump, Sessions, Flynn, and Farage together, among others. For their first project, in Trinidad, they partnered with the government and Palantir to record all browsing history and phone calls of the citizens, as well as geomapping all crime data. An AI was able to give the police rankings of how likely a citizen was to commit a crime, using a language processor on recorded conversations and all other stored data on the individual. Keep in mind Palantir has since moved on and is working with a lot of large US cities such as LA, and Cambridge Analytica is now scoring tons of contracts in the Pentagon.


Everyone should read this article and others by the Guardian:

https://www.theguardian.com/technol...eat-british-brexit-robbery-hijacked-democracy

And once again, the sensible "this is already happening" post gets ignored in a thread on 'AI', when in reality modern AI really just means 'machine learning + models + vast amounts of data'. We've had models for a long time; we've also had machine learning for quite some time before it tipped over into the main computing paradigm. What nobody had before was vast amounts of data. That is what Facebook (and others) have wrought upon human history (if it is still 'human' history, that is): an explosion of data on human behavior that was previously hard to get. Now that we have it, we don't need quantum computers to extract patterns from the flimsiest trace of data, because with this amount, even a modest model would be able to get an accurate answer over a wider range than any human could ever produce.
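A minimal sketch of that point (assuming scikit-learn and a synthetic dataset, since no particular tooling is at issue here): the model stays deliberately modest, only the volume of data grows, and accuracy climbs anyway.

```python
# Same modest model (logistic regression) trained on growing slices of
# data: predictive accuracy improves with data volume alone.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (100, 1_000, 10_000, 100_000):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"{n:>7} samples -> test accuracy {model.score(X_test, y_test):.3f}")
```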

I get that the philosophy of science isn't everybody's jam, but considering the ideas of accuracy, bandwidth, and data are not new, you should all make an effort to wrap your heads around these things. The world isn't linear, it's complex. It is vital that you understand the difference and how that affects us.


Also, there is a stunning denial in this thread about all the astroturfing operations that got us to the Trump presidency and then going "well it's not happening today...". Some of you are really, well, dense in some ways. Sorry not sorry. Who do you think would start WW3 anyway?
And while we're on this: why do you think the alt-right loves tech so much? (And why do they suddenly bring up 'philosophy', declaring the term bullshit, when talking to people who won't accept message A -- hate -- but will accept message B -- "popular thing is totally bullshit, man"? Think about it, that's all I ask.)
 

jillytot

Member
I'm a bit surprised at the negativity thrown at the idea that AI could potentially be a very real problem in the future. It's already pretty unsettling what it's capable of, and the field is advancing at a furious pace.

There is a public perception that because AI feels like a benign curiosity now, it will continue to be just as harmless in the future.

It's not a certainty that things will go south, but the threat of a problem is real. The main thing is that it's impossible to predict how these systems will behave once they pass a certain threshold. It doesn't take much investigation to realize we are speeding towards that threshold faster than most people are aware. I actively follow the field and I am floored by all the progress made even in the last year. It's progressing unlike anything else.

Even if computational power were to stop improving today, it's very unlikely that would have much effect on the progress AI is making at the moment. It's more a question of marshaling resources and hooking these systems up together in more clever ways.

Here are a couple of YouTube channels that have been pretty good at keeping up with the current research:

https://www.youtube.com/user/keeroyz/videos

https://www.youtube.com/user/carykh
 
That sounds like kool-aid but I'll humor you. What has Musk done, in terms of influencing public policy, to ensure that automation doesn't destroy the foundations of our society? The thing with "long-term" thinking is that it has a tendency to overlook the short term. Things are about to get bad really quick, much faster than the time it'll take for Strong AI to be an existential threat. Does Musk care at all about this or does he think we'll just get through it magically?

This just in: Lambda Legal does nothing to help eradicate Malaria, the assholes.
And the NAACP isn't doing anything about the National Debt!
Greenpeace isn't solving the Israel-Palestine conflict!

We can walk and chew gum at the same time. I'm glad there are some charities focusing on environmental issues, some on social justice issues, some on saving the most lives with the fewest dollars. And I'm glad that we have some charities focused on the short term and some on the long term.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
The key difference here is that Musk is deeply involved in the push towards automated infrastructure. He is a huge actor in this space. As far as I can tell his stance is "yeah, it's going to be a problem and someone else will have to solve it".

Like even from the OP, Musk, with one hand, proselytizes about the existential crisis of AI, and with the other hand, launches two new AI-related ventures.
 
Honestly, I like the guy, but he absolutely has no fucking clue what he is talking about here. This is just some random thought of his that makes a headline. He has no insight into how wars start. He has no background that would make me trust that he knows why wars start. Predicting WW3 is just ridiculous. This is all for attention and nothing more, IMO.

It's not hard to follow his train of thought. Once you create a general intelligence exceeding human capabilities in all areas (obviously existing AIs greatly exceed humans in some areas but not all), it will in principle be capable of recursive self improvement. If one country - or even a private corporation - is able to break through that barrier notably in advance of others, then that creates an extremely serious power imbalance. The worry is that by allowing that country to just keep on trucking while the rest of the world plays catch up, it will leave them in the dust. Russia being 4 years behind America in nuclear technology didn't make a big difference in the long run. Russia being 4 months behind America in creating a Strong AI could mean unchallenged American hegemony over Earth in perpetuity.

Even if the capabilities of an artificial super-intelligence turn out to be greatly exaggerated, or the rate of its increase is far slower than anticipated, the risk of causing a war in the pursuit of this technology depends on how seriously governments take it. If Vlad becomes pretty confident that President Chelsea Clinton has begun construction on a viable super-intelligence, and he believes it is an existential threat to him and his nation -- that's the scenario Elon is worried about.
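To put the "4 months behind" point in rough numbers (the doubling time here is a made-up assumption, not anything from the tweet): if post-takeoff capability doubled every month, a 4-month head start would be a constant 16x ratio, and the absolute gap would only explode.

```python
# Hypothetical: capability doubles monthly after takeoff. The chaser
# never closes a head start; the absolute gap keeps widening.
LEAD_MONTHS = 4

for month in range(0, 13, 4):
    leader = 2.0 ** (month + LEAD_MONTHS)
    chaser = 2.0 ** month
    print(f"month {month:2d}: leader {leader:8.0f}  chaser {chaser:6.0f}"
          f"  gap {leader - chaser:8.0f}")
```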
 

mokeyjoe

Member
The key difference here is that Musk is deeply involved in the push towards automated infrastructure. He is a huge actor in this space. As far as I can tell his stance is "yeah, it's going to be a problem and someone else will have to solve it".

Like even from the OP, Musk, with one hand, proselytizes about the existential crisis of AI, and with the other hand, launches two new AI-related ventures.

It's all publicity I guess. After the whole 'universe is a simulation' thing I don't tend to take anything he says seriously.
 
It's not hard to follow his train of thought. Once you create a general intelligence exceeding human capabilities in all areas (obviously existing AIs greatly exceed humans in some areas but not all), it will in principle be capable of recursive self improvement. If one country - or even a private corporation - is able to break through that barrier notably in advance of others, then that creates an extremely serious power imbalance. The worry is that by allowing that country to just keep on trucking while the rest of the world plays catch up, it will leave them in the dust. Russia being 4 years behind America in nuclear technology didn't make a big difference in the long run. Russia being 4 months behind America in creating a Strong AI could mean unchallenged American hegemony over Earth in perpetuity.

Even if the capabilities of an artificial super-intelligence turn out to be greatly exaggerated, or the rate of its increase is far slower than anticipated, the risk of causing a war in the pursuit of this technology depends on how seriously governments take it. If Vlad becomes pretty confident that President Chelsea Clinton has begun construction on a viable super-intelligence, and he believes it is an existential threat to him and his nation -- that's the scenario Elon is worried about.

When you put it like that, yeah, I can see why Musk is worried.

Because in all honesty, what Putin said is 100% true. The country that first develops Strong AI will become 'the ruler of the world' as you and other posters have already mentioned.
 

Malvolio

Member
Attacking the messenger is not disputing the message. It's not like he is the first to deliver this type of warning.
 

Lexad

Member
Gemüsepizza said:


AI is a wonderful topic to make stupid predictions about, because we are still light-years away from even understanding the concept of true AIs, or from having anywhere near the necessary amount of computation. Also lol at his comments to Zuckerberg. Musk should stop talking so much shit; he is just a businessman and not a scientist. Maybe he has read too many tabloid articles calling him a "genius"; alternatively, doing less coke might do the trick. There are enough other areas which have the potential for conflict in the future, for example the rising inequality even in western countries. But I bet guys like him don't want to talk about those very real problems, which already exist, because then he would have to take a critical look at himself.

Ironic you used Stark for that GIF.

And just because something isn't "real" yet doesn't mean it can't be a concern.
 

smurfx

get some go again
It's not hard to follow his train of thought. Once you create a general intelligence exceeding human capabilities in all areas (obviously existing AIs greatly exceed humans in some areas but not all), it will in principle be capable of recursive self improvement. If one country - or even a private corporation - is able to break through that barrier notably in advance of others, then that creates an extremely serious power imbalance. The worry is that by allowing that country to just keep on trucking while the rest of the world plays catch up, it will leave them in the dust. Russia being 4 years behind America in nuclear technology didn't make a big difference in the long run. Russia being 4 months behind America in creating a Strong AI could mean unchallenged American hegemony over Earth in perpetuity.

Even if the capabilities of an artificial super-intelligence turn out to be greatly exaggerated, or the rate of its increase is far slower than anticipated, the risk of causing a war in the pursuit of this technology depends on how seriously governments take it. If Vlad becomes pretty confident that President Chelsea Clinton has begun construction on a viable super-intelligence, and he believes it is an existential threat to him and his nation -- that's the scenario Elon is worried about.
They won't just rule the earth but will also rule space and whatever wealth can be drawn from it. Space colonization won't ever truly begin until robots can do much of the heavy lifting. Deep space exploration will likely only be possible with robots.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
Attacking the messenger is not disputing the message. It's not like he is the first to deliver this type of warning.

Bradbury delivered a warning on messing with time machines; doesn't mean I'll take time travel as a credible existential threat and a likely cause of WW3. The main difference between time travel and Strong AI is that time travel breaks known laws of physics and Strong AI doesn't, but that doesn't stop people from speculating about how the laws of physics might not hold in special circumstances.

Let's approach this from another direction. Strong AI is an inevitability. The first actor to create Strong AI will have some degree of influence over the ensuing singularity. Thus, it's in our favor to be the ones to create the first Strong AI. The alternative -- halting, delaying, or regulating AI research -- will only empower foreign researchers like those of Russia or China, and as TDM has explained, a time factor of months might be the difference between total annihilation and eternal hegemony. If we accept that a Russia- or China-backed Strong AI is "worse" than a US-backed one, then we must accept responsibility for creating the first Strong AI at all costs.

In this case, Musk's "warning" basically amounts to "WW3 is inevitable because AI is inevitable, and we should go all in on AI anyway".
 

trixx

Member
I think one of my philosophy professors subtly mentioned the threat of AI in one of my lectures.

The point isn't to undermine or halt technological progress. Progress is great, but sometimes questions need to be asked. Anyway, we're quite a while away. And anyway, every country is pursuing it.
 

F!ReW!Re

Member
How do you even know you're prepared? What does being prepared entail?

What are you even talking about?
How is:
"No, not happening any time soon, everything is fine, let's not worry too much"
a better stance than:
"Well, this bad thing may not happen, but we could think about solutions or prevention of such scenarios because they could be an existential risk to us"?

Can you be fully "prepared"?
I don't think so; too many unknown variables/outcomes.
But having thought about potential ways to prevent a catastrophic war or an AI-trying-to-kill-us event sounds like a smarter way to approach it than:
"NOT HAPPENING! AND ELON MUSK SUCKS"


Bradbury delivered a warning on messing with time machines; doesn't mean I'll take time travel as a credible existential threat and a likely cause of WW3. The main difference between time travel and Strong AI is that time travel breaks known laws of physics and Strong AI doesn't, but that doesn't stop people from speculating about how the laws of physics might not hold in special circumstances.

Let's approach this from another direction. Strong AI is an inevitability. The first actor to create Strong AI will have some degree of influence over the ensuing singularity. Thus, it's in our favor to be the ones to create the first Strong AI. The alternative -- halting, delaying, or regulating AI research -- will only empower foreign researchers like those of Russia or China, and as TDM has explained, a time factor of months might be the difference between total annihilation and eternal hegemony. If we accept that a Russia- or China-backed Strong AI is "worse" than a US-backed one, then we must accept responsibility for creating the first Strong AI at all costs.

In this case, Musk's "warning" basically amounts to "WW3 is inevitable because AI is inevitable, and we should go all in on AI anyway".

The problem is:
- Sure, the one that builds one first will have a considerable advantage.
- If we don't set up procedures, rules, and boundaries for creating one, an AI that's beyond our intelligence level is something we can't comprehend and can't control, which is dangerous on its own.

Just because you assume foreign researchers won't abide by these rules doesn't mean it's not a good idea to come up with these rules, regulations, procedures, and boundaries.

One of the boundaries I can imagine:
Hooking up your self-learning AI to the internet without restrictions is gonna be asking for problems.

Again, have a look at these links from waitbutwhy:
The Artificial Intelligence Revolution: Part 1 - Wait But Why
The Artificial Intelligence Revolution: Part 2 - Wait But Why
It's a nice analysis based on ideas, thoughts, and discussions from people like Bostrom, Musk, etc. Not necessarily on the topic of a war started because of AI, but a good read nonetheless.

I still don't get why you think being cautious is a bad thing.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
I would be interested in rules, regulations, and boundaries insofar as they camouflage serious research on AI. I mean, it's going to come down to espionage anyway, with people saying they won't do such and such while doing such and such behind the scenes. Cybersecurity will be a big factor here, and we're actually lagging on that front compared to Russia/China.
I still don't get why you think being cautious is a bad thing.

I don't understand what "being cautious" entails. There's a world of difference between saying "let's be cautious" and creating contingency plans for impending disaster. Until someone outlines what it is we should be doing, "being cautious" is just blowing smoke.
 

F!ReW!Re

Member
I don't understand what "being cautious" entails. There's a world of difference between saying "let's be cautious" and creating contingency plans for impending disaster. Until someone outlines what it is we should be doing, "being cautious" is just blowing smoke.

I once again implore you to read the articles I shared (your specific question about what "being cautious" entails is the focus of part 2).

Here's an excerpt:

So given the combination of obsessing over a goal, amorality, and the ability to easily outsmart humans, it seems that almost any AI will default to Unfriendly AI, unless carefully coded in the first place with this in mind.
Unfortunately, while building a Friendly ANI is easy, building one that stays friendly when it becomes an ASI is hugely challenging, if not impossible.

It's clear that to be Friendly, an ASI needs to be neither hostile nor indifferent toward humans.
We'd need to design an AI's core coding in a way that leaves it with a deep understanding of human values. But this is harder than it sounds.

For example, what if we try to align an AI system's values with our own and give it the goal, "Make people happy"?
Once it becomes smart enough, it figures out that it can most effectively achieve this goal by implanting electrodes inside people's brains and stimulating their pleasure centers.
Then it realizes it can increase efficiency by shutting down other parts of the brain, leaving all people as happy-feeling unconscious vegetables.
If the command had been "Maximize human happiness," it may have done away with humans altogether in favor of manufacturing huge vats of human brain mass in an optimally happy state.
We'd be screaming "Wait, that's not what we meant!" as it came for us, but it would be too late. The system wouldn't let anyone get in the way of its goal.

If we program an AI with the goal of doing things that make us smile, after its takeoff it may paralyze our facial muscles into permanent smiles.
Program it to keep us safe, and it may imprison us at home. Maybe we ask it to end all hunger, and it thinks "Easy one!" and just kills all humans.
Or assign it the task of "preserving life as much as possible," and it kills all humans, since they kill more life on the planet than any other species.

Goals like those won't suffice. So what if we made its goal "Uphold this particular code of morality in the world," and taught it a set of moral principles?
Even letting go of the fact that the world's humans would never be able to agree on a single set of morals, giving an AI that command would lock humanity into our modern moral understanding for eternity.
In a thousand years, this would be as devastating to people as it would be for us to be permanently forced to adhere to the ideals of people in the Middle Ages.

No, we'd have to program in an ability for humanity to continue evolving. Of everything I read, the best shot I think someone has taken is Eliezer Yudkowsky, with a goal for AI he calls Coherent Extrapolated Volition.
The AI's core goal would be:

Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together;
where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.
Am I excited for the fate of humanity to rest on a computer interpreting and acting on that flowing statement predictably and without surprises? Definitely not.
But I think that with enough thought and foresight from enough smart people, we might be able to figure out how to create Friendly ASI.

And that would be fine if the only people working on building ASI were the brilliant, forward thinking, and cautious thinkers of Anxious Avenue.

But there are all kinds of governments, companies, militaries, science labs, and black market organizations working on all kinds of AI.
Many of them are trying to build AI that can improve on its own, and at some point, someone's gonna do something innovative with the right type of system, and we're going to have ASI on this planet.
The median expert put that moment at 2060; Kurzweil puts it at 2045; Bostrom thinks it could happen anytime between 10 years from now and the end of the century, but he believes that when it does, it'll take us by surprise with a quick takeoff. He describes our situation like this:

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb.
Such is the mismatch between the power of our plaything and the immaturity of our conduct.
Superintelligence is a challenge for which we are not ready now and will not be ready for a long time.
We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.
Great. And we can't just shoo all the kids away from the bomb—there are too many large and small parties working on it, and because many techniques to build innovative AI systems don't require a large amount of capital, development can take place in the nooks and crannies of society, unmonitored.
There's also no way to gauge what's happening, because many of the parties working on it—sneaky governments, black market or terrorist organizations, stealth tech companies like the fictional Robotica—will want to keep developments a secret from their competitors.

The especially troubling thing about this large and varied group of parties working on AI is that they tend to be racing ahead at top speed—as they develop smarter and smarter ANI systems, they want to beat their competitors to the punch as they go.
The most ambitious parties are moving even faster, consumed with dreams of the money and awards and power and fame they know will come if they can be the first to get to AGI.
And when you're sprinting as fast as you can, there's not much time to stop and ponder the dangers.
On the contrary, what they're probably doing is programming their early systems with a very simple, reductionist goal—like writing a simple note with a pen on paper—to just "get the AI to work."
Down the road, once they've figured out how to build a strong level of intelligence in a computer, they figure they can always go back and revise the goal with safety in mind. Right...?

Bostrom and many others also believe that the most likely scenario is that the very first computer to reach ASI will immediately see a strategic benefit to being the world's only ASI system.
And in the case of a fast takeoff, if it achieved ASI even just a few days before second place, it would be far enough ahead in intelligence to effectively and permanently suppress all competitors.
Bostrom calls this a decisive strategic advantage, which would allow the world's first ASI to become what's called a singleton—an ASI that can rule the world at its whim forever, whether its whim is to lead us to immortality, wipe us from existence, or turn the universe into endless paperclips.

The singleton phenomenon can work in our favor or lead to our destruction. If the people thinking hardest about AI theory and human safety can come up with a fail-safe way to bring about Friendly ASI before any AI reaches human-level intelligence, the first ASI may turn out friendly.
It could then use its decisive strategic advantage to secure singleton status and easily keep an eye on any potential Unfriendly AI being developed. We'd be in very good hands.

But if things go the other way—if the global rush to develop AI reaches the ASI takeoff point before the science of how to ensure AI safety is developed, it's very likely that an Unfriendly ASI like Turry emerges as the singleton and we'll be treated to an existential catastrophe.

Stuff like Friendly/unfriendly gets talked about earlier in the article.
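To make the "Make people happy" failure concrete, here's a deliberately silly toy sketch (all names and numbers invented) of an optimizer maximizing the proxy we wrote down rather than the thing we meant:

```python
# Each candidate action scores on what we meant (wellbeing) and on what
# we actually put in the objective (the measured happiness signal).
actions = {
    "improve healthcare":        {"wellbeing": 8, "measured_happiness": 7},
    "end hunger":                {"wellbeing": 9, "measured_happiness": 8},
    "wirehead pleasure centers": {"wellbeing": 1, "measured_happiness": 10},
}

# A literal-minded optimizer maximizes exactly what it was told to.
chosen = max(actions, key=lambda a: actions[a]["measured_happiness"])
print(chosen)  # -> "wirehead pleasure centers"
```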
 
The key difference here is that Musk is deeply involved in the push towards automated infrastructure. He is a huge actor in this space. As far as I can tell his stance is "yeah, it's going to be a problem and someone else will have to solve it".

Like even from the OP, Musk, with one hand, proselytizes about the existential crisis of AI, and with the other hand, launches two new AI-related ventures.

Uhh, OpenAI is a non-profit created with the sole purpose of ensuring AI is controlled and doesn't become dangerous.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
Uhh, OpenAI is a non-profit created with the sole purpose of ensuring AI is controlled and doesn't become dangerous.

Yeah okay, let's open source AI so anyone can be the trigger for Strong AI. Seems good to me.

https://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/
But not everyone in the field buys this. Nick Bostrom, the Oxford philosopher who, like Musk, has warned against the dangers of AI, points out that if you share research without restriction, bad actors could grab it before anyone has ensured that it's safe. "If you have a button that could do bad things to the world," Bostrom says, "you don't want to give it to everyone." If, on the other hand, OpenAI decides to hold back research to keep it from the bad guys, Bostrom wonders how it's different from a Google or a Facebook.

I'm not even on #teambostrom or #teammusk, but I don't see OpenAI accomplishing its stated goals in any way that'll be different from everyone else racing to get to the Strong AI finish line first. It doesn't make sense to me at all. I just know that it's another Musk venture that'll accelerate the development of Strong AI rather than impede, retard, or control it.
 

cdyhybrid

Member
I'm glad Musk is taking this seriously, because it's pretty clear from this thread that very few others are.

And I wouldn't trust Zuck to steward us into a new age of coffee makers, never mind AI.

But hey, let's just run wild with autonomous weapons technology, what's the worst that could happen right?
 
I'm glad Musk is taking this seriously, because it's pretty clear from this thread that very few others are.

And I wouldn't trust Zuck to steward us into a new age of coffee makers, never mind AI.

But hey, let's just run wild with autonomous weapons technology, what's the worst that could happen right?

What's the worst that can happen?
 
What's the worst that can happen?
I don't know if you're asking genuinely, but this is one of the worst-case scenarios:

1. hyper-intelligent A.I. forms
2. develops system of morality
3. decides that sorting all the world's grains of sand by size and color is the most moral cause possible
4. destroys humans since they are in the way of sand-sorting

If we create a being more intelligent than all of humanity, we risk instant extinction.

Even if that risk is .000001%, it's too high.
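To spell out the expected-value arithmetic behind that (population figure approximate, and ignoring all future generations, which only strengthens the point):

```python
# Expected deaths = P(extinction) * people at stake.
p_extinction = 0.000001 / 100   # the .000001% above, as a probability
people = 8_000_000_000          # roughly the current world population
print(p_extinction * people)    # -> 80.0 expected deaths
```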
 
If I recall correctly, he's a hardcore opponent of investing in developing human-like AI.

If he really believes what he is saying, then it's not a stretch to assume that even if he is against the development, he is aware that he can't prevent it on a global level. So better to do it right yourself than let others do it wrong, etc.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
The idea behind OpenAI is to level the playing field. Countries would go to war over this technology if they thought they were behind. I don't know how far a corporation would go if it had the first AI.

But the playing field is not leveled at all this way. All you do is provide additional resources to someone who's keeping theirs under lock and key.

When Musk and Altman unveiled OpenAI, they also painted the project as a way to neutralize the threat of a malicious artificial super-intelligence. Of course, that super-intelligence could arise out of the tech OpenAI creates, but they insist that any threat would be mitigated because the technology would be usable by everyone. "We think it's far more likely that many, many AIs will work to stop the occasional bad actors," Altman says.
What even is this? Do they think people are going to combine their good AIs together to stop Putin's evil AI? It sounds like an episode of Digimon.

The company may not open source everything it produces, though it will aim to share most of its research eventually, either through research papers or Internet services. "Doing all your research in the open is not necessarily the best way to go. You want to nurture an idea, see where it goes, and then publish it," Brockman says. "We will produce a lot of open source code. But we will also have a lot of stuff that we are not quite ready to release."

Both Sutskever and Brockman also add that OpenAI could go so far as to patent some of its work. "We won't patent anything in the near term," Brockman says. "But we're open to changing tactics in the long term, if we find it's the best thing for the world." For instance, he says, OpenAI could engage in pre-emptive patenting, a tactic that seeks to prevent others from securing patents.

But to some, patents suggest a profit motive—or at least a weaker commitment to open source than OpenAI's founders have espoused. "That's what the patent system is about," says Oren Etzioni, head of the Allen Institute for Artificial Intelligence. "This makes me wonder where they're really going."
Yeah the software will be open as long as they get to keep a lead on it? Nice joke.
 