
Elon Musk tweets: "Competition for AI superiority... most likely cause of WW3"

Exactly.

GAF, on average, has a relatively low share of what I would consider weird people, for lack of a better term. Even taking into account that the majority must be American. :p Nerds and socially awkward outcasts, yes. But no 'weird' weird, like truthers/preppers/gunslingers, that sort of people.

Yet, I cannot even begin to understand the mindset and general philosophy of someone who comes into this thread and their first impulse is to type "fuck Elon Musk". Truly baffling.

I think the fact that he was a Trump adviser is enough to set off certain people.
 

Biske

Member
You realize these technologies also help work towards solving things like famine, right?

How does making sure AI is benevolent help someone needing water in Africa right now?


I get that every dollar funding things like the space program resulted in many more gains in other areas. But at some point, maybe we should just focus on getting the world food and water, and not on whatever rich jerk-off project.

I get that all this stuff builds up and contributes to other areas.

But Elon Musk mansplaining to the world "actually the real problem is..." is just his own personal masturbation.
 

KingK

Member
I'm no fan of Musk's anti-union stances and he seems like he's kind of a dick in his personal life, but all these posts dismissing potential threats posed by AI with "fuck Elon, that idiot. Lol terminator, yeah right," are downright embarrassing.

If anything I think he's underestimating, or leaving out, the potential for social upheaval due to severe AI labor disruptions exacerbating inequality issues, apart from the nation state level problems. Anyone dismissing out of hand taking any precautions regarding AI is being ridiculous. Like someone else said, it's reminiscent of the assholes who are against space exploration/research because "we've got enough problems down here." Like how short-sighted can you be.
 

E-Cat

Member
He's a technocrat and emblematic of some of the problems currently plaguing our generation, vis-à-vis our society being overturned by the shift toward automation and a goods-as-a-service economy with no adequate mechanisms for a smooth transition. He doesn't seem to care about anything beyond his business ventures, and indeed this tweet itself reads like a business venture from a certain angle.

Zuckerberg gets a lot of flak for much the same reasons, and now that he's gunning for a presidential run, certain people hate him more than ever. These are not the people who should be leading public opinion, insofar as they have very little compassion or concern for societal problems outside the tech bubble.

It's not very complicated.
Automation is coming. It's inevitable. It would happen without Elon's involvement, absolutely.

The whole raison d'être for Elon's businesses is that he does care. Maybe not about you, specifically, but about humanity's survival. I understand that it can seem impersonal if you're not one for that sort of long-term thinking and perspective. He's solving more problems than anyone else right now, not creating them.

It fascinates me how polarizing Elon can be sometimes.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
You realize these technologies also help work towards solving things like famine, right?

Are they? Do you really think this, or just wish it was true? Show me a concrete example of how AI research is helping to "solve famine". There was a lot of hullabaloo about GMO crops "solving famine" as well, and I don't know where that ended up, but at least it's in the public consciousness. I've never heard of any AI solutions applied to the problem of global food distribution, which, from my limited perspective, is mostly a political problem rather than a technological one.
Automation is coming. It's inevitable. It would happen without Elon's involvement, absolutely.

The whole raison d'être for Elon's businesses is that he does care. Maybe not about you, specifically, but about humanity's survival.
That sounds like kool-aid but I'll humor you. What has Musk done, in terms of influencing public policy, to ensure that automation doesn't destroy the foundations of our society? The thing with "long-term" thinking is that it has a tendency to overlook the short term. Things are about to get bad really quick, much faster than the time it'll take for Strong AI to be an existential threat. Does Musk care at all about this or does he think we'll just get through it magically?
 

F!ReW!Re

Member
Lol no, that's not it at all. There are plenty of people warning about plenty of things; I just don't get why people have such respect for "fixed my receding hairline wannabe scientist playboy" Elon Musk.

To me it's incredibly indicative of humanity's exact problem. We have many, many problems, most people struggle to get food and clean fucking water, but we all find rich assholes talking about killer robots so exciting!


We care so little for the actual goings-on of real people's lives, now, in this very moment. Who gives a shit about rich people's new pet projects as humans suffer and die?

Whether it's famine or robots, what's the real difference?

No offense, but that seems like a very narrow and dangerous view.
Just look at the here and now; don't look into potential future issues?

You do realise that the next big thing that'll really shake things up is going to be: Automation.
Basically anyone that does driving for a job:
Goodbye, job

Kurzgesagt video on automation;
https://www.youtube.com/watch?v=WSKi8HfcxEk

Should we just ignore it and see what happens?
That's like inviting chaos and saying: yeah, I got problems right now, not gonna care what happens tomorrow.

We need people to think about solving these upcoming issues or we could be in deep shit (as a species).
Now that's not to say that other real-world problems that are occurring now aren't important. But we got enough people on this globe to look into both current and future issues imo.
 

E-Cat

Member
I think the fact that he was a Trump adviser is enough to set off certain people.
Elon's fault in this situation was that he is so logical, so lucid in his thinking that he thought he could get through even to Trump. Unfortunately, there is no curing that level of idiocy that sucks in everything in its vicinity like a black hole.
 

RoyalFool

Banned
You realize these technologies also help work towards solving things like famine, right?

Why waste CPU time on something we already know? i.e. the 1% hoarding 98% of the world's wealth, and some countries tackling obesity whilst others are dealing with mass starvation.

The folks who fund these things would just pull the plug on whatever AI is brave enough to tell us to revolt.
 

D4Danger

Unconfirmed Member
Elon's fault in this situation was that he is so logical, so lucid in his thinking that he thought he could get through to even Trump. Unfortunately, there is no curing that level of idiocy that sucks in everything in its vicinity like a black hole.

get through to him about what? the guy's a libertarian billionaire. You think he's looking out for your best interests?
 
Even if this is way off in the future,
why is it a bad thing to think about it and prepare for it already?

Better to be cautious in advance than to never see it coming.
How can we prepare against something when we don't even know what that something is? What methods it will use, what it will target and how it will target it, etc? Since we don't know when it will actually happen, what type of AI will be developed/weaponized first, what it will target and via what means, it seems pretty impossible to prepare for until whatever "it" is is actually developed. Even if you say just "cast a wide-net approach and protect everything," again the question remains how? From what? What will the AI use, what vulnerabilities will it target, what can it and can it not exploit and work around?

The only possible answer is "develop whatever 'it' is ourselves first and use the information gained from its development for protection" but even that is rather short-sighted and limited because this is such a broad topic that whatever we end up developing could be quite different from someone else to the point of being useless.

And on top of that, one of the frustrating things about this topic is that there seems to be a lack of standardized, operational definitions. Precisely because this type of AI doesn't exist yet, people come up with their own standards, criteria, and definitions for what does and does not qualify as a super-AI and that easily lets wires get crossed because people think they're each talking about the same thing, but it turns out they're using two very different definitions/set of criteria and thus that they're not. Just really hard to have conversations about this when everyone's on at least a slightly different page.
 
Can someone tell me how a Super General AI can destroy humanity without hacking an ICBM system that cannot be hacked, because the circuit needs a lever pulled to actually launch?
ICBM? That's not it... AI could basically control anything software-related as well. It could turn off or use: computers, internet, money, electricity, etc.
 

E-Cat

Member
get through to him about what? the guy's a libertarian billionaire. You think he's looking out for your best interests?
There are certain business cases to be made where, for example, investing in renewable energy over coal is the financially savvy thing to do.

Even if the chance of success of getting through to Trump was low, and indeed it failed, no harm was done by Elon participating in the two councils for the time that he did.
 
I'm no fan of Musk's anti-union stances and he seems like he's kind of a dick in his personal life, but all these posts dismissing potential threats posed by AI with "fuck Elon, that idiot. Lol terminator, yeah right," are downright embarrassing.

If anything I think he's underestimating, or leaving out, the potential for social upheaval due to severe AI labor disruptions exacerbating inequality issues, apart from the nation state level problems. Anyone dismissing out of hand taking any precautions regarding AI is being ridiculous. Like someone else said, it's reminiscent of the assholes who are against space exploration/research because "we've got enough problems down here." Like how short-sighted can you be.
But Elon is precisely avoiding these kinds of discussions because he is involved with companies disrupting labor with AI. He only wants to talk about existential demon-in-a-bottle arguments because actual arguments about the short-term impact of AI threaten him. The majority of reputable AI researchers consider his opinion fringe. And he is a far more efficient slave driver with, yes, interesting and lofty visions of technology, than the genius-engineer image that he has so carefully crafted and so many people are falling for.
His ideas are far closer to the crankery that Yudkowsky writes than to respected AI work.

See Etzioni's (a more respected researcher) opinion on the matter: https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html
 

Biske

Member
No offense, but that seems like a very narrow and dangerous view.
Just look at the here and now; don't look into potential future issues?

You do realise that the next big thing that'll really shake things up is going to be: Automation.
Basically anyone that does driving for a job:
Goodbye, job

Kurzgesagt video on automation;
https://www.youtube.com/watch?v=WSKi8HfcxEk

Should we just ignore it and see what happens?
That's like inviting chaos and saying: yeah, I got problems right now, not gonna care what happens tomorrow.

We need people to think about solving these upcoming issues or we could be in deep shit (as a species).
Now that's not to say that other real-world problems that are occurring now aren't important. But we got enough people on this globe to look into both current and future issues imo.

People are in deep shit now. What is Elon doing for that? Is he going to deliver food via Hyperloop?

My problem isn't these causes, it's us treating science like TMZ and getting a boner every time assholes like Elon tell us what we should be focusing on, while doing jack shit for any problem that is a problem now.

Wanna change shit? Go vote in a local election. Then worry about fucking AI.
 

F!ReW!Re

Member
How can we prepare against something when we don't even know what that something is? What methods it will use, what it will target and how it will target it, etc? Since we don't know when it will actually happen, what type of AI will be developed/weaponized first, what it will target and via what means, it seems pretty impossible to prepare for until whatever "it" is is actually developed. Even if you say just "cast a wide-net approach and protect everything," again the question remains how? From what? What will the AI use, what vulnerabilities will it target, what can it and can it not exploit and work around?

The only possible answer is "develop whatever 'it' is ourselves first and use the information gained from its development for protection" but even that is rather short-sighted and limited because this is such a broad topic that whatever we end up developing could be quite different from someone else to the point of being useless.

And on top of that, one of the frustrating things about this topic is that there seems to be a lack of standardized, operational definitions. Precisely because this type of AI doesn't exist yet, people come up with their own standards, criteria, and definitions for what does and does not qualify as a super-AI and that easily lets wires get crossed because people think they're each talking about the same thing, but it turns out they're using two very different definitions/set of criteria and thus that they're not. Just really hard to have conversations about this when everyone's on at least a slightly different page.

You hit the nail on the head;
There are no standards/definitions for AI or for developing it.
There should be; that's what a lot of guys like Bostrom are arguing for.

And how do we keep a hold/leash on AI once we've found a way to develop it?
How do you control something so far past human understanding that it seems alien?

If we all go: Nah fuck that, sounds like sci-fi, don't worry about it.
That's when you are inviting a potential existential threat.

Again:
Better safe than sorry.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
We can't even rally around climate change, do you think the political and social capital exists to tackle problems we have trouble even defining?
 

E-Cat

Member
That sounds like kool-aid but I'll humor you. What has Musk done, in terms of influencing public policy, to ensure that automation doesn't destroy the foundations of our society? The thing with "long-term" thinking is that it has a tendency to overlook the short term. Things are about to get bad really quick, much faster than the time it'll take for Strong AI to be an existential threat. Does Musk care at all about this or does he think we'll just get through it magically?
It's not Musk's job to influence public policy in regards to automation. Leave that to the public to elect in the right politicians to make the right decisions -- which they are ill-equipped to do, unfortunately, in part due to the kind of hostility toward long-term planning that we are seeing in this very thread.

Musk's greatest contributions have been:

1. Innovation in the creation and consumption of renewable energy
2. Dramatically lowering the cost of delivering goods to orbit
3. Advancing the plans to colonize and terraform Mars
 

Haly

One day I realized that sadness is just another word for not enough coffee.
It's not Musk's job to influence public policy in regards to automation. Leave that to the public to elect in the right politicians to make the right decisions -- which they are ill-equipped to do, unfortunately, due in part to the kind of hostility toward long-term planning that we are seeing in this very thread.

I'm sorry, what is this "fearmongering" about the existential crisis of AI but influencing public opinion?

This is the problem with Musk and his ilk. They think every problem can and should be solved by application of disruptive technologies and/or startup ventures. Or rather, they have a marked disinterest in anything that's not a business venture, which is where people take issue with them as cultural leaders.
 
Nope. World War 3 will be a war between worlds. Most likely God and his angels from Nibiru vs Mankind. But I could see some humanoid alien invasion as another possibility.
 

F!ReW!Re

Member
People are in deep shit now. What is Elon doing for that? Is he going to deliver food via Hyperloop?

My problem isn't these causes, it's us treating science like TMZ and getting a boner every time assholes like Elon tell us what we should be focusing on, while doing jack shit for any problem that is a problem now.

Wanna change shit? Go vote in a local election. Then worry about fucking AI.

You mean the guy who basically kicked the slumbering automotive industry in the behind by forcing them to develop electric alternatives to fuel-based cars is not doing enough for the people on planet Earth at the moment?
(Please don't start with "it's not just him"; it's not, a lot of people were involved, but Tesla was a major factor in the whole process.)

You mean the guy who's pushing for more solar energy instead of fossil fuels is not doing a lot for the planet at the moment?
(Again, not just him, but Tesla and Solar City are big players in this field.)

The guy who kickstarted a fucking rocket company that is outmaneuvering the lumbering giants of the space industry and is innovating space travel/rockets is not doing enough?
The same guy who's gonna use his (relatively) cheap rockets to supply internet to remote/isolated areas of the planet, which will improve lives (yes, improve them; increased access to the internet is a good thing for development and learning), is not doing enough for current problems?

- Yes, the guy can be/is an asshole
- Yes, his attitude on unions and worker laws is shit
- No, you don't have to like him personally

But don't act like the guy is just living a pipe dream and is some millionaire who just invests in random bullshit that doesn't benefit anyone.
 

E-Cat

Member
I'm sorry, what is this "fearmongering" about the existential crisis of AI but influencing public opinion?

This is the problem with Musk and his ilk. They think every problem can and should be solved by strategic application of disruptive technologies and/or startup ventures. Or rather, they have a marked disinterest in anything that's not a business venture, which is where people take issue with them as cultural leaders.
I'm not saying that he's not trying to influence public opinion on this issue, just that it hasn't been his main contribution to humanity's long-term future.

Most of the world's problems arise from scarcity, and disruptive technologies are pretty much the only force that ever has a realistic chance of solving it. Can't see how Elon is in the wrong here.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
Most of the world's problems arise from scarcity, and disruptive technologies are pretty much the only force that ever has a realistic chance of solving it. Can't see how Elon is in the wrong here.

We don't actually have a scarcity problem, I don't think. What we do have is a distribution problem, one that Musk isn't solely responsible for, but is surely benefiting from.
 
Anticipating / preparing for possible eventualities =/= pretending you can predict the future.

Anticipating what? For all we know there's an actual AI out there but it came to the conclusion doing nothing is better. It's just him being paranoid of a future he knows nothing of. What if a dominant AI came out that purely just wanted to watch anime and troll online? No one knows what a sentient thing will do because no one has encountered one.
 

F!ReW!Re

Member
Anticipating what? For all we know there's an actual AI out there but it came to the conclusion doing nothing is better. It's just him being paranoid of a future he knows nothing of. What if a dominant AI came out that purely just wanted to watch anime and troll online? No one knows what a sentient thing will do because no one has encountered one.

Again:
Better to prepare for any possible issues than be sorry later.
What's the fucking problem with being prepared?
 

smurfx

get some go again
yeah a.i. can potentially be a threat but humanity is facing many catastrophic events in its future. global warming might seriously damage many economies and leave many countries in ruins. i doubt they go away quietly and don't try to steal their neighboring countries' resources. there are also threats of things like major volcanic events messing up the world in big ways or even asteroids.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
Again:
Better to prepare for any possible issues than be sorry later.
What's the fucking problem with being prepared?

How do you even know you're prepared? What does being prepared entail?
 

Biske

Member
You mean the guy who basically kicked the slumbering automotive industry in the behind by forcing them to develop electric alternatives to fuel-based cars is not doing enough for the people on planet Earth at the moment?
(Please don't start with "it's not just him"; it's not, a lot of people were involved, but Tesla was a major factor in the whole process.)

You mean the guy who's pushing for more solar energy instead of fossil fuels is not doing a lot for the planet at the moment?
(Again, not just him, but Tesla and Solar City are big players in this field.)

The guy who kickstarted a fucking rocket company that is outmaneuvering the lumbering giants of the space industry and is innovating space travel/rockets is not doing enough?
The same guy who's gonna use his (relatively) cheap rockets to supply internet to remote/isolated areas of the planet, which will improve lives (yes, improve them; increased access to the internet is a good thing for development and learning), is not doing enough for current problems?

- Yes, the guy can be/is an asshole
- Yes, his attitude on unions and worker laws is shit
- No, you don't have to like him personally

But don't act like the guy is just living a pipe dream and is some millionaire who just invests in random bullshit that doesn't benefit anyone.

You give him far too much credit. And stuff like Solar City is inept in the face of many places fucking with net metering to the point where it's no longer viable.

'Cause it's all big flashy shit, and then "oh fuck, better pull out of Nevada 'cause solar is going south, ah well, NEW SOLAR ROOF GUYS!!"

I'm not saying he doesn't do any good. Clearly he is brilliant and does a lot of good things, but he also has a long, rich history of overpromising exciting bullshit and then, HEY LOOK OVER HERE, NEW COOL SHIT!
 

E-Cat

Member
We don't actually have a scarcity problem, I don't think. What we do have a distribution problem, one that, while Musk isn't solely responsible for, he's surely benefiting from.
Well, we do, actually. It's just that the scarcity is brought about by the uneven distribution of the world's resources. I said technology was the most realistic candidate to solve this because, let's face it: we in the first-world countries aren't bloody likely to lower our standard of living in order to share our material wealth with the rest of the world, now are we?
 

Haly

One day I realized that sadness is just another word for not enough coffee.
We in the first world countries aren't bloody likely to lower our standard of living in order to share our material wealth with the rest of the world, now are we?
We don't have to lower our standards of living at all. I'll give you a hint: there's someone we're discussing in this very thread who could singlehandedly lend their not-inconsequential wealth to the problem of redistribution if they so chose, and effect a larger shift than all the rest of us put together would by lowering our standard of living.
 
Elon's fault in this situation was that he is so logical, so lucid in his thinking that he thought he could get through even to Trump. Unfortunately, there is no curing that level of idiocy that sucks in everything in its vicinity like a black hole.

Spot on. I don't blame him for trying, though. If ever there was a person in need of good advising...
 

G-Bus

Banned
Sounds like a good time to learn some survival skills and live off the land, away from technology.
 

E-Cat

Member
Anticipating what? For all we know there's an actual AI out there but it came to the conclusion doing nothing is better. It's just him being paranoid of a future he knows nothing of. What if a dominant AI came out that purely just wanted to watch anime and troll online? No one knows what a sentient thing will do because no one has encountered one.
Sentience has nothing to do with it. Also, it is highly likely that as AI gets more sophisticated, even if it is of the narrow kind, it will gradually take on a larger part of the operations running financial markets, governments, etc. Such a scenario, where humans are out of the decision-making loop, is highly precarious. Caution seems like the rational approach; nothing to do with 'paranoia'.
 
Again:
Better to prepare for any possible issues than be sorry later.
What's the fucking problem with being prepared?

We are already prepared, with our poor infrastructure and physical tech. It's not like a sentient AI will just take over things; that requires rewriting a lot of code and would most likely brick things, which isn't a good course for an AI unless it just wanted to brick systems.

Musk is a paranoid person.

Sentience has nothing to do with it. Also, it is highly likely that as AI gets more sophisticated, even if it is of the narrow kind, it will gradually take on a larger part of the operations running financial markets, governments, etc. Such a scenario, where humans are out of the decision-making loop, is highly precarious. Caution seems like the rational approach; nothing to do with 'paranoia'.

That's a hypothetical that won't happen, though. Financial markets are still human-driven today, except for trading, because that's easier to do with software. Humans will still make the decisions because it's their money, unless they want a program to do it. Your statement reads as if society would break down; it won't.

Like, what can an AI really do? Spam bots? Is it going to hijack all the world's nukes through cellular piggybacking on MCC members' phones to infiltrate outdated silo infrastructure?
 

E-Cat

Member
We don't have to lower our standards of living at all. I'll give you a hint, there's someone who we're discussing in this very thread that could singlehandedly lend their not inconsequential wealth to the problem of redistribution if they so choose and affect a larger shift than all of the rest of us put together would by lowering our standard of living.
The hypothetical impact of Elon donating away his net worth, though considerable, pales in comparison to the kind of grassroots-level, decentralized change to the infrastructure of developing countries that could be brought about by continuing to channel that money into the development of better solar panels and lithium-ion battery storage.

You could have used the same line of argument fifteen years ago, saying that Elon would be better to donate his couple hundred million dollars off the sale of PayPal to some third-world country, with there being a non-trivial chance of some of it ending up in the pockets of corrupt politicians or a drug cartel. However, in that case we would not have Tesla today which, when all is said and done, will have done more to change the world for the better than any such giveaway could ever have.
 

jelly

Member
We are already prepared, with our poor infrastructure and physical tech. It's not like a sentient AI will just take over things; that requires rewriting a lot of code and would most likely brick things, which isn't a good course for an AI unless it just wanted to brick systems.

Musk is a paranoid person.



That's a hypothetical that won't happen, though. Financial markets are still human-driven today, except for trading, because that's easier to do with software. Humans will still make the decisions because it's their money, unless they want a program to do it. Your statement reads as if society would break down; it won't.

Like, what can an AI really do? Spam bots? Is it going to hijack all the world's nukes through cellular piggybacking on MCC members' phones to infiltrate outdated silo infrastructure?

The thing is, though, AI would think beyond our capacity, and who knows what it could come up with that is beyond us in understanding and application. We maybe can't wrap our heads around that happening, but it's possible AI could create something that no human application could defend against.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
The hypothetical impact of Elon donating away his net worth, though considerable, pales in comparison to the kind of grassroots-level, decentralized change to the infrastructure of developing countries that could be brought about by continuing to channel that money into the development of better solar panels and lithium-ion battery storage.

I don't need him to hand out his cash. But even in your scenario, the benefits of these technologies for developing countries are at best tangential to his primary objective of making money, in that poverty-stricken areas are only valuable as potential marketplaces. This kind of ultra-libertarian approach to humanitarianism is extremely off-putting, because it treats human well-being as a side product.

And what if those "grassroots" changes never come about? What if famine continues to ravage the world as we colonize Mars under the banner of Tesla? Do you think that'll faze Musk at all?
 

E-Cat

Member
I don't need him to hand out his cash. But even in your scenario, the benefits of these technologies for developing countries are at best tangential to his primary objective of making money, in that poverty-stricken areas are only valuable as potential marketplaces. This kind of ultra-libertarian approach to humanitarianism is extremely off-putting, because it treats human well-being as a side product.

And what if those "grassroots" changes never come about? What if famine continues to ravage the world as we colonize Mars under the banner of Tesla? Do you think that'll faze Musk at all?
Technological development, making money and humanitarianism need not be mutually exclusive.

"Yeah, so what if Musk solves the energy crisis and colonizes Mars? He's still a shit person for not ending famine, the scourge that has plagued Man since the beginning of time!"

He's doing his share in directing his energies to a considerable part of the puzzle. You can't expect one person to do everything; holding him to that kind of standard is all sorts of ridiculous.
 
I'm not just talking about the computer scientists, though. I'd be perfectly fine with Bostrom and Tegmark if they were actually conducting empirical research into the topic rather than running workshops, advocating, and writing books. Oh, and about the book part: I don't think book publishing is a sign of expertise so much as downright suspicious. Real scientists publish papers, not books, because papers are peer-reviewed and required to be worth a damn. Any random individual can publish a book provided people are willing to buy it. Guess who's a best-selling author? Deepak Chopra. Guess what Chopra isn't? A published, cited, or respected expert in a scientific field.

Books are fine for publishing pop-sci explanations to laypeople. Stephen Hawking's books are great. But they should be relegated to simply explaining established consensus, not presenting controversial conclusions. The problem is that Bostrom and Tegmark publish books (also blogs, magazine articles, videos...) doing exactly that, which raises all kinds of red flags.

Also: when climate scientists present theoretical work on the impact of climate change, they have tons of real-world, empirical data to back those theories up. And even then, they're perfectly fine with stating that they can only really say that things will change in a lot of places, but that it's difficult to say in what way. That's not what the AI doomsayers are doing; in fact, they're doing the opposite - they make definitive, sweeping predictions with basically no backing. Even the singularity you're arguing is the core threat of AI - and the supposed reason we can't wait to understand the issue before we act to prevent it (however that works) - is completely speculative.

Quite simply, I just don't subscribe to your view that the only people worth listening to are publishing peer reviewed papers within (or hell, even without) academic circles. I think that's silly.

I'll grant you that my comparison to climate change was flawed, and that's because there is something to currently measure with climate change and global warming. We are in the thick of it. But what do you hope to see in "empirical research" for general superintelligence? I mean, we have points on a graph of our timeline showing our vector of technological progress, we have physicists saying it is totally possible, and we have computer scientists saying it's totally possible. Many even say it's inevitable, given all these data points. What sort of empirical data are you looking for?
 

Haly

One day I realized that sadness is just another word for not enough coffee.
Technological development, making money and humanitarianism need not be mutually exclusive.

"Yeah, so what if Musk solves the energy crisis and colonizes Mars? He's still a shit person for not ending famine, the scourge that has plagued Man since the beginning of time!"

He's doing his share in directing his energies to a considerable part of the puzzle. You can't expect one person to do everything; holding him up to that kind of standard is all sorts of ridiculous.
You think the "puzzle" is an interplanetary human civilization. I think the "puzzle" is a world where people don't need to be hungry or discriminated against for the circumstance of their birth. In your scenario, human welfare is a side effect of technological and economic expansion, and not a requirement. In mine, humanitarianism is the sole goal, everything else comes second.
 

Kyzer

Banned
You think the "puzzle" is an interplanetary human civilization. I think the "puzzle" is a world where people don't need to be hungry or discriminated against for the circumstance of their birth. In your scenario, human welfare is a side effect of technological and economic expansion, and not a requirement. In mine, humanitarianism is the sole goal, everything else comes second.

You don't think Elon Musk taking a job telling Trump he's wrong about climate change, despite it making him unpopular and damaging his brand, was a humanitarian cause? I'd say it's more important than politics. That's what people don't understand. He didn't endorse Trump as President; he took a job advising him on an issue he disagreed with him on.
 
He's right, though a regional conflict going nuclear is the more present threat. Putin is also right in his statement... whoever gets there first wins.

Yeah, and like other people have already pointed out, look at how Putin framed it. That gives you pretty serious insight into what Putin wants.
 