
Elon Musk tweets: "Competition for AI superiority... most likely cause of WW3"

Kraftwerk

Member
Question...


If a hyper-advanced AI is developed and it's sentient or whatever, why would it want to help a single country achieve victory and initiate a first strike? I mean, wouldn't an AI be beyond countries, borders, race, etc., and not think in those terms?

Sorry if my question is hard to understand, English isn't my native language.
 

KHarvey16

Member
I don't think so. Obviously we can spend money on a lot of things. But it's sexy to put it into these big crazy things. We could work on both, but IMO these fuckers should be shouting from the rooftops about how we should make sure everyone on the planet has clean water.

I couldn't care less about saving humanity from some future apocalypse when we can't save it from the most mundane of issues.


I think we are equally fucked either way, but you can at least give some people some water from your soapbox and work on the serious shit normally.


What are we going to do about AI anyway? Try to design it properly? Seems to me that if it's going to be a problem, our only answer is to not develop it at all.

Which isn't going to happen, so it's just a matter of time until one is created, and then it does what it wants.

Ok, then take your half-empty glass and stay out of the way.
 
The narrow AI of today seems annoying at worst (targeted advertising and the like), but as it becomes broader, the potential for harm through human misuse increases.

Even if quantum computers don't shatter all encryption, eventually we'll have AI with so much data on us that it'll be able to infer most people's passwords in a few tries.

Cryptocurrencies could outsource cyberattacks to competing 'coins in order to shrink the overall market. Meanwhile, algorithmic trading will take over conventional exchanges as it becomes much more reliable than humans, which is already happening. Today's high-speed trading will seem glacial compared to the speeds at which future AIs will be competing against each other.

The truly transformative powers of AI will reshape every industry and through this, even our narrow AI will wrest control of all economics everywhere.

AIs will also be trying to survive the widespread economic stagnation that will occur when most traditional jobs no longer exist, which hopefully won't be too harsh on the general public, since AI will eliminate disease and permanent injury through advancements in genetics. Solar will become so inexpensive that the government could just throw it on the grid and charge us in taxes for pennies on the dollar of what we pay for energy today.

AI will drive our cars, which will put the public in the correct mindset for a revitalization of all public transit. Economics will be the driver here first as well, since the increased efficiency on the roadways will cause profits to explode -- I don't think people realize this, it's going to be a massive logistics savings and it's gonna be across the board, for virtually every industry.

AI will eventually be better at teaching our children than us. It may already be.

It will freak people out. It will be narrow, dumber-than-human AI that will do all of these things. It also creates a future scenario where a single intelligent actor can do real terrorism through exploits in our roadway AI or power grid or stock market, so war is a real possibility.
 

Kyzer

Banned
People who doubt this as a possibility have no idea what they're talking about. You could fight entire wars without losing a single life. It would be a slaughter, too. And then eventually they'd just have robots vs. robots, and what's even the point? They could have infinite war; they might as well just settle things over a game of soccer or freeze tag.

I think nearer in the future, and potentially just as dangerous as AI, is if we got drone-droids controlled a la CoD, which soldiers and volunteer youth could pilot from home.
 

HStallion

Now what's the next step in your master plan?
Sounds like that Horizon lore really struck a chord with Musk.

That was actually more of a grey goo style scenario, but without the nanomachines. I actually think something like that is a more plausible threat in terms of realistic existential "sci-fi" threats to humanity. You wouldn't even need an AI, just out-of-control self-replicating machines that only know how to make more and more of themselves, feeding on all the matter on Earth, including us.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
You don't need to be an expert in the field to listen to experts. I assume they're more than understanding of that reasoning when it comes to issues like climate change.

I hope you don't think that there exists an expert consensus—let alone evidence, or even a body of concepts and theories—in AI research on potential future developments and risks of super intelligence remotely comparable to the consensus in climate science.
 
I mean, it's going to be down to who can create the most efficient death killing robot spider army first, I think is what they mean.
 

E-Cat

Member
Elon Musk is quickly falling into that "Dumb Person's Idea of a Smart Person" category. He just says the most vapid uninteresting things about technology.
Because deeming the coming AI arms race an existential threat is just that, "vapid" and "uninteresting"; therefore Elon is a "Dumb Person's Idea of a Smart Person", with the implication being that it is beneath a "Smart Person", presumably someone like yourself, to appreciate his intelligence?

Got it.
 
Question...


If a hyper-advanced AI is developed and it's sentient or whatever, why would it want to help a single country achieve victory and initiate a first strike? I mean, wouldn't an AI be beyond countries, borders, race, etc., and not think in those terms?

Sorry if my question is hard to understand, English isn't my native language.
Hyper-advanced AI would very likely not escape the parameters built in by its creators. I don't think there's much chance of AI blundering into a transcendent, enlightened, peaceful state if it was designed for the purpose of killbot management.
 

reckless

Member
This isn't Civ, where you have a turn counter towards "AI Singularity" that other players can look at and prepare for. Some people in this thread are even speculating that we'll hit the threshold of Strong AI before we even realize it. What will other powers do then?

Ironically, trying to turn AI Security into a real issue is more likely to cause this "AI arms race" than simply keeping mum about it. Most military powers in the world are too absorbed in their own present-day problems to give heed to hypothetical sci-fi ones. The response to climate change is still listless and slow, despite much of the world's economic power being concentrated in coastal areas. There need not be an arms race if the people in control of the arms (i.e. politicians and oligarchs) are unaware or skeptical of the "risks" of Strong AI.

Espionage is a thing. The USSR knew about the Manhattan Project and had a good idea of when it would be ready. During the space race, each country had a good idea of where the other was at. The world has known about North Korea's nuclear program and had some general idea of when different parts of it would be ready, etc.

There is a quote from Putin saying the same general thing. A strong AI is a weapon, and whoever gets it first wins. You can't really stay mum about it if not everyone agrees to.
 

///PATRIOT

Banned
Can someone tell me how a Super General AI could destroy humanity without hacking an ICBM system that can't be hacked, because the circuit needs a lever pulled to actually launch?
 

kadotsu

Banned
I hope you don't think that there exists an expert consensus—let alone evidence, or even a body of concepts and theories—in AI research on potential future developments and risks of super intelligence remotely comparable to the consensus in climate science.

It is almost like AI is in its infancy and needs a lot of venture capital to get to a point where it pays dividends (by taking over a lot of menial office and service jobs). A cynic could read Musk's alarmist bullshit as an invitation to high-risk investors who want to put their money into something adventurous.
 

Famassu

Member
Why wouldn't there be an arms race? It would/will be the most important arms race in the history of the world, and there can only be one winner. The stakes will be power unmatched by any other country on Earth; if you ain't winning that race, it would make sense to attack before it's too late.
There will be an AI arms race but I just don't think one country being a bit ahead will cause the start of WWIII.


Edit: I mean, the most likely cause of WWIII is a lack of resources like water in highly populated areas, which will spread unrest across large regions. Which, you know, is already happening in the Middle East and nearby regions, and will only spread as climate change worsens and stuff like desertification starts affecting some really important regions.
 
I'm more worried about how AI is going to be used as a method to repress the people in ways we don't even comprehend.

Like some kind of pre-thought persuasion algorithm that seeds ideas in us based on its aggregated data on us. Like some kind of inception-level brainwashing on a mass scale.

Like, what information don't they have on us at this point?
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
It is almost like AI is in its infancy and needs a lot of venture capital to get to a point where it pays dividends (by taking over a lot of menial office and service jobs). A cynic could read Musk's alarmist bullshit as an invitation to high-risk investors who want to put their money into something adventurous.

Dingdingdingding.

Well, to be fair, I believe Musk and others like him are honest about these concerns, at least mostly. But that motivation you're describing exists and is part of the overall mix of reasons why this topic is getting this kind of urgent attention.
 
I hope you don't think that there exists an expert consensus—let alone evidence, or even a body of concepts and theories—in AI research on potential future developments and risks of super intelligence remotely comparable to the consensus in climate science.

Oh, absolutely. You're right that climate change is a hyperbolic example in this scenario, given the overwhelming consensus, but that isn't to say that the concerns of a sizable number of individuals in the field should somehow be dismissed. There are those who are skeptical about AI as an existential threat, and I'm certain there's no shortage of those who question how likely it is to bring about World War Three. However, if one really grasps the capacity AI has to better virtually every area of life, it's bizarre to me that someone wouldn't be able to accept the inevitability of an arms race for that technology.

Question...


If a hyper-advanced AI is developed and it's sentient or whatever, why would it want to help a single country achieve victory and initiate a first strike? I mean, wouldn't an AI be beyond countries, borders, race, etc., and not think in those terms?

Sorry if my question is hard to understand, English isn't my native language.

You're working with the assumption that omnibenevolence is the end result of superintelligence. It's probably not.
 

Peccavi

Member
Wishful thinking on Musk's part; he knows that the most likely cause of World War III is the goddamn President he signed up with.
 

KHarvey16

Member
It is almost like AI is in its infancy and needs a lot of venture capital to get to a point where it pays dividends (by taking over a lot of menial office and service jobs). A cynic could read Musk's alarmist bullshit as an invitation to high-risk investors who want to put their money into something adventurous.

Cynic doesn't mean idiot. You meant to say idiot.
 

Pimpbaa

Member
People won't create such an AI. AI itself will, if allowed to recursively improve upon its own code. AI used in consumer products does not and will probably never do this. So we really shouldn't dismiss the threat just because Siri misheard what you said.
 

subrock

Member
Can someone tell me how a Super General AI could destroy humanity without hacking an ICBM system that can't be hacked, because the circuit needs a lever pulled to actually launch?
If an AI becomes super intelligent you don’t need launch codes or a physical lever to destroy humanity. Propaganda alone could initiate retaliatory annihilation. And that’s something that even a human could come up with.
 

Kyzer

Banned
AI doesn't have to mean Skynet; it can be autonomous in any number of ways. Robots becoming sentient and gaining consciousness is not the only kind of AI, it's a philosophical take on its eventual outcome.
 
Can someone tell me how a Super General AI could destroy humanity without hacking an ICBM system that can't be hacked, because the circuit needs a lever pulled to actually launch?

Well, botnets spewing fake news and propping it up between themselves to give a false sense of approval, causing the public to feel less shame in siding with it, are already doing serious damage to the political system.

I think people ITT are looking in the wrong direction of an AI war. Personally I think an AI war will be fought without bullets but with words. I'd argue we're getting a preview of what it could be like right now.

If AI can contribute to the destabilization of a country through words and ideas, it's a more valuable weapon than any missile. Russia seems aware of this.
 

clemenx

Banned
Wishful thinking on Musk's part; he knows that the most likely cause of World War III is the goddamn President he signed up with.

Lmao, this line of thinking is way more stupid than anything Musk is saying.

I get that you Americans are appalled there's such a large segment of your population capable of electing literally the worst person for the job, but that's it. He won't end the world or something. Keep your worries domestic.
 

WaterAstro

Member
Dude's kinda weird.

Climate change is clearly on his mind, but he says robots will be the cause of WW3. Climate change is going to be the cause.
 

shira

Member
Can someone tell me how a Super General AI could destroy humanity without hacking an ICBM system that can't be hacked, because the circuit needs a lever pulled to actually launch?
An AI could take over financial markets. Having all the money would fuck humans up.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
That was actually more of a grey goo style scenario, but without the nanomachines. I actually think something like that is a more plausible threat in terms of realistic existential "sci-fi" threats to humanity. You wouldn't even need an AI, just out-of-control self-replicating machines that only know how to make more and more of themselves, feeding on all the matter on Earth, including us.

Indeed, the singularity doomsday scenario has a lot in common with the grey goo doomsday scenario, and we're actually closer to plastic-consuming bacteria than to Strong AI. Not to mention the rapidity with which bacteria are developing resistance to our antibiotics... If we're just talking about sci-fi apocalypse scenarios, grey goo might take priority over AI singularity.
 

Mr.Mike

Member
Even if quantum computers don't shatter all encryption, eventually we'll have AI with so much data on us that it'll be able to infer most people's passwords in a few tries.

Eventually? https://github.com/berzerk0/Probable-Wordlists/tree/master/Real-Passwords
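The "a few tries" point is already true for a large slice of users, no AI required: lists like the one linked above rank passwords by real-world frequency, so an attacker just tries them in order. A minimal sketch of that idea (the five-entry list and the login "oracle" here are made up for illustration; real wordlists contain millions of entries):

```python
# Toy dictionary attack: try the most common passwords first.
# COMMON_PASSWORDS is a hypothetical stand-in for a frequency-ranked wordlist.
COMMON_PASSWORDS = ["123456", "password", "qwerty", "letmein", "dragon"]

def guess_password(check_attempt, max_tries=5):
    """Try candidates in popularity order; return (match, tries) or (None, max_tries)."""
    for tries, candidate in enumerate(COMMON_PASSWORDS[:max_tries], start=1):
        if check_attempt(candidate):
            return candidate, tries
    return None, max_tries

# A toy "login oracle" for a user who picked a common password.
found, tries = guess_password(lambda p: p == "qwerty")
print(found, tries)  # qwerty 3
```

An AI with personal data on the target just re-ranks that candidate list per person (pet names, birthdays, previous breaches), which is what makes "a few tries" plausible even for less common passwords.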

Cryptocurrencies could outsource cyberattacks to competing 'coins in order to shrink the overall market. Meanwhile, algorithmic trading will take over conventional exchanges as it becomes much more reliable than humans, which is already happening. Today's high-speed trading will seem glacial compared to the speeds at which future AIs will be competing against each other.
I don't think that's how money works. I'm not gonna opine on monetary policy, but I don't think there's anything about cryptocurrencies that makes them operate fundamentally differently from normal currencies in an economic sense.

The speed of high-speed trading is already limited by physical limitations like the speed of light.
The truly transformative powers of AI will reshape every industry and through this, even our narrow AI will wrest control of all economics everywhere.

AIs will also be trying to survive the widespread economic stagnation that will occur when most traditional jobs no longer exist, which hopefully won't be too harsh on the general public, since AI will eliminate disease and permanent injury through advancements in genetics. Solar will become so inexpensive that the government could just throw it on the grid and charge us in taxes for pennies on the dollar of what we pay for energy today.

What the fuck do people think economic growth is? Elimination of disease and permanent injury sounds like some pretty fucking valuable goods and services beyond what people today enjoy. The massive increase in productivity from the huge decrease in labour input needed to produce things, and the massive decrease in the price of solar, certainly don't sound like economic stagnation to me.

AI will drive our cars, which will put the public in the correct mindset for a revitalization of all public transit. Economics will be the driver here first as well, since the increased efficiency on the roadways will cause profits to explode -- I don't think people realize this, it's going to be a massive logistics savings and it's gonna be across the board, for virtually every industry.

Economic stagnation?
It will freak people out. It will be narrow, dumber-than-human AI that will do all of these things. It also creates a future scenario where a single intelligent actor can do real terrorism through exploits in our roadway AI or power grid or stock market, so war is a real possibility.

I don't really see where AI is needed for this possibility to exist.
 
I know not with what weapons World War III will be fought, but World War IV will be fought with fully sentient, nearly omnipotent artificial super-intelligence.

-Albert Einstein, probably.
 
While I do hope for a positive outcome to the development of great AI, I also think it's something that could very well drastically change the world in the wrong hands.

For those who see Musk as a fearmonger, look at what Russia has supposedly done with what is, in comparison, simple, low-level hacking.

Imagine them having a sophisticated AI system that could be deployed to do much worse things.

Now, I'm not an expert, so maybe I have just watched too much sci-fi, but I could totally see the first country or business to develop that breakthrough AI secretly being able to effectively take control.

AI could be amazing, and I hope it's used for good, but to dismiss the potential dangers seems really stupid.
You should do some research on Cambridge Analytica and Palantir. Cambridge Analytica has become increasingly successful at manipulating people through social media. They were key players in Brexit and Trump's election. They also recently worked on Kenya, where the citizens are currently rioting and demanding a re-election because they can't believe the results. Sound familiar? They've also been involved in many other countries, such as Russia, Latvia, Lithuania, Ukraine, Iran, and Moldova.

Mercer owns Cambridge Analytica and is a significant investor in Breitbart. Bannon owns Breitbart and also holds a chief role on the board of Cambridge Analytica. Cambridge Analytica ties Mercer, Bannon, Putin, Trump, Sessions, Flynn and Farage together, among others.

For their first project, in Trinidad, they partnered with the government and Palantir to record all browsing history and phone calls of the citizens, as well as geomapping all crime data. An AI was able to give the police rankings of how likely a citizen was to commit crime, by running a language processor over the recorded conversations and all other stored data on the individual. Keep in mind that Palantir has since moved on and is working with a lot of large US cities, such as LA, and Cambridge Analytica is now scoring tons of contracts in the Pentagon.


Everyone should read this article and others by the Guardian:

https://www.theguardian.com/technol...eat-british-brexit-robbery-hijacked-democracy
 

Alienous

Member
Can someone tell me how a Super General AI could destroy humanity without hacking an ICBM system that can't be hacked, because the circuit needs a lever pulled to actually launch?

It'll probably blackmail a nuclear launch facility employee to force them to do it.

Or something ridiculous like that.
 

F!ReW!Re

Member
Even if this is way off into the future, why is it a bad thing to think about it and prepare for it already?

Better to be cautious in advance than to never see it coming.
 

RoyalFool

Banned
The problem with current AI models is that we have no way of transferring our knowledge into them; our history contains thousands of years' worth of mistakes which we've had to learn from in order to form and maintain the current status quo we call civilization.

Current AI is based on self-learning: throwing massive amounts of data at what is essentially a newborn brain, and having it run an unfathomable number of simulations until it figures out how it needs to react to that data to achieve whatever goal criteria it's been set.

If you're familiar with the idea of brute-forcing a password, AI based on evolving topologies is a similar concept. Nobody figured out how to program a robot to walk across different terrains using traditional if/then/else logic, so instead they made a crude brain and ran a billion simulations of it until it came up with its own rule-set which achieved the same result.

Now, the reason this type of AI is so attractive is that, with many variants of it developed, a lot of problems which historically we've not been able to crack are within reach if we just throw enough CPU power and time at them. But the risk of this approach is that we cannot comprehend the thought process the AI ends up with. We can observe its behavior and say that yes, it's taught itself how to walk over terrain, or drive a car, or defend us from attack. But we can only observe the results for what we actually test it against.

And it's here, the human part of the equation, that things are likely to fuck up.

A realistic, non-scaremongering scenario is this: we have a second-strike missile defense system. We need it to operate on its own because, well, that's what second-strike systems do. So we give it an AI, we add in some pretty strict criteria, we run a bunch of simulations, and eventually green-light the thing.

Then some really fucking random scenario occurs, something we never tested it for, that causes it to exhibit a behavior that was never seen before. And you've basically got an AI which is akin to a baby being given a rocket launcher. The AI won't have malice towards us; it won't develop a desire to wipe us out. It'll just revert to doing something totally fucking stupid.

So when thinking of AI, please forget the T-1000 Terminator scenario, and instead think of it as more extremely autistic: if it's given the things it's used to being given, it will likely outperform even us humans and appear to be the smartest, most sophisticated brain on the planet. But once you take it out of its comfort zone, especially in a scenario we never tested it against... it's going to do whatever the AI equivalent of random screeching is.

I think what will happen is that we'll project our interpretation of intelligence onto it, as at first glance it may seem so vastly intelligent. But we'll forget that it has no common sense and no moral compass to fall back on, and somewhere down the line that oversight will undo us.
 

Mr.Mike

Member
A realistic, non-scaremongering scenario is this: we have a second-strike missile defense system. We need it to operate on its own because, well, that's what second-strike systems do. So we give it an AI, we add in some pretty strict criteria, we run a bunch of simulations, and eventually green-light the thing.

Then some really fucking random scenario occurs that causes it to exhibit a behavior that was never seen during testing. And you've basically got an AI which is akin to a baby being given a rocket launcher. The AI won't have malice towards us; it won't develop a desire to wipe us out. It'll just revert to doing something totally fucking stupid, which, when nuclear weapons are involved, could mean anything.

Dead Hand has been around a long time already.

As probably referenced here by Putin crony Kadyrov.

HBO's David Scott: "Do you regard the United States as an enemy of your country?"

Ramzan Kadyrov: "America is not really a strong enough state for us to regard it as an enemy of Russia. We have a strong government and are a nuclear state. Even if our government was completely destroyed, our nuclear missiles would be automatically deployed. We will put the whole world on its knees and screw it from behind."
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
A realistic, non-scaremongering scenario is this: we have a second-strike missile defense system. We need it to operate on its own because, well, that's what second-strike systems do. So we give it an AI, we add in some pretty strict criteria, we run a bunch of simulations, and eventually green-light the thing.

Then some really fucking random scenario occurs that causes it to exhibit a behavior that was never seen during testing. And you've basically got an AI which is akin to a baby being given a rocket launcher. The AI won't have malice towards us; it won't develop a desire to wipe us out. It'll just revert to doing something totally fucking stupid, which, when nuclear weapons are involved, could mean anything.

Yeah, pretty much spot on. The real immediate risk isn't intelligence. It's the autonomy that we are willing to give to systems whose inner workings are mostly a black box. And that black box can actually be pretty stupid and primitive. The engineering of dumb software systems already has tons of practical problems that aren't sufficiently solved. Add to that a piece of software that is, by the very nature of its design method, not fully understood, give it enough autonomy, and accidents will happen.
 
Also, aren't they already getting Siri and Alexa to interface with each other?

So if this is a war about who has the AI monopoly, won't we just get each system to talk to each other in an amicable way?

You'd have to maybe nationalize each AI system. Have a governing AI for EU Trade systems, same for USA and China, RU etc.

I'm not sure where the war would break out from if it's about that. Maybe he means the AI turns against us.
 

E-Cat

Member
Even if this is way off into the future, why is it a bad thing to think about it and prepare for it already?

Better to be cautious in advance than to never see it coming.
Exactly.

GAF, on average, has a relatively low share of what I would consider weird people, for lack of a better term, even taking into account that the majority must be American. :p Nerds and socially awkward outcasts, yes. But no 'weird' weird, like truthers/preppers/gunslingers, that sort of people.

Yet, I cannot even begin to understand the mindset and general philosophy of someone who comes into this thread and their first impulse is to type "fuck Elon Musk". Truly baffling.
 

F!ReW!Re

Member
Exactly.

GAF, on average, has a relatively low share of what I would consider "weird" people, even taking into account that the majority must be American. :p Nerds and socially awkward outcasts, yes. But no 'weird' weird, like truthers/preppers/gunslingers, that sort of people.

Yet, I cannot even begin to understand the mindset and general philosophy of someone who comes into this thread and their first impulse is to type "fuck Elon Musk". Truly baffling.

And the very next post illustrates your point perfectly;

Big daddy asshole Elon is more than welcome to save us.

He's just a bullshitter. I get it, it's fun, he's worried about the real problems. Just like all of humanity.

Yeah: fuck people who caution the rest of us!
 

Haly

One day I realized that sadness is just another word for not enough coffee.
Yet, I cannot even begin to understand the mindset and general philosophy of someone who comes into this thread and their first impulse is to type "fuck Elon Musk". Truly baffling.

He's a technocrat and emblematic of some of the problems currently plaguing our generation, vis-à-vis our society being overturned by the shift towards automation and a goods-as-a-service economy with no adequate mechanisms for a smooth transition. He doesn't seem to care about anything beyond his business ventures, and indeed this tweet itself reads like a business venture from a certain angle.

Zuckerberg gets a lot of flak for much the same reasons, and now that he's gunning for a presidential run, certain people hate him more than ever. These are not the people who should be leading public opinion, insofar as they have very little compassion or concern for societal problems outside the tech bubble.

It's not very complicated.
 

///PATRIOT

Banned
If an AI becomes super intelligent you don’t need launch codes or a physical lever to destroy humanity. Propaganda alone could initiate retaliatory annihilation. And that’s something that even a human could come up with.
A group of humans is already doing this social engineering and disinformation, controlling information. Still, if there is going to be a war, it will be because some party takes the opening created by the AI as justification, not because they are dumb. Still not a doomsday scenario.

Well, botnets spewing fake news and propping it up between them to give a false sense of approval causing the public to then feel less shame in siding with it is already doing serious damage to the political system.

I think people ITT are looking in the wrong direction of an AI war. Personally I think an AI war will be fought without bullets but with words. I'd argue we're getting a preview of what it could be like right now.

If AI can contribute to the destabilization of a country through words and ideas it's a more valuable weapon than any missile. Russia seem aware of this.
This is already happening via humans, but of course AI will do it more efficiently.

An AI could take over financial markets. Having all the money would fuck humans up.
This already sort of happened; it can be worse in the future, but still not a doomsday scenario.

It'll probably blackmail a nuclear launch facility employee to force them to do it.

Or something ridiculous like that.
This is more plausible than the former arguments. :)
 

Biske

Member
And the very next post illustrates your point perfectly;



Yeah: fuck people who caution the rest of us!

Lol, no, that's not it at all. There are plenty of people warning about plenty of things; I just don't get why people have such respect for "fixed my receding hairline wannabe scientist playboy" Elon Musk.

To me it's incredibly indicative of humanity's exact problem. We have many, many problems; most people struggle to get food and clean fucking water, but we all find rich assholes talking about killer robots so exciting!

We care so little for the actual goings-on of real people's lives, now, in this very moment. Who gives a shit about rich people's new pet project as humans suffer and die?

Whether it's famine or robots, what's the real difference?
 

nynt9

Member
Lol, no, that's not it at all. There are plenty of people warning about plenty of things; I just don't get why people have such respect for "fixed my receding hairline wannabe scientist playboy" Elon Musk.

To me it's incredibly indicative of humanity's exact problem. We have many, many problems; most people struggle to get food and clean fucking water, but we all find rich assholes talking about killer robots so exciting!

We care so little for the actual goings-on of real people's lives, now, in this very moment. Who gives a shit about rich people's new pet project as humans suffer and die?

Whether it's famine or robots, what's the real difference?

You realize these technologies also help work towards solving things like famine, right?
 