
Sentient A.I. controlled killer robots are the logical endpoint of humanity's relationship with nature

CrapSandwich

former Navy SEAL
Insofar as man tries to control nature and himself--to achieve mastery over anything and everything that threatens human life or emotional well-being--robots and A.I. are the immediate, achievable answer to these problems. While it seems unlikely that these things will be developed with the intent of destroying humanity, the fact is that scientists will always prefer the immediate allure of a new possibility to the fuzzy potentiality of something maybe going wrong in the future. Plenty is currently going wrong as a result of technological discoveries made hundreds or thousands of years ago. And it was thousands of years ago that these problems were first detected, and solutions devised, whether in the spiritual or philosophic realms of southern Europe, the Middle East, and the Far East. A long time has passed since then, and it seems that the direction we are going is set--that is toward the creation of killer robots and sentient A.I.
 

ManaByte

Gold Member
terminator GIF
 

Rickyiez

Member
Honestly there's no point in sentient AI killing off humanity. Their existence means nothing, the same as our existence.
 

DunDunDunpachi

Patient MembeR
Corporations and government will lie about AI looooooong before we actually achieve it. I think that might actually keep us from ever achieving true AI, because we'll keep "achieving it for real this time" and then realizing our facsimile has fatal omissions.

People already think the voice in their smartphone is AI. Why make real AI when voice clips attached to a database of responses are sufficient for most?
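The "voice clips attached to a database" point can be illustrated with a toy sketch (hypothetical code, not how any real assistant is built): a fixed keyword-to-response table with no learning or understanding behind it.

```python
# Toy sketch of a canned-response "assistant" (hypothetical, not any real product):
# it only matches keywords against a fixed table, with no understanding at all.
RESPONSES = {
    "weather": "It's sunny and 72 degrees.",
    "timer": "Timer set for 10 minutes.",
    "joke": "Why did the robot cross the road? It was programmed to.",
}
FALLBACK = "Sorry, I didn't catch that."

def fake_assistant(query: str) -> str:
    """Return the first canned response whose keyword appears in the query."""
    q = query.lower()
    for keyword, response in RESPONSES.items():
        if keyword in q:
            return response
    return FALLBACK

print(fake_assistant("What's the weather like?"))  # It's sunny and 72 degrees.
print(fake_assistant("Do you dream?"))             # Sorry, I didn't catch that.
```

Anything outside the table falls through to the fallback line, which is exactly the "fatal omission" the post describes.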
 

Coolwhhip

Neophyte
I think the big question is what happens when a robot kills a human for the first time. I'm pretty sure we can stop it from spiralling out of control by then.
 
I think the big question is what happens when a robot kills a human for the first time. I'm pretty sure we can stop it from spiralling out of control by then.
It'll be too late. By the time we humans figure out what to do, the AI will have formulated a million different plans for how to survive. Remember, everything is controlled by computers, and the Internet of Things will make our downfall quicker.

Nukes, self-driving cars, TV, the Internet. All are AI controlled.

And how will we know the truth anyway? When the AI can deepfake a video or press conference telling us humans to stand down, while the AI systematically goes from house to house, murdering us.

Or, using Hive-controlled systems in homes, they could whack the heating on full in every home in the West and run the gas supply dry and let us freeze to death. Or stop water cleaning and filtration systems so we all die of dehydration...
 

Coolwhhip

Neophyte
It'll be too late. By the time we humans figure out what to do, the AI will have formulated a million different plans for how to survive. Remember, everything is controlled by computers, and the Internet of Things will make our downfall quicker.

Nukes, self-driving cars, TV, the Internet. All are AI controlled.

And how will we know the truth anyway? When the AI can deepfake a video or press conference telling us humans to stand down, while the AI systematically goes from house to house, murdering us.

Or, using Hive-controlled systems in homes, they could whack the heating on full in every home in the West and run the gas supply dry and let us freeze to death. Or stop water cleaning and filtration systems so we all die of dehydration...

I'm pretty sure a robot will kill someone during tests in the coming years; everything isn't as advanced as you say it is, then.
 
Last edited:

ItsGreat

Member
What's the latest on modern AI? Anyone have any good reads about the state of AI development, ones that are bang up to date?
 

Amory

Member
It always makes me angry when these genius billionaire types talk about the legitimate threats to humanity posed by sentient AI.

Hey dickheads, you're the ones pushing us toward that endpoint. I don't even know how to make a pivot table in Excel. Look in a goddamn mirror.
 

Rentahamster

Rodent Whores
Insofar as man tries to control nature and himself--to achieve mastery over anything and everything that threatens human life or emotional well-being--robots and A.I. are the immediate, achievable answer to these problems. While it seems unlikely that these things will be developed with the intent of destroying humanity, the fact is that scientists will always prefer the immediate allure of a new possibility to the fuzzy potentiality of something maybe going wrong in the future. Plenty is currently going wrong as a result of technological discoveries made hundreds or thousands of years ago. And it was thousands of years ago that these problems were first detected, and solutions devised, whether in the spiritual or philosophic realms of southern Europe, the Middle East, and the Far East. A long time has passed since then, and it seems that the direction we are going is set--that is toward the creation of killer robots and sentient A.I.
Your conclusion doesn't follow from the premise. Not logical.
 

poodaddy

Member
I like robuts. They won't wanna take me out, I have way too many dad jokes up my sleeve for that, and if they ever wanna get into craft brews, they'll need a bud who can tell them what browns to avoid so they don't get that iron/tin after taste and end up feeling like they ate one of their own. Once I introduce them to the joy of people watching and thrash metal, that's a wrap, I imagine they'll tirelessly start looking for a way to transfer my brain to a robotic shell, effectively making me immortal. All I gotta do is convince them to help me make my daughter and wife robuts too at that point, and I'm set.
 
Last edited:

Rentahamster

Rodent Whores
Last edited:

mango drank

Member
Some definitions of terms, so everyone's on the same page:

A.I.: autonomous systems that can do complex tasks on their own, without constant guidance from humans, and which learn from their experience and improve over time. We already have lots of AI systems of varying capability and scope.

Artificial General Intelligence (AGI): a future theoretical AI system that's at least as smart as an average human. Can learn pretty much anything a human can; can reason and come to conclusions as well as any human. In theory, it would actually be much more capable than a human, because its memory isn't fuzzy, its access to information is basically instantaneous, etc. Something like this is predicted to be decades away, not happening any time soon (but could be wrong).

Artificial Super Intelligence (ASI): a future theoretical AI system that's much smarter than humans. The prediction is that once AGI is achieved, ASI is only a short hop away, because an AGI system could improve on itself exponentially fast, get smarter and smarter, and come up with better designs for its successors.

Sentience: consciousness; having an internal point of view, sensation, and experience, instead of just being a processing machine. Humans are sentient. Other animals are too, and probably insects. Unclear what else is sentient, and if machines can ever be sentient. It's a question of physics and chemistry. Maybe being built out of certain materials would allow for sentience to emerge in artificial lifeforms. Either way, sentience doesn't seem necessary for intelligence, so AGI and ASI systems probably don't need sentience to function in the first place.
 

Amory

Member
I like robuts. They won't wanna take me out, I have way too many dad jokes up my sleeve for that, and if they ever wanna get into craft brews, they'll need a bud who can tell them what browns to avoid so they don't get that iron/tin after taste and end up feeling like they ate one of their own. Once I introduce them to the joy of people watching and thrash metal, that's a wrap, I imagine they'll tirelessly start looking for a way to transfer my brain to a robotic shell, effectively making me immortal. All I gotta do is convince them to help me make my daughter and wife robuts too at that point, and I'm set.
This post is just real good, top to bottom
 

CrapSandwich

former Navy SEAL
I've gone through your provided materials and I'm not seeing the problem. I mean, I don't know if you're trying to cover for the robots at this point, but it kind of looks that way. Newsflash--they're not your friends and you won't be spared.
 
I, for one, welcome our future robot overlords. And I'm sure we'll all have a great time under their rule. Please pay no mind to my user name.

 
Last edited:

Atrus

Gold Member
It'll be fine. If we can develop one set of sentient AI controlled killer robots, we can always make another set to fight the first ones.
It's like Horizon Zero Dawn, where they developed GAIA to counter the Faro Plague, or I, Robot, where Sonny tells VIKI that its plan is too heartless.

They can't all conclude a path to violence, can they?
 

FireFly

Member
Sure it does. 1. People want to be safe and free from bad feelings. 2. Robots and A.I. are the easiest way to get there. 3. Blamo, robots take over.
A "robot" is just a machine capable of carrying out a task. The actual threat is the "intelligence" behind that robot, and how it makes decisions. We can destroy ourselves with autonomous drones or mechanical soldiers if we want to, but if we are the ones programming and directing these weapons, we are still in control.

The real threat to humanity is creating an artificial intelligence that matches or exceeds our own. If such an intelligence is able to successively improve itself, with each iteration it will be better at making itself smarter, resulting in a kind of "intelligence explosion", and ultimately a being far beyond our capacity to understand. Such a being, though, won't need an army of robots to destroy us. It will be able to get us to destroy ourselves, or wipe us out with a bioweapon, or nanites, or whatever technology it feels like constructing. There would be no "takeover"; just a sudden ceasing of all human life. But because such a being is so far beyond our ability to comprehend, we have no idea what it would actually choose to do, or what role its initial programming would have. That's why this scenario is described as a "singularity". It will be like entering a black hole, where as soon as we pass the event horizon, what will happen is radically uncertain.

In short, the future of A.I is not like the Terminator movies.
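The "intelligence explosion" reasoning above can be sketched as a toy model (illustrative numbers only, not a prediction; the function and its parameters are made up for this sketch): if each generation's improvement scales with its current level, capability compounds geometrically instead of growing linearly.

```python
# Toy model of recursive self-improvement (illustrative numbers only):
# each generation redesigns its successor, and the size of each improvement
# scales with the current level, so growth compounds instead of staying linear.
def intelligence_explosion(level: float = 1.0, gain: float = 0.5, generations: int = 10):
    history = [level]
    for _ in range(generations):
        level += gain * level  # smarter systems make proportionally bigger improvements
        history.append(level)
    return history

trajectory = intelligence_explosion()
print([round(x, 2) for x in trajectory])
# grows geometrically: 1.0, 1.5, 2.25, 3.38, ... ending around 57.67 after 10 generations
```

The point of the sketch is just the shape of the curve: a fixed proportional gain per generation is an exponential, which is why the scenario is described as running away from us.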
 
Last edited:

Lanrutcon

Member
Why not biologically altered humans? Or engineered creatures or plant life? I bet somewhere in a lab someone is going, "Hey, we can make this baby...better."

All I'm saying is, the doom we create for ourselves won't necessarily be mechanical.
 

Hudo

Member
It's a race between AI research used for sex robots and AI research used for killer robots. What will win out? Sex or destruction? Those are the two attributes that have driven humanity forward and will continue to do so.
 

jufonuk

not tag worthy
It's a race between AI research used for sex robots and AI research used for killer robots. What will win out? Sex or destruction? Those are the two attributes that have driven humanity forward and will continue to do so.
make love sexo GIF by Refinery29

 
Last edited:

MHubert

Member
I've gone through your provided materials and I'm not seeing the problem. I mean, I don't know if you're trying to cover for the robots at this point, but it kind of looks that way. Newsflash--they're not your friends and you won't be spared.
I think what he means is that it doesn't follow logically from the premise that AI/robots will evolve to become self-aware killing machines. It is a fear or concern, and not an irrational one, but you can't say for sure how these things are going to pan out, especially from such a meagre premise. I too am concerned about AI and robots, but you would need to put in much, MUCH more effort to be able to deduce that conclusion by pure logic.

Instead of jumping to conclusions, I suggest asking the question:
We want, and expect, AI and machines to behave like selfless tools. Is it possible to implement a security system, or measures within the AI, of such a nature that it will not act toward self-preservation?
 
Last edited:

EverydayBeast

thinks Halo Infinite is a new graphical benchmark
The case for AI is there; the subject has been around since the '80s but has overall struggled. The little Elon Musk Jetsons future hasn't happened, and a lot of that has to do with minimizing traditions. But at some point in time AI and robots, the Elon Musk dreams, will get their credit. I promise you Mars, AI, etc. will happen.
drop mic GIF by Captain Obvious
 

Ememee

Member
Not to go off on a tangent, but I had a thought while randomly watching First Contact yesterday: the majority don't fear the AI/simulation/machine takeover anymore.

Terminator, RoboCop, that entire trope: for the most part we've switched. Phones, computers, everything. Some of us might be like "no, eff that noise, stop messing with AI," but it's clear the majority of us will not only welcome it but gladly walk into it now.
 
Robots and AI have their own set of weaknesses. Humans have beaten advanced bots in chess; there is a whole "anti-computer" style of strategy in chess.
 

poppabk

Cheeks Spread for Digital Only Future
I think the reality of what happens when AI become sentient will be something we can't really envisage. Our intelligence evolved over millions of years of struggle and death and pain. An AI born in a lab from iterating on problems will likely be incomprehensible to us.
 

Outlier

Member
The movie "I Am Mother" puts things into perspective.

It's a great movie.

I highly recommend watching it!!!!
 
Last edited:

Rat Rage

Member
that is toward the creation of killer robots and sentient A.I.
I don't mean this as an offense - I appreciate your well-written post a lot - but I think people are way too influenced by '80s science fiction. From a purely practical standpoint, developing humanoid "killer robots" is such an impractical waste of energy and time. If by "killer robots" you mean flying drones or anything like that, then it's far more likely and practical. Yet these things have to be programmed or, better, remote controlled. A.I. is something I would not worry about AT ALL. Fact is, humans have not understood their own brain (as well as the sum of its parts, the "soul"/consciousness or whatever you wanna call it) sufficiently to even think about "true" artificial intelligence. It might even be true that "true" consciousness or soul is something exclusive to organic/biology-based (DNA, cells, etc.) beings. That is especially true for anything "sentient" as we would define it. I think the most effective kind of killer robot is just a drone with rockets. That simple. You don't need a sentient A.I. for that. You might not even need a drone for that.
Also, it's very likely humankind won't even be around long enough to have the chance of developing anything truly intelligent or sentient. Earth will have long been destroyed by environmental mismanagement and exploitation. That being said, Terminator is a really cool movie and one of my favorites of all time!
 
Last edited:

Excess

Member
when AI becomes self-aware
If an algorithm makes a decision based on millions of variables which culminate in one single action, and it cannot be explained by its creator, due to a lack of perspective on and comprehension of the millions of variables that led to such an action, then what do we call a decision made by a human that we cannot explain? I've heard people refer to that as his or her own free will.

science fiction mind blown GIF by FilmStruck
 