
The reason that Artificial Intelligence should be outlawed is blindingly obvious

I somehow took all this time to read both of these articles and I have no idea how to feel.
This is the first time I've felt a true existential crisis on this level. Everything I've worked towards now just feels pointless.

All I know is that my anxiety definitely didn't need this. I should have just stopped after the first article. Quite possibly the most terrifying thing I've ever read, if it turns out to be accurate.

It's all both exciting and terrifying, I just hope I am alive to see it happen.


This is pretty stupid right off the bat. Show someone from 1750 the modern world and "he'd die"? Really? That's fucking moronic. Take a person from the most primitive place on Earth right now, stick him in America, and he'll integrate in months. Probably weeks. Language, computers, cars and planes...he'll take it all in and be fine.

I also had a problem with that part. Obviously nobody would die if they were shown technological advancements. Sure, it would take a lot of time for that person to adapt, but they would not just die. Unless he meant it in a philosophical way, in which case he failed miserably at delivering the idea.
 
If you type UP UP DOWN DOWN LEFT RIGHT B A ENTER while browsing neogaf on non-mobile you'll wake up the NeoGAF AI.
 
Again, what is the motivation for an AI to do anything in the first place? Yes, it could figure out the unified theory, but why would it? And more importantly, why would anyone build an AI that has motivation of some sort? Isn't that just incredibly dumb? AIs should do what we tell them and nothing more.

Or is the hypothesis that advanced AIs are going to develop a consciousness on their own?
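
For what it's worth, the "do what we tell them and nothing more" stance can be made concrete. A toy Python sketch (purely illustrative, all names made up): every capability is an explicit whitelist entry, so there is no hook through which the system could pursue goals of its own.

```python
# Toy "tool AI" dispatcher: the system can only ever run actions from a
# fixed whitelist. Anything outside the list is rejected outright.
# The action names and behaviors here are placeholders, not a real API.
ALLOWED_ACTIONS = {
    "summarize": lambda text: text[:100] + "...",
    "word_count": lambda text: str(len(text.split())),
}

def run(action: str, payload: str) -> str:
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"action {action!r} is not permitted")
    return ALLOWED_ACTIONS[action](payload)

print(run("word_count", "do what we tell them and nothing more"))  # -> 8
```

The open question in the thread, of course, is whether anything smart enough to be useful can really be boxed into a fixed menu like this.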
 
This AI scare is all nonsense.

The future presented in Terminator and The Matrix is nonsense.

Being concerned about what a machine with the ability to make its own decisions might decide to do is not nonsense.

Human thought processes are extremely complicated and dependent on an enormous number of interconnected factors, but importantly, they're mostly predictable because we are all human. We all need food, clothing, shelter and social connections. We all enjoy entertainment, and most of us need to work to earn the money society says we must spend on commodities.

What would the thought process of a machine that requires and desires none of those things even look like? We have no idea; it would be completely alien to us and therefore unpredictable. Which means there is the potential that it would decide to do something we would perceive as negative or harmful.

Also I think true self-aware artificial intelligence may be impossible.
 
Not sure why an AI superintelligence that doesn't like us would bother with a conventional war. Couldn't it just wait us out? Seems like there are lots of ways a being like this could win without the effort of building an AI military. It would have the advantage of not perceiving time as we do, or it could control its own perception of time.
 
I somehow took all this time to read both of these articles and I have no idea how to feel.
This is the first time I've felt a true existential crisis on this level. Everything I've worked towards now just feels pointless.

All I know is that my anxiety definitely didn't need this. I should have just stopped after the first article. Quite possibly the most terrifying thing I've ever read, if it turns out to be accurate.

Get it right and you end up with a Mind; get it wrong and you end up with AM. You (as a species) get one go at it. Good luck.
 
I also had a problem with that part. Obviously nobody would die if they were shown technological advancements. Sure, it would take a lot of time for that person to adapt, but they would not just die. Unless he meant it in a philosophical way, in which case he failed miserably at delivering the idea.

I don't even think it would take very long for that person to adapt. In fact, if you took an intelligent, educated person from that era - let's say Sir Isaac Newton, who's actually from slightly before then - I think he'd react like a kid in a candy shop. After a very short time, he'd probably understand all our "magical" inventions far better than 99% of the population today.

But that's off topic. On topic, his article is all alarm, sci-fi speculation, and assumption. He takes a lot of words to say "this is inevitable, we can't really see it coming, we can't possibly react when it arrives, it will almost instantly be orders of magnitude smarter than we are, and we don't know what will happen then".

Not a single one of those is well supported by the practical facts of the situation.
 
Being concerned about what a machine with the ability to make its own decisions might decide to do is not nonsense.

That part isn't worrisome. What it decides to do is irrelevant. It's in a computer and can't get out. Let it think about whatever it wants.

The issue is what we decide to allow it to do. No AI can be a threat unless we specifically enable it to be, on multiple micro and macro levels.

AI being developed doesn't worry me in the slightest. What worries me is that humans will be in charge of it / them.
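
To make "what we decide to allow it to do" concrete: containment is mostly about what the host grants, not what the program wants. A minimal sketch, assuming Linux and Python, with untrusted_model.py as a hypothetical program we don't fully trust. Real deployments would go further (namespaces, containers, no network), which this sketch does not do.

```python
# Run an untrusted program in a child process with hard resource caps.
# This illustrates the "it can only do what we enable" point; it is a
# sketch, not a hardened sandbox.
import resource
import subprocess

def limit_resources():
    # Applied in the child just before exec:
    resource.setrlimit(resource.RLIMIT_CPU, (60, 60))       # 60s of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (2**30, 2**30))  # 1 GiB of memory

subprocess.run(
    ["python3", "untrusted_model.py"],  # hypothetical untrusted program
    preexec_fn=limit_resources,
    env={"PATH": "/usr/bin:/bin"},      # minimal environment, no secrets leak in
    timeout=120,                        # wall-clock backstop: kills the child
    check=False,                        # and raises TimeoutExpired
)
```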
 
I don't even think it would take very long for that person to adapt. In fact, if you took an intelligent, educated person from that era - let's say Sir Isaac Newton, who's actually from slightly before then - I think he'd react like a kid in a candy shop. After a very short time, he'd probably understand all our "magical" inventions far better than 99% of the population today.

But that's off topic. On topic, his article is all alarm, sci-fi speculation, and assumption. He takes a lot of words to say "this is inevitable, we can't really see it coming, we can't possibly react when it arrives, it will almost instantly be orders of magnitude smarter than we are, and we don't know what will happen then".

Not a single one of those is well supported by the practical facts of the situation.

It's a summary of the often mentioned book Superintelligence, written by Oxford professor Nick Bostrom. More detailed arguments lie within.

But that article makes intuitive sense to me in a broad way. Looking at technological advancement throughout human history, we observe the rate of technological improvement (or at least our view of technology) increasing exponentially. Computational power increases all the time. The human brain is studied intensely and understood a little more, bit by bit. More and more of our life is determined by automated processes. Machine intelligence is encroaching on the outer edges of many tasks previously dominated by humans. We see improvement and progress on all these different fronts, all seemingly converging on a machine intelligence that would theoretically outstrip our biologically evolved intelligence: applied, purposeful construction vs. random, evolution-driven coincidence.
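
As a back-of-the-envelope illustration of the exponential point (pure arithmetic, not a forecast): anything that doubles on a fixed period compounds to roughly a thousandfold over ten doublings.

```python
# Compound growth under a fixed doubling period.
years = 20
doubling_period = 2  # assume capability doubles every 2 years
growth = 2 ** (years / doubling_period)
print(f"{growth:.0f}x after {years} years")  # -> 1024x after 20 years
```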

Now this is all inductive reasoning and has its fair share of "what ifs," though I'm not sure what you're getting out of all this information that makes you so skeptical. Mind sharing the fundamental reason why you don't think these concerns are warranted?
 
Now this is all inductive reasoning and has its fair share of "what ifs," though I'm not sure what you're getting out of all this information that makes you so skeptical. Mind sharing the fundamental reason why you don't think these concerns are warranted?

It's not skepticism that makes me criticize, though I am indeed skeptical. If you pressed me to answer whether advanced AI will happen, I'd say yes.

It's the alarmist tone that bothers me. Even in the context of the article itself, and the arguments it makes, it's dumb. It presents everything as inevitable, that this is coming, and we have a huge blind spot and can't possibly be prepared, blah blah blah.

If you buy into any of that, then just sit back and wait for it to happen. The article isn't a call to action, it's a call to worry.
 
So it can magically move around between all computers in the world, right?

A. There are rules about how data moves around. Why wouldn't it have to follow them?
B. When attempting to create true AI, humans are smart enough to take some precautions.
C. If all else fails, pull the plug.

The only thing that worries me about true AI being created is that I won't get one, and (scummy) rich people will.
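
On point A, here's a toy Python sketch of the idea that data only moves where a channel exists. To be clear, this is an in-process illustration, not a security boundary; real isolation happens at the OS and network layer (firewalls, namespaces, air gaps).

```python
# Deny network egress for this process by replacing socket creation.
# Pure-Python libraries imported after this point (urllib, requests, ...)
# will fail instead of reaching the internet. C extensions could bypass
# this, which is exactly why real containment lives below the application.
import socket

def _no_network(*args, **kwargs):
    raise PermissionError("network access is not enabled for this process")

socket.socket = _no_network

# IP literal avoids a DNS lookup; fails before any packet is sent.
socket.create_connection(("192.0.2.1", 80))  # raises PermissionError
```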

Why would it have to move around? It could do a ton of harm and damage just by being connected to the internet: any harm any hacker is doing right now, except thousands of times faster.

It's a superintelligence; it would do in minutes or seconds what would take people weeks or months to organise and execute. By the time you could react, a lot of damage would already have been done.

My point was that it doesn't need a body or arms or legs, just a connection to the world outside its own network. It's not robot hulk smash...
 
Why would it have to move around? It could do a ton of harm and damage just by being connected to the internet: any harm any hacker is doing right now, except thousands of times faster.

It's a superintelligence; it would do in minutes or seconds what would take people weeks or months to organise and execute. By the time you could react, a lot of damage would already have been done.

My point was that it doesn't need a body or arms or legs, just a connection to the world outside its own network. It's not robot hulk smash...

Sure. And do you think the people working on this type of thing haven't thought of that?

Read the article. It presents silly science fiction as things we should actually be worried about, such as an AI that was connected to the internet for an hour resulting in the entire world (and potentially the solar system/galaxy/universe) being reconstructed at the atomic level into post-it notes.

That little short story was written by the article's author. No mention of how it pulled that off, except "nanites LOL". It's trash.

edit: He also says that spiders are insects, a personal pet peeve of mine.
 
What would the thought process of a machine that requires and desires none of those things even look like? We have no idea; it would be completely alien to us and therefore unpredictable. Which means there is the potential that it would decide to do something we would perceive as negative or harmful.

The thing is, why would we leave crucial decisions to a machine? And if we are crazy enough to do that, why would we not create a failsafe mechanism to shut the machine off in case it does something we don't want?
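
A failsafe like that is a well-worn engineering pattern. A minimal "dead man's switch" sketch in Python (all names hypothetical): unless a human operator keeps signalling "all clear", the worker process is killed automatically, so inaction fails safe.

```python
# Dead man's switch: the watchdog kills the child process unless reset()
# keeps being called within the interval. Doing nothing shuts it down.
import subprocess
import threading

class DeadManSwitch:
    def __init__(self, proc: subprocess.Popen, interval: float = 30.0):
        self._proc = proc
        self._interval = interval
        self._timer = None
        self.reset()  # arm the switch immediately

    def reset(self):
        # Called periodically by a human operator while things look fine.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self._interval, self._proc.kill)
        self._timer.daemon = True
        self._timer.start()

# usage sketch:
# proc = subprocess.Popen(["python3", "ai_worker.py"])  # hypothetical worker
# switch = DeadManSwitch(proc)
# ... call switch.reset() every <30s for as long as the output looks sane ...
```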
 
It's a human perspective of AI and its dangers that causes the fear in the first place. We assume that AI will just magically have all our qualities and perceive "life" the same way we do.

There's a pretty long road ahead before we even get to the point where we assume AI will just take over and wipe us out. Prior to that it may be something that saves us. Banning it outright seems a bit hasty, in my opinion.
 