
How you feelin' about A.I.?

I work for a global energy company and we're starting to do some things with AI/machine learning. Some applications include making our coal plants and battery storage arrays run more efficiently and using it for safety analysis and predictions.

I'm using Watson's conversation service and api.ai to build a conversation bot for end-users at Corp. where I work.
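For the curious, one turn of a bot like that boils down to posting the user's text plus the running dialog context to the conversation service. A minimal sketch of that request-building step, where the URL, workspace ID, and payload shape are illustrative assumptions rather than the actual service config:

```python
# Hypothetical sketch of one conversation turn against a Watson-style
# REST API. The endpoint and workspace ID below are placeholders.
import json

API_URL = "https://example.com/v1/workspaces/{workspace_id}/message"  # placeholder
WORKSPACE_ID = "my-workspace-id"  # placeholder

def build_message_payload(user_text, context=None):
    """Build the JSON body for one turn.

    The service tracks dialog state in `context`; the caller passes
    the previous turn's context back in on each request.
    """
    return {
        "input": {"text": user_text},
        "context": context or {},
    }

payload = build_message_payload("What's our PTO policy?")
print(json.dumps(payload))
```

The actual HTTP call (and authentication) is omitted; the point is just that each turn is stateless on the client side except for the context object you round-trip.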

We have another team working on the same strictly for SAP.

Also building what we call a "watchtower" to crawl the internet for news that might impact or be of interest to our operations abroad (we're in 19 countries); analyzing geo-political news, news related to the energy industry, emerging technologies, etc.
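At its simplest, the watchtower idea is a filter over an incoming stream of headlines. A toy sketch of that filtering step — the topic terms and headlines here are made up, and a real system would crawl feeds and use NLP rather than keyword matching:

```python
# Toy "watchtower" filter: flag headlines that mention topics of
# interest. Terms and sample headlines are illustrative only.
INTEREST_TERMS = {"energy", "battery", "grid", "sanctions", "coal"}

def flag_relevant(headlines, terms=INTEREST_TERMS):
    """Return the headlines containing at least one term of interest."""
    flagged = []
    for h in headlines:
        # Normalize each word: strip trailing punctuation, lowercase.
        words = {w.strip(".,").lower() for w in h.split()}
        if words & terms:
            flagged.append(h)
    return flagged

news = [
    "New battery storage array comes online in Chile",
    "Local sports team wins championship",
    "Sanctions tighten on regional energy exports",
]
print(flag_relevant(news))  # first and third headlines are flagged
```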

Ton of cool applications and integrations.
 
Makes me swing closer to absurdism every day. Something smarter than us getting smarter faster than us. The end of the era of man for better or worse.
 
A.I. that was capable of human-like intelligence would be the most amazing thing ever, I feel. We could mass-produce A.I.s and instruct them to solve all of our problems. If there were a billion A.I.s, each with the intelligence of Einstein, all working together to solve problems, I feel like anything could be achieved.

That said, I don't feel very optimistic about reaching that level of A.I. in my lifetime. The most advanced A.I. in existence has a fraction of the intelligence of a human baby (OK, I made that up, but I think it's fairly accurate to say). It just seems like we're not making much headway in that field.
Only 20 years ago it was thought it would take 200 years for a computer to beat a human master at Go. http://www.nytimes.com/1997/07/29/s...uter-play-an-ancient-game.html?pagewanted=all
 
I'm not sure about my personal thoughts, but I know that Eliezer Yudkowsky's MIRI is only half a step above being a cult, and seems to primarily exist to bilk dumb Silicon Valley venture people and his cultists out of their money.
 


I'd be down for a benevolent AI ruler. Hell, I'd be down for a mostly not malicious AI ruler.
At this point that'd be an improvement.
 
It's equally fascinating and scary.

Like, what is the end goal? If human intuition can be simulated by an AI (and as I understand it, that's what DeepMind does), with a higher success rate than humans, where does that leave us?
 
Is this thread about Allen Iverson, Robots or a robot Allen Iverson?

All three are cool with me, whatever the case though
 
I'm 100% excited about AI in all its forms, and have long held that we should err on the side of caution when it comes to recognizing machine intelligence, once that becomes an issue.

I will not be on the wrong side of history when it comes to giving rights to AI, should it come to that.
 
businesses should be taxed for a.i. taking on a certain amount of work / replacing humans.

also, as awesome as A.I. is, I feel like it is equally disruptive. it's dumbing down humans. it's making them lazier. it's also confusing and complicating things unnecessarily. the more advanced it becomes, the less understanding and control the general populace will have over it vs. the private entities that control the a.i., in turn controlling the individual more than ever.

it's really feeling like companies are pulling a giant sheet over people's eyes, distracting them into thinking they need to relinquish these burdens, just so they can zone out and fill their minds with media and advertisements. it's straight up Wall-E shit.
 