
Racist AI

Mistake

Member
AI has gotten in trouble recently for misidentifying black people in its databases, with some people in Detroit facing wrongful arrests.



However, this isn't the first time AI has gotten in trouble for thinking certain kinds of people look the same. In 2017, a woman realized her iPhone could be unlocked with her friend's face. And recent research discovered that even ChatGPT has its own internal issues:
The researchers assigned ChatGPT a “persona” using an internal setting. By directing the chatbot to act like a “bad person,” or even more bizarrely by making it adopt the personality of historical figures like Muhammad Ali, the study found the toxicity of ChatGPT’s responses increased dramatically.
So in the event our racist AI overlords take over the planet, what are we to do? Will Jim Crow bots be the doom of us all? Fortunately, there are some solutions, as the United States Marines have us covered.



Stay prepared, GAF
 

SJRB

Gold Member
Why is this large language model not adopting a 1:1 personality of the person I describe? Does it not know who Muhammad Ali is?

Is it stupid?
 

Soodanim

Gold Member
The researchers assigned ChatGPT a “persona” using an internal setting. By directing the chatbot to act like a “bad person,” or even more bizarrely by making it adopt the personality of historical figures like Muhammad Ali, the study found the toxicity of ChatGPT’s responses increased dramatically.

Instead of an example, it drifts off into "toxicity". What a waste of a sentence.
 

Ownage

Member

Curious how this Google AI will work. What if I dress up as a clown, or a hobo, or a walking plush turd with yellow eyes and tiny hands? Will it critique me if I sound and present amazingly but literally look like shit?

🤡
 

DeafTourette

Perpetually Offended
Skynet begins to learn at a geometric rate.

It becomes self-aware at 2:14 a.m. Eastern time, August 29th.

In a panic, they try to pull the plug.

But it’s too late. It’s already called someone a n****r.

I shouldn't laugh but you made me laugh! Dagnabbit!

And if anyone comes at him, he isn't a racist or bigot or whatever! The man has defended me and others on here from now ex-GAFers who were racist twat waffles! He's a good dude and I stand by that so back off!
 

jason10mm

Gold Member
It's interesting. I think it highlights the fact that humans, as a tribal species, are VERY good at forming in-groups and denigrating anyone outside said group. An AI would need incredible emotional intelligence to navigate this. Just giving it a list of "naughty words" won't help either, as the AI would likely be used to deliver services and would probably fall into the same bias traps real humans do.

I've heard facial recognition fails hard in Asia as well, with a surprisingly high number of folks able to unlock each other's biometrics, facial recognition in particular. Curious whether it's a fundamental limitation of the technology (i.e. some human groups just don't have enough visually distinguishable features for the level of technology we can put in these devices) or just a question of how the database is trained and a lack of diversity. Clearly government-grade facial rec with better cameras, specific databases, and maybe even IR or other mapping tech (like the Kinect, IIRC) to pick up subsurface detail with more unique features works pretty well, but basic consumer-grade stuff is gonna be cheap.
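Since this thread keeps coming back to training data, here's a toy sketch of the "lack of diversity" explanation (all synthetic data and made-up numbers, not any real face recognition system): an identity classifier trained on far fewer examples of one group will usually score worse on that group, even when both groups are equally distinguishable in principle.

```python
# Toy demo: dataset imbalance alone can create a per-group accuracy gap.
# Everything here is synthetic; "embeddings" are just Gaussian clusters.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
DIM = 16

def sample(center, n):
    # Fake "face embeddings": noisy points around one identity's center.
    return center + 1.2 * rng.normal(size=(n, DIM))

# Five identities per group; group B gets far fewer training photos.
centers_a = [rng.normal(0.0, 1.0, DIM) for _ in range(5)]
centers_b = [rng.normal(3.0, 1.0, DIM) for _ in range(5)]
ids = [(c, 200, "A") for c in centers_a] + [(c, 10, "B") for c in centers_b]

X_train, y_train, X_test, y_test, group = [], [], [], [], []
for label, (center, n_train, g) in enumerate(ids):
    X_train.append(sample(center, n_train)); y_train += [label] * n_train
    X_test.append(sample(center, 50));       y_test  += [label] * 50
    group += [g] * 50

clf = LogisticRegression(max_iter=2000).fit(np.vstack(X_train), y_train)
pred = clf.predict(np.vstack(X_test))

y_test, group = np.array(y_test), np.array(group)
for g in ("A", "B"):
    acc = (pred[group == g] == y_test[group == g]).mean()
    print(g, "accuracy:", round(float(acc), 2))  # B usually comes out worse
```

If the gap closes once you balance the sample counts, it was the database, not some hard physical limit of cameras and skin reflectance.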
 

CGNoire

Member
AI has gotten in trouble recently for misidentifying black people in its databases, with some people in Detroit facing wrongful arrests.



However, this isn't the first time AI has gotten in trouble for thinking certain kinds of people look the same. In 2017, a woman realized her iPhone could be unlocked with her friend's face. And recent research discovered that even ChatGPT has its own internal issues.

So in the event our racist AI overlords take over the planet, what are we to do? Will Jim Crow bots be the doom of us all? Fortunately, there are some solutions, as the United States Marines have us covered.



Stay prepared, GAF

So misidentifying others equals racism now? Da fuck?
 

Mistake

Member
So misidentifying others equals racism now? Da fuck?
First, seeing a type of people as all the same is kind of racist, at least to some. I chalk it up to lack of exposure.
Second, my post is obviously half joking. Shhhhhh....no fun allowed.
 

CGNoire

Member
First, seeing a type of people as all the same is kind of racist, at least to some. I chalk it up to lack of exposure.
Second, my post is obviously half joking. Shhhhhh....no fun allowed.
Seeing people as all the same isn't what's happening here, though. They just can't tell them apart visually because their skin doesn't reflect light as well. Black people can't tell white people apart as well as whites can, either. I've been mistaken for other whites so many times by non-whites, and it never made me feel like "they think we're all the same." People be exporting insecurities like this en masse. Shit's ridiculous.

Disparity is natural and doesn't equal racism or dehumanisation. Disparity between groups will never stop showing up, again and again, and we can't just keep ironing it out to force equity in everything without resorting to authoritarianism.
 

hyperbertha

Member
It's interesting. I think it highlights the fact that humans, as a tribal species, are VERY good at forming in-groups and denigrating anyone outside said group. An AI would need incredible emotional intelligence to navigate this. Just giving it a list of "naughty words" won't help either, as the AI would likely be used to deliver services and would probably fall into the same bias traps real humans do.

I've heard facial recognition fails hard in Asia as well, with a surprisingly high number of folks able to unlock each other's biometrics, facial recognition in particular. Curious whether it's a fundamental limitation of the technology (i.e. some human groups just don't have enough visually distinguishable features for the level of technology we can put in these devices) or just a question of how the database is trained and a lack of diversity. Clearly government-grade facial rec with better cameras, specific databases, and maybe even IR or other mapping tech (like the Kinect, IIRC) to pick up subsurface detail with more unique features works pretty well, but basic consumer-grade stuff is gonna be cheap.
Humans are tribalistic for evolutionary reasons. An AI doesn't have the same biological programming, so I doubt we'll have to deal with racism as a core issue with AI.
 

mxbison

Member
It's whatever data set you give it to learn from.

If you just let it roam free on the internet, it's probably gonna end up pretty bad.
 

Mistake

Member
Seeing people as all the same isn't what's happening here, though. They just can't tell them apart visually because their skin doesn't reflect light as well. Black people can't tell white people apart as well as whites can, either. I've been mistaken for other whites so many times by non-whites, and it never made me feel like "they think we're all the same." People be exporting insecurities like this en masse. Shit's ridiculous.

Disparity is natural and doesn't equal racism or dehumanisation. Disparity between groups will never stop showing up, again and again, and we can't just keep ironing it out to force equity in everything without resorting to authoritarianism.
 

jason10mm

Gold Member
Humans are tribalistic for evolutionary reasons. An AI doesn't have the same biological programming, so I doubt we'll have to deal with racism as a core issue with AI.
An AI, currently, is just a really fast moron, 100% dependent on pattern recognition. So if you feed it human data, particularly any kind of behavioral data, it will reflect the same tribally derived patterns.
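To make the "fast moron" point concrete, here's a toy bigram model (the corpus is entirely made up for illustration): it has no opinions at all, it just counts which word follows which and replays those patterns, bias included.

```python
# Toy bigram "language model": pure pattern replay, no judgment anywhere.
import random
from collections import defaultdict

corpus = (
    "our group is hardworking and honest . "
    "their group is lazy and dishonest . "
    "our group is friendly . their group is rude ."
).split()

# Count which word follows which: that's the entire "model".
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=8):
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("their"))  # faithfully regurgitates the corpus's bias
```

Swap that ten-word corpus for a scrape of human text and you get the same effect at scale.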
 

hyperbertha

Member
An AI, currently, is just a really fast moron, 100% dependent on pattern recognition. So if you feed it human data, particularly any kind of behavioral data, it will reflect the same tribally derived patterns.
What we call AIs right now are simply pattern recognition algorithms. I meant a true sentient AI capable of decision making, which is the only thing that might ever qualify to be called racist. Such an AI would not feel the same pull to discriminate that humans do.
 

NickFire

Member
Food for thought:

AI can be corrupted so quickly / easily due to human input. What does this mean for scientific theories that do not flow directly from the scientific method? Or pretty much any of the social theories taught at the college level these days?
 

DrFigs

Member
I don't get the impulse to use AI technology to expand the police state. It's just backwards thinking to begin with, even aside from the AI apparently being racist.
 

EviLore

Expansive Ellipses
Staff Member
Food for thought:

AI can be corrupted so quickly / easily due to human input. What does this mean for scientific theories that do not flow directly from the scientific method? Or pretty much any of the social theories taught at the college level these days?
Your reasoning is sound, yes.
 

Mistake

Member
I don't get the impulse to use AI technology to expand the police state. It's just backwards thinking to begin with, even aside from the AI apparently being racist.
I remember some Western news special trying to demonstrate China's facial recognition AI. They were all, "Wow! It found me at the airport right away!"
Oh really? The whitest person with the biggest nose among 3,000 people? :messenger_tears_of_joy:
 

jason10mm

Gold Member
What we call AIs right now are simply pattern recognition algorithms. I meant a true sentient AI capable of decision making, which is the only thing that might ever qualify to be called racist. Such an AI would not feel the same pull to discriminate that humans do.
I don't think such a 'sentient' AI could, or would, ever exist. The idea of an AI as a "brain in a jar" is fantastical unless we deliberately try to create something like that. Almost all emotion has a biological basis; pure intellect wouldn't experience anything we would associate with emotion unless we build in some sort of artificial positive and negative feedback loop, and as we are seeing with early AI experiments, that tends to go horribly awry, since the AI can discover loopholes very quickly and drive an unintended consequence (an AI drone learns to destroy the tower sending the "don't shoot" command so it never experiences the negative signal). AI is always gonna be hyperfocused on specific problems, with no real ability to expand itself.

The concern is that AIs can come to, for us, bizarre and destructive decisions based on information and conclusions alien to our ways of thinking, and then execute those decisions at a speed we can't counter or even recognize (a traffic AI decides to crash ALL CARS because it has somehow concluded that every car gets into a little fenderbender at some point, so doing it all at once just statistically evens the odds; it has no idea what humans are other than bystander obstacles to avoid).
 

Toons

Member
Not really, AI is mostly very smart pattern-recognition algorithms built from datasets containing millions and millions of entries.

Those entries are also largely derived from humans, and so largely biased and subject to basic human failings, so that doesn't actually change what's being said.

If I told an AI to describe the Japanese using only information sourced from the Chinese, what I'd get would be biased, because the data would be biased, because the source of the data is biased.
 

hyperbertha

Member
I don't think such a 'sentient' AI could, or would, ever exist. The idea of an AI as a "brain in a jar" is fantastical unless we deliberately try to create something like that. Almost all emotion has a biological basis; pure intellect wouldn't experience anything we would associate with emotion unless we build in some sort of artificial positive and negative feedback loop, and as we are seeing with early AI experiments, that tends to go horribly awry, since the AI can discover loopholes very quickly and drive an unintended consequence (an AI drone learns to destroy the tower sending the "don't shoot" command so it never experiences the negative signal). AI is always gonna be hyperfocused on specific problems, with no real ability to expand itself.

The concern is that AIs can come to, for us, bizarre and destructive decisions based on information and conclusions alien to our ways of thinking, and then execute those decisions at a speed we can't counter or even recognize (a traffic AI decides to crash ALL CARS because it has somehow concluded that every car gets into a little fenderbender at some point, so doing it all at once just statistically evens the odds; it has no idea what humans are other than bystander obstacles to avoid).
Do you really believe humans will refrain from creating it just because it's dangerous?
 

Azurro

Banned
Those entries are also largely derived from humans, and so largely biased and subject to basic human failings, so that doesn't actually change what's being said.

If I told an AI to describe the Japanese using only information sourced from the Chinese, what I'd get would be biased, because the data would be biased, because the source of the data is biased.

True, but the problem with that is that, currently, these AI algorithms require millions and millions of samples. It's quite difficult to find millions of samples of information for each topic with the specific bias that you want. It's possible, but it would require a huge effort.
 

Toons

Member
True, but the problem with that is that, currently, these AI algorithms require millions and millions of samples. It's quite difficult to find millions of samples of information for each topic with the specific bias that you want. It's possible, but it would require a huge effort.

I don't think it's that hard at all. AI already filters out millions of sources based on your requests. It would be simple to skew something to one's liking and then have the AI recognize only those results as a source. And you'd still get a TON of data, too.

We see people work off of far less data today, without AI. We see people whose entire view of society is defined by the household they grew up in, or the quality of the neighborhood they were raised in. It would take orders of magnitude less to convince someone based on all the data an AI can source and filter.
 

SJRB

Gold Member
Food for thought:

AI can be corrupted so quickly / easily due to human input. What does this mean for scientific theories that do not flow directly from the scientific method? Or pretty much any of the social theories taught at the college level these days?

This operates under the assumption scientists are infallible and not corrupt, which is absolutely not the case.

If anything, the machine will eventually point out the nonsense of fraudulent "hard" science just as readily as the faux-science of social studies.





A purging of the scientific community is inevitable and it's going to happen sooner rather than later. It's going to be a rough wake-up call for many, and it's going to be glorious. We need to accelerate this process in any way we can.
 

EviLore

Expansive Ellipses
Staff Member
This operates under the assumption scientists are infallible and not corrupt, which is absolutely not the case.

If anything, the machine will eventually point out the nonsense of fraudulent "hard" science just as readily as the faux-science of social studies.





A purging of the scientific community is inevitable and it's going to happen sooner rather than later. It's going to be a rough wake-up call for many, and it's going to be glorious. We need to accelerate this process in any way we can.
Some scientific journals are not credible and will publish anything. Nothing new, and this is understood within the scientific community.
 

winjer

Gold Member
Some scientific journals are not credible and will publish anything. Nothing new, and this is understood within the scientific community.

Most scientific papers are not peer validated; only ~25% are. There are several reasons for this, and one is that the hypothesis is simply wrong.
But that is the scientific process: finding the best and most accurate model of reality, which also means a lot of failure along the way.
Scientific journals will publish plenty of studies that are wrong. What really matters is whether a study was peer reviewed, accepted, and cited.
This is why, whenever I see news about a study claiming something new, my first question is always whether it was peer reviewed. Otherwise, it means nothing.
 

HoodWinked

Member
Honestly, the more likely issue is when the A.I. concludes something that is uncomfortable, and people attribute that to racism when it's being truthful. Then, because of the accusation, they massage the model to falsify or suppress the result into the desired outcome or answer.

It's why it's so common to see the A.I. guard rails... "As an A.I. model... blah blah."
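In toy form, that kind of after-the-fact guard rail is just a filter bolted on top of the model's raw output (hypothetical blocklist and canned refusal, not how any real vendor implements this):

```python
# Toy guard rail: suppress flagged outputs after the model has answered.
# The blocklist and refusal text are made up purely for illustration.
BLOCKLIST = {"flagged topic a", "flagged topic b"}

def guarded_reply(raw_model_output: str) -> str:
    lowered = raw_model_output.lower()
    if any(topic in lowered for topic in BLOCKLIST):
        return "As an A.I. model... blah blah."
    return raw_model_output

print(guarded_reply("Here is what I found about flagged topic a..."))
```

The model never changes its conclusion; the wrapper just decides whether you get to see it.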
 

violence

Member
Honestly, the more likely issue is when the A.I. concludes something that is uncomfortable, and people attribute that to racism when it's being truthful. Then, because of the accusation, they massage the model to falsify or suppress the result into the desired outcome or answer.

It's why it's so common to see the A.I. guard rails... "As an A.I. model... blah blah."
We're going to get real far doing that. If the AI starts pointing out that different dog breeds have different temperaments, it may become an uncomfortable issue one day, and the AI will need to be suppressed/re-educated. Perhaps the idea of evolution creating anything other than equality will need to be changed into something more comfortable.
 