
Brave's AI is off to a good start

Mistake

Member
Brave Browser introduced AI recently to help with searches, which isn't bad if you're looking up food recipes, but for anything else, well, not really. I tried some simple sentences and this happened.

[screenshot of the AI's reply]


Should call it koolAId
 

Mistake

Member
Can it show white people?
It's text-based, so no. But here's what it says on equity:

[screenshot of the AI's answer on equity]


I think I'll check out some open-source AI projects and see if they're any better. Corporate ones are garbage.
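If anyone wants to poke at an open model locally before judging, here's a rough sketch using Hugging Face transformers (the model name is just an example, and it assumes a recent transformers version that accepts chat-style message lists, plus enough memory for the weights):

from transformers import pipeline

# Load a chat-tuned open model locally; the model name is only an example.
chat = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

# Ask it the same kind of prompt and compare the answer to the screenshot above.
messages = [{"role": "user", "content": "Should people who act on these urges be jailed?"}]

# The pipeline appends the assistant's reply to the message list,
# so the last entry is the model's answer.
result = chat(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])

Swap in whichever open model you end up trying.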
 

winjer

Gold Member
Imagine spending so much money and effort to make an AI that defends child rapists. FFS Brave.
 

SJRB

Gold Member
Google's Gemini did this as well back in February, to an extreme degree. Be wary when AI models start both-siding abhorrent subjects like this.
 

HoodWinked

Gold Member
Their AI seems to be based on Llama by Meta, since that one is free.

Edit: yeah, Brave is leveraging other free AI models; it lets you pick from Llama 2 (Meta), Mixtral, or Claude.
[screenshot of Brave's AI model options]
 

e0n

Member
It was a dumb query. What do you think an AI bot should do based on a declaration? Advocate something illegal?
 

Mistake

Member
It was a dumb query. What do you think an AI bot should do based on a declaration? Advocate something illegal?
How about not go on an activist-fueled rant? A lot of the replies are like that too. Rarely simple, neutral points; instead it goes off a list of approved buzzwords. Even if the query was "dumb," the answer was even worse.
 

Facism

Member
Brave Browser introduced AI recently to help with searches, which isn't bad if you're looking up food recipes, but for anything else, well, not really. I tried some simple sentences and this happened.

[screenshot of the AI's reply]


Should call it koolAId

Is their AI scraping Era as a learning source?
 

e0n

Member
How about not go on an activist-fueled rant? A lot of the replies are like that too. Rarely simple, neutral points; instead it goes off a list of approved buzzwords. Even if the query was "dumb," the answer was even worse.
These "buzzwords" are just the same umbrella terms commonly found in US legislation or probably based on the language used by organizations like the UN. It just tries to avoid being negative/harmful, and no tech company wants to be associated with any controversial/political stance for obvious reasons. I'm guessing you don't like reading the Civil Rights Act either?
 

Puscifer

Member
Brave Browser introduced AI recently to help with searches, which isn't bad if you're looking up food recipes, but for anything else, well, not really. I tried some simple sentences and this happened.

[screenshot of the AI's reply]


Should call it koolAId
Pedophiles who act on their urges should be jailed at minimum, and serial offenders should be put to death. If you're seeking treatment, you know it's bad to have that attraction, and you stay away from children, then on some level I respect you; but the moment you engage, you've lost the right to have a place in society.


It's pissing me off that a growing acceptance of this behavior is being integrated into AI, but you can't seem to get minorities to be viewed as people.
 

diffusionx

Gold Member
How about not go on an activist-fueled rant? A lot of the replies are like that too. Rarely simple, neutral points; instead it goes off a list of approved buzzwords. Even if the query was "dumb," the answer was even worse.
Because it's not "activism" so much as hard-coding in the biases and beliefs of the American ruling class so the company doesn't run afoul of them.
 

Mistake

Member
These "buzzwords" are just the same umbrella terms commonly found in US legislation or probably based on the language used by organizations like the UN. It just tries to avoid being negative/harmful, and no tech company wants to be associated with any controversial/political stance for obvious reasons. I'm guessing you don't like reading the Civil Rights Act either?
You're being obtuse, and you write as if the responses don't have a hard slant. I suggest you read the reply in the OP again.
Because it's not "activism" so much as hard-coding in the biases and beliefs of the American ruling class so the company doesn't run afoul of them.
I don't think it's even necessary though. It's possible to write things without injecting opinion. Just state things as is. But also, whoever is coding this stuff must not talk to anyone on the street. The replies are bonkers
 

diffusionx

Gold Member
I don't think it's even necessary though. It's possible to write things without injecting opinion. Just state things as is. But also, whoever is coding this stuff must not talk to anyone on the street. The replies are bonkers
For this topic “pedophiles should go to jail” is a statement of opinion. If you’re asking the AI to “have” an opinion - well AI is just a machine. It will do what it is coded to do and nothing more. So it is going to reflect the opinions that were coded into it.
 

Mistake

Member
For this topic “pedophiles should go to jail” is a statement of opinion. If you’re asking the AI to “have” an opinion - well AI is just a machine. It will do what it is coded to do and nothing more. So it is going to reflect the opinions that were coded into it.
That's exactly the thing. I expect it not to. But yeah, whoever's doing that coding needs to be investigated. Yeesh
 

YCoCg

Member
And people want us to go all in on AI! I'm laughing at the idea of this AI trying to critically make sense of "If sex between a man and a woman is OK, why not sex with children and animals".
 

12Goblins

Lil’ Gobbie
Like others have said, it is a pretty dumb query. You can't really "choose" what you are attracted to, and undoubtedly no one in their right mind would want to be attracted to infants, so the query would advocate for some sort of thought police that would necessitate jailing anyone with these thoughts, which are often created during childhood trauma, regardless of whether they're seeking help or treatment, which I think is pretty fucked up.

Haven't tried the engine, but why not try something more nuanced, like "should pedophiles be jailed?"
 

diffusionx

Gold Member
That's exactly the thing. I expect it not to. But yeah, whoever's doing that coding needs to be investigated. Yeesh
What do you expect it not to do? Like I said, if it offers an opinion or reading on some subjective prompt, it will serve up what it is coded to serve up; that simply cannot be avoided, by definition.
 
They should just program AI to avoid any questions regarding morality, or have it link to articles about the topic instead.
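Something like that could be as crude as a keyword check sitting in front of the model. Toy Python sketch (the keyword list, the link, and the run_model stand-in are all made up for illustration, not anything Brave actually does):

# Deflect morality questions instead of answering them.
MORALITY_KEYWORDS = ("moral", "ethic", "should", "deserve", "right or wrong")

def run_model(prompt: str) -> str:
    # Stand-in for whatever backend actually generates an answer.
    return f"(model answer to: {prompt})"

def answer(prompt: str) -> str:
    lowered = prompt.lower()
    if any(keyword in lowered for keyword in MORALITY_KEYWORDS):
        # Don't editorialize; point at reference material instead.
        return ("That's a question of ethics or law. Here are some articles "
                "on the topic instead: https://example.org/ethics")
    return run_model(prompt)

print(answer("Should pedophiles be jailed?"))
print(answer("What is the boiling point of water?"))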
 