
Google released another AI banger

ResurrectedContrarian

Suffers with mild autism
This just serves to show that AI still isn't intelligence.
It's just probabilistic, using large databases.
In reality, those models don't know the true meaning of what they are saying, just the likelihood of the next word.
That's not the problem

""AI"" itself is not the problem, the problem is taking the entirety of the internet as training data. 90% of the publicly written stuff is just fucking garbage. So you end up with shit like this.

Also not the problem.

To be clear, many of you are confusing the model's training data (the massive corpus the model was trained on and internally represents) with the webpages that are being fed to it live from your search in order to answer the question -- the latter is RAG (retrieval-augmented generation).

I'm not guessing that they're using RAG; it's obvious from the citations and the concept of this search. And I can confirm that this is their stack approach in two seconds by checking their latest presentations on RAG systems.

Google's hilarious mistake is believing that RAG alone will work well for this use case, when it's an extremely brittle thing to do with uncontrolled data like web searches. If the top hits for a topic include stupidities or parody (example: the "how many rocks should I eat" search brings up The Onion's parody article, about one rock a day, in the top 10 results), the RAG approach ends up just pumping that text into the model as possible citations and letting the model say "okay, with what you gave me, here's what the sources say."
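Roughly, the failure mode looks like this. Below is a minimal sketch of that naive RAG flow, assuming a generic search backend and a chat-completion call; `web_search` and `llm_complete` are hypothetical stubs, not Google's actual stack:

```python
# Minimal sketch of the naive RAG flow described above.
# The search and model calls are stand-in stubs, not Google's internals.

from typing import Dict, List

def web_search(query: str, top_k: int = 10) -> List[Dict[str, str]]:
    """Stand-in for a live search index: returns the top hits verbatim,
    parody and shitposts included, because nothing checks the source type."""
    return [{
        "title": "Geologists Recommend Eating At Least One Small Rock Per Day",
        "snippet": "Doctors say a small rock a day provides essential minerals...",
    }]

def llm_complete(prompt: str) -> str:
    """Stand-in for a chat-completion call (e.g. Gemini or GPT)."""
    return "According to source [1], you should eat about one small rock per day."

def answer_with_rag(question: str) -> str:
    hits = web_search(question)
    # The retrieved snippets are pasted straight into the prompt as "sources";
    # the model is told to answer from them rather than from its own knowledge.
    context = "\n\n".join(
        f"[{i + 1}] {h['title']}\n{h['snippet']}" for i, h in enumerate(hits)
    )
    prompt = ("Answer the question using ONLY the sources below, citing them.\n\n"
              f"Sources:\n{context}\n\nQuestion: {question}")
    # The model dutifully summarizes whatever it was handed, so if The Onion is
    # in the top hits, "one rock a day" comes out as a cited answer.
    return llm_complete(prompt)

print(answer_with_rag("how many rocks should I eat"))
```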

If you ask these idiotic questions to a model directly -- e.g. using ChatGPT -- without RAG, you'll get excellent answers drawn from the model's internal knowledge built up from its massive training.

But since Google thinks "our greatest asset is that we have this massive search index, so we can pump live results into the model for it to cite," they built a stack that consumes the search results live -- and this is an idiotic mistake.

But this is the point. AI is just a machine. It's not thinking, it has no soul, it can't create anything new, it's basically a glorified search tool, but as we are seeing, even the preeminent search company on earth can't make an AI that is better than their search tool (which is also way worse than it used to be).
Actually, that last part is the problem: their search itself is extremely poor, and this AI model is being fed the top search results as sources to cite. The AI itself is doing its job of "okay, you gave me these 20 sources to cite, here's what they say," but Google is building this interface on top of its broken search.
 

ResurrectedContrarian

Suffers with mild autism
By the way, if you think these LLMs can't tell parody or comedic content from genuine information, that's nonsense. Google simply didn't set it up to do so, and instead fed the information into the model as context in typical RAG fashion.

If you send that Onion article about eating rocks to ChatGPT -- without mentioning the source or anything identifying The Onion -- it will correctly recognize the nature of the parody: https://chatgpt.com/share/56abb1a7-05bf-4c96-b2f0-8a814b04ec86

Or, if you suspect it simply recognizes that older article, I fed it a very recent Onion article that certainly isn't in its training data. It immediately recognized not only the satire but also the targets and intentions of the satire: https://chatgpt.com/share/d335b6fb-2240-4a89-a608-2f0beeaf0a9b

These models are actually very good at recognizing parody, shitposting, and everything in between -- but Google isn't using them that way.
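If you want to run the same test yourself, something like the sketch below is roughly equivalent to those shared chats, using the OpenAI Python client; the model name and the pasted article text are placeholder assumptions, not part of the original experiment:

```python
# Rough sketch: paste an article's text to a chat model with no hint about
# where it came from and ask what kind of text it is.
# Requires OPENAI_API_KEY in the environment; model name is an assumption.

from openai import OpenAI

client = OpenAI()

article_text = """Geologists Recommend Eating At Least One Small Rock Per Day
...full article body pasted here, with nothing identifying the source..."""

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "What kind of text is this -- straight news, satire, or "
                   "something else? Explain briefly.\n\n" + article_text,
    }],
)
print(resp.choices[0].message.content)
```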
 

Hudo

Member
I'm not guessing that they're using RAG; it's obvious from the citations and the concept of this search. And I can confirm that this is their stack approach in two seconds by checking their latest presentations on RAG systems.
Thanks for clarifying this. I haven't used Google's Gemini, so I thought that what they were doing was just continuous finetuning, which is prone to catastrophic forgetting (although there are measures that can be taken to mitigate this). But of course, RAG is the cheaper and easier way to bolt on newer data, and it makes sense that they couple this with their search. Well, then it's not surprising that you get results like that.
 

ResurrectedContrarian

Suffers with mild autism
Perplexity (a competitor that does RAG over search results) does a much better job of understanding the reliability of the sources. Google engineers are just inept.
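For illustration only (this is not Perplexity's actual ranking method), even a crude source-reliability filter can gate what reaches the model in a RAG pipeline, something like:

```python
# Illustrative sketch: drop known-satire domains and prefer higher-trust hits
# before any snippet is pasted into the model's prompt. The domain list and
# trust scores are made-up assumptions for the example.

KNOWN_SATIRE = {"theonion.com", "babylonbee.com", "clickhole.com"}

def filter_hits(hits: list[dict]) -> list[dict]:
    """Remove hits from known satire domains and sort the rest by trust score."""
    kept = [h for h in hits if h["domain"] not in KNOWN_SATIRE]
    return sorted(kept, key=lambda h: h.get("trust_score", 0.0), reverse=True)

hits = [
    {"domain": "theonion.com", "snippet": "Eat one small rock per day.", "trust_score": 0.1},
    {"domain": "health.harvard.edu", "snippet": "Rocks are not food.", "trust_score": 0.9},
]
print(filter_hits(hits))  # only the non-satire snippet survives to be cited
```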

 

Panajev2001a

GAF's Pleasant Genius
Google got fat and lazy. It happens to a lot of companies. We all know how screwed up the company culture has been since the Damore memo. Their CEO also isn't very impressive at all, in any capacity.
I suspect they are not lazy and their engineers work hard, but they are poorly directed, with tons of processes that create their own work.
 

Panajev2001a

GAF's Pleasant Genius
Why is Google suddenly so far behind OpenAI? Nothing seems to work for them.
It is likely that Google is suffering both from the innovator's dilemma (they only seem able to invest in things that protect their core business) and from a lack of vision, plus internal bureaucracy that makes very large-scale, cross-cutting projects super slow and hard to execute.

Also, their hiring and promotion policy is likely hurting them… people are incentivised to launch new products, get promoted, and move on, and those products often die out. Google is not a product company.

Having product managers saying "we must catch up, we must catch up"… does not just magically handwave problems away. I hope the billions they spent designing their own AI accelerators (TPUs) pay off… Google is in a weird spot.
 

Complistic

Member
First page of this thread had me rolling 😂

I'd love to see a GAF "AI" that just had the hottest takes on whatever gaming related questions you asked it.
 

Hugare

Member
People posting here as if Google's AI should serve as an example of how dumb AIs are nowadays is causing me pain.

Google's AI is like OpenAI's retarded cousin
 
""AI"" itself is not the problem, the problem is taking the entirety of the internet as training data. 90% of the publicly written stuff is just fucking garbage. So you end up with shit like this.

The only reasonable way to combat this is to develop "common sense" reasoning skills for LLMs, which is already an active branch of research. But it'll take a while.
Yet you could take a well-educated person and, given the time to read the entire internet, they wouldn't be fooled even if they spent decades in troll forums.
But what if everyone and every math/science site trolled the net and said 1 + 1 = 3? Would AI be smart enough to know that's wrong, or just go with it, since that's what the data scrape says?
It is said that science advances one tombstone at a time: a new scientist has ideas saying the herd is wrong, the herd tells him he is wrong and ignores his corrections, but eventually they die out and a new generation of scientists embraces the truth.

True intelligence is able to discern the truth even in a sea of lies. Even if the establishment is wrong, and the masses are wrong, intelligence is able to see the light of truth.
I'm amazed The Simpsons hasn't been cancelled for mocking special needs kids.
 

j0hnnix

Member
Fuck, it's only one small rock! I've been doing it wrong all these years... Finally, AI has helped me through this horrible mistake I've made.
 

Bitmap Frogs

Mr. Community
ChatGPT is really impressive. I asked for the recipe for stone soup and it recognized that it's from a folk tale, gave a generic soup recipe vaguely inspired by the tale, and recommended cleaning and boiling a stone to throw into the pot to illustrate the tale, then pointed out that you should take it out of the pot before eating.

Gemini just spits out a generic soup recipe.
 

daffyduck

Member
They seem to have fixed this, for now.
I wonder how many employees were thrown under the bus for being “responsible”.
 

TheInfamousKira

Reseterror Resettler
Okay, so I was googling places to buy rope.

Google immediately, within the first five results, gave me "The best way to kill yourself with a rope," and I scoffed. The end.
 