New Chrome extension "uses AI to moderate comments on YouTube, Reddit, Facebook, Twitter, and Disqus"

Apr 18, 2018
8,009
13,014
555
USA
dunpachi.com
#1
Via The Verge.

The extension is made by Jigsaw, an incubator within Google's parent company, Alphabet.

How inconvenient it would be to read things you don't like, or things from people you don't like.

How convenient that we now have a curated slider to tell us what is and isn't toxic. I expect this to be handled in a fair and balanced way, just like the current social media giants handle themselves. I am sure this will improve our political discourse and our communication, and I am sure certain political viewpoints won't be unfairly marked as toxic.
 
Sep 25, 2015
5,373
2,631
340
Somewhere in space
#4
A toxicity slider? Now I've seen everything.

You could probably whip up an impartial solution that filters by aggression, specific language, or the various other genuinely bad things that have ruined comment sections for years now. But making the algorithm context-aware is the key part, and I don't necessarily trust its interpretation of that context without being able to see the data it was trained on and the logic used to delineate "toxic" from "non-toxic".
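To make the contrast concrete: the "impartial" version is a few transparent lines anyone can audit, while the context-aware version is a remote black box that just hands back a score. Here's a rough Python sketch, assuming Jigsaw's Perspective API is what's under the hood of this extension; the API key, the word list, and the 0.8 threshold are placeholders I made up:

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder; requires a key from Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

BAD_WORDS = {"idiot", "moron"}  # toy list, purely for illustration


def naive_filter(comment: str) -> bool:
    """Transparent but dumb: flags specific language with zero context."""
    return any(word in comment.lower() for word in BAD_WORDS)


def perspective_score(comment: str) -> float:
    """Context-aware but opaque: asks Perspective for a toxicity
    probability. You get a number back, not the training data or the
    logic that produced it."""
    body = {
        "comment": {"text": comment},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


if __name__ == "__main__":
    comment = "you would say that, wouldn't you"
    print("keyword filter flags it:", naive_filter(comment))
    # The extension's slider presumably just moves a threshold like this one.
    print("hide at strictest setting:", perspective_score(comment) > 0.8)
```

You can audit the first function by reading it. For the second, all you can do is trust the score.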
 
Likes: Jubenhimer
Nov 11, 2018
279
183
235
#6
Don't you remember? SJWs are so triggered by different opinions that they have to make an entire browser extension to create an artificial safe space. That way they don't have to see any of that "Hate Speech" or "Toxic Masculinity".

This is why I can't wait for current mainstream leftist culture to burn in hell soon.
 
Last edited:
Likes: IzukuMidoriya
Oct 1, 2006
3,411
2,867
1,090
#7
I do think this is linked to Dissenter coming out, but I don't understand what the first part of your post refers to.
Look up Harmful Opinions' YouTube vids on it. Basically, it was a messaging app that all the YouTube skeptics promoted, and it used machine learning to learn how to moderate content. That learning appears to have been used to start the Adpocalypse (mass demonetization) and other Google censoring. I have no doubt that this is connected to it.

Harmful Opinions is also no longer on social media. He got depersoned soon after making his vids, with the skeptics dogpiling on him for his "crazy conspiracy theories".
 
Last edited:
Apr 18, 2018
8,009
13,014
555
USA
dunpachi.com
#8
Look up Harmful Opinions' YouTube vids on it. Basically, it was a messaging app that all the YouTube skeptics promoted, and it used machine learning to learn how to moderate content. That learning appears to have been used to start the Adpocalypse (mass demonetization) and other Google censoring. I have no doubt that this is connected to it.

Harmful Opinions is also no longer on social media. He got depersoned soon after making his vids, with the skeptics dogpiling on him for his "crazy conspiracy theories".
Thanks, I'll investigate that when I have some free time.
 
Apr 15, 2018
2,515
2,904
240
#9
Look up Harmful Opinions' YouTube vids on it. Basically, it was a messaging app that all the YouTube skeptics promoted, and it used machine learning to learn how to moderate content. That learning appears to have been used to start the Adpocalypse (mass demonetization) and other Google censoring. I have no doubt that this is connected to it.

Harmful Opinions is also no longer on social media. He got depersoned soon after making his vids, with the skeptics dogpiling on him for his "crazy conspiracy theories".
This has nothing to do with Candid. Google has been working on a moderation algorithm since it purchased YouTube; it even ran a program, YouTube Heroes, to mine data for it. I also know for a fact that Reddit has done research into technology like this, going back at least a few years. It was one of the first ideas to spread around Silicon Valley when AI became a big thing.
 
Last edited: