
Companies are pulling ads from YouTube to protect their brands


RiccochetJ

Gold Member
This is what happens when you focus too much on services and partnerships to make money instead of greatly supporting the (diverse) users that made your website popular. Reminds me of Twitter's struggles.

I don't think you appreciate just how big YouTube is. A quick search says that 300 hours of video are uploaded to YouTube every minute, and they have over a billion unique visitors viewing content. Is there someone specific you want to point at as the inflection point of YouTube getting a billion visitors?
 
It's almost as if YouTube is filled with shitty malcontents who aren't and shouldn't be put on a "TV Network Personality" pedestal with no agent, no PR sensibilities, no competence in dealing with people.
 
With the zero accountability YouTubers have, and seeing Playtonic unfairly getting caught in the middle of this shit storm, I think most companies are going to stay away from partnering with popular YouTubers. They mostly seem like a PR disaster waiting to happen.
 
This is what happens when you focus too much on services and partnerships to make money instead of greatly supporting the (diverse) users that made your website popular. Reminds me of Twitter's struggles.

Yes, how dare a business that's extremely expensive to run (it lost money forever) try to make money!
 

Boney

Banned
I've been saying this for years, but these social media sites need to have a substantial amount of human resources dedicated to moderating flagged content.

I know they've been having issues monetizing, but it doesn't make a difference.
 
This sucks for YouTubers because it's only going to further encourage Google to create more obstacles that restrict channel growth. I get why advertisers wouldn't want their brand associated with hate speech. I just can't think of a good solution that doesn't hurt creators in the process.
 

haimon

Member
My company had issues similar to this. We are an ad tech firm, and our clients wanted to make sure we were not on Breitbart.

We ended up having to make sure we do not support anonymous sites across the various ad networks, losing about 30% of the sites, because we and our clients don't trust Google and the other ad partners.
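In ad-tech terms that usually boils down to an inventory filter. A minimal Python sketch, assuming a simple site record; the field names and the example domain on the allow side are invented for illustration:

# Sketch of a brand-safety inventory filter: only serve ads on
# publishers that are identified (not anonymous) and not on a
# client blocklist. Field names and the example sites are made up.
CLIENT_BLOCKLIST = {"breitbart.com"}

def can_serve(site: dict) -> bool:
    if site.get("anonymous", True):  # unidentified inventory: skip it
        return False
    return site["domain"] not in CLIENT_BLOCKLIST

print(can_serve({"domain": "example-news.com", "anonymous": False}))  # True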
 

haimon

Member
I've been saying this for years, but these social media sites need to have a substantial amount of human resources dedicated to moderating flagged content.

I know they've been having issues monetizing, but it doesn't make a difference.
Easy to say, but companies exist to make money. If they need to have thousands of people to moderate content then that will never be a viable solution due to costs.
 

Kinitari

Black Canada Mafia
There are a significant number of technical challenges to something like this. Maybe the recent advancements in AI being able to watch videos will help?... But the computing power and the nuance required to pull something like that off... Yikes.

I imagine the end goal is to have automatic content tagging of videos (that will have to remain opaque to content creators), with advertisers being able to 'opt in' to particular kinds of content for their advertising. It solves the problem and also makes advertising opportunities more appealing.

I just don't know how many years off we are from something like that, maybe 1 or 2?
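To make the opt-in idea concrete, here's a minimal Python sketch of the matching step, assuming some upstream model has already tagged each video; the tag names, advertisers, and subset rule are all invented for illustration:

# Hypothetical sketch: advertisers opt in to content categories, and
# an ad is only eligible when every (model-assigned) tag on the video
# falls inside the advertiser's allowed set. All names are made up.
VIDEO_TAGS = {
    "vid123": {"gaming", "mild_profanity"},
    "vid456": {"news", "political_commentary"},
}

ADVERTISER_OPT_INS = {
    "acme_toys": {"gaming", "family", "education"},
    "edgy_energy_drink": {"gaming", "mild_profanity", "extreme_sports"},
}

def eligible_advertisers(video_id):
    tags = VIDEO_TAGS.get(video_id, set())
    return [name for name, allowed in ADVERTISER_OPT_INS.items()
            if tags <= allowed]  # every video tag must be opted into

print(eligible_advertisers("vid123"))  # ['edgy_energy_drink']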
 
This is awesome news, but I worry about all channels being affected based on vulgarity too.

Like, only the hate-speech-promoting channels should be penalized by this, but it's probably a logistical nightmare to sort and categorize these channels.

No easy solution here. But it's dope that the advertisers took matters into their own hands.
 
First of all, since this is being driven by advertisers, there's no way this isn't going to affect vulgar content, violent content, and content that talks about sex in general.

Second, and this is why I think it's weird that everyone's applauding this: of course there are a lot of companies that don't want their ads displayed with hate speech, but I would think there are also companies that, for example, don't want their ads displayed with LGBTQ+ content either, and there's a serious question as to whether whatever restrictions YouTube puts in place would allow those companies to do that as well.
 

Bluth54

Member
There are a significant number of technical challenges to something like this. Maybe the recent advancements in AI being able to watch videos will help?... But the computing power and the nuance required to pull something like that off... Yikes.

I imagine the end goal is to have automatic content tagging of videos (that will have to remain opaque to content creators), with advertisers being able to 'opt in' to particular kinds of content for their advertising. It solves the problem and also makes advertising opportunities more appealing.

I just don't know how many years off we are from something like that, maybe 1 or 2?

I imagine if they do that people will just find ways around the bot, just like how Jim Sterling is able to get around companies monetizing his Jimquisition episodes with the copyright deadlock (deliberately including footage from several rights holders so the competing claims cancel each other out and nobody can run ads).

It will be interesting to see what YouTube does. I assume some channels will essentially always be safe for ads, like say Conan O'Brien's show. Some channels may essentially be considered provisionally safe: basically all their previous content has been fine, but that status can be pulled if the YouTuber goes off the deep end. And I wouldn't be surprised if human review is required for monetization of some videos.
 
First of all, since this is being driven by advertisers, there's no way this isn't going to affect vulgar content, violent content, and content that talks about sex in general.

Second, and this is why I think it's weird that everyone's applauding this: of course there are a lot of companies that don't want their ads displayed with hate speech, but I would think there are also companies that, for example, don't want their ads displayed with LGBTQ+ content either, and there's a serious question as to whether whatever restrictions YouTube puts in place would allow those companies to do that as well.

Yeah, as great as it sounds that advertisers are pulling ads cuz of hate speech, I think this is gonna be an awful precedent and backfire big time. Floodgates about to open to the type of stuff you're talking about.
 

digdug2k

Member
Heh. I reported something like this to friends there years ago when beer commercials were playing before my kids' videos. Like, I get it. It's my phone. You think it's me. I'm old. But when I click on Twinkle Twinkle Little Star... I think your "targeted advertising" can do better (and it doesn't even require spying on me).
 

G.ZZZ

Member
I love how companies who lobby for stealing from the poors (aka creating or maintaining tax loopholes, lowering workers' rights, etc.) then complain when society goes to shit. Complicit, all of them.
 

patapuf

Member
This is going to affect a lot more things than a few alt-right channels.

I'm not sure this news is worth cheering for yet.
 
Judging from the whining from the alt-right over being shadow banned on Facebook and Twitter, and their desperate worry that YouTube will stop making them money off their "Muslims are evil" videos, this is the right policy.

If these personalities earned no money from ads or even (shock) had to pay to broadcast their messages, I wouldn't shed a tear. If they want to complain about equal treatment, then fine: demonetise all politics on the big social platforms. Get the money out. It corrupts. A large amount of the fake news industry is actually just about ad revenue, not even partisan beliefs.
 

entremet

Member
How many people would it take to monitor 300 hours of video being uploaded every MINUTE? A technology-based solution is absolutely necessary. Whenever people say things like this, I instantly tune out, because it's obvious you have no idea what you're talking about.

Yep. Same people who believe Twitter should be moderated by people. Absolutely insane. GAF is a small forum in terms of active posters, and we have dozens of mods, a pay/work email requirement, and a waiting period.

That is not scalable for these big internet giants.

Also, people cheering this thinking it will only affect bigoted stuff are gonna be in for a rude awakening.
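For a rough sense of the scale being argued here, a back-of-the-envelope calculation in Python, using the 300-hours-per-minute figure quoted above:

# Back-of-the-envelope: staffing for fully human, real-time review.
# Assumes the oft-quoted 300 hours uploaded per minute.
upload_hours_per_minute = 300
video_minutes_per_minute = upload_hours_per_minute * 60  # 18,000

# One person can watch one minute of video per minute, so you need
# 18,000 people watching at any given moment just to keep pace.
watchers_on_shift = video_minutes_per_minute

# Cover 24/7 with three 8-hour shifts (ignoring breaks, rewatches,
# appeals, and languages): 54,000 people as a bare minimum.
print(watchers_on_shift * 3)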
 

nampad

Member
Good, don't want to support hate speech.
This is also better advertising for the companies that stopped their commercials than the commercials themselves.
 
Yep. Same people who believe Twitter should be moderated by people. Absolutely insane. GAF is a small forum in terms of active posters, and we have dozens of mods, a pay/work email requirement, and a waiting period.

That is not scalable for these big internet giants.

Also, people cheering this thinking it will only affect bigoted stuff are gonna be in for a rude awakening.
There are a ton of options for them to explore to make moderation a bit harsher and quicker and to protect users. Twitter only recently put in some much needed features to block harassment. There is no excuse for that not being done a few years earlier.

Google, Facebook, Twitter want to profit off these networks being gigantic, having hundreds of millions of users and showing them ads. Then they shouldn't come crying and say it is impossible to moderate because it is used so much. You wanted it to be used by so many people; now take responsibility for what happens on your platform.

It being scalable is their problem to fix. The monetization is scalable, the profits are scalable, the userbase is scalable, so make the moderation scalable also.
 

entremet

Member
There are a ton of options for them to explore to make moderation a bit harsher and quicker and to protect users. Twitter only recently put in some much needed features to block harassment. There is no excuse for that not being done a few years earlier.

Google, Facebook, Twitter want to profit off these networks being gigantic, having hundreds of millions of users and showing them ads. Then they shouldn't come crying and say it is impossible to moderate because it is used so much. You wanted it to be used by so many people; now take responsibility for what happens on your platform.

It being scalable is their problem to fix. The monetization is scalable, the profits are scalable, the userbase is scalable, so make the moderation scalable also.

I'm not saying it's impossible to moderate. I'm just saying moderation will be based on algorithms, like that Content ID bullshit.

It's not scalable for humans to moderate these networks.

And because of this many wholesome channels will be affected. But hey, let's just cheer indiscriminately.

Weren't some LGBT channels excluded from the recent YT initiative?
 
I'm not saying it's impossible to moderate. I'm just saying moderation will be based on algorithms, like that Content ID bullshit.

It's not scalable for humans to moderate these networks.

And because of this many wholesome channels will be affected. But hey, let's just cheer indiscriminately.

Weren't some LGBT channels excluded from the recent YT initiative?
Sure, use algorithms where possible, and have content flagged automatically for humans to look at. It will take a lot of people. If that can be automated, great. If not, they need to eat the costs and do it manually until they can automate it.

Errors will be made. Happens in every business. But that is better than doing nothing.
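A minimal Python sketch of that tiered flow; the classifier and the thresholds are hypothetical stand-ins, not anything YouTube has described:

# "Algorithms first, humans for the gray zone": auto-handle the
# confident cases, queue the uncertain middle for manual review.
REVIEW_QUEUE = []

def triage(video_id, classify):
    score = classify(video_id)  # 0.0 = clearly fine, 1.0 = clearly not
    if score < 0.2:
        return "monetize"            # confidently safe
    if score > 0.9:
        return "demonetize"          # confidently unsafe
    REVIEW_QUEUE.append(video_id)    # humans only see this slice
    return "pending_human_review"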
 

ramparter

Banned
Surprised people would relate the ad to the content, but then again, as with TV, companies have the right to choose what content they market their products against.
 
Human learning and knowledge graphing can make moderating YouTube or Twitter a lot easier. A simple algorithm like Content ID does not need to be in place if Google built up a team that utilizes both human learning and knowledge graphing. I expect they are working toward this with the big push to have content captioned, so they can match the typed words against whatever speech-to-text program they have.

It is possible to have humans do this work and do it well.

I know this for a fact based on my line of work.
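If the caption-matching idea works roughly as described, a crude Python version might just scan the transcript against a term list and route hits to a person; the watchlist and function are invented placeholders:

# Crude sketch: flag a video whose transcript (uploaded captions or
# speech-to-text output) hits a watchlist. A hit queues the video
# for human review rather than penalizing it automatically.
WATCHLIST = {"slur_a", "slur_b", "conspiracy_x"}

def needs_human_review(transcript: str) -> bool:
    words = set(transcript.lower().split())
    return bool(words & WATCHLIST)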
 

entremet

Member
Surprised people would relate the ad to the content, but then again, as with TV, companies have the right to choose what content they market their products against.

A good example of this is the use of profanity on non-premium cable TV channels. You can actually do it, but advertisers don't like it, so many non-premium cable channels limit it.

The FCC can't touch cable TV, but advertisers keep the content clean, as it were.

TV is not a medium for viewers, it's a medium for advertisers. They're the ones buying ad time.
 

Vanillalite

Ask me about the GAF Notebook
YouTube's problem is that it's morphed, via its users, into something that wasn't necessarily baked into its original concept.

Originally it was just a place for videos, just like Google Photos is a place to store and share photos.

Now it's morphed into all these other things. Plus video is a whole different ball game to deal with vs regular social media.

Not to mention this is expensive.
 
Yep. Same people who believe Twitter should be moderated by people. Absolutely insane. GAF is a small forum in terms of active posters, and we have dozens of mods, a pay/work email requirement, and a waiting period.

That is not scalable for these big internet giants.

Also, people cheering this thinking it will only affect bigoted stuff are gonna be in for a rude awakening.

It's absolutely possible to have people in the loop. And before you leap to your feet, I mean in the loop for things the system isn't sure about.
YouTube and Facebook are sitting on mountains of data that would enable them to triangulate, in an automated fashion, the generators of the kind of content that is unacceptable (beyond just copyright or tits). I don't mean an AI that looks at an individual bit of content, but something more holistic that takes a whole timeline into account: clicks, origin IP addresses, duplicate accounts, likes and complaints, the whole thing.
Then the big problematic or uncertain things are pushed upwards for manual review. That way Dr David Duke can't get around puny rules on individual posts; his account soon becomes correctly identified for what it is, and gets the wooden spoon.
Google spent millions on things like click fraud in their golden goose advertising system; how about they start spending money on figuring out the "social" part of social media? YouTube can't even reliably classify content for adults vs. young children. Facebook can't recognize a live-streamed suicide, or that a redirect URL is not a different or honorable news site, and so on and so forth. Saying it's too hard is a failure. We built a system we can't manage in real time? Is that something to be proud of?
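As a toy illustration of that kind of holistic, account-level scoring, with signal names and weights invented for the example rather than taken from any real system:

# Toy account-level (not per-video) risk score. Each signal is
# assumed normalized to [0, 1] upstream; weights are invented.
WEIGHTS = {
    "pct_videos_flagged": 3.0,
    "complaint_rate": 2.0,
    "duplicate_account_links": 1.5,
    "shared_ips_with_banned_accounts": 2.5,
}

def account_risk(signals: dict) -> float:
    return sum(w * signals.get(k, 0.0) for k, w in WEIGHTS.items())

def route(signals: dict) -> str:
    # Arbitrary cutoff: escalate the whole account, not one post.
    return "manual_review" if account_risk(signals) > 5.0 else "automated_only"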
 

entremet

Member
It's absolutely possible to have people in the loop. And before you leap to your feet, I mean in the loop for things the system isn't sure about.
YouTube and Facebook are sitting on mountains of data that would enable them to triangulate, in an automated fashion, the generators of the kind of content that is unacceptable (beyond just copyright or tits). I don't mean an AI that looks at an individual bit of content, but something more holistic that takes a whole timeline into account: clicks, origin IP addresses, duplicate accounts, likes and complaints, the whole thing.
Then the big problematic or uncertain things are pushed upwards for manual review. That way Dr David Duke can't get around puny rules on individual posts; his account soon becomes correctly identified for what it is, and gets the wooden spoon.
Google spent millions on things like click fraud in their golden goose advertising system; how about they start spending money on figuring out the "social" part of social media? YouTube can't even reliably classify content for adults vs. young children. Facebook can't recognize a live-streamed suicide, or that a redirect URL is not a different or honorable news site, and so on and so forth. Saying it's too hard is a failure. We built a system we can't manage in real time? Is that something to be proud of?

We're not disagreeing. I'm just saying that people who think humans can run this are absolutely out of touch.

The rest of the internet isn't GAF. I see this stuff come up all the time: "Get more mods!"

That's not happening on these big sites. They will build an algorithm, and many decent channels will also get fucked in the process.
 