
Companies are pulling ads from YouTube to protect their brands

Advertisers are well within their rights to pull ads, and Google should be working to better match appropriate ads to content, but it's not as cut and dried as just banning all the bad shit on the internet. Racism, alt-right views, white power, and all the stuff in that sphere is terrible, and it's disgusting that we are still dealing with these issues...but as it stands you have to weigh that against freedom of speech. As well, the Marketplace story touches on the fact that it is also a balancing act for Google (and Twitter and several other social media sites) between a continued embrace of open speech on the internet and a more moderated approach.
Videos won't be removed; this is about showing ads around them or not, or marking them as 'sensitive' so advertisers can choose whether to put ads there. Google is not going to delete videos.
 
And yes, the incoming volume of videos is daunting, but if you add user reporting to the absolutely disgusting amounts of user tracking and data mining they are capable of, they can pre-filter the complaints until the problem becomes manageable by a bunch of humans.

Yeah, user reporting. What could possibly go wrong.
 

daveo42

Banned
Videos won't be removed; this is about showing ads around them or not, or marking them as 'sensitive' so advertisers can choose whether to put ads there. Google is not going to delete videos.

I understand that, but that's what is being called for in this thread.
 
Re: How can they deal with monitoring 300 hours of video being uploaded every minute:

Hire 18000 people to monitor video uploads. Problem solved.
It's not like they don't have the money or something.
 

SomTervo

Member
Advertisers are well within their rights to pull ads, and Google should be working to better match appropriate ads to content, but it's not as cut and dried as just banning all the bad shit on the internet. Racism, alt-right views, white power, and all the stuff in that sphere is terrible, and it's disgusting that we are still dealing with these issues...but as it stands you have to weigh that against freedom of speech. As well, the Marketplace story touches on the fact that it is also a balancing act for Google (and Twitter and several other social media sites) between a continued embrace of open speech on the internet and a more moderated approach.

It's not about whether the videos can be posted or not, it's about whether corporations want to make money off the heinous views or not.
 

Damaniel

Banned
Advertisers are well within their rights to pull ads, and Google should be working to better match appropriate ads to content, but it's not as cut and dried as just banning all the bad shit on the internet. Racism, alt-right views, white power, and all the stuff in that sphere is terrible, and it's disgusting that we are still dealing with these issues...but as it stands you have to weigh that against freedom of speech. As well, the Marketplace story touches on the fact that it is also a balancing act for Google (and Twitter and several other social media sites) between a continued embrace of open speech on the internet and a more moderated approach.

Freedom of speech only means that the government can't arrest or harass you for what you say. Companies, and those who advertise with them, are free to ban whoever they like for whatever reason they like. I'm not going to shed a tear for the poor, poor racists that can't monetize their hate speech.
 

Eidan

Member
Advertisers are well within their rights to pull ads, and Google should be working to better match appropriate ads to content, but it's not as cut and dried as just banning all the bad shit on the internet. Racism, alt-right views, white power, and all the stuff in that sphere is terrible, and it's disgusting that we are still dealing with these issues...but as it stands you have to weigh that against freedom of speech. As well, the Marketplace story touches on the fact that it is also a balancing act for Google (and Twitter and several other social media sites) between a continued embrace of open speech on the internet and a more moderated approach.

Advertisers not wanting their brands interlaced with a YouTuber's content is not an attack on freedom of speech. Hell, Google could outright shut down a lot of the worst channels and it still wouldn't be an attack on their freedom of speech. I'm tired of "freedom of speech" just being "freedom from consequences".
 

daveo42

Banned
It's not about whether the videos can be posted or not, it's about whether corporations want to make money off the heinous views or not.

Freedom of speech only means that the government can't arrest or harass you for what you say. Companies, and those who advertise with them, are free to ban whoever they like for whatever reason they like. I'm not going to shed a tear for the poor, poor racists that can't monetize their hate speech.

Advertisers not wanting their brands interlaced with a YouTuber's content is not an attack on freedom of speech. Hell, Google could outright shut down a lot of the worst channels and it still wouldn't be an attack on their freedom of speech. I'm tired of "freedom of speech" just being "freedom from consequences".

I understand that, but that's what is being called for in this thread.

 


Works for me.
 

tokkun

Member
Why do we pretend the exact same policies would apply here?

Because this has been the result every time someone has attempted large-scale content filtering. This goes back to the 90s, when people were complaining about net filters preventing them from seeing websites about breast cancer.

The DMCA removal process is not a special snowflake. It is representative of the fundamental problem with systems that need to scale to the size of the internet. If you are going to claim that we will somehow invent a new system that avoids the problems of all past systems, the burden of proof is on you.
 
Because this has been the result every time someone has attempted large-scale content filtering. This goes back to the 90s, when people were complaining about net filters preventing them from seeing websites about breast cancer.

The DMCA removal process is not a special snowflake. It is representative of the fundamental problem with systems that need to scale to the size of the internet. If you are going to claim that we will somehow invent a new system that avoids the problems of all past systems, the burden of proof is on you.
Sadly, I don't have a large YouTube-like website to prove it. But to me it seems a combination of automatic systems and manual checks can be pretty accurate.

The Content ID system has to be strict so YouTube doesn't get sued. The advertising systems can tolerate some errors, so they can be a little less strict. I mean, they are able to ban websites and such from their AdSense program, so why can't the same be done for YouTube channels?

Will it ever be 100% accurate, real-time, and make everyone happy? No. But a lot of progress can be made.
 
All they need to do to 'not get sued' is follow the DMCA rules.

The Content ID system was put in place to appease big media corporations and is not a requirement of the DMCA.
 

tokkun

Member
Sadly, I don't have a large YouTube-like website to prove it. But to me it seems a combination of automatic systems and manual checks can be pretty accurate.

The Content ID system has to be strict so YouTube doesn't get sued. The advertising systems can tolerate some errors, so they can be a little less strict. I mean, they are able to ban websites and such from their AdSense program, so why can't the same be done for YouTube channels?

Will it ever be 100% accurate, real-time, and make everyone happy? No. But a lot of progress can be made.

I agree that such a system can be "pretty accurate". But "pretty accurate" for a service with 1 billion users still means that many people will be negatively affected by false positives; even a 0.1% error rate applied across a billion users would hit a million of them.

Like I said, people are free to disagree about what approach is better. But if you are going to push for stricter rules and fewer false negatives, the consequence is going to be more false positives, and some innocent people will be hurt by that. You are free to argue that those are acceptable losses if the new policy hurts people creating offensive videos, but at least go into it open-eyed about what the consequences will be.

Also, it's not just going to be about false positives. You'll find that companies are a lot more conservative about the messages their brands are associated with than you are. Don't be surprised when the stricter filtering applies not just to alt-right videos, but also to videos that you don't find offensive, like those including swearing or sexual themes, or even anything that could be considered politically sensitive.
 
I'm just glad I don't have to watch more ads.

I have no idea how this impacts YouTubers. How much does the average YouTuber make anyway?
 

Aselith

Member
That's good, but it seems very hard to police in practice, since people can go off the rails at the drop of a hat, like JonTron did. Is this going to be the new DMCA-strike-style abusable system? Hopefully they find a good solution.
 
Yeah, user reporting. What could possibly go wrong.

It could be a multi-tier approach. Every user has an internal ranking that determines the "weight" of their reports. A video that accumulates a significant report "weight" is then passed on to an actual human moderator. If they deem it to be nonsense, the users who reported it are penalized.

This way the power of trolls is minimized if they keep at it, and it limits the number of videos that require actual human checking. This could still run in conjunction with the automated system, with an automatic flag perhaps being treated as just another "user" and factored in. Erroneous flags are then fed back into the system as training data to prevent similar false positives in the future.
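
Roughly, in Python, it could look like the sketch below. Every name, weight, and threshold here is invented, just to show the flow:

```python
# Hypothetical sketch of weighted user reporting; every class,
# weight, and threshold is made up for illustration.

REVIEW_THRESHOLD = 10.0   # combined report weight that triggers human review

class Reporter:
    def __init__(self):
        self.trust = 1.0          # internal ranking; starts neutral

class Video:
    def __init__(self):
        self.report_weight = 0.0  # accumulated weight of reports so far
        self.reporters = []

review_queue = []                 # videos awaiting an actual human moderator

def handle_report(video, reporter):
    """Add one weighted report; queue the video once the total is high enough."""
    video.report_weight += reporter.trust
    video.reporters.append(reporter)
    if video.report_weight >= REVIEW_THRESHOLD and video not in review_queue:
        review_queue.append(video)

def resolve_review(video, is_nonsense):
    """The moderator's verdict feeds back into every reporter's trust score."""
    for r in video.reporters:
        if is_nonsense:
            r.trust = max(0.0, r.trust - 0.5)  # penalize bogus reports
        else:
            r.trust = min(5.0, r.trust + 0.1)  # reward accurate reports
```

An automated flagger could then just be another Reporter with its own trust score, as suggested above.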
 

Mindwipe

Member
Sadly, I don't have a large YouTube-like website to prove it. But to me it seems a combination of automatic systems and manual checks can be pretty accurate.

The Content ID system has to be strict so YouTube doesn't get sued. The advertising systems can tolerate some errors, so they can be a little less strict. I mean, they are able to ban websites and such from their AdSense program, so why can't the same be done for YouTube channels?

Will it ever be 100% accurate, real-time, and make everyone happy? No. But a lot of progress can be made.

AdSense routinely discriminates against sites run by sexual minorities, shits all over vulnerable groups like sex workers, and has a very bad track record with transgender people.

AdSense is literally a textbook example of this working extraordinarily badly.
 

Slayven

Member
Some sort of reckoning was always coming—Youtubers are primarily propped up by ad rates that were always going to fall over time. The smart ones have gone into sponsored media/branded content because there's no way ad revenue is going anywhere but down for individual channels long-term, unless you've got enough annual growth to counteract it.

But yeah, the only way Youtube will be making any changes is if it threatens their bottom-line like this, so once again it's up to selfish corporations to make positive changes in Bizarro World :)
I always wondered why more YouTubers didn't branch out. The real businesspeople among YouTubers are the makeup YouTubers: they grind, then get sponsorship deals and contacts within the industry.
 

Morzak

Member
Re: How can they deal with monitoring 300 hours of video being uploaded every minute:

Hire 18000 people to monitor video uploads. Problem solved.
It's not like they don't have the money or something.



18,000 people wouldn't be enough by a long shot. 300 hours per minute is 432,000 hours of video per day, or a bit over 3 million hours a week. Assuming a 40-hour work week, you would need 75,600 people just to watch it all. That doesn't include supervisors, cross-checking, administration, and so on. It would more than double the number of employees Google had in 2015. That isn't a feasible way to manage it.
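
For anyone who wants to check that arithmetic, it is only a few lines of Python:

```python
# Back-of-the-envelope check on the scale figures above.
hours_per_minute = 300
hours_per_day = hours_per_minute * 60 * 24   # 432,000 hours of video per day
hours_per_week = hours_per_day * 7           # 3,024,000 hours per week
watchers_needed = hours_per_week / 40        # 75,600 full-time watchers
print(hours_per_day, hours_per_week, watchers_needed)
```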

I really don't think people understand the scale of YouTube.

There will be ways to manage it with an automated system; ML applications have made huge strides over the past few years, and there will probably be a way to classify and categorize videos, but the error rate will probably be significant and it will hit a lot of unrelated channels. As soon as advertisers see that it is possible, it won't stop at hate speech; they will want to pull ads from a lot of other kinds of videos.
 

CTLance

Member
I really don't think people understand the scale of YouTube.
Regarding the scope, they can already discard a tremendously huge chunk of videos that simply don't get enough views to matter, or that idle about with very few impressions per week. They could put those hypothetical 18,000 people to work going after just the trending, highly active videos. That alone would help tremendously.

They can also programmatically harvest the referrer header from visitors and check whether any social media sites in that data set contain certain keywords, or whether the related social media and YouTube accounts have already scored badly for previous offenses. Or whether any of the news and debunking sites are linking to the video while using certain keywords.
That's like the most basic of basic bots. Comparatively easy to code and deploy. Yet it would help highlight likely candidates in a hurry.
(I'm pretty sure they already do some variation of the above to avoid view count fuckery and the like.)
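
A rough sketch of that kind of triage pass in Python; every threshold, field name, and keyword below is a placeholder I made up, not anything YouTube actually uses:

```python
# Hypothetical triage bot: discard low-traffic videos, then surface likely
# candidates based on where their traffic comes from. All values are placeholders.

VIEW_FLOOR = 10_000                       # ignore videos with too few impressions
SUSPECT_KEYWORDS = {"hoax", "debunked"}   # made-up referrer keyword list

def needs_human_review(video):
    if video.weekly_views < VIEW_FLOOR:
        return False                      # too small to matter yet
    if video.channel_prior_offenses > 0:
        return True                       # channel already scored badly before
    # Check the harvested referrer URLs for suspicious keywords.
    return any(
        keyword in referrer.lower()
        for referrer in video.referrer_urls
        for keyword in SUSPECT_KEYWORDS
    )
```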

Again, if YouTube had acted early enough, they'd already have a flexible, home-grown, highly adapted system in place. They didn't, so now everybody will have to suffer while they hack together some gruesome Frankenstein monster that will deliver a huge amount of false positives and overlook an egregious amount of genuine offenders. That's entirely on them, and they deserve every bit of backlash that they will get from this. Does it suck? Yes. It's still better than pretending that it cannot be done, ever.

And about the overstepping of boundaries wrt ads, come on, we already have incredibly detailed Adsense (etc) controls that allow advertisers to specify exactly who they want their ads to be shown to. This is already happening. Big time. No, it doesn't make it OK, but it means this is a rather silly complaint. Obviously it's gonna happen even more, even regardless of this particular matter. However, it's another issue entirely, and would be much better addressed as part of that distinct problem.
 

jstripes

Banned
It could be a multi-tier approach. Every user has an internal ranking that determines the "weight" of their reports. A video that accumulates a significant report "weight" is then passed on to an actual human moderator. If they deem it to be nonsense, the users who reported it are penalized.

This way the power of trolls is minimized if they keep at it, and it limits the number of videos that require actual human checking. This could still run in conjunction with the automated system, with an automatic flag perhaps being treated as just another "user" and factored in. Erroneous flags are then fed back into the system as training data to prevent similar false positives in the future.

I'd go one step further with the human checks and require that each video be reviewed by three separate, randomly chosen people, with the majority decision determining the outcome. That way you help avoid employees with "interesting" personal philosophies tainting the moderation.
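
That is just a majority vote over three independent random draws. A minimal Python sketch, with invented names:

```python
import random

def majority_verdict(video, moderator_pool):
    """Three separate, randomly chosen reviewers; the majority decision wins."""
    reviewers = random.sample(moderator_pool, 3)   # independent random picks
    votes = [m.review(video) for m in reviewers]   # True = violates policy
    return votes.count(True) >= 2
```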
 
It could be a multi-tier approach. Every user has an internal ranking that determines the "weight" of their reports. A video that accumulates a significant report "weight" is then passed on to an actual human moderator. If they deem it to be nonsense, the users who reported it are penalized.

This way the power of trolls is minimized if they keep at it, and it limits the number of videos that require actual human checking. This could still run in conjunction with the automated system, with an automatic flag perhaps being treated as just another "user" and factored in. Erroneous flags are then fed back into the system as training data to prevent similar false positives in the future.

Multi-tier, you say?



[Image: YouTube Heroes level system]
 

dracula_x

Member
Starbucks and Walmart join growing list of advertisers boycotting YouTube – https://www.theguardian.com/technology/2017/mar/24/walmart-starbucks-pepsi-pull-ads-google-youtube

PepsiCo, Walmart and Starbucks on Friday confirmed that they have suspended their advertising on YouTube, joining a growing boycott in a sign that big companies doubt Google’s ability to prevent marketing campaigns from appearing alongside repugnant videos.

The companies pulled their ads after the Wall Street Journal found that Google’s automated programs placed their brands on five videos containing racist content. AT&T, Verizon, Volkswagen and several other companies pulled ads earlier this week.
 