Metacritic's weighting system revealed

In case anyone thought otherwise... the sites didn't know their own weights, much less the publishers.

Edit: is the precise methodology used to derive the weights published anywhere? I wanna see the cases where the weighting failed and apply hypothesis testing to them.
There's probably some sense to these ratings; I just assume it's not based on the quality of the publication, or its influence/traffic.

I'd like to actually know how Metacritic decides this.
No, it is based on quality. If this list is correct, it represents Metacritic's beliefs as to what the best review sites are.

Metacritic said:
Metascore is a weighted average in that we assign more importance, or weight, to some critics and publications than others, based on their quality and overall stature. In addition, for music and movies, we also normalize the resulting scores (akin to "grading on a curve" in college), which prevents scores from clumping together.
Which explains this:

Metacritic said:
Can you tell me how each of the different critics are weighted in your formula?

Absolutely not.
Having the list out in the open exposes the whole thing as a nonsensical farce.
I actually had no idea that there was such a huge disparity in the way they weighted things. I'd love to see similar data on how they handle film reviews, too. At least RottenTomatoes is fairly upfront about categorizing "cream of the crop" publications.
I propose the creation of a new rankings website where, instead of weightings, the distribution of scores given by websites is normalised based on their mean review score and then spread in a normal distribution about 50%.

You could also go finer-grained and do it per individual reviewer, but that might lead to bias. For instance, some sites like to have staff reviewers who focus on AAA titles, which typically get higher scores, and freelancers who review low-scoring shovelware.

Obviously, the need for such a rankings system is questionable, but if there must be one, it can be done more fairly than Metacritic.
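A minimal sketch of that normalisation idea, assuming a z-score transform mapped onto a common scale centred on 50 (the sites, scores, and the spread of 15 are all made up for illustration):

```python
# Instead of weighting outlets, normalise each site's scores by its own
# mean and spread so every site ends up centred on 50.
from statistics import mean, stdev

def normalise(scores, target_mean=50.0, target_sd=15.0):
    """Map one site's raw scores onto a shared scale centred on target_mean."""
    m, s = mean(scores), stdev(scores)
    return [target_mean + target_sd * (x - m) / s for x in scores]

# A "generous" site and a "harsh" site reviewing the same five games:
generous = [95, 90, 85, 92, 88]
harsh = [75, 60, 50, 68, 55]

print([round(x) for x in normalise(generous)])
print([round(x) for x in normalise(harsh)])
```

After normalisation, the generous site's 95 and the harsh site's 75 both land around 70, since each sits equally far above that site's own average, which is exactly the effect being proposed.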
Seriously, looking at those rankings this is a borderline scandal, what the hell possessed Metacritic to give these weightings??
Yeah, it seems really unnecessary. Either use a particular critic's/site's reviews or ignore them altogether. That said, I still think Metacritic is a pretty good general reference for overall opinion. Most of the time user/critic average scores don't differ too wildly. (I know there are exceptions.)
Well, it kinda makes sense, as you can have real sleazy sites that give shitty-ass reviews for absolutely everything, and you want some way to protect against that. At the same time it's bullshit, because good-quality publications get ignored (e.g. Giant Bomb) in favour of mainstream sites, and we don't exactly know why. Is it because of readership size? Or did they pay CNET? Or is it based on their history?

Who knows, and to be honest I don't give a shit
I still can't believe that Metacritic tallies IGN THREE TIMES.

Highest (1.5) -- IGN
Highest (1.5) -- IGN AU
Highest (1.5) -- IGN UK
Effectively, the IGN umbrella has a combined weight of 4.5... nine times Giant Bomb's 0.5.

Now this just makes me angry.
Those exclusive IGN reviews must really pay off.

Does that mean Bioshock's 9.4 counts as a 14.1?
Not quite... otherwise you'd see some games scoring above 100%.

What it means is that their 9.4 carries the weight of 1.5 reviews in the average.
Whereas the Giant Bomb 5 out of 5 only carries the weight of 0.5 reviews in the average.
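A minimal sketch of how that weighted mean works (the scores and the third, ordinary-weight outlet are made up; the 1.5 and 0.5 weights are the ones reported in this thread):

```python
# Weighted mean: each review contributes score * weight, divided by the total
# weight, so the result can never exceed the highest individual score.
def weighted_average(reviews):
    """reviews: list of (score out of 100, weight) pairs."""
    total_weight = sum(weight for _, weight in reviews)
    return sum(score * weight for score, weight in reviews) / total_weight

reviews = [
    (94, 1.5),   # IGN's 9.4/10 at the reported 1.5 weight
    (100, 0.5),  # Giant Bomb's 5/5 at the reported 0.5 weight
    (85, 1.0),   # a hypothetical ordinary-weight outlet
]
print(round(weighted_average(reviews), 1))  # → 92.0
```

IGN's single review pulls the result three times as hard as Giant Bomb's, but because the weights appear in both the numerator and the denominator, the final score always stays within 0-100.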
Wow O_O I write for DarkZero, which is a volunteer-based site (we all have other jobs, or are students, etc.)

I have no idea how we are fixed into the highest category. O_O I'm shocked. We don't get paid or anything, we're just a group of guys who like gaming. O_O
Metacritic vs. GameRankings (which does not weight):

Bioshock (X360): 96 - 95.07
Red Dead Redemption (PS3): 95 - 94.66
Half-Life 2 (PC): 96 - 95.48
LoZ: Ocarina of Time (N64): 99 - 97.54

Differences are pretty minor, but across the board MC is slightly higher which is interesting.
This was revealed by Adams Greenwood-Ericksen of Full Sail University in a talk titled 'A Scientific Assessment of the Validity and Value of Metacritic', delivered at the Game Developers Conference in San Francisco this afternoon.
Well, guess who isn't getting a Christmas card from Metacritic this year? I'm also guessing his next research request might not be approved. Just a hunch!

BTW... Full Sail University? What's this school's cred like? I heard they beat SDSU in the tourney. Oh... wait a minute...
To be fair, gamerankings aggregates are barely any different than metacritic's, so the weighting doesn't end up having much of an effect. I'm not sure why they even bother.
No, it is based on quality. If this list is correct, it represents Metacritic's beliefs as to what the best review sites are.

Having the list out in the open exposes the whole thing as a nonsensical farce.
That's what they say. But looking at the list, I don't think many people would agree with what they determine as quality and stature. So either:

1) They have an incredibly skewed sense of quality; some sites are unusually high, some are unusually low.

2) They're lying, and they weight publications by different criteria.

I don't want to say for sure it's 2, because I have no evidence to suggest they're lying (and because of that, it's easier to believe they just have a poor barometer of quality), but something's definitely up. Whether that's a farce or something more sinister, I don't know!