
From Stage to Screen: How fast do conference games release? (PS4, Xbox One)

1. You state the following in the OP: "For these purposes, "shown" means displayed footage with an associated title."...yet you later seemed to suggest that you abandoned this when deciding when to count Halo: TMCC as being announced.
Footage was shown, and a title ("Halo") was associated with it. Multiple speakers referred to it as a singular game. This wasn't the final name, but titles can change in development. In any case, altering the data for this one game does not reverse the conclusions.

2. You are not actually addressing the kind of research question necessary to speak to the general impression you sought to clarify. In other words, if you sought to test the general impression in the community, which is that Sony has a longer window for announce-->release, then you should not be counting 3rd party titles at all. 3rd party titles do not generally contribute in any way to that impression, so why would you include them in trying to address said impression?
The claim that "3rd party titles do not generally contribute in any way to that impression" is unsupportable. Deep Down, Final Fantasy VII Remake, and Kingdom Hearts III are all very common examples given of Sony's delays. In any case, I did also present results for just first-party titles in the thread. This does not reverse the conclusions.

3. Starting at E3 2013 would seem to strongly favor one competitor here over the other since it somewhat arbitrarily restricts the analysis only to games announced for current gen. Of course, there are high profile games from last gen that got announced and had to get revived on PS4, like The Last Guardian.
I started at the announcement of the PS4, actually. In any case, I did also present results including the development time of the PS3 versions of appropriate titles. This does not reverse the conclusions.

4. Your methods skew the data to favor the platform with more titles recently announced. As those titles get announced their time to release starts at zero and seems like it should drastically drag down averages. You note this issue in your OP but seem to act as if it is not germane to your conclusions somehow.
The claim that I avoided dealing with this factor is simply false. I did also present results cutting off at E3 2015 in order to eliminate the skew. This does not reverse the conclusions.

5. Refusing to share your data makes zero sense. That is borderline indefensible given the stated purpose of your 'research' efforts. That is not how serious research is done. I know, 'vidya games' or whatever but when ya put so much effort into the OP and follow up replies it becomes hard to imagine a rationale for refusing to share the raw data.
The rationale is that the entire data set is already available to the public. (You yourself use some of it below.) I shared my methodology for analyzing this data, and anyone is capable of re-running the analysis. Several people have done so, on GAF and on Reddit, and come to much the same conclusions. I'm certainly open to correcting any errors of fact you discover.

6. I know some of these issues are already mentioned and discussed beyond the OP, but the fact you did not seek to address them a priori makes me even more dubious as to your actual application of an appropriate methodology.
So the only appropriate way to present this information would've been as a singular initial block, and never engaging with the following talk? I explained clearly in the OP that there were more results already developed and forthcoming, but that I wanted to react to discussion so that the most common requests were answered first.

You seem to be saying that the cadence with which facts are presented affects their truth value. This is ludicrous.

7. I checked the 18 titles you said you used in your Xbox One analysis for only first/second party titles, excluding Crackdown 3 since that is not released yet. Using the dates that I found online for when each was announced/released I got pretty significantly different numbers than you did. I can't see what you did in terms of your actual calculations since you refused to post your data, but I got 10.76/12.12 "months" (aka 30 days) for the 18 Xbox One games depending on how we count Ryse.

Further, if I include Dead Rising 4 (why did you leave that out?) I arrive at 10.5/11.78 depending again on how we count Ryse. When I add Zoo Tycoon we get 10.25/11.47.
The exclusion of Dead Rising 4 was an error on my part, I apologize. Its inclusion lowers the mean, but not significantly. The exclusion of Zoo Tycoon was intentional, as it was never shown onstage at a press conference.

Generally, though, your different results are due to a flawed method. I did not use 18 titles in the analysis, I used 25. You have specifically removed the games that have been canceled or not released yet, so of course your "time to release" number is lower. If you did the same thing on the Sony side, that number would also be lower. This would leave the relative positions of the platforms unchanged.

In fact, I did already present the total results minus unreleased games. This does not reverse the conclusions.
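The calculation being argued over here, mean announce-to-release time in 30-day "months", with unreleased games either dropped or counted at their time-so-far, can be sketched in a few lines. All dates below are invented placeholders, not the thread's actual data, and `TODAY` is an assumed as-of date:

```python
from datetime import date

# Hypothetical (announce, release) pairs; release=None means unreleased.
# These dates are made up for illustration, not the thread's real data.
games = {
    "Title A": (date(2013, 6, 10), date(2013, 11, 22)),
    "Title B": (date(2014, 6, 9),  date(2016, 10, 11)),
    "Title C": (date(2015, 6, 15), None),  # still unreleased
}

TODAY = date(2017, 6, 1)  # fixed "as of" date so the sketch is reproducible

def months(days):
    return days / 30.0  # the thread's convention: one "month" = 30 days

# Mean over released games only (drops unreleased titles, as thill1985 prefers)
released = [months((rel - ann).days) for ann, rel in games.values() if rel]
mean_released = sum(released) / len(released)

# Mean counting unreleased games at their time-so-far (the OP's approach)
all_spans = [months(((rel or TODAY) - ann).days) for ann, rel in games.values()]
mean_all = sum(all_spans) / len(all_spans)
```

As the thread notes, dropping unreleased titles mechanically lowers the mean for both platforms; the dispute is over whether that matters.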

But you did not provide any contextual stats analysis at all, making the notion that you were researching probabilities about future release intervals and/or cancellations and whatnot nonsensical. You don't use small samples and frequentist stats for that kind of analysis, ya look towards Bayesian models instead.
There's a Bayesian operation at the metalevel of the thread. The prior probabilities of most observers are explicitly delineated. The frequentist results presented should impel a significantly altered posterior distribution for rational observers. (Provided the examined set of games is taken to be of a piece with the future set of announced games.)
 

onanie

Member
If a notion gets repeated enough, people might start believing it. The idea is that it takes an extraordinary effort to refute that notion. OP's efforts have been extraordinary.
 
That set of data also does not address the question posed here by the OP. And yes, I am refuting the notion that the methods used here somehow are germane to addressing the research questions the OP specified. The methodology simply does not stand up to scrutiny.

And fyi, that data has some major errors in it. Last I checked it took more than 3 days for The Last Guardian to release. ;)

That also counts literal re-releases, with some games showing up the very day they are announced. That list also claims FFXV was only 77 days between announcement and release...lol

That data is all sorts of bogus actually. The numbers for all sorts of games are screwed up. It seems to be counting the days from the last time it was shown at a conference to the day of release instead of looking at the day the game was announced or first presented.

Ok well submit data that supports your assertion then. Otherwise you're sounding like a MS apologist.

There have been two methods that have been employed and have come to the same conclusion. You have no data that you can present that refutes what others have discovered.

Where is your proof? Please show and tell.

Thanks
 
Bravo, OP, for all your effort and analysis. This is truly fascinating stuff.

Still digging through your follow-up analyses right now but the original results and your first couple breakdowns in the comments are already so impressive.
 

thill1985

Banned
Footage was shown, and a title ("Halo") was associated with it. Multiple speakers referred to it as a singular game. This wasn't the final name, but titles can change in development.

It was not announced as a title. It was announced as a confirmation that Halo, the IP, would make a splash on the platform. "Halo" is a label for an IP. It is not a videogame title. It was never suggested that they were announcing a singular new videogame titled "Halo".
The claim that "3rd party titles do not generally contribute in any way to that impression" is unsupportable. Deep Down, Final Fantasy VII Remake, and Kingdom Hearts III are all very common examples given of Sony's delays. In any case, I did also present results for just first-party titles in the thread.

You're right, I should have said multiplatform (perhaps even multi-console) titles.

The claim that I avoided dealing with this factor is simply false. I did also present results cutting off at E3 2015 in order to eliminate the skew.

...that does not eliminate the skew. Counting only games announced AND released eliminates the skew.
The rationale is that the entire data set is already available to the public. (You yourself use some of it below.)

No, I actually didn't. I went through and found each individual game you listed for the platform with the smaller number of titles and used the info that I gathered myself. I did not use any existing data set. If you have a data file there is no rationale that makes sense for keeping it for yourself.
I shared my methodology for analyzing this data, and anyone is capable of re-running the analysis.

...not without data they can't. They have to gather their own data first, which makes your lil experiment here incapable of being checked or reproduced with any amount of rigor. Your methods for gathering the data set in the first place are not going to be the same as other people's.

So the only appropriate way to present this information would've been as a singular initial block, and never engaging with the following talk?

I get that you are defensive, but you KNOW I never suggested any such thing. If you have data, post it. There is no reason not to post it unless it somehow doesn't back up your results as reported. Assuming that is not the case, and that you are trying to be an honest broker, you should post your data.

You initially posted results with all manner of screwy methods skewing the data and then after others called you on it you provided more results. That is surely better than nothing, but it also suggests that your intention was to get a certain result to report back to folks here instead of doing good research for its own benefit.

The way you react when these criticisms of your methods are levied is quite telling, as your primary rebuttal always ends with a pronouncement that the 'results' do not change regardless of methodology. Of course the results change! By 'results' you seem to be looking at the ranking between the two companies only. The question is why is that your focus? Why are you treating that like an all-encompassing safe space to fall back to? Your concern *should* be a strong methodology that speaks to addressing the research questions you cited. Instead of being concerned with what constitutes good methods you instead seem to care primarily about the ranking between these two companies.

The exclusion of Zoo Tycoon was intentional, as it was never shown onstage at a press conference.

If you only include games shown on stage that is not addressing the research question you posed in a meaningful way. Your methods need to be designed to address the questions you want answered. You seem to have done that backwards; getting results and then deciding how to filter them to answer something that may not be relevant to addressing the general impressions you supposedly were trying to clarify.

If the research question to be addressed is: How long do games on platform X have between announce/release compared to games on platform Y?

...then you should be including games that are announced, regardless of the venue. The venue has nothing to do with your research question.

Generally, though, your different results are due to a flawed method. I did not use 18 titles in the analysis, I used 25. You have specifically removed the games that have been canceled or not released yet, so of course your "time to release" number is lower. If you did the same thing on the Sony side, that number would also be lower. This would leave the relative positions of the platforms unchanged.

My "flawed" method was chosen to answer a research question you claimed to be addressing. Your methods do not address the question. If you want to understand how long games on each platform have between announce/release, you need to have release dates. Including canceled titles or unreleased games does nothing but bias your data one way or the other without serving any purpose towards addressing your question. You introduced all sorts of systematic errors that have no reason to be included in your methodology in the first place.

Btw, I did not even tally up the Sony side because it had a lot more games than the few minutes I felt like spending on the matter would allow. I am not interested in whether or not picking an intelligent methodology 'flips' the results. Why are you interested in that? You are supposed to be picking a good methodology for a given research question and letting the data filter out as it will on its own. Not picking and choosing your methods based on which outcome you favor.

There's a Bayesian operation at the metalevel of the thread. The prior probabilities of most observers are explicitly delineated. The frequentist results presented should impel a significantly altered posterior distribution for rational observers. (Provided the examined set of games is taken to be of a piece with the future set of announced games.)

Ha! I'm fairly confident you understand that this is NOT what I was referring to. I was saying you need to include an effort to incorporate Bayesian analysis within your data if you want to claim you are addressing a future probability.

For example, your frequentist statistics can give you info about how likely a game announced for a given platform is to be canceled in the future only if you use the data as the prior and then update to account for context. Context like: how was the game announced? What publishers announced it? Do the devs typically cancel their games?...etc. All of those can use frequentist stats to incorporate into a Bayesian analysis. You start with the prior and then update through as many iterations as possible. It is a lot more work, so I don't blame ya for not wanting to do a rigorous analysis, but at the same time you can't honestly claim to be addressing the research question you had posed without it.
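A minimal sketch of the conjugate-prior version of that workflow is a Beta-Binomial model; every count below is invented for illustration, and this is not the OP's analysis:

```python
# Start with a prior built from the raw frequentist data:
# e.g. 3 cancellations observed out of 40 announced games.
alpha, beta = 3 + 1, 37 + 1  # Beta(1,1) uniform prior plus observed counts

def update(alpha, beta, cancels, survives):
    """One Bayesian update step: fold a new batch of evidence into the Beta posterior."""
    return alpha + cancels, beta + survives

# Iteratively condition on contextual subsets (publisher track record, genre, etc.)
alpha, beta = update(alpha, beta, cancels=1, survives=5)  # same-publisher history
alpha, beta = update(alpha, beta, cancels=0, survives=8)  # same-genre history

posterior_mean = alpha / (alpha + beta)  # estimated cancellation probability
```

Each `update` call is one iteration of the "start with the prior, then update" loop described above; the posterior mean is the resulting forward-looking probability estimate.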
 

thill1985

Banned
Ok well submit data that supports your assertion then. Otherwise you're sounding like a MS apologist.

There have been two methods that have been employed and have come to the same conclusion. You have no data that you can present that refutes what others have discovered.

Where is your proof? Please show and tell.

Thanks

My assertion? I cited the linked data. In the column that it claims represents 'Days to Release' all manner of games have wildly incorrect numbers and the games are recycled all over the place. I mentioned some that I found that stuck out like sore thumbs already.

I also explained that the issue is likely due to that column being calculated as the difference between release and the most recent showing of the game at a trade show instead of looking at when the damn thing got announced in the first place. It's GREAT data for tracking stuff, but the column it claims shows the relevant metric is completely bunk.

I am not interested in bothering to do my own research on this topic. I do that for a living on my own. Unlike the OP I actually understand the effort that goes into such an endeavor and the research questions are not interesting enough for me to bother in this case. I only posted in the thread here because the OP asked me to.
 

Synth

Member
You initially posted results with all manner of screwy methods skewing the data and then after others called you on it you provided more results. That is surely better than nothing, but it also suggests that your intention was to get a certain result to report back to folks here instead of doing good research for its own benefit.

The way you react when these criticisms of your methods are levied is quite telling, as your primary rebuttal always ends with a pronouncement that the 'results' do not change regardless of methodology. Of course the results change! By 'results' you seem to be looking at the ranking between the two companies only. The question is why is that your focus? Why are you treating that like an all-encompassing safe space to fall back to? Your concern *should* be a strong methodology that speaks to addressing the research questions you cited. Instead of being concerned with what constitutes good methods you instead seem to care primarily about the ranking between these two companies.

Yea, I'm sorry but "the results don't change" is a silly response to give against valid criticism of the methodology. The results (as in the actual percentages) do change, as Liabe Brave has stated, even if MS remains with a higher value and Sony with a lower one. Not only that, but many of these changes while limited in isolation would likely not be so if multiple questionable elements were addressed simultaneously.

The question posed is how long from stage to screen, and the conclusion talks about the reasons for there being a perception that doesn't match the data... but it wouldn't match the data, because the data itself isn't actually measuring the question it sets out to address. A game that isn't released yet isn't being magically released today, and so the perception also isn't that it was. A game like TLG or FF Versus XIII being announced back on the PS3 doesn't magically not happen when an entire generation rolls by, and so the perception also doesn't assume it did. The data being used for either case is essentially fictional.

Of course, regardless of what is done to the numbers, and which company appears ahead of which, there will be a notable difference between the results of the data, and the general perception, simply because we don't all walk around with complete historical data in our heads. As a result, the higher profile games that have long development times (like FF Versus XIII/FFXV) will stay in people's minds a lot more than a game that even the average fan of the console would likely forget to throw into a game list (Below). So, on MS' end you end up with a few bigger name titles like Crackdown 3 registering in people's minds as having taken forever to happen, but they get drowned out by games with more attention (like Forza) getting announced and released quickly. In the top 5 longest periods you have Below at 47 months for MS and then nothing else really in that ballpark... whilst the entire top 5 for Sony is in that ballpark, and that's without TLG sitting at near 90 months, or Versus XIII / XV sitting around 125 months. So yea, the perception differing from the data isn't at all surprising really.
 
It was not announced as a title. It was announced as a confirmation that Halo, the IP, would make a splash on the platform.
This is incorrect. Earlier in the thread I posted direct quotes from the conference where the speakers were clearly talking about a single game (while also promising that more than one game in the franchise would appear on the platform).

...that does not eliminate the skew. Counting only games announced AND released eliminates the skew.
Both approaches address the skew, in different ways. What bias are we trying to remove? That Sony has announced games more recently than Microsoft, leading to falsely short intervals. By setting a terminus ante quem at E3 2015, we are both eliminating the possibility of Sony announcing later in the same year (since both platforms present at E3), and chucking all games still in development for less than 2 years. Obviously, this strikes out every small number, and levels the playing field.

This matters little, though, because in fact I also presented results for only released games. This does not reverse the comparison.
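The cutoff being debated reduces, in code, to a one-line filter over announce dates. A sketch with invented titles and dates (the E3 2015 date is approximate):

```python
from datetime import date

E3_2015 = date(2015, 6, 16)  # cutoff the OP describes (approximate date, assumed)

# Hypothetical announce dates, made up for illustration.
announcements = {
    "Old Title": date(2013, 6, 10),
    "Mid Title": date(2015, 6, 15),
    "New Title": date(2016, 6, 13),
}

# Keeping only games announced by E3 2015 removes every title that has had
# under ~2 years in public development, which is the skew being addressed.
kept = {title: d for title, d in announcements.items() if d <= E3_2015}
```

Whether this cutoff or the released-games-only filter better removes the skew is exactly the disagreement in the surrounding posts; the mechanics of each are this simple.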

I went through and found each individual game you listed for the platform with the smaller number of titles and used the info that I gathered myself. I did not use any existing data set. If you have a data file there is no rationale that makes sense for keeping it for yourself.
There is a pedagogical rationale.

You initially posted results with all manner of screwy methods skewing the data and then after others called you on it you provided more results.
This is an inaccurate recapitulation of events, indeed so wrongheaded that it's scarcely credible that you've offered it in good faith. The "all manner of screwy methods" was to include every game from every conference. The "more results" I provided were not expansions of what I initially presented, but invariably filtered subsets.

That is surely better than nothing, but it also suggests that your intention was to get a certain result to report back to folks here instead of doing good research for its own benefit.
It should be very easy for you to disprove everything I said then, by posting the "real" data.

The way you react when these criticisms of your methods are levied is quite telling, as your primary rebuttal always ends with a pronouncement that the 'results' do not change regardless of methodology. Of course the results change! By 'results' you seem to be looking at the ranking between the two companies only. The question is why is that your focus?
Because the competing platform is the focus of your criticism. Here are some of your points:

"...yet you later seemed to suggest that you abandoned this when deciding when to count Halo: TMCC as being announced."
"Starting at E3 2013 would seem to strongly favor one competitor here over the other..."
"Your methods skew the data to favor the platform with more titles recently announced."
"I checked the 18 titles you said you used in your Xbox One analysis [but not the Sony titles]..."

Instead of being concerned with what constitutes good methods you instead seem to care primarily about the ranking between these two companies.
Again, you seem not to have read the thread at all. You claim this despite the fact that many of the further conclusions I posted do not compare the platforms. That comparison did feature in my OP--warranted by the surprising nature of the results--but only remained the primary topic later due to thread interest. Please observe that when bringing my own analyses, I discussed general topics like the individual character of press events without distinction as to platform; further comparisons were wholly in response to others. Further, in my recap post only one of the four prior expectations I address is comparative. And I repeated my belief in the primacy of complete results over comparisons later.

If you'd like to discuss any topic besides the accuracy of the Microsoft/Sony comparison, I'd love for you to bring it up. I'd much prefer that.

If the research question to be addressed is: How long do games on platform X have between announce/release compared to games on platform Y?

...then you should be including games that are announced, regardless of the venue. The venue has nothing to do with your research question.
You seem not to have read even the title of this thread. The question to be addressed is emphatically not "How long do games on platform X have between announce/release compared to games on platform Y?" The question I set out to address is literally the first question in the OP:

Exactly how long has it been taking games to come out after they're presented?

My "flawed" method was chosen to answer a research question you claimed to be addressing. Your methods do not address the question. If you want to understand how long games on each platform have between announce/release, you need to have release dates.
This glosses over a fundamental issue: the set of games we're looking at have not all released. There are three ways we can deal with that bias. Most conservatively, we could simply wait for them all to be released (or canceled). This is boring. At the other extreme, we can do as you (and Anderson DL) suggest, and simply drop all the data that's hard to parse. This isn't an invalid approach, but if we're interested in future probabilities it has a drawback: it assumes the premise that currently unreleased games will exactly replicate released games in their timing and pattern.

I chose a third approach, which was to allow that real-world data sets are messy, but attempt to incorporate the full content of our current knowledge. Since that knowledge is incomplete, explicit disclaimers about the resulting skews and limits of inference are inevitable. I chose to believe the audience could mentally accommodate those, and that discussion would be better served by a complete (so far) set of data rather than a carefully scrubbed one.

You are supposed to be picking a good methodology for a given research question and letting the data filter out as it will on its own. Not picking and choosing your methods based on which outcome you favor.
I'm at somewhat of a loss what you're proposing occurred here. You think I collected all the data, then analyzed it in many different ways until I found one that supported my bias? Since I'm a stranger whose character you don't know, this assertion must be based on the traits of the results presented. But I don't see any warrant for such a conclusion from that source.

For one, some of the alternate filterings suggested by you and others give an even greater disparity in Sony's favor. If I was pre-selecting, why wouldn't I have posted one of those? Second, the data set I chose was all games, from all conferences, at their first appearance only. Every proposed alternative is more restrictive in culling this data set, which suggests that I didn't overfit my initial scope. Finally, and most trenchantly, all those alternatives give results which are cardinally different, but ordinally the same. I think you'll agree that this strongly suggests the comparison to be a robust, pervasive trait of the entire set (given that so many subsamples exhibit it).
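The "cardinally different, but ordinally the same" claim amounts to a robustness check over subsamples; a sketch with entirely invented mean-months values:

```python
# Different filterings change the numbers (cardinal results) but, per the
# claim above, not which platform is higher (ordinal result). Every value
# here is invented for illustration.
samples = {
    "all games":        {"MS": 14.2, "Sony": 21.7},
    "released only":    {"MS": 10.8, "Sony": 16.9},
    "pre-E3-2015 only": {"MS": 13.1, "Sony": 19.4},
}

# True for a subsample when Sony's mean interval exceeds Microsoft's
orderings = [vals["Sony"] > vals["MS"] for vals in samples.values()]
robust = all(orderings) or not any(orderings)  # every subsample agrees
```

When `robust` holds across many reasonable filterings, the ordering is plausibly a property of the whole data set rather than an artifact of one filter, which is the argument being made here.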

Ha! I'm fairly confident you understand that this is NOT what I was referring to.
It was an attempt at dry (perhaps not very funny) humor. :) I appreciate that you saw it.

I was saying you need to include an effort to incorporate Bayesian analysis within your data if you want to claim you are addressing a future probability. ...It is a lot more work, so I don't blame ya for not wanting to do a rigorous analysis, but at the same time you can't honestly claim to be addressing the research question you had posed without it.
I will cop to a lack of scientific rigor here. If the only valuable approach is journal-publishable research work, my efforts fall short of that ambition.

However, I must reject the conclusion that what I've presented is fully meaningless with regards to assessing the pattern of presentation-to-release intervals. At the very least, it provides a halting start toward a fuller and unexceptionable analysis, and a productive foundation for discussion that can guide and shape the growth of such a putative grounded consensus. I don't believe a mere biased screed devoid of apposite cogency could serve as a target for nuanced discussions of inferential technique, for example.

I am, as are we all, crooked timber. Yet the things we build need not be straight in order to be useful, because the edifice is built through iteration. In the end, every involution is just decoration on its piled surface. I invite you to expand and improve upon my little work, if you have the inclination.
 

thill1985

Banned
This is incorrect. Earlier in the thread I posted direct quotes from the conference where the speakers were clearly talking about a single game (while also promising that more than one game in the franchise would appear on the platform).

You can repeat this all day long but you are not addressing my point. Saying a new Halo game is under development is not the same thing as formally announcing Halo: TMCC or Halo 5. The formal thrust of their presentation was about how the franchise in general would be on Xbox One at 60fps and utilize some sort of cloud implementation.

Both approaches address the skew, in different ways.

No, my method is straightforward and does not skew anything. It directly addresses the posed question and does so without introducing any systematic error.

What bias are we trying to remove? That Sony has announced games more recently than Microsoft, leading to falsely short intervals. By setting a terminus ante quem at E3 2015, we are both eliminating the possibility of Sony announcing later in the same year (since both platforms present at E3), and chucking all games still in development for less than 2 years. Obviously, this strikes out every small number, and levels the playing field.

A better approach is to simply take all released games, regardless of when they were announced, and count the days/months between those dates.

This matters little, though, because in fact I also presented results for only released games. This does not reverse the comparison.

A strong methodology 'matters little'? Oh....well then. >.>

There is a pedagogical rationale.

I'm not sure you know what that term means because what you just typed here does not make sense. You do not teach anyone anything by refusing to share what you know or, in this case, your data.

The "all manner of screwy methods" was to include every game from every conference. The "more results" I provided were not expansions of what I initially presented, but invariably filtered subsets.

...right, and those are screwy methods to use given the set of questions you purportedly set out to investigate. You did not present results in line with these research questions. Instead you presented results with loads of systematic error incorporated and refused to share your data.

It should be very easy for you to disprove everything I said then, by posting the "real" data.

Posting data requires gathering it. You could remedy that scenario easily. You refuse for some reason.

Because the competing platform is the focus of your criticism. Here are some of your points:

"...yet you later seemed to suggest that you abandoned this when deciding when to count Halo: TMCC as being announced."
"Starting at E3 2013 would seem to strongly favor one competitor here over the other..."
"Your methods skew the data to favor the platform with more titles recently announced."
"I checked the 18 titles you said you used in your Xbox One analysis [but not the Sony titles]..."

I pointed out the obvious problems with your methodology and how you applied it. I cited Halo because you discussed it with someone else. I didn't bring it up. You and/or another poster brought it up. And BECAUSE you are hiding your data for inexplicable 'pedagogical' reasons, I had to grab my own data for games individually, hence I used the system with fewer games to check on your calculations...again, because you continue to hide your data.

Btw, YOU were framing things entirely in terms of console wars before I even started posting here. I'm not going to be blamed for your decisions to frame things in a particular way somehow retroactively. That's absurd.

Again, you seem not to have read the thread at all. You claim this despite the fact that many of the further conclusions I posted do not compare the platforms. That comparison did feature in my OP--warranted by the surprising nature of the results--but only remained the primary topic later due to thread interest. Please observe that when bringing my own analyses, I discussed general topics like the individual character of press events without distinction as toy platform; further comparisons were wholly in response to others. Further, in my recap post only one of the four prior expectations I address is comparative. And I repeated my belief in the primacy of complete results over comparisons later.

You should re-read my criticism. I was saying that when you decided to respond to criticisms of your methods, you always framed things as 'not affecting the results', as if 'the results' you were trying to derive are about platform comparisons instead of answering the questions you said you investigated. I am aware that you posted other data, but I was not commenting on that. I was commenting on how you chose to frame criticisms of your methodology and how you framed your response to such criticisms.

This glosses over a fundamental issue: the set of games we're looking at have not all released.

That does not gloss over that issue. It simply ignores it, which is appropriate, as we have no idea when those games will release. There is no reason to include those games; they offer nothing material to the research question until they release. Those games ARE relevant to other research questions, though, like the one you are attempting to deal with in the other thread about Patrick's article. Just not here.

This isn't an invalid approach, but if we're interested in future probabilities it has a drawback: it assumes the premise that currently unreleased games will exactly replicate released games in their timing and pattern.

...if you are attempting to set up research into future probabilities you need to completely abandon your stats analysis approaches here and instead only use existing frequentist data (existing meaning we cannot count games still in development since they do not constitute data until they ship) as entries into an iterative Bayesian analysis.
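As a minimal sketch of what such an iterative Bayesian update could look like (all outcomes below are hypothetical, and this models only a single quantity, P(release within 12 months), with a Beta-Binomial conjugate update):

```python
# Iterative Bayesian (Beta-Binomial) update: each *shipped* game contributes one
# observation of whether it released within 12 months of its announcement.
# Unreleased games contribute nothing, per the frequentist-data-only restriction.
def update(alpha: float, beta: float, released_within_12mo: bool):
    """One update step of a Beta(alpha, beta) posterior."""
    if released_within_12mo:
        return alpha + 1, beta
    return alpha, beta + 1

alpha, beta = 1.0, 1.0  # uniform Beta(1, 1) prior

# Hypothetical shipped-game outcomes (True = released within 12 months)
for outcome in [True, True, False, True, False, True]:
    alpha, beta = update(alpha, beta, outcome)

posterior_mean = alpha / (alpha + beta)  # estimated P(release within 12 months)
print(posterior_mean)
```

The posterior sharpens as each new release ships, which is the sense in which the analysis is "iterative": future probability estimates are grounded only in games that have actually released.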

I chose to believe the audience could mentally accommodate those, and that discussion would be better served by a complete (so far) set of data rather than a carefully scrubbed one.

No, you chose a poorly devised methodology for your set of research questions and refused to just post your data. If you want folks here to be able to wrestle with incomplete data and messy or poorly defined data sets you should post your data. Just because you happen to have some number does not make that meaningful data. Games that have not shipped do NOT constitute data at all here.

I'm at somewhat of a loss what you're proposing occurred here. You think I collected all the data, then analyzed it in many different ways until I found one that supported my bias?

I think the pattern of your OP's methodology, and of how you've framed criticisms and your responses to them, is very far from what would pass muster for a real analysis interested in addressing the questions you set out to investigate. That, combined with your efforts to hide your data for inexplicable reasons, leaves much to be desired. Having a bias is fine and human, and Immanuel Kant would obviously agree, but we should try to overtly check our biases. Instead you seemed to embrace it in your methodology and responses to criticism.

For one, some of the alternate filterings suggested by you and others give an even greater disparity in Sony's favor.

So? You are STILL framing this as a console warez matter!

However, I must reject the conclusion that what I've presented is fully meaningless with regards to assessing the pattern of presentation-to-release intervals.

Without posting your data, yes it is meaningless. Just post your data so we can move past that and work together to improve the analysis and methodology. :)

I will consider spending some time thinking on how to rework the methodology in ways that are tractable and let you know if I come up with anything. Unless I get caught up in E3 hype in the meantime.
 

The thrust of your bad faith critique is patently obvious, though you clearly don't want to engage in the actual effort to disprove anything, likely because you lack faith you can actually support your position with verifiable data.
 

Tripolygon

Banned
You can repeat this all day long but you are not addressing my point. Saying a new Halo game is under development is not the same thing as formally announcing Halo: TMCC or Halo 5. The formal thrust of their presentation was about how the franchise in general would be on Xbox One at 60fps and utilize some sort of cloud implementation.
All you've managed to do in here is try to prove some kind of bias on the part of the OP. You're not trying to engage in the discussion, and it is rather transparent.
 

Melchiah

Member
All you've managed to do in here is try to prove some kind of bias on the part of the OP. You're not trying to engage in the discussion, and it is rather transparent.

It's the usual strategy: try to sway the talk away from the subject and towards the person. It's easier to put a label on someone than to actually provide facts like the OP did.
 
Not only that, but many of these changes, while limited in isolation, would likely not be so if multiple questionable elements were addressed simultaneously.
I already said exactly this in the thread. It is definitely possible to add multiple restrictions to the data, and gerrymander a result where Microsoft releases games more quickly, on average, than Sony. However, this is most emphatically not a simple case of "questionable elements" being removed. In order to do this, we must both carve away at certain portions of the data, and add to other portions. To wit:

A game that isn't released yet isn't being magically released today, and so the perception also isn't that it was. A game like TLG or FF Versus XIII being announced back on the PS3 doesn't magically not happen when an entire generation rolls by, and so the perception also doesn't assume it did.
I've already talked at length about the reasonable rationale for including unreleased games, so I won't repeat that. I will say that the intervals for games announced in a previous generation, for that generation, should rightly be ascribed to their era. The current thread is about announcements for the current consoles, so I counted from when those ports were announced. This is the same (sound) reasoning by which we don't count from the original announcements of Halo for TMCC, or of Wipeout for the Omega Collection.
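As an aside on the mechanics: an announce-to-release interval like the ones counted here can be computed from a pair of dates. A minimal sketch (the specific dates and the 30.44-day average-month approximation are my own assumptions, not taken from the OP's data):

```python
from datetime import date

def interval_months(announced: date, released: date) -> float:
    """Announce-to-release interval in months, using a 30.44-day average month."""
    return round((released - announced).days / 30.44, 2)

# Hypothetical example: a port counted from its *own* announcement date,
# not from the original game's announcement years earlier.
port_announced = date(2014, 6, 9)
port_released = date(2014, 11, 11)
print(interval_months(port_announced, port_released))
```

Counting a port from its own announcement, as above, is what keeps a remaster from inheriting the original title's decade-old reveal date.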

However, provided we keep the arbitrary and less-than-necessary nature of these parameter changes in mind, there's no reason not to examine how they affect the results. So, I added all the PS360-era interval time, and removed all unreleased games from the list. Here are the results:
Code:
[B][U]Microsoft[/U][/B]
98 games total
Mean interval:   12.44 months
Median interval: 10.25 months
Modal interval:   5.5  months (n=7)
[I]Segments:[/I]
0-12 months:    56   57.1%
12-24 months:   31   31.6%
24-36 months:   10   10.2%
36+ months:      1    1.0%

[B][U]Sony[/U][/B]
216 games total
Mean interval:   12.85 months
Median interval:  9.25 months
Modal interval:   5.5  months (n=13)
[I]Segments:[/I]
0-12 months:   135   62.5%
12-24 months:   60   27.8%
24-36 months:   16    7.4%
36+ months:      5    2.3%
Yes, the mean interval has now shifted so that Sony is higher. But the gap is less than half a month, so nothing like the chasm believed in by most. Note also that Sony's median value is still lower, meaning that the quick half of their presented titles, in aggregate, release faster than the quick half of Microsoft's slate (and this despite having more than twice as many opportunities for delays). The modal value is identical, with a roughly proportionally higher n, meaning the highest prior probability for release on the two platforms is essentially indistinguishable with this data set.

When looking at yearly segments, we find that Sony releases a greater proportion of their presented games within 12 months of showing them off. For each of the next couple longer interval segments--up to two years, and up to three--Sony has a smaller proportion in each than Microsoft. For the very longest intervals, the inclusion of extensive PS3 development time has significantly increased the segment for Sony. But since this is such a small absolute number of titles, the combination of the top two segments remains lower than with Microsoft.
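For anyone wanting to reproduce summary statistics like the ones above from a raw list of announce-to-release intervals, here's a minimal sketch (the interval values are hypothetical placeholders, not the actual data set):

```python
from statistics import mean, median, mode

# Hypothetical announce-to-release intervals, in months, for a set of games
intervals = [5.5, 5.5, 9.25, 10.25, 14.0, 26.0, 38.5]

print(f"Mean interval:   {mean(intervals):.2f} months")
print(f"Median interval: {median(intervals):.2f} months")
print(f"Modal interval:  {mode(intervals)} months")

# Yearly segments, matching the bands used in the tables above
bands = [(0, 12), (12, 24), (24, 36), (36, float("inf"))]
for lo, hi in bands:
    n = sum(lo <= x < hi for x in intervals)
    label = f"{lo}+" if hi == float("inf") else f"{lo}-{hi}"
    print(f"{label} months: {n}  {100 * n / len(intervals):.1f}%")
```

With the full lists of intervals for each platform posted, anyone could run this and check the mean, median, mode, and segment proportions independently.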

Of course, regardless of what is done to the numbers, and which company appears ahead of which, there will be a notable difference between the results of the data, and the general perception, simply because we don't all walk around with complete historical data in our heads. ...So yea, the perception differing from the data isn't at all surprising really.
I agree with this, as I previously stated in the thread. It's easy to generate very plausible theories of how selective memory causes the general perception. Recall will focus on notable delays, while punctual releases--even of highly anticipated games--fall out of conscious recollection. Sony is accurately and properly perceived as having more long-term intervals, while the fact that they also have more (and shorter) short-term intervals goes unremarked, skewing the overall perception.
 
very well done!...

the main source of confusion, imo? the difference in quantity. there're always gonna be at least some late ps4 titles you can think of, no matter your taste in games...
 

Synth

Member
I already said exactly this in the thread. It is definitely possible to add multiple restrictions to the data, and gerrymander a result where Microsoft releases games more quickly, on average, than Sony. However, this is most emphatically not a simple case of "questionable elements" being removed. In order to do this, we must both carve away at certain portions of the data, and add to other portions. To wit:

I've already talked at length about the reasonable rationale for including unreleased games, so I won't repeat that. I will say that the intervals for games announced in a previous generation, for that generation, should rightly be ascribed to their era. The current thread is about announcements for the current consoles, so I counted from when those ports were announced. This is the same (sound) reasoning by which we don't count from the original announcements of Halo for TMCC, or of Wipeout for the Omega Collection.

Thanks for the response.

Hopefully you didn't take offense to the term "questionable". In that, I literally meant that the parameters could be taken into question (as they have been in the posts you've been replying to and elaborating on). I honestly don't consider the removal of unreleased games as posing a "restriction" on the data, though; I consider their inclusion to be the literal invention of data where there is none. We don't have a release date for those games, so you simply don't have the data to use for these games at all.

Similarly, the inclusion of the PS3/360 years for games like The Last Guardian isn't a restriction on the data either. The cut-off point of 2013 is the restriction, because in a question of "how long can these games take to release" you've made it so that the absolute highest value truncates at 48 months, when what's responsible for a large part of the perception you're looking to quantify is a result of games that have taken far longer than that. It would be like measuring Duke Nukem Forever from the point Gearbox picked it back up for a quick release, and then wondering why the data doesn't match the perception that game's development had. There isn't a separate perception for "PS3 The Last Guardian" vs "PS4 The Last Guardian". It's seen as one game that took almost a decade to release.

However, provided we keep the arbitrary and less-than-necessary nature of these parameter changes in mind, there's no reason not to examine how they affect the results. So, I added all the PS360-era interval time, and removed all unreleased games from the list. Here are the results:
Code:
[B][U]Microsoft[/U][/B]
98 games total
Mean interval:   12.44 months
Median interval: 10.25 months
Modal interval:   5.5  months (n=7)
[I]Segments:[/I]
0-12 months:    56   57.1%
12-24 months:   31   31.6%
24-36 months:   10   10.2%
36+ months:      1    1.0%

[B][U]Sony[/U][/B]
216 games total
Mean interval:   12.85 months
Median interval:  9.25 months
Modal interval:   5.5  months (n=13)
[I]Segments:[/I]
0-12 months:   135   62.5%
12-24 months:   60   27.8%
24-36 months:   16    7.4%
36+ months:      5    2.3%
Yes, the mean interval has now shifted so that Sony is higher. But the gap is less than half a month, so nothing like the chasm believed in by most. Note also that Sony's median value is still lower, meaning that the quick half of their presented titles, in aggregate, release faster than the quick half of Microsoft's slate (and this despite having more than twice as many opportunities for delays). The modal value is identical, with a roughly proportionally higher n, meaning the highest prior probability for release on the two platforms is essentially indistinguishable with this data set.

When looking at yearly segments, we find that Sony releases a greater proportion of their presented games within 12 months of showing them off. For each of the next couple longer interval segments--up to two years, and up to three--Sony has a smaller proportion in each than Microsoft. For the very longest intervals, the inclusion of extensive PS3 development time has significantly increased the segment for Sony. But since this is such a small absolute number of titles, the combination of the top two segments remains lower than with Microsoft.

I agree with this, as I previously stated in the thread. It's easy to generate very plausible theories of how selective memory causes the general perception. Recall will focus on notable delays, while punctual releases--even of highly anticipated games--fall out of conscious recollection. Sony is accurately and properly perceived as having more long-term intervals, while the fact that they also have more (and shorter) short-term intervals goes unremarked, skewing the overall perception.

Yea, like I said, regardless of what the final data comes out with, it's not going to be as dramatic as people's perceptions indicate, because those perceptions are selective. It only takes a handful of standout examples to cement a given perception. A recent example is the idea that MS cancels everything they work on, as a result of Fable Legends, Phantom Dust and Scalebound. Three games is all it took to override people's perceptions of all the games that don't get cancelled, and aren't named Halo, Gears and Forza.
 
Hopefully you didn't take offense to the term "questionable". In that, I literally meant that the parameters could be taken into question (as they have been in the posts you've been replying to and elaborating on).
In that case I'd prefer "arguable", but no problem either way.

The cut-off point of 2013 is the restriction, because in a question of "how long can these games take to release" you've made it so that the absolute highest value truncates at 48 months, when what's responsible for a large part of the perception you're looking to quantify is a result of games that have taken far longer than that.
Again, I didn't argue that the perception is baffling. I pointed out that I myself shared it, after all. My conclusion was instead about the warrant for this belief: that, even accounting for factors like these, the perception is wrong. Maybe that's unsurprising once you mull over the reasons. But I think there's value in getting people to re-examine commonly held opinions that aren't well-supported.

That said, the platform comparison was only one-quarter of the things I attempted to address. It's only become the major throughline of discussion because it's what was most interesting to GAF.

Recent examples are that MS cancels everything they work on, as a result of Fable Legends, Phantom Dust and Scalebound. Three games is all it took to override people's perceptions of all the games that don't get cancelled, and aren't named Halo, Gears and Forza.
To be fair, they did shut down, or cause to shut down, two of the studios behind those games. Meaning they canceled not just those titles but every potential future title from them. But of course, you're correct that Microsoft cancel very few games overall, so the perception is skewed.
 

thill1985

Banned
The thrust of your bad faith critique is patently obvious, though you clearly don't want to engage in the actual effort to disprove anything, likely because you lack faith you can actually support your position with verifiable data.

I already explained what is wrong with his methodology. When his methods do not serve the research questions he purports to investigate there is nothing to 'disprove' as conclusions based on faulty methods are already worthless. And he is the one not posting his data. How did you verify his results? Oh right, ya didn't because you do not have access to his data as he did not bother posting it for "pedagogical" reasons. Why are *you* defensive about my criticisms of someone else's methodology? Why are you acting threatened? o.0
 

Synth

Member
In that case I'd prefer "arguable", but no problem either way.

Again, I didn't argue that the perception is baffling. I pointed out that I myself shared it, after all. My conclusion was instead about the warrant for this belief: that, even accounting for factors like these, the perception is wrong. Maybe that's unsurprising once you mull over the reasons. But I think there's value in getting people to re-examine commonly held opinions that aren't well-supported.

That said, the platform comparison was only one-quarter of the things I attempted to address. It's only become the major throughline of discussion because it's what was most interesting to GAF.

To be fair, they did shut down, or cause to shut down, two of the studios behind those games. Meaning they canceled not just those titles but every potential future title from them. But of course, you're correct that Microsoft cancel very few games overall, so the perception is skewed.

I was actually going to go with "MS has a perception of closing all their studios", but realised that's aligned pretty well with the data... so went with game cancellations instead.
 

Euphor!a

Banned
I already explained what is wrong with his methodology. When his methods do not serve the research questions he purports to investigate there is nothing to 'disprove' as conclusions based on faulty methods are already worthless. And he is the one not posting his data. How did you verify his results? Oh right, ya didn't because you do not have access to his data as he did not bother posting it for "pedagogical" reasons. Why are *you* defensive about my criticisms of someone else's methodology? Why are you acting threatened? o.0

Can you please post your data.
 

thill1985

Banned
Sorry, I assumed you had something that would contradict the two people that did do research.

The reddit person never posted their calculations, only raw data which included a column that was incorrectly labelled. It would take forever to filter and edit that reddit person's data into something useful, though. The only data I put together was a list of 18-20 Xbox One games from the other guy's list with the days/months between announce and release. If ya want that I can give it to ya, I suppose, but it's not much.

To clarify, my issues with the OP here are about the methodology not being chosen in service of addressing the research questions posed, and the fact that he refuses to just post his data even after people ask him for it.
 
To clarify, my issues with the OP here are about the methodology not being chosen in service of addressing the research questions posed, and the fact that he refuses to just post his data even after people ask him for it.

No, we understand you have complaints but are just incapable of articulating a cogent counter argument or producing data that falsifies any of OP's work. Likely because neither exists.
 