
Why have consoles not adopted the yearly refresh model?

Aren't consoles sold at a loss for the first year or two? If that's the case then wouldn't Microsoft, Sony, and Nintendo lose money from a model like this?
 
For all the reasons stated above, it would kill the console market. All the old games would work, but if you were even one iteration behind, you couldn't buy the latest games if developers targeted the latest hardware (unless games targeted multiple hardware configurations, which would piss off devs even more).

This is a world class awful idea.
 
Why would anyone choose a yearly model? Just because phones do it doesn't mean all industries should. Yes, some PC hardware companies release annual upgrades, but nothing forces you to buy them. In the mobile world the yearly cycle exists because that's where the profit is: getting you to upgrade.
 
I moved from PC to Consoles because I was getting tired of trying to keep up with the upgrade cycles. I could never justify spending obscene amounts of money on the latest and best graphics cards, etc. so I would always stick to the middle tier. Then a year or so down the line, it would struggle to play games at even the minimum settings. In fact I stopped playing games till I got myself a console. Now, as a bonus, I do not have to muck around with installs, drivers, etc. Life is simpler with plug and play.

So a yearly console refresh seems like a horrible idea to me. IMHO, we should leave yearly refreshes to cell phones - and even there I am not sure a yearly release is justified.
 
This. And maybe I am oldschool, but I hate the annual updates. You get used to your device and bam, there's this pressure to get a new one, because certain apps or features aren't working as well as before. I prefer that machines get pushed to their limits instead of new hardware releasing every now and then, and I prefer that studios invest time to create incredible experiences with a certain amount of polish.
That's exactly what I think. It's great to buy a console and not worry for 4 or even 7 years (!). And every year consoles are pushed more and more to their limits, so you still get better graphics over time. Not PC level, but still...


Except it wouldn't be obsolete... Are iPhone 5s obsolete the moment the iPhone 6 comes out? Most people wait two generations to upgrade... And the software works across generations, and hardware configurations, so long as they are running the same OS version...

So if a game is developed for XB1 gen 1, it will still run on XB1 gen 2, because both consoles will be running the same OS... and when the game is run on the gen 2 hardware, it could take advantage of the extra horsepower.

The console wouldn't become obsolete until it stops getting OS updates due to hardware limitations.
Maybe not obsolete, but you're still behind the technology curve. If MS or Sony launches revised hardware, with more RAM or a better CPU clock, for example, we would get games running better on new hardware, and that feels bad. The point of choosing consoles is that you don't have to worry about upgrades. You have the best technology available on consoles for at least 4-5 years.
 
There's one thing people need to get: phone games =/= console games. Phones can go through multiple iterations every year, but the games are relatively simple and don't take 3-4 years to develop. Even if the specs are relatively strong, with phones able to run a compromised version of Unreal Engine 3/4, the market's filled with F2P crap. Plus you use a phone for a lot more day-to-day things than just gaming, which is why people keep buying the new iterations.

That aside, this kind of thing already happened once and proved it was a stupid idea. Anyone remember the Sega 32X?
 
I think it is viable on the Steam Machine model: open the manufacturing process to a set number of hardware manufacturers under specifications given by the console holder. The console holder makes the basic, cheapest model for the mass market, and third-party manufacturers make the more powerful iterations for tech enthusiasts.

As for the increasing costs of game development, it is not like the PC market, since there would be a set number of third-party consoles certified by the console holder, and it would be its duty to make sure each new iteration of the console is compatible with games already on the market.

I don't think it would make a big impact, since a console's success rests on mainstream market reach through the lowest entry price, but it would affect the enthusiast market and eventually render the current console transition model obsolete in favor of an evolutionary console.
 
Aren't consoles sold at a loss for the first year or two? If that's the case then wouldn't Microsoft, Sony, and Nintendo lose money from a model like this?
For Nintendo specifically, they could potentially get away with a hardware refresh every two years (both console & handheld) provided that their games are forwards-compatible. And as I've stated throughout the thread, this is looking to be the case. More so since it'd be the only way that the NX Console could survive against the PS5 & whatever the next Xbox is called (the NX Handheld has no competition, so I wouldn't worry about that).
 
Because you already have a solution for this, it's called a PC. If you want to upgrade your hardware every year, play games on a PC. Consoles are, by their very design, supposed to be an alternative to that. You'd literally be stripping away one of the only advantages they have over PC.
 
If I wanted refreshes and turmoil I'd buy a PC.

I buy a console specifically because I can just use it for years and it will do what it did last week, predictably. I'm also not into phone refresh culture. Does it make calls? Ok it's good enough.
 
I don't think developers want to optimize for several iterations of the same console.

It's not even a matter of want. They simply cannot. The OP really has very little concept of professional game development to even entertain the notion.

The refresh only works on phones and TVs because (a) they are not game-focused devices, (b) almost no games on them actually take full advantage of the hardware, or (c) the market/competition for high-end fidelity is nonexistent.

And before anyone even says "what about PC" I will say BUGS! The amount of bugs in PC titles due to the modular hardware/software paradigm is insane. Sony, Nintendo and MS would never allow a game to ship at the atrocious levels that a majority of PC games ship at.

Consoles are a thing of beauty that it seems many do not understand. They are a level playing field for gamers, developers, publishers and consumers. People will write songs of sorrow when they are gone.
 
I have an Android phone from like 5 years ago, so I don't buy that shit annually. Believe it or not, we aren't all suckers and hipsters. If consoles did that shit, I'd be out so fast your head would spin. Eff that noise.
 
Because it's a stupid idea. People already complain that a new console comes out "every year" when it's actually every 5-7 years; can you imagine if consoles really did come out every year?
Can you imagine your son asking for the latest console every year?
Consoles aren't phones.
 
Maybe not obsolete, but you're still behind the technology curve. If MS or Sony launches revised hardware, with more RAM or a better CPU clock, for example, we would get games running better on new hardware, and that feels bad. The point of choosing consoles is that you don't have to worry about upgrades. You have the best technology available on consoles for at least 4-5 years.

Feel bad? Those who feel bad would upgrade... Those who don't would stick with what they have until upgrading becomes more appealing, and their 'old' console would still be able to play all the latest games, just not with all the bells and whistles...

The thought of upgrading doesn't weigh into people's decision to buy a console... Hell, PS3s and 360s are still selling today, despite more powerful options being available...

People buy what they can afford, and consider things like the software catalog.

With the current system, consoles are behind the technology curve the moment they are launched... If being at the front of the curve was a console gamer's main concern, they'd be on PCs instead...

For the record, I think yearly would be too rapid for consoles. But 2-3 years would be fine...
 
It defeats the advantages of console game development: a single architecture, a focused spec, and simplicity. Going yearly would remove those advantages.
 
Consoles are about having a simple, consistent experience. You go to your friend's house, your uncle's, your neighbor's, and their Xboxes all play the same game the same way. When you go to the store to buy the latest and greatest video game, you don't have to ask if it's compatible with the Xbox you bought four years ago, because it is, and always will be. Your mom/dad/grandmother can walk into a GameStop and buy a game for your Xbox One, and they don't have to worry about the hardware. It just works.

Then there's the very large gulf between iOS/Android development and AAA console development. The differences are massive. Console game development is a whole order of magnitude larger in every sense. Having multiple SKUs makes things even more difficult for developers. Do you attempt to squeeze every bit of performance out of the newest model, or settle for lowered graphical fidelity on the oldest model? The difference in the amount of work between these two things is also an order of magnitude larger than iOS/Android development. This is why the N64 RAM pack and the Xbox Kinect failed. Developers had to make assumptions about the kind of hardware their users were going to have, so they settled on the lowest common denominator, ensuring that every owner of the platform could play their game, even if they didn't have the extras.
 
What would be the advantage of a new version every year?
Software already gets upgraded with Firmware versions.
Tech advancement has slowed down in the last 10 years and is getting slower and slower.
What brings the big jumps in power is the shrinking of the manufacturing process, but the shrinking gets harder and harder.
Look at the PS4: it has an APU manufactured at 28nm and it launched in Nov. 2013.
The first GPUs at 28nm launched in Jan. 2012, and GPUs are still produced at 28nm.
So launching it at the end of 2012 would have meant 28nm, and launching it one, two or three years later still means 28nm.
So the power increase has to come solely from new CPU or GPU generations, and those don't happen every year and/or aren't that big.
How do you upgrade the Jaguar cores in the PS4, for example?
In 2012 you would have Bobcat, in 2013 Jaguar, and in 2014-15 Puma.
These are tiny increases, let's say 10-20% per year.
Introducing a new console version for 10-20% more power seems ridiculous to me.
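To put rough numbers on that (a quick back-of-the-envelope sketch; the 10-20% per-year figure is from above, the generational multiple is my own ballpark):

```python
# Compound a 10-20% yearly CPU uplift and compare it to a typical
# generational leap. Figures are illustrative, not measured.

yearly_low, yearly_high = 1.10, 1.20

for years in (1, 2, 3):
    print(f"after {years} year(s): "
          f"{yearly_low ** years:.2f}x to {yearly_high ** years:.2f}x")

# after 1 year(s): 1.10x to 1.20x
# after 2 year(s): 1.21x to 1.44x
# after 3 year(s): 1.33x to 1.73x
#
# Versus the roughly order-of-magnitude jump of a traditional console
# generation, a 1.1-1.2x yearly bump is hard to sell as a "new console".
```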
 
Except it wouldn't be obsolete... Are iPhone 5s obsolete the moment the iPhone 6 comes out? Most people wait two generations to upgrade... And the software works across generations, and hardware configurations, so long as they are running the same OS version...

So if a game is developed for XB1 gen 1, it will still run on XB1 gen 2, because both consoles will be running the same OS... and when the game is run on the gen 2 hardware, it could take advantage of the extra horsepower.

The console wouldn't become obsolete until it stops getting OS updates due to hardware limitations.

Yes. It would. Because I'm releasing a game that must compete with other games graphically. Mobile games don't have the same stringent pixel-fucking reviews that console games receive. If two competing companies release similar products simultaneously and one product was built on Gen1 hardware specifications and the other was built on Gen2 or Gen3 hardware specifications, one of them is going to look and run better than the other, and the lesser one will get destroyed by Digital Foundry and the like for it.

Also, if I build a game for Gen3 hardware, and the end-user has Gen1 what then? In current console development, that literally wouldn't work because of how we tend to code/optimize to the hardware specifically. So now we have to build an optimized SKU for each Gen of available hardware and development cost has just skyrocketed again.
 
1. Consoles are aimed at kids, not people with extra money to spend hundreds of dollars on a new machine every year.
2. Consoles generally get more profitable to sell as time goes on, I think, as costs come down.
3. It already takes time for devs to learn how to fully utilize systems.
4. People will be super annoyed if, every year, their games don't all work on their new console; that's the kind of thing you have to stretch out to once a generation, or do backwards compatibility, which is a whole other thing.
5. Splitting the online userbase to unfavorable levels.
You're serious, aren't you?
 
Because it's a stupid idea. People already complain that a new console comes out "every year" when it's actually every 5-7 years; can you imagine if consoles really did come out every year?
Can you imagine your son asking for the latest console every year?
Consoles aren't phones.

I actually heard a couple of people complain when the PS4/XBone were announced that Sony/MS were only creating new consoles to get you to give them more money (because new games wouldn't work on existing consoles). They were talking about 7-year-old consoles! I can't even imagine if there were:
PS4.1.00
PS4.1.34
PS4.2.00
PS4.2.00.323

NOTE:
These were non-gamer parents that I unfortunately have to listen to because I'm also a parent and sometimes have to be around annoying people. But these people are a huge part of the console market.
 
It's not even a matter of want. They simply cannot. The OP really has very little concept of professional game development to even entertain the notion.

The refresh only works on phones and TVs because (a) they are not game-focused devices, (b) almost no games on them actually take full advantage of the hardware, or (c) the market/competition for high-end fidelity is nonexistent.

And before anyone even says "what about PC" I will say BUGS! The amount of bugs in PC titles due to the modular hardware/software paradigm is insane. Sony, Nintendo and MS would never allow a game to ship at the atrocious levels that a majority of PC games ship at.

Consoles are a thing of beauty that it seems many do not understand. They are a level playing field for gamers, developers, publishers and consumers. People will write songs of sorrow when they are gone.



Halo MCC, Driveclub, AC Unity, SFIV Ultra, Battlefield 4... and the list goes on. PC-specific bugs are the exception, not the norm.
 
This is a point I always bring up when people complain about consoles. The majority don't mind paying $500+ every year for that same phone with a new number by its name.

Gamers complain about having to pay more than $399 for a system every 5-10 years. Gamers are cheap and like to complain. I would welcome a real future-proof $800 console that would hold up better than what we currently have, but again, gamers are cheap and would complain so much about that while paying $500 for that 'new' phone. Not every gamer, of course.
 
It would take away the remaining simplicity of consoles, both for developers and consumers; it would also bring up absurd amounts of compatibility issues, similar to PCs and phones, and would benefit no one but the hardware manufacturers - who make most of their money off their games and licensing anyway. Also, it would take away those awesome generational shifts, which are really a nice thing imo. I'm completely opposed to this, and it annoys me that Nintendo is doing something similar with their handhelds already.
 
I think comparing consoles to phones in some regards is a bit pointless. The audience and range of smartphones, regardless of brand, is always going to be larger than that of consoles. When people buy smartphones or computers, they buy them with multiple functions in mind.

Sure, over the past few gens consoles have served multiple functions outside of gaming, but who is going to buy a console for the sake of having it as a DVD/Blu-ray player? Nobody has really done that since the PS2 era.

That aside though, there doesn't seem to be much demand from those who use consoles to constantly update and refresh what they have. As long as it works and functions as the user wants then there is no point or purpose to dump more money into improving it.

Phones and PCs are always going to have more of a societal demand for upgrades and refreshes to keep up with new tech and what have you because of their multi-purpose functionality.
 
Devs get better at utilizing a console's tech over its lifetime. It doesn't happen in a year.

Graphics on my old HD 4870 became a lot better between 2009 and 2012 as well (BF3 looked a gen ahead of most 2009 stuff).

It's not unique to consoles and has a lot to do with developers discovering new rendering techniques.

e.g. on PC, ambient occlusion has gone through like 5 different iterations over the years: the old, primitive, kinda ugly SSAO method vs. the now visually VASTLY superior yet similar-performance-impact HBAO+.
You get much improved graphics in this case, and it has nothing to do with "unlocking the power of the hardware" or some shite.

Just a better method or new method to do old/new things that they didn't think of before.

Other examples:
SMAA and MFAA >>>> FXAA/MLAA and MSAA, while being more efficient.

If you really think that some marginal (relative to pure hardware gains) improvements in performance through drivers over the years (which still happen on PC as well, btw; AMD GCN drivers and the performance of old GCN cards have improved by a good percentage over the years) are worth prioritising over simply being able to put twice-as-powerful hardware in a box every two years, then you are very mistaken.

No amount of getting familiar with Cell on a PS3 bought in 2006 ever let it approach what was done on a 4x more powerful budget GPU from 2009 with 5x more memory bandwidth.

That's the thing with hardware, it evolves fast (still does in GPU land even today).

So say by 2018 talented devs like ND can maybe get 30 percent more performance out of that Jaguar + castrated HD 7850 + 176GB/sec shared memory bandwidth, so that games run as well on that 1.8TF GPU as they would on a 2.5TF GPU that is new to them.

Who fucking cares at that point, when hardware moved on in 2016 to GPUs with 16TF and 1TB/s memory bandwidth, and in 2018 budget GPUs will probably have well over 10TF of performance.

That 0.7TF-equivalent worth of efficiency doesn't register anymore on the new scale.
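Spelling out the arithmetic behind that (a rough sketch; the TFLOPS figures and the 30% gain are this post's claims, not benchmarks):

```python
# The post's figures made explicit: a 1.8 TFLOPS console GPU plus ~30%
# efficiency from years of familiarity roughly matches a 2.5 TFLOPS GPU
# that a dev hasn't yet mastered.

console_tflops = 1.8
familiarity_gain = 0.30  # hard-won over several years

effective = console_tflops * (1 + familiarity_gain)
print(f"effective throughput: ~{effective:.2f} TFLOPS")    # ~2.34, call it ~2.5

# Against the 16 TFLOPS high-end / 10+ TFLOPS budget cards the post
# projects for 2016-2018, that ~0.5-0.7 TFLOPS of hard-won efficiency
# barely registers:
print(f"share of a 16 TFLOPS card: {effective / 16:.0%}")  # ~15%
```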

If or when GPU makers hit a wall where performance/watt doesn't double every 2 years but stays stagnant, then maybe we can start caring about incremental efficiency increases gained through familiarity with old hardware over the course of 4-8 years... for now, and from the perspective of anyone who isn't stuck on the same ancient hardware for 8 years? HAHA
 
It's not even a matter of want. They simply cannot. The OP really has very little concept of professional game development to even entertain the notion.

The refresh only works on phones and TVs because (a) they are not game-focused devices, (b) almost no games on them actually take full advantage of the hardware, or (c) the market/competition for high-end fidelity is nonexistent.

And before anyone even says "what about PC" I will say BUGS! The amount of bugs in PC titles due to the modular hardware/software paradigm is insane. Sony, Nintendo and MS would never allow a game to ship at the atrocious levels that a majority of PC games ship at.

Consoles are a thing of beauty that it seems many do not understand. They are a level playing field for gamers, developers, publishers and consumers. People will write songs of sorrow when they are gone.

That was true 10 years ago... but right now it's changing... A LOT!
 
There's a lot of speculation in regards to how "simple" this task would be for developers, and that's mostly coming from my side. But until a game developer comments, I don't think we can necessarily assume it's overly difficult either. It's not in the same league as a full port.

If you gave a developer a single vendor for CPU and GPU, and every other component and said "Hey, we're just going to be bumping this up every year without drastic changes," this scenario feels a lot like giving a development team consistent PC hardware. If you remove the complexity of mixing parts, and you're targeting a certain graphics API version, say DX11 for multiple years, I'm not seeing how this becomes outrageously difficult.

Yes, the libraries and APIs all move forward, but they don't change all that drastically most years (security usually does, for example), and you aren't forced to use the new one in most cases.

Think of a major desktop application (or a game). Often, little to no work goes into supporting the new Windows, Linux, or Mac release. A lot of work only goes in if they are trying to take advantage of a new feature, which is exactly what happens with a console over time. They don't use the "future" features of the hardware at the beginning, and start to use them later. Drivers and GPUs are the main hurdle, as there are many combinations. If you lock it down to two or three, would this not remove most of the work?
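Concretely, something like this is what I have in mind (a minimal sketch; all the names, presets, and the detection step are hypothetical, just to illustrate targeting a few locked-down configs instead of arbitrary PC part combinations):

```python
# Hypothetical sketch: with 2-3 known hardware revisions, a game can
# pick a quality preset per revision instead of probing unknown parts.

QUALITY_PRESETS = {
    # gen id -> (resolution, target fps, shadow quality)
    "gen1": ((1600, 900), 30, "medium"),
    "gen2": ((1920, 1080), 30, "high"),
    "gen3": ((1920, 1080), 60, "high"),
}

def select_preset(detected_gen: str):
    """Fall back to the weakest known config if detection fails."""
    return QUALITY_PRESETS.get(detected_gen, QUALITY_PRESETS["gen1"])

resolution, fps, shadows = select_preset("gen2")
print(resolution, fps, shadows)  # (1920, 1080) 30 high
```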

The software and OS already evolve over a console's life for developer kits. I'm talking about adding a spec bump on each iteration. I don't think this would be nearly as much of a technical headache as a lot of people are thinking.

Business model, audience size, desire, and pricing seem to be the major factors in calling this either a good or bad idea. Which I agree is a lot to lock down, but that's why I posed the question in the first place.
 
Well, people who need a refresh every year can simply purchase the limited edition consoles that keep coming out. Taking the PS4 as an example, you can now choose a black, white, 20th Anniversary, Taken King, Batman, Star Wars, or MGS edition (I'm sure I'm forgetting some). And the hardware sometimes changes: 500GB or 1TB drive, etc.

So in a way you think they don't do it, but they do change their machine once in a while, and you have the opportunity to follow. Yeah, it's not the same thing as phones, which change performance every year, but anyway, it's a way of changing.

And I would be pissed to have to buy a new PS4 with higher specs every 2 years. In 2012 I bought an iPad 2, and now look: plenty of games are unplayable on my machine. Nintendo made this strange move of releasing the New 3DS, to play what, one game that requires the upgrade if you want to play it???? One damn game.

No, never would I jump to upgrade my system like PC gamers do, constantly upgrading hardware to get the most out of their games. I was once a PC gamer who spent crazy amounts of money, and I decided to pull the plug. My $400 PS4 is good to go for a good 6-7 years.
 
Yes. It would. Because I'm releasing a game that must compete with other games graphically. Mobile games don't have the same stringent pixel-fucking reviews that console games receive. If two competing companies release similar products simultaneously and one product was built on Gen1 hardware specifications and the other was built on Gen2 or Gen3 hardware specifications, one of them is going to look and run better than the other, and the lesser one will get destroyed by Digital Foundry and the like for it.

Also, if I build a game for Gen3 hardware, and the end-user has Gen1 what then? In current console development, that literally wouldn't work because of how we tend to code/optimize to the hardware specifically. So now we have to build an optimized SKU for each Gen of available hardware and development cost has just skyrocketed again.

Why wouldn't your game be able to compete graphically? Look at PC games... They have min spec requirements, and they have ultra settings... Depending on your hardware, your game will run somewhere in that range... This would be no different, except instead of a wide variety of hardware configurations, a dev would have just two to consider... So it's easier than making a PC game...

As far as consoles are concerned, it would be akin to a cross-gen game... Except, instead of dealing with vastly different architectures, you're talking about very similar architectures, with upgraded hardware... So it will be easier than making a cross-platform console game...

As a developer, you have the added benefit of not having to worry about the low installed base when new consoles first launch, because people with both generations of hardware can play the game.
 
The funny thing is that companies like Apple are starting to move away from yearly iterations for products they see as less disposable, like iPads, where a release might have only one improvement (iPad mini 2 to iPad mini 3), or where they give a model a longer run (iPad Air 2).

So even spaces where people used to see yearly revisions will start seeing longer time spans. As technology starts to plateau in smartphones and tablets, expect to see fewer changes year after year, and phone revisions becoming more like cars: the nameplate gets changed with minor fixes inside, and they do huge platform changes every 4-5 years.
 
Because if I wanted that I'd buy a PC. I like having a baseline for 5-7 years before we upgrade to something new and more impressive. The sort of gradual upgrades you want are fine if you just have to have them, but that's why PCs are modular. Investing $40/mo. for two years sounds like a nightmare.
 
Why wouldn't your game be able to compete graphically? Look at PC games... They have min spec requirements, and they have ultra settings... Depending on your hardware, your game will run somewhere in that range... This would be no different, except instead of a wide variety of hardware configurations, you'd have just two to consider... So it's easier than making a PC game...

As far as consoles are concerned, it would be akin to a cross-gen game... Except instead of dealing with vastly different architectures, you're talking about very similar architectures, with upgraded hardware... So it will be easier than making a cross-platform console game...

As a developer, you have the added benefit of not having to worry about the low installed base when new consoles first launch, because people with both generations of hardware can play the game.

In the case of PC games, we target the highest-end hardware and then optimize down from there. Or sometimes we optimize the low-end first to maximize our available customer-base. And when new PC hardware releases, we haven't actually accounted for that at all, that hardware simply runs the game at higher performance. Because in PC development, performance benchmarks are put on the end-user and their hardware. In console development, the performance benchmarks are on the developer. You can't upgrade your console in order to get better performance, so we must ensure that the game runs extremely well on that particular hardware spec.

We're also coding to an API, not the hardware. PC games are run nearly exclusively on middle-ware solutions to mitigate the issue of unlimited configurations. And even those fail frequently. So, we're not actually accounting for unlimited configurations in PC development, we're targeting one specification (our test machine) and pretty much crossing our fingers that our middle-ware and APIs handle the rest. This means that multiple hardware configurations for consoles aren't actually easier than unlimited configurations for PC. Unless we start making all our console development process identical to PC and code directly to APIs and middle-ware solutions - which will actually cause a significant performance hit to console games OR require the consoles aim for the absolute high-end hardware each year and drive up the price of entry. Because now the console is essentially a high-end PC that can't be modified outside the yearly hardware update.

It's not akin to cross-gen games. Because in a cross-gen solution, you have two different SKUs targeting two very different markets. Let's say we're making a cross-gen PS3/PS4 game versus a Gen1 PS4/Gen2 PS4 game. In the first example, our target audience is 80m users + 20m users. In the second, it's 10m users + 10m users... for about the same amount of work (although, not really in this example because of the Cell processor being an asshole). We're also targeting casual gamers who remained in the last gen and hardcore gamers that upgraded to the new hardware in the first two years. (In many cases, the previous gen release is to mitigate the small customer base of the next gen.) In the second example, we're targeting mostly hardcore gamers who upgraded to the new hardware in the first year and more hardcore gamers that upgraded to new hardware in the second year. Only about a 1/4 of your available audience has actually upgraded in the first 2 years (the other 3/4 are still happily playing last gen hardware), so as your potential SKUs for PS4 grow, Gen1, Gen2, Gen3, etc. each Gen is actually fracturing your potential customer base smaller and smaller (per individual Gen) while requiring more and more development time for, at the bare minimum, optimizations, or more realistically an entirely different SKU.
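To put that fracturing in plain numbers (a trivial sketch; the user counts are the illustrative figures from the paragraph above, not sales data):

```python
# Addressable audience per release model, using the example figures
# from the paragraph above (illustrative, not real sales data).

cross_gen = {"PS3": 80_000_000, "PS4": 20_000_000}            # cross-gen release
intra_gen = {"PS4 Gen1": 10_000_000, "PS4 Gen2": 10_000_000}  # Gen1/Gen2 release

print(f"cross-gen reachable: {sum(cross_gen.values()):,}")    # 100,000,000
print(f"intra-gen reachable: {sum(intra_gen.values()):,}")    #  20,000,000

# Similar optimization workload either way, but each extra Gen SKU
# slices a smaller pie thinner instead of growing it.
```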

TLDR; You do not want the PC development process in console gaming. It would only hinder performance and/or drive up prices.
 
Graphics on my old HD 4870 became a lot better between 2009 and 2012 as well (BF3 looked a gen ahead of most 2009 stuff).

It's not unique to consoles and has a lot to do with developers discovering new rendering techniques.

e.g. on PC, ambient occlusion has gone through like 5 different iterations over the years: the old, primitive, kinda ugly SSAO method vs. the now visually VASTLY superior yet similar-performance-impact HBAO+.
You get much improved graphics in this case, and it has nothing to do with "unlocking the power of the hardware" or some shite.

Just a better method or new method to do old/new things that they didn't think of before.

Other examples:
SMAA and MFAA >>>> FXAA/MLAA and MSAA, while being more efficient.

This cannot be stressed enough. Remember the switch to deferred rendering? REMEMBER!
 
The current 5-8 year cycle is awful. You build up a certain userbase, then you have to give it all up. A continuous platform based on yearly refreshes would give developers a relatively stable userbase to target at any point in time, unlike the current system, where it's a tremendous risk to release anything at the beginning or end of a console's lifecycle. On top of that, games are content-light and unpolished early in console lifecycles.
 
In the case of PC games, we target the highest-end hardware and then optimize down from there. Or sometimes we optimize the low-end first to maximize our available customer-base. And when new PC hardware releases, we haven't actually accounted for that at all, that hardware simply runs the game at higher performance. Because in PC development, performance benchmarks are put on the end-user and their hardware. In console development, the performance benchmarks are on the developer. You can't upgrade your console in order to get better performance, so we must ensure that the game runs extremely well on that particular hardware spec.

Currently this is the case, but we are talking about a scenario where the user CAN upgrade their console, by purchasing a later version of it. We're talking about a shift in the way consoles are distributed, but you are arguing based on the classic approach. Presumably, if consoles were on a more rapid upgrade cycle, developers would 'target' one gen of the console (based on whatever criteria make sense at the time) then optimize up or down, just like they do on PC... In this case, the benchmark would be on the user, who chooses which SKU they want to use.


We're also coding to an API, not the hardware. PC games are run nearly exclusively on middle-ware solutions to mitigate the issue of unlimited configurations.
And even those fail frequently. So, we're not actually accounting for unlimited configurations in PC development, we're targeting one specification (our test machine) and pretty much crossing our fingers that our middle-ware and APIs handle the rest. This means that multiple hardware configurations for consoles aren't actually easier than unlimited configurations for PC. Unless we start making all our console development process identical to PC and code directly to APIs and middle-ware solutions - which will actually cause a significant performance hit to console games OR require the consoles aim for the absolute high-end hardware each year and drive up the price of entry. Because now the console is essentially a high-end PC that can't be modified outside the yearly hardware update.

But the differences between your gen1 console and gen2 are going to be small in terms of architecture... The SDK will be the same or very similar across generations... It won't be akin to developing for PC in that regard. You target the one machine that makes the most sense in the upgrade cycle, and optimize up or down to the exact specs of the other machine...

It's not akin to cross-gen games. Because in a cross-gen solution, you have two different SKUs targeting two very different markets. Let's say we're making a cross-gen PS3/PS4 game versus a Gen1 PS4/Gen2 PS4 game. In the first example, our target audience is 80m users + 20m users. In the second, it's 10m users + 10m users... for about the same amount of work (although, not really in this example because of the Cell processor being an asshole). We're also targeting casual gamers who remained in the last gen and hardcore gamers that upgraded to the new hardware in the first two years. (In many cases, the previous gen release is to mitigate the small customer base of the next gen.) In the second example, we're targeting mostly hardcore gamers who upgraded to the new hardware in the first year and more hardcore gamers that upgraded to new hardware in the second year. Only about a 1/4 of your available audience has actually upgraded in the first 2 years (the other 3/4 are still happily playing last gen hardware), so as your potential SKUs for PS4 grow, Gen1, Gen2, Gen3, etc. each Gen is actually fracturing your potential customer base smaller and smaller (per individual Gen) while requiring more and more development time for, at the bare minimum, optimizations, or more realistically an entirely different SKU.

Your math here is using an upgrade cycle rate that is faster than what I think is plausible... On a 1-year cycle, yeah, you're right, the fracturing would shrink your customer base... But on a 3-4 year cycle, when gen1 is ramping up sales among the more casual, gen2 is launching for the hardcore. There will be at least 40 million PS4s by the end of 2016; you could launch PS4 v2 and have an 'old gen' installed base that is still actively buying software... And a 'new gen' who trade up for the latest bells and whistles... This would be unlike what happened between PS3 and PS4 after an 8-year gen, when devs put all this effort into cross-gen games to target the 80 million base, but old-gen users had stopped buying games... It was a waste.

TLDR; You do not want the PC development process in console gaming. It would only hinder performance and/or drive up prices.

You wouldn't have the PC process... You also wouldn't have the classic console process either... You'd have something in between... It wouldn't have all the guesswork of PC development, but it would allow users more control over their experience than the console process does...
 
Presumably, if consoles were on a more rapid upgrade cycle, developers would 'target' one gen of the console (based on whatever criteria make sense at the time) then optimize up or down, just like they do on PC... In this case, the benchmark would be on the user, who chooses which SKU they want to use.

But the differences between your gen1 console and gen2 are going to be small in terms of architecture... The SDK will be the same or very similar across generations... It won't be akin to developing for PC in that regard. You target the one machine that makes the most sense in the upgrade cycle, and optimize up or down to the exact specs of the other machine...

So, like I said, either we're developing to the hardware or we're developing to the API/middle-ware. If we're going to continue to develop to the hardware, we'd need to always target the first Gen, and then subsequent Gens will simply have marginally increased performance. If we're going to transition to a PC-esque process and develop to the API, then the consoles need to be way more powerful to maintain quality performance across all Gens - driving up their price. You also have to consider that eventually we'll need a new set of Gens anyway, either because of architecture breakthroughs or hardware growth. There are already PC games, not really all that old, that cannot run - at all - on old hardware. So we'd still have that hard break-point in the cycle.

If the hardware differences between Gen1 and Gen2 are small, then you've lost a lot of the point to doing this in the first place. Consoles aren't smartphones. The long life cycle is as much to do with consumer interest and demand as it does with anything else. Mass-market gamers aren't likely to upgrade a $800+ box every year just to play video games, so we're likely targeting a much smaller customer base with Gen upgrades from a hardware manufacturer perspective. And the profit margins from hardware are already minuscule at best.

Your math here is using an upgrade cycle rate that is faster than what I think is plausible... On a 1-year cycle, yeah, you're right, the fracturing would shrink your customer base... But on a 3-4 year cycle, when gen1 is ramping up sales among the more casual, gen2 is launching for the hardcore. There will be at least 40 million PS4s by the end of 2016; you could launch PS4 v2 and have an 'old gen' installed base that is still actively buying software... And a 'new gen' who trade up for the latest bells and whistles...

Well, yes, that is what the OP suggested - to mirror the yearly updates that smartphones currently employ (and ironically are moving away from for many of the reasons listed in this thread).

However, you do bring up a good point: a 4-year cycle with full backwards compatibility would probably work. That duration is probably long enough for early adopters to want to upgrade, for previous-generation hold-overs to move to Gen1 hardware, and for game studios to finish a development life cycle within the life-span of the hardware. That said, I feel like this would run into a scenario where Gen2 games don't work on Gen1 hardware, or run incredibly poorly on it. And now you have this weird, confusing state for the end user - see: Nintendo. Your casual customers don't understand the differences between Gen1 software and Gen2 software, or why Gen2 doesn't work, or runs and looks poor, on their Gen1 system.

There's also something to be said about homogenization of innovation as well. If your next Gen needs to be roughly the same as last Gen, hardware manufacturers take fewer and fewer risks and make fewer and fewer changes between Gens with the exception of bumping up the output of the CPU and GPU a bit. This plan actually races to the console singularity rather than the slow shuffle we have now. If that's what you want, great, but I'm not sure everyone would be on board with that.
 
^ This poster knows what they're talking about. Anyone thinking that consoles should have a yearly refresh model needs to read these posts.

For those arguing in favor of minor hardware upgrades, this is the appropriate answer:

"If the hardware differences between Gen1 and Gen2 are small, then you've lost a lot of the point to doing this in the first place. Consoles aren't smartphones. The long life cycle is as much to do with consumer interest and demand as it does with anything else. Mass-market gamers aren't likely to upgrade a $800+ box every year just to play video games, so we're likely targeting a much smaller customer base with Gen upgrades from a hardware manufacturer perspective. And the profit margins from hardware are already minuscule at best."

Minor hardware upgrades also don't guarantee significant performance gains. APIs and drivers are also a big part of the equation, and improving hardware without those makes the extra hardware largely pointless. You need them to make it worth having the extra power, otherwise you're not really gaining anything. The best solution really is a ~4ish year cycle, with backwards compatibility for the previous generation.
 
Sad to see people defending the cancerous "Refresh every year" plan nowadays, planned obsolescence seems ok now. Seriously just use your brain and stop acting like a puppet.
 
Well, I disagree : D

Faster models of the same chip only mean lower frame times and better performance.
You don't need extra optimization if the bits run through the exact same paths in the chips.

Could you change your code to get better results given faster hardware? Maybe.
Do you need to, if it's already optimized for the weakest model? No.

And "get the new one" as the reason people buy consoles? What?
No, people buy consoles not for being new, but for being more powerful and for their exclusive games. Power is something everyone wants.

I don't think this model would break anything. Just give a little more to the ones who are willing to pay a little more. And that's not a problem if games are developed with the weakest model in mind.

If a game is not optimized for the weakest, it won't be for the faster one either, and the difference in performance won't be anything to envy.

No, you're way off on the big picture.

You can't just optimize for the weakest and say "Faster models of the same chip only mean lower frame times and better performance."

What exactly does that mean? What exactly do you think is going to 'get better' when the games play the later models? "Better performance"? Well, that sucks, because that means the game which was optimized for the weakest machine doesn't play well on the weakest machine and needs 'more power' to run right.

If the game is truly optimized to run on the weakest machine, and you don't optimize for the 'better machines', there's just no gain. There's no magic 'it just runs better!' -unless- it was running crappy before.. ie, yes, a choppy 20fps might now run at a solid 30fps... but if the game was properly optimized on the weakest machine to lock in 30fps, what is the faster, optimized model supposed to automagically do w/o model specific optimization?

All of the important stuff - textures, draw distance, number of enemies on screen, explosions and smoke, etc. - would need to be -optimized- for each model for you to see any appreciable benefit per iteration, and that's just hellish on developers. It would turn the beauty of console design (one target) into an absolute nightmare.
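Quick frame-time math makes the point (illustrative numbers, not from any real game; the 30fps lock is the example above):

```python
# A game locked to 30fps on the weakest model leaves faster models
# idling at vsync unless someone does per-model optimization work.

VSYNC_BUDGET_MS = 1000.0 / 30   # 33.3 ms per displayed frame
gen1_frame_ms = 32.0            # weakest model: just under budget

for name, speedup in [("gen1", 1.0), ("gen2", 1.2), ("gen3", 1.5)]:
    work_ms = gen1_frame_ms / speedup
    idle_ms = VSYNC_BUDGET_MS - work_ms
    print(f"{name}: renders in {work_ms:.1f} ms, idles {idle_ms:.1f} ms")

# gen1: renders in 32.0 ms, idles  1.3 ms
# gen2: renders in 26.7 ms, idles  6.7 ms
# gen3: renders in 21.3 ms, idles 12.0 ms
#
# The headroom buys nothing visible on its own: spending it on textures,
# draw distance, or 60fps requires a separate optimization pass per model.
```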

I'll repeat, you are creating an absolute nightmare situation.

These are the options:

1) A dev optimizes for the weakest machine. Better models play the game the same way, making 'new model' owners angry that the model from 3 years ago is 'holding them back'.

2) A dev optimizes somewhere in the middle. New owners are still angry, and now 'older' owners are angry new games run like shit.

3) A dev optimizes for the newest model, and the back of the box warns that any other model may have issues... yeah, that's not going to work.

4) A dev optimizes for all devices differently. Yeah. Right. Game now in development hell, aiming for 6 models of PS5 and 6 models of XB2, etc.

Two huge reasons consumers buy consoles are because they are:
1) futureproof for a gen (5-10 years)
2) plug and play -- plug it in, attach it to a tv, and anything they buy off the shelf "just works"

Iterative console releases would destroy both 1 AND 2. It would be a nightmare. PCs already cover this market, better, and those buying into that market know what they are getting.

I understand the desire for people heavily into gaming to have their console be super high tech... but that isn't the predominant console market. Not everyone is buying on launch day, expecting a $1000 rig in a $400 body. The large, important base of console buyers are waiting years to actually pick one up as it is - they aren't interested in having the -top of the line technology-, they want the -top of the line console-. Do you see the difference? The PS4 and XB1 will represent 'the best' for most people for years to come, until the PS5/XB2 are released. It doesn't matter that by then it's 5-10 year old tech that wasn't even particularly cutting edge at release... that is simply unimportant to the majority of the market.

This idea that you can just optimize for the weakest and every other model will just make games 'better' is fallacious and wrong. There is no magic like that. All you'd be doing is pissing off older unit owners when games run like shit, and pissing off new owners because everyone else's old models are holding games back.
 
Sad to see people defending the cancerous "Refresh every year" plan nowadays, planned obsolescence seems ok now. Seriously just use your brain and stop acting like a puppet.

Well said. Plus, OP, if you really feel like you need to be on top of the technology every year, quit consoles and go full PC.
 
I bought a console because it's a fixed platform. I don't have to provide tech support to the family; we just put in a disc / launch a game and play.
If there are variables in there, it all falls apart.
 
This is a point I always bring up when people complain about consoles. The majority don't mind paying $500+ every year for that same phone with a new number by its name.

Gamers complain about having to pay more than $399 for a system every 5-10 years. Gamers are cheap and like to complain. I would welcome a real future-proof $800 console that would hold up better than what we currently have, but again, gamers are cheap and would complain so much about that while paying $500 for that 'new' phone. Not every gamer, of course.

The 'majority' of people who buy smartphones are not upgrading to the latest model (cost: nearly £600 for the latest Apple handset here in the UK) every single year, and they would certainly mind if forced to do so. Just because phone companies release new models each year doesn't mean the entire customer base is on an annual upgrade cycle.

The idea of what is 'cheap' varies from individual to individual, let alone country to country; it's a bit rich to call people 'cheap' and 'complainers' for expecting expensive pieces of technology to have a lifespan measured in years rather than months. I bought mine about 5 years ago without a contract, keep it in good nick with a case, swap out the battery every couple of years, and have no interest in upgrading it until it dies, as the incremental improvements don't make a £600 improvement to its core function as a telephone for me.

I imagine the tech enthusiasts with a high disposable income who feel that they need the latest new phone every year are more likely to be the type of customers who would be interested in incremental console upgrades too, rather than the ones complaining about it.
 