
Xbox One | Understanding Microsoft's Cloud claims | Tech panel and ArsTech article

They have not demonstrated this for any game yet. I'd have to wait and see, but I am extremely doubtful to say the least.

They haven't demonstrated a game yet period...

Let MSFT prove it at E3 and beyond. If they don't, you call them out and gloat. If they do, you eat crow (with about half the other GAF members). The overreaction on this forum, given the current (non-)information, is honestly crazy.
 

lockload

Member
So this has a shred of truth potentially?

Honestly, to a layman it seems like nonsense.

This is already happening in the business world.

We dynamically offload work to cloud servers based on what the work is and the current load. The key is having the unit of work defined so that any server can process it. Once you've got that, it's more about deciding when to push that work to the cloud (toy sketch at the end of this post).

This is already happening - just not for videogames yet.

The key is that it isn't a bank of Xbox Ones in the cloud; it's super-fast servers that can compute the data being sent. Still, I get the feeling that's a longer-term aspiration as home networks get faster. But for a game that's online-multiplayer only, it could happen earlier, since if you lost your internet connection you couldn't play that game anyway.
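
Roughly what I mean, as a toy Python sketch - every name and threshold here is invented, not anyone's real API:

```python
# Toy model of a self-contained "unit of work" that any server could process.
class WorkUnit:
    def __init__(self, payload, deadline_ms, cloud_speedup):
        self.payload = payload              # everything needed to run it anywhere
        self.deadline_ms = deadline_ms      # how soon the result is needed
        self.cloud_speedup = cloud_speedup  # guessed cloud-vs-local speed ratio

def should_offload(unit, est_local_ms, round_trip_ms):
    """Push work to the cloud only if the round trip still beats local compute."""
    est_cloud_ms = est_local_ms / unit.cloud_speedup + round_trip_ms
    if est_cloud_ms > unit.deadline_ms:     # too slow to matter once it's back
        return False
    return est_cloud_ms < est_local_ms

# Example: an AI planning job that isn't needed for ~500 ms.
job = WorkUnit(payload={"npc_ids": [1, 2, 3]}, deadline_ms=500, cloud_speedup=4.0)
print(should_offload(job, est_local_ms=120, round_trip_ms=80))  # True: 110 < 120 ms
```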
 

nib95

Banned
How viable is the claim regarding the cloud acting as dedicated servers for all games? Because that sounds awesome. Something that would make 3rd party games like Call of Duty way better on Xbone potentially.

Not only viable, but probably the main (perhaps only) way this cloud array will actually, realistically benefit the console and its services. Dedicated servers are great for online gaming: less lag, more players, bigger maps, etc.

Question is, at what cost?
 

shinnn

Member
There are absolutely a ton of things that could be offloaded. Like you said, complicated AI determination routines could be offloaded. When you approach an enemy and change direction (or switch weapon, start running, etc.), a split-second delay in its reaction and new plan of action is more human-like than an instant mirrored response, and it could free the CPU to do other tasks (see the sketch below).

The main reason rendering/dynamic tasks are being brought up a lot is that MS reps have brought them up as a way of closing a perceived power gap and leaping forward. Console wars don't have much place in this thread, but it's good to establish context about what kinds of things are truly achievable for anyone reading.
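
Something like this toy Python sketch - the timings, names, everything here is invented just to show the shape of it:

```python
import random
import threading
import time

# The NPC keeps acting on its old plan while a "cloud" planner thinks; a few
# hundred ms of delay before the new plan kicks in reads as natural reaction time.
current_plan = {"action": "patrol"}
lock = threading.Lock()

def cloud_plan_request(world_snapshot):
    """Pretend round trip to a planning server; swap in the new plan when it lands."""
    global current_plan
    time.sleep(random.uniform(0.1, 0.3))        # simulated network + compute latency
    new_plan = {"action": "flank", "target": world_snapshot["player_pos"]}
    with lock:
        current_plan = new_plan

def game_tick(frame):
    with lock:
        plan = dict(current_plan)               # act on whatever plan we have right now
    print(f"frame {frame}: NPC does {plan['action']}")

threading.Thread(target=cloud_plan_request,
                 args=({"player_pos": (10, 4)},), daemon=True).start()
for frame in range(20):                          # ~60 fps loop keeps running regardless
    game_tick(frame)
    time.sleep(1 / 60)
```
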
Could they, for example, generate the Forza 5 crowd on the cloud?
 

mhayze

Member
Same question I keep coming back to. They have been setting their sights on this ever since the leaked PowerPoint, and that thing was dated years ago. And the money... the volume of money they are spending is crazy.

They have been betting big on Azure across the board. Xbone is just one use case. It's more like they're hoping someone finds a good reason to use this stuff in games, because they will be charging money for using cloud resources (eventually), even if they have to give it away to start with. They already have a mountain of servers that they need more people to use.
 
How viable is the claim regarding the cloud acting as dedicated servers for all games? Because that sounds awesome. Something that would make 3rd party games like Call of Duty way better on Xbone potentially.

It's just a matter of how much money they want to spend on servers. If they actually follow through, it could be a really nice feature.
 

USC-fan

Banned
So, on a purely technical level, this isn't totally impossible. As has been pointed out in this thread, the idea is conceptually similar to how client-server multiplayer games already work, where the game state is computed on the server, and then sent to the client every tick. The thing is, the magic that makes that all work is prediction. The information from the server is always a few frames behind, so to hide that latency the client needs to be able to predict the current state of the game based on the old information from the server. This is a fairly well understood problem, but when latency or packet loss get too high, the algorithms break down and you start to see weird artifacts (players suddenly teleport or move backwards, you discover you've actually been dead for three seconds, etc.).

So, let's consider what this approach might look like for the example application we've been told about: lighting. Basically, you end up needing three different lighting systems (a toy sketch of the fallback logic follows the list):

1) There's the fancy all-singing, all-dancing lighting system that runs "in the cloud" and produces gorgeous-looking effects, but is calculating the lights based on information from a frame or two ago.

2) There's the predictive lighting system running on the console. It tries to take the lighting information given to it from the system above, and (as cheaply as possible) figure out how to update it so that it actually looks correct for the frame that's currently being computed. This is potentially a very, very hard problem.

(You can try to minimize the work (2) has to do by moving some of the prediction to (1), but since the server can never know exactly when the information it sends will arrive at the console, (2) can never be fully eliminated).

3) You also need a complete fallback lighting system that can run in real time on the console. This is necessary for when the cloud is unavailable (for whatever reason) and for when something happening in the game totally invalidates the lighting information sent from the server (for example, the player turns on a light in a previously dark room).
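
Putting those three tiers together, a minimal selection sketch (names and thresholds invented by me, not anything MS has described):

```python
import time

MAX_STALENESS_S = 0.25   # beyond this, patching the cloud data forward is hopeless

def choose_lighting(cloud_packet, scene_dirty, now_s):
    if cloud_packet is None:
        return "local_fallback"          # tier 3: cloud unavailable
    if scene_dirty:
        return "local_fallback"          # tier 3: e.g. player flipped a light switch
    if now_s - cloud_packet["computed_at_s"] > MAX_STALENESS_S:
        return "local_fallback"          # tier 3: data too old to correct
    return "cloud_plus_prediction"       # tiers 1+2: fancy data, locally patched up

now = time.monotonic()
packet = {"computed_at_s": now - 0.05, "lightmap": b"..."}
print(choose_lighting(packet, scene_dirty=False, now_s=now))  # cloud_plus_prediction
print(choose_lighting(packet, scene_dirty=True, now_s=now))   # local_fallback
print(choose_lighting(None, scene_dirty=False, now_s=now))    # local_fallback
```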

This is a lot of complexity for what will probably end up being an incremental improvement in lighting quality. It would also cost a whole lot of money to run all those servers.

The other problem with the idea is that it doesn't help differentiate the Xbox from the competition. As cloud computing goes, Azure is an also-ran product and MS is hardly a powerhouse in the field. There's nothing stopping Sony from introducing the same capability for use on the PS4, or for that matter stopping EA or Activision from setting up their own clouds that work cross platform. There's also nothing about this idea that couldn't have been done on current gen consoles. The fact that almost no one has tried it before is somewhat telling.
At some level it's definitely possible, but not the way they are pushing it. It's not going to give you 40x the performance of the X360. It's just silly and misleading at best...

They better show something at E3, or it's just PR...
 

cjcool804

Banned
Even if this works, which I doubt it will, it'll basically turn the Xbox One into the console equivalent of the GameTrailers player. Varying quality depending on factors totally out of your control.

Whereas the PS4'll just have the guts to do it all locally.

I wouldn't be surprised if Microsoft just ends up faking the Cloud effect for first party titles and forces a lower quality mode if the internet connection drops to fool people.

May be the worst post ever.
 

Nafai1123

Banned
As far as I'm concerned, anyone who isn't INCREDIBLY skeptical that this will work effectively or be implemented in games is putting WAY too much faith in MS. It's like making factual arguments about how cool Bigfoot is without ever seeing him/her, or arguing about the existence of God to an atheist (a.k.a. it makes you look like a tool). If MS shows some proof of this and it is indeed impressive, I will fully embrace my dinner of crow, but at this point it sounds like a logistical nightmare that tries to solve a problem that's already been solved (pre-baked lighting/physics).
 
I think I am going to believe the engineers at Microsoft and not random people on GAF.

GAF may not be great with predictions or business acumen, but I think the reason nearly every one of us signed up here is that this place, above all others, cuts through PR bullshit and gives actual news and breakdowns of this industry.


There are definitely people talking out of their ass in this thread on both sides. In fact most of us are. But there are people on this forum who understand and will take the time to explain it more clearly and without the hyperbole of anyone's PR machine.


And for the record, defensive Xbox fans: this isn't the only thing mentioned that needs proof. I think a lot of what Sony was saying about streaming and having a friend take over your game may end up being bullshit as well.

Go take a look at the 360 unveiling. Or the Kinect unveiling. See what they said was going to happen with each, and then look at what actually did.

And, similarly, it's funny how wrong they were about stuff that ended up huge. In the 360 unveiling they sold XBLA as a platform to get girls to play casual games.
 
Sorry that I'm a cynic. I hope it doesn't upset you too much. Maybe you'd want to actually say something about my post other than just taking shots at it?

That's beyond being cynical. Microsoft has nothing to gain from making your game look worse when you lose your connection. I mean, WTF.
 
And for those asking why PC games don't do this, they actually do. Any MMO does stuff like this all the time. Most PC games don't have thousands of dedicated servers at the disposal of everyone playing at any given time (unless they are an MMO of some kind).

MMOs also come with hefty monthly fees or aggressive monetization schemes to pay for those servers. And even with that, they're still often huge financial failures.

And an MMO server is fundamentally different from what Microsoft is suggesting here. A single MMO server is more powerful than your average PC/console, sure, but it's also supporting hundreds of paying players. Microsoft is suggesting that they'll give you a server several times more powerful than your console, just for you. And not to facilitate something gameplay-critical like a huge persistent online world, but just for nicer lighting that by their own design the game must be able to function without.

Even if it's technically possible, it is financially nonsensical.
 

Kunan

Member
Could they, for example, generate the Forza 5 crowd on the cloud?
This may be usable for telling when members of the crowd need to react (and how they should react) to approaching/passing cars, so that the game doesn't need to make these distance and logic checks itself. Send each car's position and current state (spinning out, driving fine, etc.), and have the server know the exact positions of all the people at the track. You could have people react individually instead of as groups (rough sketch at the end of this post).

The main issue with crowds is instanced rendering and the application of the animation transforms. You don't want to be sending animation matrices back to the game for every primary actor. Despite this, the above may be one way of alleviating the stress and creating a more believable experience. Lots of little things like that can really add up, whilst not breaking the offline experience.
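
A rough sketch of that division of labour - everything here is invented for illustration:

```python
import math

# The console uploads car positions; the server side knows every spectator and
# decides who reacts and how, so the console never runs these checks itself.
SPECTATORS = [{"id": i, "pos": (float(i), 2.0)} for i in range(1000)]

def server_crowd_reactions(cars, react_radius=5.0):
    """Runs server-side; returns a small payload of ids + animation choices."""
    reactions = []
    for s in SPECTATORS:
        for car in cars:
            if math.dist(s["pos"], car["pos"]) < react_radius:
                kind = "duck" if car["state"] == "spin_out" else "cheer"
                reactions.append({"id": s["id"], "react": kind})
                break
    return reactions  # note: animation *choices*, not per-bone transforms

cars = [{"pos": (3.0, 0.0), "state": "spin_out"},
        {"pos": (40.0, 0.0), "state": "driving"}]
print(server_crowd_reactions(cars)[:3])
```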
 

Scrooged

Totally wronger about Nintendo's business decisions.
Let's say that this is real and does work. Will the games that have the feature require you to use it? If not... then what's the point? Will you just get a few more FPS if you decide not to use the feature? Most developers target either 30 or 60 FPS, so that doesn't make much sense. This isn't PC gaming.

I just don't understand the use of cloud computing in real time on a closed system. If the game doesn't require it... then why have it? If the game does require it, then you're just getting yourself into SimCity territory. And for what?
 

Swifty

Member
The problem is that the more complex your calculation, the more data there is. This simply won't scale at all. At least with cloud gaming via OnLive or Gaikai, you have a fixed bitmap you need to deliver to the player.

But this. This just doesn't make sense at all. Cloud computing clusters excel at chomping really large data sets. Like, what if you want to get the post-transform of a mesh after doing some nifty deformations on it? The incoming result would be enormous! So cool, your cloud absolutely ate that mesh for breakfast, but ugh, enjoy waiting for it to come back.
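
To put rough numbers on that - my own back-of-envelope, nothing official:

```python
# Shipping a deformed mesh back every frame: position + normal per vertex,
# 3 floats each at 4 bytes. All figures are my guesses, not anyone's spec.
verts = 100_000
bytes_per_vert = (3 + 3) * 4                  # 24 bytes
frame_bytes = verts * bytes_per_vert          # 2.4 MB per frame
mbit_per_s = frame_bytes * 8 * 30 / 1e6       # sustained, at 30 fps
print(f"{frame_bytes / 1e6:.1f} MB/frame -> {mbit_per_s:.0f} Mbit/s")  # ~576 Mbit/s
```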

Don't get me started on how annoying this will be to debug and test consistently. I highly doubt any 3rd parties are going to bother doing anything serious with this.
 
This would be great, but I honestly can't see it happening in the next 2 or 3 years. The number of servers they need is immense.

The problem is that the more complex your calculation, the more data there is. This simply won't scale at all. At least with cloud gaming via OnLive or Gaikai, you have a fixed bitmap you need to deliver to the player.

But this. This just doesn't make sense at all. Cloud computing clusters excel at chomping really large data sets. Like, what if you want to get the post-transform of a mesh after doing some nifty deformations on it? The incoming result would be enormous! So cool, your cloud absolutely ate that mesh for breakfast, but ugh, enjoy waiting for it to come back.

Don't get me started on how annoying this will be to debug and test consistently. I highly doubt any 3rd parties are going to bother doing anything serious with this.

This too.
 

Riggs

Banned
Oh shit, I forgot about that. During the reveal they mentioned playing other games, watching TV or chatting while you wait for a match in your favorite game.

Surf porn videos that won't play in IE, Skype call my mum, listen to some dope-ass Skrillex, and watch fucking Star Trek while I wait in the 40x queue. Life is good.
 

njean777

Member
Sounds like a bunch of bullshit to me, but if it works, then that's great. The only question I have is: what about the people without internet?
 
And for those asking why PC games don't do this, they actually do. Any MMO does stuff like this all the time. Most PC games don't have thousands of dedicated servers at the disposal of everyone playing at any given time (unless they are an MMO of some kind).

They offload graphical tasks like lighting to servers in order to make the game run/look better? Wouldn't it make more sense to use that server infrastructure to increase network stability rather than for menial graphic improvements?
 

charsace

Member
They're going to be spending an awful lot of time and money on BS R&D to make this offloaded fluid dynamics etc. work, just to have a cover story for being able to check your copyright licences every 24 hours.

MS and other companies have been putting money into researching cloud computing for a long time. This is just one way they can put cloud computing to use.

They offload graphical tasks like lighting to servers in order to make the game run/look better? Wouldn't it make more sense to use that server infrastructure to increase network stability rather than for menial graphic improvements?

The server is running things like AI. And graphics aren't a menial task at all. Consumers care a lot about graphics. Most care about them more than gameplay or physics.
 
So, on a purely technical level, this isn't totally impossible. As has been pointed out in this thread, the idea is conceptually similar to how client-server multiplayer games already work, where the game state is computed on the server, and then sent to the client every tick. The thing is, the magic that makes that all work is prediction. The information from the server is always a few frames behind, so to hide that latency the client needs to be able to predict the current state of the game based on the old information from the server. This is a fairly well understood problem, but when latency or packet loss get too high, the algorithms break down and you start to see weird artifacts (players suddenly teleport or move backwards, you discover you've actually been dead for three seconds, etc.).

So, let's consider what this approach might look like for the example application we've been told about: lighting. Basically, you end up needing three different lighting systems:

1) There's the fancy all-singing, all-dancing lighting system that runs "in the cloud" and produces gorgeous-looking effects, but is calculating the lights based on information from a frame or two ago.

2) There's the predictive lighting system running on the console. It tries to take the lighting information given to it from the system above, and (as cheaply as possible) figure out how to update it so that it actually looks correct for the frame that's currently being computed. This is potentially a very, very hard problem.

(You can try to minimize the work (2) has to do by moving some of the prediction to (1), but since the server can never know exactly when the information it sends will arrive at the console, (2) can never be fully eliminated).

3) You also need a complete fallback lighting system that can run in real time on the console. This is necessary for when the cloud is unavailable (for whatever reason) and for when something happening in the game totally invalidates the lighting information sent from the server (for example, the player turns on a light in a previously dark room).

This is a lot of complexity for what will probably end up being an incremental improvement in lighting quality. It would also cost a whole lot of money to run all those servers.

The other problem with the idea is that it doesn't help differentiate the Xbox from the competition. As cloud computing goes, Azure is an also-ran product and MS is hardly a powerhouse in the field. There's nothing stopping Sony from introducing the same capability for use on the PS4, or for that matter stopping EA or Activision from setting up their own clouds that work cross platform. There's also nothing about this idea that couldn't have been done on current gen consoles. The fact that almost no one has tried it before is somewhat telling.

I'd agree, but the example of lighting isn't really the best one. The ones I gave are more realistic. Besides, super-complex lighting isn't really going to move the needle much anyway.

For this to be worthwhile, it has to be something that can be done far in advance of when needed, and something that actually adds value to the game.

Game devs are pretty creative and I'm sure they can think of something. As I said, the difference here isn't the fact that cloud computing exists (it has for many decades), it's that up until now, any given game could not rely on having dedicated servers available for it to use whenever it wants. You had to make a major investment if you wanted that. With next gen, this is now available to all games from the start.

And Azure is now a $1B business. How is that considered "also-ran"?
 
MMOs also come with hefty monthly fees or aggressive monetization schemes to pay for those servers. And even with that, they're still often huge financial failures.

And an MMO server is fundamentally different from what Microsoft is suggesting here. A single MMO server is more powerful than your average PC/console, sure, but it's also supporting hundreds of paying players. Microsoft is suggesting that they'll give you a server several times more powerful than your console, just for you. And not to facilitate something gameplay-critical like a huge persistent online world, but just for nicer lighting that by their own design the game must be able to function without.

Even if it's technically possible, it is financially nonsensical.

We don't know the details of what this costs the publisher/dev. And it's not "just for you". That's not how Azure works. It's a pool of servers that are shared among many applications. You'd just get your slice of the computing power.

And how do you conclude that it can't be for a persistent game world? If that's the design of the game (like an MMO), that's a perfect use for it. Some games don't make sense without being online. Others work better when online but still work offline, and a much smaller subset (these days) don't benefit from being online at all.
 
I'm calling BS on 3 servers/console.

These servers are monstrous. You don't get an entire server, you get a piece of one. Each server has an insane amount of RAM, processors, etc. Everything is virtualized. The whole thing is very complex, but if you search for Azure online, you'll find all kinds of info about it.
 

Zaptruder

Banned
I had a pretty damn similar idea to this a few days before the Xbone reveal.

Basically, yes, use the cloud to render time-insensitive data.

More specifically, get the cloud to render out a point cloud mesh of the world.

Which would work great for games with static content (maybe like MMOs) which provide players with dramatic vistas.

It would also work quite well with VR.
 

Zaptruder

Banned
Also, precalculated lighting is what we think of as light maps.

That's what last generation did with baked light maps, in order to simulate high-quality lighting in their scenes.

Well, this generation, we can get some dynamism going with the lighting, assuming that it isn't strict on its latency requirements (i.e. a sun-based global lighting system that tracks across the scene slowly).

If the connection drops out, the light map just stops getting updated! Easy peasy.

And dynamic lighting is additive on top of cloud or baked light maps (i.e. shine a torch on a spot on the ground that is half in shadow and half not: the shadow will still remain, but both parts will be lighter relative to the rest of the non-illuminated scene, as they would be in real life).

It could even do 'dynamic' light maps - pulsing and flickering lights from broken signage could be sent through as part of the precomputed data.


And finally (this is just another idea in my head at this point, and if it's viable, it's probably already been considered by others), it may even be possible to get some degree of dynamism with the light maps themselves.

i.e. light sources in a room are part of a light map - but can still be shot out by the player.

Instead of waiting for the server to compute the changes, the engine has a system for imposing 'darkness' that 'minuses out' the contribution of the light that has just been shot out, based on the difference between the current lighting and the updated light map - a light map that only needs updates from the server when the lighting actually changes.

So 5 seconds after the light gets blown out, the server sends a newly computed light map to update the scene, and the console is able to remove the darkness modifier for the light map.
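
A rough sketch of what I mean - all of this invented by me, obviously:

```python
# Lightmap texels store baked brightness; when a lamp is shot out, the console
# immediately subtracts that lamp's estimated contribution, then drops the hack
# once the server ships a properly recomputed light map.
lightmap = {"texel_a": 0.9, "texel_b": 0.4}           # baked brightness values
lamp_contribution = {"texel_a": 0.5, "texel_b": 0.1}  # what the dead lamp added

def shade(texel, lamp_destroyed, server_updated):
    value = lightmap[texel]
    if lamp_destroyed and not server_updated:
        # local stopgap: "minus out" the dead lamp until the real map arrives
        value = max(0.0, value - lamp_contribution[texel])
    return value

print(shade("texel_a", lamp_destroyed=True, server_updated=False))  # 0.4 (the hack)
# ...a few seconds later the server's recomputed map replaces the baked one...
lightmap = {"texel_a": 0.35, "texel_b": 0.38}         # proper recomputed result
print(shade("texel_a", lamp_destroyed=True, server_updated=True))   # 0.35
```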
 

Zaptruder

Banned
Lightmaps, from my understanding, are quite large files, especially at higher resolutions. You'd have to have some very fast internet to move the kind of file sizes you're talking about in a matter of seconds. Not really viable except for a tiny minority of people with uber connections.

Lightmaps for the area that you're standing in?

i.e. if you're in a room, it'll only update the light map for that room?

Of course that'll invite a host of latency and artifacting issues.

Step outside the room into a large open area: if the data set is too large, then it won't be feasible to preprocess and send it before you step out (because you might not even step out of the room), so the system will have to wait a few moments before a high-res processed light map gets there.

Which leaves us in a situation not unlike Gears of War, with normal maps popping in a second or two later.
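
Rough sketch of that trade-off - rooms, resolutions, and names all made up:

```python
# Request the current room's map eagerly, prefetch low-res maps for adjacent
# rooms, and show whatever has arrived; pop-in happens when you outrun the prefetch.
ADJACENCY = {"kitchen": ["hall"], "hall": ["kitchen", "yard"], "yard": ["hall"]}
arrived = {}  # room -> "low" or "high" resolution cloud lightmap

def request(room, res):
    arrived[room] = res                          # stand-in for an async download

def on_enter_room(room):
    request(room, "high")                        # current room at full quality
    for neighbor in ADJACENCY[room]:
        request(neighbor, "low")                 # cheap guesses at where you'll go next

def lightmap_for(room):
    return arrived.get(room, "baked_fallback")   # the pop-in case: nothing arrived yet

on_enter_room("hall")
print(lightmap_for("hall"))     # high
print(lightmap_for("yard"))     # low (prefetched neighbor)
print(lightmap_for("kitchen"))  # low
```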


Maybe they know this and are ok with this - wouldn't surprise me... it worked well enough for GoW and its marketing!


*Sorry for the number of bumps to this thread... it's just a fascinating idea to me, something I think holds a lot of potential, so I'm interested to see how well MS can pull it off.
 
So if GTA5 had this cloud computing feature, you'd be able to have more pedestrians in the city around you, all of whom have individualized AI?
 