
Next-Gen PS5 & XSX |OT| Console tEch threaD


Evilms

Banned


So, good or bad news if true ?
 


So, good or bad news if true ?

Personally, I view this as good news. The Xbox One OS is in a great place right now and I really don't think starting over would do them much good. I'd much rather they just continue to tweak the existing setup, and eventually just stop updating it on the Xbox One family of devices.

I have the new guide in the preview program and it's really solid so far. Being able to customize the order of icons is great.
 

psorcerer

Banned
The only thing I would add is that the GPU is locked off from accessing the 6GB/336GB/s pool and can ONLY see the 10GB pool. It always has access to the lower 1GB address space of the 10 MCs regardless of what the CPU is doing. The CPU has access to all 16GB, both the upper and lower addresses, but devs are encouraged to use the upper 1GB address space for CPU, audio and OS storage. The distribution of access and usage isn't uniform, but mostly your example is about right.

Locking the GPU to the 10GB pool looks like a bad idea to me. You would need to copy memory from the CPU pool to the GPU pool all the time. Loading from the SSD? Let's now waste more bandwidth and latency copying it over.
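For what it's worth, a rough back-of-envelope sketch of what that extra hop would cost. The numbers are my own assumptions (2.4GB/s raw SSD rate, a copy running at the 336GB/s pool speed, a hypothetical 1GB streamed asset), nothing official:

```python
# Back-of-envelope sketch (assumed numbers, not official): the cost of an extra
# CPU->GPU copy if the GPU really could only see the 10GB pool.
ssd_rate_gbs  = 2.4    # assumed raw SSD throughput, GB/s
copy_rate_gbs = 336.0  # assumed copy speed through the "standard" 6GB pool, GB/s
asset_gb      = 1.0    # hypothetical streamed asset size, GB

load_time = asset_gb / ssd_rate_gbs   # SSD -> CPU-visible memory
copy_time = asset_gb / copy_rate_gbs  # extra hop into the GPU-visible pool
extra_gb  = asset_gb * 2              # the copy reads and writes the data once each

print(f"load {load_time*1000:.0f} ms, extra copy {copy_time*1000:.0f} ms, "
      f"{extra_gb:.0f} GB of extra traffic on the bus")
```

The copy itself would be cheap next to the SSD read; the real cost is the extra traffic competing with the GPU on the bus.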
 

psorcerer

Banned
Would you mind sharing your thoughts on this?

The addressing model is usually just a stride over some fixed amount of 64/128B lines.
So it doesn't really matter in my calculations.
I was emphasizing what's different in the current XBSX design from the "usual" ones that are symmetric.
All the strides/address switching problems exist in the symmetric ones too.

Lady Gaia's claims are pretty obvious too.
I just don't want to use the language of "reducing the 320-bit bus to 192-bit", but you can look at it as such.
When the "second" part of the "bigger" chips is accessed you cannot access the "small" chips at all.
That's implied in my calculations.
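To make the striding point concrete, here is a toy model of that kind of interleaving. The granularity and mapping are my own assumptions for illustration, not the actual XSX memory map:

```python
# Toy model of channel striding - purely illustrative, not the real XSX layout.
# Ten chips on a 320-bit bus; chips 0-5 are 2GB, chips 6-9 are 1GB. The first
# 10GB interleaves over all ten chips, the upper 6GB only over the six 2GB
# chips, so peak width drops from ten channels to six in that region.
LINE = 128          # assumed interleave granularity, bytes
GB = 1024 ** 3

def channel(addr):
    if addr < 10 * GB:                        # "GPU-optimal" 10GB: stride over 10 chips
        return (addr // LINE) % 10
    return ((addr - 10 * GB) // LINE) % 6     # "standard" 6GB: stride over the 2GB chips only

# A linear walk through the fast region touches all ten channels...
print(sorted({channel(a) for a in range(0, 10 * LINE, LINE)}))
# ...while a walk through the slow region only ever touches six of them.
print(sorted({channel(10 * GB + a) for a in range(0, 10 * LINE, LINE)}))
```

Whether the other four channels can keep working while the narrow region is being accessed is exactly the open question argued further down the thread.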
 


So, good or bad news if true ?
Doesn't stop the next gen from having new features like being able to suspend more than one game.
I don't expect the PS5 OS to be that different either; I think the jump from PS3 to PS4 was needed because the former was a first try at delivering a multimedia interface for a console.
I would personally prefer a sort of grid anyway, where I can see all the games or apps at once (like some RPG menus), not this horizontal layout that the PS4 has, but I find it good in any case.
 

Gediminas

Banned
While everyone in here is talking about "wide and slow" and "narrow and fast", I am now looking at a paltry 1GB patch for The Division 2... being copied for the past 45 minutes on top of the original game. Now THIS is something that will be a thing of the past on the new consoles, and I am happy with just THAT.

That was all.
For me, it's the same with loading screens, teleporting and other waits. That time I'd rather spend playing or doing other stuff. The PS5 SSD is a huge deal for me personally; for me it is more important than a few more pixels.
 

DaMonsta

Member
While everyone in here is talking about "wide and slow" and "narrow and fast", I am now looking at a paltry 1GB patch for The Division 2... being copied for the past 45 minutes on top of the original game. Now THIS is something that will be a thing of the past on the new consoles, and I am happy with just THAT.

That was all.
Yup, SSDs will provide quality-of-life improvements for both devs and end users. UI stuff will be night and day compared to last gen.
 

SonGoku

Member
The addressing model is usually just a stride over some fixed amount of 64/128B lines.
So it doesn't really matter in my calculations.
I was emphasizing what's different in the current XBSX design from the "usual" ones that are symmetric.
All the strides/address switching problems exist in the symmetric ones too.

Lady Gaia's claims are pretty obvious too.
I just don't want to use the language of "reducing the 320-bit bus to 192-bit", but you can look at it as such.
When the "second" part of the "bigger" chips is accessed you cannot access the "small" chips at all.
That's implied in my calculations.
And did you read the other source too (the first spoiler)?
Your end point was similar to Lady Gaia's in that both consoles' GPUs end up with roughly equivalent amounts of bandwidth proportional to computational power, or put in layman's terms, a similar amount of GB/s per TF
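Rough sketch of that arithmetic, using the ~48GB/s of constant CPU/IO/audio traffic Lady Gaia assumed and the reading where slow-pool accesses occupy the whole XSX bus. None of these numbers are official, it's just napkin math:

```python
# Napkin math: GPU bandwidth left over per TF, under an assumed ~48GB/s of
# constant CPU/IO/audio traffic. Simple time-slicing model, nothing official.
CPU_BW = 48.0                            # assumed CPU+IO+audio demand, GB/s

# PS5: a single 448GB/s pool shared by everything.
ps5_left = 448.0 - CPU_BW
print(f"PS5: {ps5_left:.0f} GB/s / 10.28 TF = {ps5_left / 10.28:.1f} GB/s per TF")

# XSX: CPU traffic served from the 336GB/s region; while that happens the whole
# bus is assumed busy, so the GPU only gets the remaining fraction of 560GB/s.
busy = CPU_BW / 336.0
xsx_left = 560.0 * (1 - busy)
print(f"XSX: {xsx_left:.0f} GB/s / 12.15 TF = {xsx_left / 12.15:.1f} GB/s per TF")
```

Both land at roughly 39GB/s per TF, which is where the "roughly equivalent" conclusion comes from.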
 


So, good or bad news if true ?

That is a "mistake" in my opinion. Even when I do understand that they want all the users to have the same experience (for brand consistency), a new cool UI/Dashboard could have been another good incentive for current users to upgrade.

In any event, it looks like quite clean.

I will be a bit disappointed if the PS5 do not have a total new UI. Not because I do not like the current one, but because I am tired of using it. 7 years is enough :)
 

RaySoft

Member
anyone who actually thinks higher clocks are in any meaningful way better than slower clocks with more CUs is just as delusional as the 13TF crowd was.

the Series X is more powerful in every way aside from the SSD speed, get over it.
higher clocks will not result in better graphics than the way wider GPU, we only have to look at PC hardware tests to see exactly this. wider cards perform noticeably better and overclocking a narrower card only gets you so much extra performance.

and the fact that some even try to spin the dynamic clock rates as something positive is more than ridiculous.
the only reason the PS5 has dynamic clock rates is because the hardware is not able to run at full clocks and full load, how the fuck is that good?
changing clocks like that are only used to squeeze a bit more performance out of a system that needs to be as cheap as possible and to look better on a specs sheet.
the only good thing about them is that it gets more performance out of the chip they have, that's it, that's all that's positive about it. it's a necessary evil basically

realistically speaking the PS5 will most likely never run at its full clocks on both ends, because if it would, these changing clocks wouldn't be needed, but they are needed. why? because the system can't reliably run at the highest clocks.

what this means is GPU intensive games will need to downclock the CPU in order to make sure the GPU is having no issues.
and CPU intensive games will need to downclock the GPU for the same reason.
this will most likely not be an issue with launch window titles since those will still be developed to run on Jaguar CPUs as well, but as open world games get more complex, and more and more advanced AI and physics get used, the CPU will be taxed more and more, meaning that the GPU will most likely be downclocked.
the Series X has both a higher clocked CPU and also a more capable GPU and better RAM, meaning when games come around that take full advantage of high end PC hardware and then get ported to console, they will run and look better on Series X; no clock speed advantage or SSD speed will change that.

Do you really think the XSX can hold its CPU and GPU clocks under heavy CPU/GPU load at the same time? No, it won't.
After the fan can't sustain the temps anymore, the heat would be too massive and the box would shut down in the end. The point is that it will never happen in games, and the reason is that games will _never_ draw max CPU and GPU resources at the same time.

IF it ever happens, it's towards the end of the console's lifecycle, when the industry (i.e. game engines etc.) has already saturated the resources of the new consoles, so it has to work overtime to try and keep up. This is what Cerny meant by the "When that one game arrives..." line.
At that point the PS5 drops a bit in power, so both CPU and GPU can go full load at the same time.

What most people don't get is that this scenario won't happen for years. Consoles are the only scene where this could even be a scenario, since devs know what hardware they're targeting. This way the task scheduler can make sure all jobs are balanced so they can saturate all available threads. This is one of the reasons consoles can do much more with less compared to PC.
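For anyone curious, here is a toy model of the fixed power budget behaviour Cerny described (continuous boost / SmartShift style). Every number in it is invented purely for illustration; the point is just that clocks only dip when both sides ask for their worst-case power at once, and that a small clock drop frees a lot of power:

```python
# Toy model of a fixed power budget shared by CPU and GPU. All numbers are
# invented for illustration; only the shape of the behaviour matters.
MAX_CPU, MAX_GPU = 3.5, 2.23   # max clocks, GHz
BUDGET = 1.0                   # total power budget, normalised

def clocks(cpu_load, gpu_load, cpu_share=0.35, gpu_share=0.75):
    """cpu_load / gpu_load: fraction of worst-case power each side is asking for."""
    demand = cpu_load * cpu_share + gpu_load * gpu_share
    scale = min(1.0, BUDGET / demand) if demand > 0 else 1.0
    # power scales roughly with f^3 (f * V^2, with V tracking f), so trimming
    # ~10% of power only costs a few percent of frequency
    f_scale = scale ** (1 / 3)
    return round(MAX_CPU * f_scale, 2), round(MAX_GPU * f_scale, 2)

print(clocks(0.6, 1.0))   # GPU-heavy frame: within budget, both run at max
print(clocks(1.0, 1.0))   # both maxed at once: clocks dip by a few percent
```

Under this kind of model the worst case is a dip of a few percent, which is in line with Cerny's claim that a small frequency drop buys a large power saving.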
 

FranXico

Member
That is a "mistake" in my opinion. Even when I do understand that they want all the users to have the same experience (for brand consistency), a new cool UI/Dashboard could have been another good incentive for current users to upgrade.

In any event, it looks like quite clean.

I will be a bit disappointed if the PS5 do not have a total new UI. Not because I do not like the current one, but because I am tired of using it. 7 years is enough :)
Well, Xbox gets regular UI updates, so the UI staying the same at launch time is not a major issue.
 

geordiemp

Member
The addressing model is usually just a stride over some fixed amount of 64/128B lines.
So it doesn't really matter in my calculations.
I was emphasizing what's different in the current XBSX design from the "usual" ones that are symmetric.
All the strides/address switching problems exist in the symmetric ones too.

Lady Gaia's claims are pretty obvious too.
I just don't want to use the language of "reducing the 320-bit bus to 192-bit", but you can look at it as such.
When the "second" part of the "bigger" chips is accessed you cannot access the "small" chips at all.
That's implied in my calculations.

The big difference between yours and Lady Gaia's calculation is the GB/s for everything that is not GPU bound, that's CPU + anything else.

She used 48GB/s, you used much less. Is her estimate based on PC CPUs in general?

I must admit, I always seem to agree with her statements... and next gen games will do a lot more than Jaguar, that's for sure, and probably also at 60 FPS.
 
Do you really think the XSX can hold its CPU and GPU clocks under heavy CPU/GPU load at the same time? No, it won't.
After the fan can't sustain the temps anymore, the heat would be too massive and the box would shut down in the end. The point is that it will never happen in games, and the reason is that games will _never_ draw max CPU and GPU resources at the same time.

As for temps, I am actually more concerned with the longevity of the PS5 at normal operating temps. Overclocking 20%, I have to wonder if it will be as bulletproof as the PS4 has been for the past 7 years. It is really the main reason I will not be purchasing a PS5. I can see these failing due to the stress they will be under.

I won't get an XBX, so I'll hold out for a few years and see how the PS5 holds up before I consider it.
 

Evilms

Banned


Some information from Playstation Mag UK :

  • Godfall is a PS5 console exclusive, no PS4 or cross gen. Tailored to run on PS5.
  • PS5 allows Godfall to feel and play like no other game thanks to PS5 CPU and GPU.
  • Made by a team of 75 people.
  • Monster Hunter World in terms of gameplay, with elements of Dark Souls in combat.
  • Some ex-Destiny 2 team members are involved.
  • The game rewards aggressive play. Skill based combat based on timing in order to hit max damage.
  • Visual style and world building influenced by The Stormlight Archive, The First Law and Foundation series.
  • Positives around animation.
  • High fantasy setting divided into Earth, Air, Fire and Spirit Elements.
  • You are one of the last remaining of the Knight's Order, tasked with stopping an apocalyptic event.
  • At the start of the game you pick a class based on the type of armour - 3 sets to pick from. A lot of customization to unlock as you progress.
  • Bosses designed to repel multiple people at once ie. in co-op bosses can take both of you out.
  • Game based around drop in and drop out gameplay like Destiny and Monster Hunter World with heavy Dark Souls Influence.
 

psorcerer

Banned
The big difference between yours and Lady Gaia's calculation is the GB/s for everything that is not GPU bound, that's CPU + anything else.

She used 48GB/s, you used much less. Is her estimate based on PC CPUs in general?

I must admit, I always seem to agree with her statements... and next gen games will do a lot more than Jaguar, that's for sure, and probably also at 60 FPS.

I do not believe in high CPU usage in gaming. The CPU should be used only for "bad code", i.e. code that doesn't need to be fast, just needs to run somehow.
AFAIK, judging by the slightly snarky remarks on AVX by Cerny, he doesn't believe in the CPU too much either.
 

geordiemp

Member
I do not believe in high CPU usage in gaming. The CPU should be used only for "bad code", i.e. code that doesn't need to be fast, just needs to run somehow.
AFAIK, judging by the slightly snarky remarks on AVX by Cerny, he doesn't believe in the CPU too much either.

So you think total access for non-GPU assets will be under 30GB/s?

Don't see how that has anything to do with AVX code?
 

Gediminas

Banned


Some information from Playstation Mag UK :

  • Godfall is a PS5 console exclusive, no PS4 or cross gen. Tailored to run on PS5.
  • PS5 allows Godfall to feel and play like no other game thanks to PS5 CPU and GPU.
  • Made by a team of 75 people.
  • Monster Hunter World in terms of gameplay, with elements of Dark Souls in combat.
  • Some ex-Destiny 2 team members are involved.
  • The game rewards aggressive play. Skill based combat based on timing in order to hit max damage.
  • Visual style and world building influenced by The Stormlight Archive, The First Law and Foundation series.
  • Positives around animation.
  • High fantasy setting divided into Earth, Air, Fire and Spirit Elements.
  • You are one of the last remaining of the Knight's Order, tasked with stopping an apocalyptic event.
  • At the start of the game you pick a class based on the type of armour - 3 sets to pick from. A lot of customization to unlock as you progress.
  • Bosses designed to repel multiple people at once ie. in co-op bosses can take both of you out.
  • Game based around drop in and drop out gameplay like Destiny and Monster Hunter World with heavy Dark Souls Influence.
thanks.
 

RaySoft

Member
I do not believe in high CPU usage in gaming. The CPU should be used only for "bad code", i.e. code that doesn't need to be fast, just needs to run somehow.
AFAIK, judging by the slightly snarky remarks on AVX by Cerny, he doesn't believe in the CPU too much either.
A CPU is more like a workhorse. It's designed for broader appeal (general purpose) instead of specializing in single tasks. It can do ANYTHING you throw at it... even real-time raytracing, but since it's not specialized, it takes its time. What's absolutely crucial though, is that it's there... NOTHING would work without it. It's pushing out jobs to other more specialized hardware (GPU etc.) all the time. That way it's both the heart & brain of any system.
 
You could be correct, but surely if that were the case they would have officially had DF clear up the confusion to control the message as a definite win, no?

My guess is that they messed up and pulled the trigger on 20GB unified at 560GB/s for both CPU and GPU access. Then they found signal integrity issues in late testing when pushing for 12TF, and were forced to choose a slower GPU clock with 20GB, or asymmetric RAM, or a noisy console - for heat reasons, to maintain integrity. The asymmetric RAM solution they've chosen, AFAIK, is that in any one data clock they can access the 6GB of the memory at 336GB/s from GPU or CPU, but not both sharing a single data clock. And the CPU can access the 10GB at 336GB/s exclusively, or the GPU can access it at 560GB/s exclusively, in a single data clock.

I think they planned to be able to support mixed memory quantities from the beginning.

According to MS they started with 12 TF, and worked from there. Given early predictions from AMD and memory manufacturers on performance and memory clocks, that will have set an early start point for bus width. Then cost becomes a factor, as does reliability in data transfer physics and stuff.

The only statement that MS made about CPU / IO / AUDIO was that it was at 336GB/s maximum wherever it was. They didn't say that accessing the 10GB GPU-optimal RAM from these components would block all other access by the GPU. Even if the CPU accesses to the 10GB did limit the channels it was accessing to 3/5th of peak speed (you'd be much better off running at full speed and caching somewhere before the CPU), it would seem pretty extreme if all other channels not doing anything related to CPU/IO/Audio were all similarly limited or blocked entirely.

Those two banks of three chips either side of the processor house 2 GB per chip. How does that extra 1 GB get accessed? It can't be accessed at the same time as the first 1 GB because the memory interface is saturated. What happens instead is that the memory controller must "switch" to the interleaved addressable space covered by those 6x 1 GB portions. This means that, for the 6 GB "slower" memory (in reality, it's not slower but less wide), the memory interface must address it on a separate clock cycle if it is to be accessed at the full width of the available bus.

I can't see why the memory would be "interleaved" in the way you're describing.

Memory accesses take tens or more often (on GPU) hundreds of cycles. You put in a request and wait to get the result back. And there's a turnaround penalty for DRAM - it can only read or write at any one time, and you have to wait for all currently "in flight" reads / writes on a channel to complete before it can change from read to write or vice versa.

The contention I mentioned has to do with access between pools; it can't be simultaneous. I'm rather curious how it all works and have seen a few comments on Era from technically inclined members, which is why I brought it up. Anyway, you can check above for the relevant bits which go into more detail about it. psorcerer's conclusion is similar to Lady Gaia's

I think it might be possible to read from different sections of the address range simultaneously, if the memory controllers have been built with that in mind.

Note that I'm not saying you can read from both the "optimal" and "normal" ranges of a single 2GByte chip at the same time*, just that you can do so at the same speed, on a per chip / channel /sub channel (if the controller has them*) basis.

(*maybe a 64-bit channel is further subdivided into, say 2 32-bit sub channels like the X1X).

You can only access the "slower" 6GB across 3 channels. You can access the faster 10GB across 5. Obviously, any channel accessing the 6 GB can't also be accessing the 10 GB.

But what *I* think is that even if the three channels are accessing the slower 6 GB, that still leaves 2 channels connected to only the 4 x 1GB memory chips that might be able to continue working. That is *if* there are jobs they can make good on in the memory connected across those other 128-bits / 2 channels.

And remember, most cases won't have the CPU accessing all 3 channels across that 192-bit, three channel range at once. And the rest of the system has to keep working. So I really don't think it's "all or nothing" between the two ranges. I think it all depends on which channels are needed for which accesses. I don't think there's a hard and fast split in the way most people are trying to describe.

It's simply not efficient in terms of power, cost, area, latency, throughput .... anything.

Yup... bandwidth reduces for both systems, the difference is that PS5 accesses its CPU, sound and other non-GPU data at the same 448GB/s speed, and hence it takes less time away from the GPU's needs.

Lady Gaia explains it nicely: assume a typical CPU bandwidth requirement of 48GB/s in constant use in the slower-access RAM, as that's the way code runs - the CPU runs code, the GPU displays what it's told...

Funnily enough, taking the CPU access out, it leaves about 39GB/s of GPU access per TF for both... strange that... I am sure MS and Sony know what they are doing, and that is not a coincidence. But it could also be a leveller for both systems on big-asset games...

Both are equally bandwidth limited and RAM limited IMO. Maybe one of them will splash out and upgrade as a last-minute move, Sony with 16Gbps chips or MS with more RAM, to feed the wider bus properly...

Both Sony and MS have chosen compromises based on RAM costs. Neither is ideal.

Hence why I think the RDNA2 silicon is more expensive than everyone thinks, as both have made big compromises on costs... and we are not seeing $399...



Lady Gaia might be right, but I've seen nothing that states that accessing the slow 6GB (over their 192-bit bus) disables the channels connecting to the remaining 4GB of memory, or that channels connected to the 2GB chips can't access fast or slow ranges independently, if access patterns permit it.

If hitting one memory channel to access 64Bytes of data for the CPU knocked out the other four channels for the duration of that access .... that would be frikkin' crazy!

I would expect access to the slower areas to cockblock more GPU access than you might like, but a complete shutdown of anything accessing the "optimal" memory is too much.

It's possible I guess but I really don't expect it. MS have been extensively profiling access patterns for years now. Would be interesting to hear Lady Gaia's thoughts on this, but I really don't like the look of ResetEra.

I agree with you on price though. If I had to bet, I'd say $499. These are both shaping up to be really fantastic machines, and both Sony and MS have really pushed to deliver on the idea of a next gen system that can grow into the next few years.

Lady Gaia's claims are pretty obvious too.
I just don't want to use the language of "reducing the 320-bit bus to 192-bit", but you can look at it as such.
When the "second" part of the "bigger" chips is accessed you cannot access the "small" chips at all.
That's implied in my calculations.

I dunno man. I don't see why accessing the second part of the bigger chips on one channel would stop you accessing the "first" part (or whole chip for the small chips) on another channel.

If there's no overlap on the channels being used for the "fast" and "slow" access requests, and the scheduler isn't holding the GPU access up for some kind of high priority CPU job, I don't see why you can't perfectly reasonably do both at once.

Like Spengler ultimately decided ... cross the streams!
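To put a rough number on the gap between the two readings being argued here, a toy model under the same ~48GB/s CPU/IO assumption used above. Purely illustrative; neither figure is official:

```python
# Toy comparison of the two readings. Reading A: any access to the "standard"
# 6GB region stalls the whole 320-bit bus. Reading B: only the 192-bit worth of
# channels carrying that access are busy; the remaining 128 bits keep serving
# the GPU. Assumed ~48GB/s of CPU/IO/audio traffic served from the 6GB region.
CPU_BW   = 48.0     # assumed CPU/IO/audio demand, GB/s
FAST_BW  = 560.0    # full 320-bit bus
SLOW_BW  = 336.0    # the 192 bits attached to the 2GB chips
OTHER_BW = 224.0    # the other 128 bits (1GB chips) on their own

slow_busy = CPU_BW / SLOW_BW                               # fraction of time the slow region is in use

gpu_a = FAST_BW * (1 - slow_busy)                          # reading A: everything else waits
gpu_b = FAST_BW * (1 - slow_busy) + OTHER_BW * slow_busy   # reading B: the other channels carry on

print(f"whole-bus blocking: ~{gpu_a:.0f} GB/s left for the GPU")
print(f"per-channel model : ~{gpu_b:.0f} GB/s left for the GPU")
```

In this toy case the two readings differ by ~32GB/s, so it matters, but it's not a night-and-day difference either way.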
 

raul3d

Member
I do not believe in high CPU usage in gaming. The CPU should be used only for "bad code", i.e. code that doesn't need to be fast, just needs to run somehow.
AFAIK, judging by the slightly snarky remarks on AVX by Cerny, he doesn't believe in the CPU too much either.
I would expect CPU usage in next gen games to be vastly different to current gen games. Once the baseline is a 16 thread CPU with solid IPC and clocks, developers will find ways to keep it busy: Smarter AI, more complex physics simulations, better animation systems or even advanced audio effects. CPUs are not only used for “bad coding”, they are used for branchy coding where parallelisation is not effective. CPU code can be as optimized as GPU code; else there would not be any need for SIMD/AVX in the first place.

Actually, Cerny’s AVX2 remark was not snarky, it was an honest example of a worst-case workload based on the current state of these large instruction sets on modern CPUs. These instructions change the power consumption of the CPU significantly; even Intel’s latest Server CPUs, which have much more generous cooling, downclock in these workloads. For example, here are the frequencies of the i9-9980XE:
https://en.wikichip.org/wiki/intel/core_i9/i9-9980xe#Frequencies

As far as I know, desktop Zen 2 does not automatically downclock in AVX2, but you would nevertheless have a hard time cooling it in a console form factor.
 

geordiemp

Member
I think they planned to be able to support mixed memory quantities from the beginning.

According to MS they started with 12 TF, and worked from there. Given early predictions from AMD and memory manufacturers on performance and memory clocks, that will have set an early start point for bus width. Then cost becomes a factor, as does reliability in data transfer physics and stuff.

The only statement that MS made about CPU / IO / AUDIO was that it was at 336GB/s maximum wherever it was. They didn't say that accessing the 10GB GPU-optimal RAM from these components would block all other access by the GPU. Even if the CPU accesses to the 10GB did limit the channels it was accessing to 3/5th of peak speed (you'd be much better off running at full speed and caching somewhere before the CPU), it would seem pretty extreme if all other channels not doing anything related to CPU/IO/Audio were all similarly limited or blocked entirely.



I can't see why the memory would be "interleaved" in the way you're describing.

Memory accesses take tens or more often (on GPU) hundreds of cycles. You put in a request and wait to get the result back. And there's a turnaround penalty for DRAM - it can only read or write at any one time, and you have to wait for all currently "in flight" reads / writes on a channel to complete before it can change from read to write or vice versa.



I think it might be possible to read from different sections of the address range simultaneously, if the memory controllers have been built with that in mind.

Note that I'm not saying you can read from both the "optimal" and "normal" ranges of a single 2GByte chip at the same time*, just that you can do so at the same speed, on a per chip / channel /sub channel (if the controller has them*) basis.

(*maybe a 64-bit channel is further subdivided into, say 2 32-bit sub channels like the X1X).

You can only access the "slower" 6GB across 3 channels. You can access the faster 10GB across 5. Obviously, any channel accessing the 6 GB can't also be accessing the 10 GB.

But what *I* think is that even if the three channels are accessing the slower 6 GB, that still leaves 2 channels connected to only the 4 x 1GB memory chips that might be able to continue working. That is *if* there are jobs they can make good on in the memory connected across those other 128-bits / 2 channels.

And remember, most cases won't have the CPU accessing all 3 channels across that 192-bit, three channel range at once. And the rest of the system has to keep working. So I really don't think it's "all or nothing" between the two ranges. I think it all depends on which channels are needed for which accesses. I don't think there's a hard and fast split in the way most people are trying to describe.

It's simply not efficient in terms of power, cost, area, latency, throughput .... anything.



Lady Gaia might be right, but I've seen nothing that states that accessing the slow 6GB (over their 192-bit bus) disables the channels connecting to the remaining 4GB of memory, or that channels connected to the 2GB chips can't access fast or slow ranges independently, if access patterns permit it.

If hitting one memory channel to access 64Bytes of data for the CPU knocked out the other four channels for the duration of that access .... that would be frikkin' crazy!

I would expect access to the slower areas to cockblock more GPU access than you might like, but a complete shutdown of anything accessing the "optimal" memory is too much.

It's possible I guess but I really don't expect it. MS have been extensively profiling access patterns for years now. Would be interesting to hear Lady Gaia's thoughts on this, but I really don't like the look of ResetEra.

I agree with you on price though. If I had to bet, I'd say $499. These are both shaping up to be really fantastic machines, and both Sony and MS have really pushed to deliver on the idea of a next gen system that can grow into the next few years.

Microsoft has been very open; I would have thought that Digital Foundry would have asked the obvious question and the elephant in the room... and if there was a special access mode they would detail it... we just have 2 numbers...

My simpler understanding is that all the memory is 14Gbps, so to get to 560GB/s you go wider and use the whole bus, so it is sequential. However, we don't know if accessing the narrower band just stops everything.

If it does, then the 2 systems are similar for large games, which is a waste of a 320-bit wide bus... does not compute.

Unless MS has a way of making everything that's needed in normal running fit in 10GB... still, does not add up.

Don't get me wrong, the PS5 is no better... both are lacking
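The raw numbers do line up with the "same 14Gbps chips, just a wider or narrower bus" reading, for what it's worth:

```python
# Peak bandwidth is just per-pin rate x bus width; all three published figures
# fall out of the same 14Gbps GDDR6 chips.
pin_rate_gbps = 14                        # per-pin data rate of the GDDR6 chips

def bandwidth(bus_bits):
    return pin_rate_gbps * bus_bits / 8   # GB/s

print(bandwidth(320))   # XSX, full bus               -> 560.0
print(bandwidth(192))   # XSX, "standard" 6GB region  -> 336.0
print(bandwidth(256))   # PS5, 256-bit bus            -> 448.0
```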
 

SonGoku

Member
Overclocking 20%
It's not an overclock, the GPU was designed to run at that frequency


Some information from Playstation Mag UK :

  • Godfall is a PS5 console exclusive, no PS4 or cross gen. Tailored to run on PS5.
  • PS5 allows Godfall to feel and play like no other game thanks to PS5 CPU and GPU.
  • Made by a team of 75 people.
  • Monster Hunter World in terms of gameplay, with elements of Dark Souls in combat.
  • Some ex-Destiny 2 team members are involved.
  • The game rewards aggressive play. Skill based combat based on timing in order to hit max damage.
  • Visual style and world building influenced by The Stormlight Archive, The First Law and Foundation series.
  • Positives around animation.
  • High fantasy setting divided into Earth, Air, Fire and Spirit Elements.
  • You are one of the last remaining of the Knight's Order, tasked with stopping an apocalyptic event.
  • At the start of the game you pick a class based on the type of armour - 3 sets to pick from. A lot of customization to unlock as you progress.
  • Bosses designed to repel multiple people at once ie. in co-op bosses can take both of you out.
  • Game based around drop in and drop out gameplay like Destiny and Monster Hunter World with heavy Dark Souls Influence.
Preview or magazine already out?
I do not believe in high CPU usage in gaming. The CPU should be used only for "bad code", i.e. code that doesn't need to be fast, just needs to run somehow.
AFAIK, judging by the slightly snarky remarks on AVX by Cerny, he doesn't believe in the CPU too much either.
Can you elaborate on that? Do you mean bandwidth-wise or core utilization?
There's plenty that can be done to tax those cores
I can't see why the memory would be "interleaved" in the way you're describing.
I didn't write it, just copied it to ask for input.
I'm just a layman at this; memory setups aren't my forte, I'm just relaying what more knowledgeable people said on the subject.
My understanding is that both pools can't be accessed simultaneously and that there's a disproportionate use of bandwidth whenever the slow pool is accessed
 

Mr Moose

Member
Better battery, no share button (create button instead, so probably same type of shit).
Not a fan of the built in mic though, better be a way to turn it off.
 

icerock

Member
New controller looks pretty sweet.


I just heard from a journalist who writes for Windows Central that the controller is so heavy that after a prolonged session, play-testers are having trouble moving their fingers. Also, that light bar you see? It's dissipating a lot of heat and the controller can get really hot. Also, battery is worse than DS4. Sony are so worried about this that they are undergoing a significant revision of the controller. Watch this space.
 

travisktl

Member
Better battery, no share button (create button instead, so probably same type of shit).
Not a fan of the built in mic though, better be a way to turn it off.
There's a button in the middle at the bottom to turn it off. You can see it in the larger images on the PS blog.
 

icerock

Member
“DualSense marks a radical departure from our previous controller offerings and captures just how strongly we feel about making a generational leap with PS5. The new controller, along with the many innovative features in PS5, will be transformative for games – continuing our mission at PlayStation to push the boundaries of play, now and in the future. To the PlayStation community, I truly want to thank you for sharing this exciting journey with us as we head toward PS5’s launch in Holiday 2020. We look forward to sharing more information about PS5, including the console design, in the coming months.”

– Jim Ryan, President & CEO, Sony Interactive Entertainment

Looks like it's still on track for a holiday launch, and the console will maybe be revealed in June-July.
 