
Square Enix shows WITCH CHAPTER 0 [cry] Luminous Engine demo [DX12, 4 Titan, Screens]

b0bbyJ03

Member
You should probably be aware that, due to SLI efficiency issues, 4 Titans in SLI do not equal 4x the performance of a single Titan.

VRAM is only equal to a single card's VRAM, and each additional card contributes progressively less performance in SLI.

DirectX12 does allow you to combine the VRAM of multiple cards. Does that mean they were able to use that feature? I have no idea, but it's a possibility
 
You should probably be aware that, due to SLI efficiency issues, 4 Titans in SLI do not equal 4x the performance of a single Titan.

VRAM is only equal to a single card's VRAM, and each additional card contributes progressively less performance in SLI.

You forgot that this uses DX12, and DX12 brings back SFR, so basically you can use all four cards' VRAM, which is equal to 48 GB now thanks to DX12.

DirectX12 does allow you to combine the VRAM of multiple cards. Does that mean they were able to use that feature? I have no idea, but it's a possibility

Yes. Otherwise how could they stream all those GBs of texture data?
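To put numbers on that claim (mine, not from the thread): under traditional AFR-style SLI every card mirrors the same data, while DX12-style SFR/explicit multi-adapter lets each card hold different resources, so with four 12 GB Titan Xs:

```latex
\underbrace{12\ \text{GB}}_{\text{AFR/SLI (mirrored)}}
\qquad \text{vs} \qquad
\underbrace{4 \times 12\ \text{GB} = 48\ \text{GB}}_{\text{SFR / explicit multi-adapter (split)}}
```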
 

Nzyme32

Member
DirectX12 does allow you to combine the VRAM of multiple cards. Does that mean they were able to use that feature? I have no idea, but it's a possibility

Ah yeah, completely forgot about that. I read that it would be very unlikely for developers to actually utilise such a feature since they would specifically have to develop with that in mind (iirc), but I have no idea how true that will be in practice or whether this demo uses it. Hopefully there will be more info soon.
 

b0bbyJ03

Member
Ah yeah, completely forgot about that. I read that it would be very unlikely for developers to actually utilise such a feature since they would specifically have to develop with that in mind (iirc), but I have no idea how true that will be in practice or whether this demo uses it. Hopefully there will be more info soon.

Considering they're working with Nvidia I would expect them to, but who knows. Also, I think it's being touted as one of the biggest deals with DX12: you'll be able to combine multiple GPUs and their VRAM. I wonder if DX12 will just automatically use all available resources or if that's something that needs to be programmed directly, as you suggested.
 

HeelPower

Member
Honestly, Square should send their tech demos over to TV companies like Samsung etc.

It could be a good way to show off new 4K TV capabilities as well as a good way for Square to gain some publicity.

You know those random demos that run on TV displays. I've actually seen FFXIII CG on display on a Samsung TV once.
 
1 Titan X is 6.6 TFs, so 26.4 TFs total.

That's ~8.5x the graphics power of 1 GTX 680 and 14x the graphics power of a PS4. In one desktop.

Put just 9.5 more of those PCs together and you can do this

Holy shit.
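For anyone checking the arithmetic (using the peak figures quoted around the thread: ~6.6 TFLOPS per Titan X, ~3.1 TFLOPS for a GTX 680, 1.84 TFLOPS for PS4):

```latex
4 \times 6.6 = 26.4\ \text{TFLOPS}, \qquad
\frac{26.4}{3.1} \approx 8.5\times\ \text{(GTX 680)}, \qquad
\frac{26.4}{1.84} \approx 14.3\times\ \text{(PS4)}
```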
 

Linkup

Member
Considering they're working with Nvidia I would expect them to, but who knows. Also, I think it's being touted as one of the biggest deals with DX12: you'll be able to combine multiple GPUs and their VRAM. I wonder if DX12 will just automatically use all available resources or if that's something that needs to be programmed directly, as you suggested.

They are calling it Multiadapter. It's API support for multiple GPUs, so it's automatic to an extent, but it needs fine-tuning. For memory, you can have it handled automatically or customize it for each part depending on your needs.
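To make "automatic to an extent" concrete, here's a minimal sketch (not from the demo, just standard DXGI/D3D12 calls) of the first step of explicit multi-adapter: enumerating every hardware GPU and creating a separate ID3D12Device for each. The loop structure and logging are illustrative only.

```cpp
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    // Unlike implicit SLI, explicit multi-adapter gives the app one
    // ID3D12Device per physical GPU, each with its own VRAM (12 GB on
    // a Titan X) -- nothing is mirrored unless the app mirrors it.
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            wprintf(L"GPU %u: %s, %zu MB VRAM\n", i, desc.Description,
                    desc.DedicatedVideoMemory / (1024 * 1024));
        }
    }
    return 0;
}
```

From there, splitting a frame (SFR) or alternating frames (AFR) across those devices is entirely up to the engine, which is why it needs the fine-tuning mentioned above.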
 

b0bbyJ03

Member
They are calling it Multiadapter. It's API support for multiple GPUs, so it's automatic to an extent, but it needs fine-tuning. For memory, you can have it handled automatically or customize it for each part depending on your needs.

damn, that is exciting news.
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
I wish the console manufacturers would see this footage as "Yes, our next console MUST match the same PC specifications to run games at this level of graphical fidelity & stability while selling it at $399." instead of going "Wow! That looks totally awesome. I hope this runs on low-mid level PC specs. I'm sure it's fine, else the devs should downgrade their games, I'm sure the public won't notice it.".

That doesn't make any sense. You're never going to get that fidelity in a $399 console in the timeframe they'd have for their next console, the PS5/XB2. I think I'm going to make a thread about people living in the clouds like it's 20 years ago, and the thought process behind why people think it's reasonable.
 
Wait guys.. WAIT.. Did they say.. WITCH PROJECT??

[image: edea.jpg]
Final Fantasy VIII-2 confirmed! HOLY!

One guy can dream..

How dare you...my dreams!
 

Koozek

Member
Sugoi! I want to see what happens when she goes full gold. Never go full gold.
[gif: tumblr_nmk6qrpK6C1qer734o1_500.gif]
See! Even more proof of my theory that Eidos is working on a new mainline FF featuring Agni. I'm slowly becoming the 1Truth/Galvanizer of GAF with this theory, haha.
 

Mr. RHC

Member
See! Even more proof of my theory that Eidos is working on a new mainline FF featuring Agni. I'm slowly becoming the 1Truth/Galvanizer of GAF with this theory, haha.

Haha. The hero we deserve. I keep forgetting that Deus Ex is a Square Enix IP now.
 

Ampsicora

Member
I know what I'm about to say seems crazy, but Nvidia has pulled this trick before, with Samaritan, which originally ran on a PC with three GeForce 580s and then, after one single year, they did the same presentation with only one 680.
My thought is... will they announce a new GeForce next year that can actually run this demo on a single card?
 

-SD-

Banned
This demo is also included in this "Advanced DirectX12 Graphics and Performance" presentation.

http://channel9.msdn.com/Events/Build/2015/3-673

Microsoft said:
DirectX12 enables graphics intensive apps to deliver better performance with greater flexibility and control. This technical session goes deep into the DirectX12 APIs you can use to reduce CPU rendering overhead, manage GPU resource usage more efficiently, and express the most cutting-edge 3D graphics possible across the spectrum of Windows devices. Whether you are building an app for the phone, PC, or Xbox, you don't want to miss this session.
 

AmyS

Member
Can't believe I missed this.

Pretty impressive.

28 TFLOPS... even the next-gen consoles won't have enough power to render this, lol.


To be fair, that SLI setup doesn't scale linearly in performance or output; you're going to lose more the more you stack.

But yeah, I expect PS5 to be 10 to 12 TFLOPS at the absolute highest if they're going for low to mid range parts in 2019 (although it's crazy to think even 5 to 6 TFLOPS is going to be low end at that time).

That being said, this is a pure brute force demonstration, as someone else said; all that individual hair isn't going to matter much if you're making a game, and you'd scale that down somewhat anyway since consumers can't see it.

You could get near the same visual standard by taking out a lot of the overkill effects, especially since we don't know the res or FPS they're running it at. It could run at 4K 60, in which case lowering the res and framerate is obviously doable.
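If it really is running 4K at 60 fps, dropping to 1080p30 is an 8x cut in raw pixel throughput (a rough proxy for GPU load, assuming shading work scales with pixels rendered per second):

```latex
\frac{3840 \times 2160 \times 60\ \text{fps}}{1920 \times 1080 \times 30\ \text{fps}} = 8
```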

Also, high bandwidth memory (HBM) should play a huge part in allowing for more complex visuals than we see today, even with the highest-end cards. So whatever level of CPU/GPU processing power PS5 has, bandwidth shouldn't be a problem.

HBM1 PC cards are happening this year (AMD 390X) and HBM2 starting in 2016 (Pascal), so maybe the next Xbox and PS5 will get to use 2nd or even 3rd gen HBM.

12 to 15 TFLOPS might be reasonable for consoles launching in late 2019.
 
This demo is also included in this "Advanced DirectX12 Graphics and Performance" presentation.

http://channel9.msdn.com/Events/Build/2015/3-673

OK, WOW.

From this video, the presenter says that the demo was running at 4K resolution and downsampled to 1080p.

You could do this demo on 1 Titan X just fine! If the demo uses less than 6 GB of VRAM, you could run it on a 980 Ti when it comes out in a month or two.

Hell, Agni's Philosophy used 1.8 GB of VRAM on a 680, and this new demo doesn't look that much better, save for Agni's hair. You could probably run it on a $329 GTX 970, if the Luminous engine and the demo assets scaled appropriately.
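The pixel math behind that claim: 4K is exactly four times 1080p, so rendering at 1080p cuts the shading work by roughly the same factor as going from four cards to one (and SLI scaling losses mean the single-card case fares even better than the naive division suggests):

```latex
\frac{3840 \times 2160}{1920 \times 1080} = \frac{8\,294\,400}{2\,073\,600} = 4
```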
 

jose1

Member
OK, WOW.

From this video, the presenter says that the demo was running at 4K resolution and downsampled to 1080p.

You could do this demo on 1 Titan X just fine! If the demo uses less than 6 GB of VRAM, you could run it on a 980 Ti when it comes out in a month or two.

Hell, Agni's Philosophy used 1.8 GB of VRAM on a 680, and this new demo doesn't look that much better, save for Agni's hair. You could probably run it on a $329 GTX 970, if the Luminous engine and the demo assets scaled appropriately.

That is exactly what was going through my mind when I saw this. Graphics similar to these are possible in real time today without a super rig. Render it at 1080p-1440p with FXAA/SMAA and very high poly assets instead of 4K plus ridiculous asset quality, and a very, very close image is achievable on a 970 or two. This was showcased in the old Agni demo on a 680.

The whole point of this video is to take things to the extreme, ultra-pristine quality levels seen in pre-rendered CG, pushing the limits of what kind of polygon counts can be rendered in real time, hence the crazy poly counts and 4K resolution. These IQ increases make it exponentially more demanding on the hardware, hence the sudden need for 4 Titan X cards.
 

TheTux

Member
Honestly, Square should send their tech demos over to TV companies like Samsung etc.

It could be a good way to show off new 4K TV capabilities as well as a good way for Square to gain some publicity.

You know those random demos that run on TV displays. I've actually seen FFXIII CG on display on a Samsung TV once.

That's what I was going to say, I think they did that with FFXIII.
 

Lazaro

Member
Honestly, Square should send their tech demos over to TV companies like Samsung etc.

It could be a good way to show off new 4K TV capabilities as well as a good way for Square to gain some publicity.

You know those random demos that run on TV displays. I've actually seen FFXIII CG on display on a Samsung TV once.

They already do.

Here in Australia, I noticed 4K Panasonic TVs using footage from the FF XIV ARR PC benchmark. They also used the Project Cars benchmark. I wonder if these TV manufacturers have noticed the rise in PC users connecting to TVs.
 

owasog

Member
Also, high bandwidth memory (HBM) should play a huge part in allowing for more complex visuals than we see today, even with the highest-end cards. So whatever level of CPU/GPU processing power PS5 has, bandwidth shouldn't be a problem.

HBM1 PC cards are happening this year (AMD 390X) and HBM2 starting in 2016 (Pascal), so maybe the next Xbox and PS5 will get to use 2nd or even 3rd gen HBM.

12 to 15 TFLOPS might be reasonable for consoles launching in late 2019.
I don't share your optimism. More bandwidth doesn't mean more TFLOPS. By the time 14nm is ready next year, 28nm is more than four (yes FOUR) years old. If history repeats itself, we'll still be on 14nm in 2020.

GF110 (2010, 1.5 TFLOPS) -> GM200 (2015, 6 TFLOPS), a 4x improvement in 5 years.

Assuming PS5/XB2 will use a budget GPU again, we'll get 7 TFLOPS consoles in 2020. (1.8TF x 4). Basically the power of a Titan X, but probably with a lot more memory and bandwidth (HBM).
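Spelled out with the figures quoted above:

```latex
\frac{6\ \text{TFLOPS (GM200, 2015)}}{1.5\ \text{TFLOPS (GF110, 2010)}} = 4\times
\quad\Rightarrow\quad
1.8\ \text{TFLOPS} \times 4 \approx 7\ \text{TFLOPS consoles in 2020}
```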
 

Koozek

Member
You need to browse GAF a bit more carefully :p
Hm? If you're talking about the rumored, cancelled Eidos Montreal FF, or FF Fortress, which was apparently going to be continued at Eidos Montreal before 2011, yeah, I know about them now. I still believe they're working on something else.

I just don't understand why the FFXV art director, who had nothing to do with the development of the Agni's Philosophy tech demo, chose to draw Agni in a new outfit for that SE Happy New Year card on Twitter a few months ago. Why her instead of the obvious one, Noctis?
I mean, you could say, "Well, they were working on this DX12 demo at that time already and Naora knew about it and wanted to show a little teaser or hint." But, ah, I don't know...

There's something in the bushes :D
 

E-Cat

Member
I don't share your optimism. More bandwidth doesn't mean more TFLOPS. By the time 14nm is ready next year, 28nm is more than four (yes FOUR) years old. If history repeats itself, we'll still be on 14nm in 2020.

Well, a little bit more optimism would be warranted if you had done some research. The reason we'll have been on 28nm for four years is that 20nm was a bust and was only used for SoCs (e.g. the iPhone 6). But it's not a situation that's likely to repeat. 20nm was the last planar node for TSMC, and was basically created to ease the transition to FinFET transistors. For this compromise, it had all kinds of issues, like poor power scaling.

However, 28nm --> 16FF+ will be like a node shrink+, with a 2x density improvement (instead of the usual 1.9x) and 70% (!) less power consumption - that's normally like 40-50% at best.

In their Q1/15 report, TSMC stated that their 10nm qualification is slated for late 2015, with volume ramp beginning at the end of 2016. Granted, there's still room for slippage and the first 100 million chips will be SoCs, but expect 10nm GPUs roughly a year from then. And it will be a true node shrink, too, with 2.1x the logic density and 40% power reduction.

As for 7nm, risk production is targeted at early 2017. The longest cadence in TSMC's history from an announced risk production date to product on the shelves is about two years, and that's the worst-case scenario with lots of delays. All this is to say, there will be 7nm GPUs in late 2019/early 2020 at the latest.

GF110 (2010, 1.5 TFLOPS) -> GM200 (2015, 6 TFLOPS), a 4x improvement in 5 years.

That's pretty impressive with only one node shrink, tbh. It would only yield a 2x improvement in this case, which means the underlying microarchitecture has also been progressing. Also, Maxwell flops are more valuable than Fermi flops.

Assuming PS5/XB2 will use a budget GPU again, we'll get 7 TFLOPS consoles in 2020. (1.8TF x 4). Basically the power of a Titan X, but probably with a lot more memory and bandwidth (HBM).

In fall 2013, the most powerful GPU on the market was the R9 290X, which was 5.632 / 1.843 ≈ 3x more powerful than the PS4. AMD will probably release a ~50 TFLOPS GPU in the 2019-2020 timeframe (the 390X this year will be around 8 TFLOPS, then we'll have three die shrinks, so 2^3 * 8 TFLOPS = 64 TFLOPS). Taking the conservative 50 TFLOPS number and dividing it by three, a PS5 released circa 2019-2020 should easily be around 15 TFLOPS, which is as it should be, because it doesn't make any sense to release a console that's not close to an order of magnitude more powerful than its predecessor.
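For reference, the arithmetic in that projection:

```latex
\frac{5.632}{1.843} \approx 3.06, \qquad
2^3 \times 8\ \text{TFLOPS} = 64\ \text{TFLOPS}, \qquad
\frac{50\ \text{TFLOPS}}{3} \approx 16.7\ \text{TFLOPS}
```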
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Well, a little bit more optimism would be warranted if you had done some research.
Well, let's see..

The reason we'll have been on 28nm for four years is that 20nm was a bust and was only used for SoCs (e.g. the iPhone 6). But it's not a situation that's likely to repeat. 20nm was the last planar node for TSMC, and was basically created to ease the transition to FinFET transistors. For this compromise, it had all kinds of issues, like poor power scaling.
All planar nodes below 45nm have had issues with poor power scaling, or at least poorer than expected scaling, thanks to the dreaded exponential leakage. Why do you think voltage has been dropping so slowly (read: been virtually stuck at the same levels) for the past few planar nodes? No proportional drop in voltage, no proportional power scaling. FinFET helps, but it just offsets the problem a bit. How much does it offset it? Well, not much - in the 7nm timeframe we'll be seeing the transition to exotic materials. That's literally 2 nodes from today's off-the-shelf!

However, 28nm --> 16FF+ will be like a node shrink+, with a 2x density improvement (instead of the usual 1.9x) and 70% (!) less power consumption - that's normally like 40-50% at best.
..Which says absolutely nothing when taken outside of any yield context. We have been able to make single-atom transistors for the past couple of years. In state-of-the-art laboratories, at who-knows-what-bucks a pop. Yay!

In their Q1/15 report, TSMC stated that their 10nm qualification is slated for late 2015, with volume ramp beginning at the end of 2016. Granted, there's still room for slippage and the first 100 million chips will be SoCs, but expect 10nm GPUs roughly a year from then. And it will be a true node shrink, too, with 2.1x the logic density and 40% power reduction.
For some definition of 'volume ramp', sure. I admire your optimism, BTW. Here's what NV themselves had to say about their over-stayed love affair with 28nm:

http://www.extremetech.com/computin...y-with-tsmc-claims-22nm-essentially-worthless

Let me quote one particular graph from the article:
[image: NV-Pres3.jpg]


10nm GPUs in 2017? Made by whom? Vivante?

As for 7nm, risk production is targeted at early 2017. The longest cadence in TSMC's history from an announced risk production date to product on the shelves is about two years, and that's the worst-case scenario with lots of delays. All this is to say, there will be 7nm GPUs in late 2019/early 2020 at the latest.
See me go out on a limb and say that there won't be 7nm discrete GPUs anytime before 2020. There, it's gone on record now.

That's pretty impressive with only one node shrink, tbh. It would only yield a 2x improvement in this case, which means the underlying microarchitecture has also been progressing. Also, Maxwell flops are more valuable than Fermi flops.
Maxwell flops come largely at the cost of eliminating double precision. Transistors which used to do DP now do SP. Given that NV sees a great future in half-precision, I guess we'll soon have bucketloads of fp16. Much efficiency, so gains!

In fall 2013, the most powerful GPU on the market was the R9 290X, which was 5.632 / 1.843 ≈ 3x more powerful than the PS4. AMD will probably release a ~50 TFLOPS GPU in the 2019-2020 timeframe (the 390X this year will be around 8 TFLOPS, then we'll have three die shrinks, so 2^3 * 8 TFLOPS = 64 TFLOPS). Taking the conservative 50 TFLOPS number and dividing it by three, a PS5 released circa 2019-2020 should easily be around 15 TFLOPS, which is as it should be, because it doesn't make any sense to release a console that's not close to an order of magnitude more powerful than its predecessor.
You've brought some linear logic to an exponential gun-fight. Good thing is, we can all sit back and watch the show unfold over the next few years.
 

John Harker

Definitely doesn't make things up as he goes along.
Oh, bummer. I was supposed to go to that and ended up bailing... That would have been cool to see.
 
I wonder what it would take to run at 1080p, 30FPS with something like SMAA with a temporal component (like Ryse and Second Son). And if that could be within reach for next gen with evolving tech.
 

Momentary

Banned
Honestly, Square should send their tech demos over to TV companies like Samsung etc.

It could be a good way to show off new 4K TV capabilities as well as a good way for Square to gain some publicity.

You know those random demos that run on TV displays. I've actually seen FFXIII CG on display on a Samsung TV once.

Over here on base in North Carolina they use some kind of PC pCARS tech demo on the screen. I thought it was pretty damn cool how they were using a PC game to show off the capabilities of 4K.
 

E-Cat

Member
All planar nodes below 45nm have had issues with poor power scaling, or at least poorer than expected scaling, thanks to the dreaded exponential leakage. Why do you think voltage has been dropping so slowly (read: been virtually stuck at the same levels) for the past few planar nodes? No proportional drop in voltage, no proportional power scaling. FinFET helps, but it just offsets the problem a bit. How much does it offset it? Well, not much - in the 7nm timeframe we'll be seeing the transition to exotic materials. That's literally 2 nodes from today's off-the-shelf!

Yeah, probably. Microarchitectural innovations also help with the offset. Maxwell was able to double its power-efficiency over Kepler on the 28nm node. I think we'll manage for a couple generations yet.

..Which says absolutely nothing when taken outside of any yield context. We have been able to make single-atom transistors for the past couple of years. In state-of-the-art laboratories, at who-knows-what-bucks a pop. Yay!
Well, this is not some exotic research project - it will be a commercially viable node in just a few months. And considering 16nm is actually using the same backend as 20nm, there's a reason to believe the ramp will be relatively quick.

10nm GPUs in 2017? Made by whom? Vivante?
Are you expecting TSMC/GloFo/Samsung to go out of business? Note that I said "roughly" a year later. Could easily slip into early 2018.

See me go out on a limb and say that there won't be 7nm discrete GPUs anytime before 2020. There, it's gone on record now.

It's easy to be pessimistic, sure. Intel just recently ordered 15 EUV systems from ASML. It means we could be looking at the insertion of EUV at the 7nm node. Will it lower the cost per transistor compared to ArF multiple-patterning? Maybe it will, maybe it won't.

Maxwell flops come largely at the cost of eliminating double precision. Transistors which used to do DP now do SP. Given that NV sees a great future in half-precision, I guess we'll soon have bucketloads of fp16. Much efficiency, so gains!

It doesn't make any sense to have DP hardware in a gaming GPU, so good riddance. I hope they will stay separate lines. FP16 will be useful for some Deep Learning stuff.

You've brought some linear logic to an exponential gun-fight. Good thing is, we can all sit back and watch the show unfold over the next few years.

That we can. Besides, it's you who is thinking linearly. What's your claim, anyway? That there won't be a 7nm discrete GPU by December 31st, 2020? I shall be bringing up this conversation around the time of the PS5's announcement - be sure to wear some tar and feathers.
 
Point #1:

I found something on a Square Enix employee's resume that could be very interesting, but I'm not sure how reliable it is... he describes Agni's Philosophy as a "movie teaser."

Square-Enix also made several Animated Movies, featuring Cameos Characters. Their first animated movie, Final Fantasy : Spirit Within, released in 2001. And their second animated movie, Final Fantasy VII : Advent Children was released in 2005. In 2013, Square-Enix released their new Movie Teaser "Agni's Philosophy" using the newly High-Technology RTP.

...which seems to me to imply that Agni's Philosophy is a full-length film project along the lines of Spirits Within and Advent Children.

Point #2:
Square Enix has been hiring CG movie producers and directors this year. They had a job listing up back in February (now gone) seeking a "game and movie director" for pre-rendered cinematics.

SE called itself "a world-class game and movie studio to the global market" in the ad. Rather an odd description for itself when they haven't done a CG movie since 2009.

There is also a listing up now for a game and movie CG producer, though it doesn't include the "world-class movie studio" bit:
https://js01.jposting.net/square-enix/u/producer/job.phtml?job_code=419

Now that could all just be standard Visual Works stuff, but I do wonder if there's a larger CG project being worked on here.
 

AmyS

Member
Wouldn't it be fair to reason that consoles shipping in late 2019 will be using 10nm FinFET chips, whether they're manufactured by GlobalFoundries, TSMC or even Samsung, and regardless of whether they're AMD or Nvidia?
 

E-Cat

Member
Wouldn't it be fair to reason that consoles shipping in late 2019 will be using 10nm FinFET chips, whether they're manufactured by GlobalFoundries, TSMC or even Samsung, and regardless of whether they're AMD or Nvidia?

Most likely, yeah.
 

Reallink

Member
Honestly, Square should send their tech demos over to TV companies like Samsung etc.

It could be a good way to show off new 4K TV capabilities as well as a good way for Square to gain some publicity.

You know those random demos that run on TV displays. I've actually seen FFXIII CG on display on a Samsung TV once.

They do. Panasonic's 3D in-store demo reel used FF14 cutscenes some years ago.
 

Reallink

Member
Point #1:

I found something on a Square Enix employee's resume that could be very interesting, but I'm not sure how reliable it is... he describes Agni's Philosophy as a "movie teaser."


...which seems to me to imply that Agni's Philosophy is a full-length film project along the lines of Spirits Within and Advent Children.

Point #2:
Square Enix has been hiring CG movie producers and directors this year. They had a job listing up back in February (now gone) seeking a "game and movie director" for pre-rendered cinematics.

SE called itself "a world-class game and movie studio to the global market" in the ad. Rather an odd description for itself when they haven't done a CG movie since 2009.

There is also a listing up now for a game and movie CG producer, though it doesn't include the "world-class movie studio" bit:
https://js01.jposting.net/square-enix/u/producer/job.phtml?job_code=419

Now that could all just be standard Visual Works stuff, but I do wonder if there's a larger CG project being worked on here.

Smells like a real-time VR movie project.
 
I wonder if this tech demo is eventually going to be a game. I saw a cinematic during the PS4 reveal that looked like a Final Fantasy game, but I'm not sure it was a real game.
 

salromano

Member
Point #1:

I found something on a Square Enix employee's resume that could be very interesting, but I'm not sure how reliable it is... he describes Agni's Philosophy as a "movie teaser."


...which seems to me to imply that Agni's Philosophy is a full-length film project along the lines of Spirits Within and Advent Children.

Point #2:
Square Enix has been hiring CG movie producers and directors this year. They had a job listing up back in February (now gone) seeking a "game and movie director" for pre-rendered cinematics.

SE called itself "a world-class game and movie studio to the global market" in the ad. Rather an odd description for itself when they haven't done a CG movie since 2009.

There is also a listing up now for a game and movie CG producer, though it doesn't include the "world-class movie studio" bit:
https://js01.jposting.net/square-enix/u/producer/job.phtml?job_code=419

Now that could all just be standard Visual Works stuff, but I do wonder if there's a larger CG project being worked on here.

"Movie" is a common term for trailers and CG videos, etc. They don't actually mean feature-length movies.

i.e.

https://www.youtube.com/watch?v=nr4jx2u7QlY - "Bravely Second Opening Movie"

https://www.youtube.com/watch?v=-OltkbBY2T8 - "Deception IV: The Nightmare Princess Teaser Movie"

https://www.youtube.com/watch?v=bQbpt2rrwWY - "Stella Glow Teaser Movie"

etc. etc.
 