
Project Acoustics - Wave acoustics engine for 3D interactive experiences by Microsoft

Bernkastel

Ask me about my fanboy energy!




If you want to read more about this
Project Acoustics is a wave acoustics engine for 3D interactive experiences. It models wave effects like occlusion, obstruction, portaling and reverberation effects in complex scenes without requiring manual zone markup or CPU intensive raytracing. It also includes game engine and audio middleware integration. Project Acoustics' philosophy is similar to static lighting: bake detailed physics offline to provide a physical baseline, and use a lightweight runtime with expressive design controls to meet your artistic goals for the acoustics of your virtual world.
And it's coming to Xbox Series X.
Project Acoustics – Incubated over a decade by Microsoft Research, Project Acoustics accurately models sound propagation physics in mixed reality and games, employed by many AAA experiences today. It is unique in simulating wave effects like diffraction in complex scene geometries without straining CPU, enabling a much more immersive and lifelike auditory experience. Plug-in support for both the Unity and Unreal game engines empower the sound designer with expressive controls to mold reality. Developers will be able to easily leverage Project Acoustics with Xbox Series X through the addition of a new custom audio hardware block.
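To make the static-lighting analogy concrete: the expensive wave physics is computed offline into a lookup table, and the runtime only does cheap lookups. Below is a minimal sketch of that split; the names, types and values are entirely hypothetical, not the actual Project Acoustics bake or runtime API.

```python
# Minimal sketch of the "bake offline, lightweight runtime" split described above.
# Names, types and values are hypothetical, not the actual Project Acoustics API.
from dataclasses import dataclass

@dataclass
class AcousticParams:
    occlusion_db: float    # attenuation of the direct path to the listener
    reverb_decay_s: float  # reverb decay time at this listener/source pair
    wet_level_db: float    # reverb loudness relative to the dry signal

def bake_scene(probe_positions, source_positions):
    """Offline step: run the expensive wave simulation once per probe/source pair.
    Here the solver is stubbed out with placeholder values."""
    table = {}
    for li, _ in enumerate(probe_positions):
        for si, _ in enumerate(source_positions):
            table[(li, si)] = AcousticParams(occlusion_db=0.0,
                                             reverb_decay_s=1.0,
                                             wet_level_db=-12.0)
    return table

def query(table, listener_probe: int, source: int) -> AcousticParams:
    """Runtime step: no simulation at all, just a cheap table lookup
    whose results drive ordinary audio DSP."""
    return table[(listener_probe, source)]

baked = bake_scene(probe_positions=[(0, 0, 0), (10, 0, 0)], source_positions=[(5, 0, 2)])
print(query(baked, listener_probe=0, source=0))
```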
 
Last edited:

Bernkastel

Ask me about my fanboy energy!
If you still want to read more
Or even more papers published by Microsoft's Audio and Acoustics group
 
Last edited:

longdi

Banned
Project vs Engine.
Which is better is all we care about.
Does the Project take up a significant portion of a Ryzen core?
 

Bernkastel

Ask me about my fanboy energy!
And then idiots like this guy calling Audio Ray Tracing a 'gimmick'
 
Last edited:

TBiddy

Member
And then idiots like this guy calling it a 'gimmick'

Seems like such an odd take. If that is really his opinion, most raytracing applications are just gimmicks.
 

nikolino840

Member
Project vs Engine.
Which is better is all we care about.
Does the Project take up a significant portion of a Ryzen core?
It has a dedicated chip like the PS5, so it doesn't use CPU power.
---
“It’s extremely exciting,” senior sound designer Daniele Galante said of the new console. “We’re going to have a dedicated chip to work with audio, which means we finally won’t have to fight with programmers and artists for memory and CPU power.”
---
 

Bernkastel

Ask me about my fanboy energy!
Without any dedicated audio chip, some parts of this project were already present in Gears of War 4 purely through software.


Now they have a dedicated audio chip for these things. The same goes for Spatial Audio (similar to Sony's 3D Audio): they delivered it, along with Dolby Atmos, DTS:X and WiSA, mostly through software on Xbox One X.
 
Last edited:

Bernkastel

Ask me about my fanboy energy!
Project vs Engine.
Which is better is all we care about.
Does the Project take up a significant portion of a Ryzen core?
If you are talking about Sony's 3D audio, then Microsoft has Spatial Audio. They also have a dedicated audio chip (see nikolino840's post).
Spatial Audio – Spatial Audio delivers deeply immersive audio which enables the player to more accurately pinpoint objects in a 3D play space. With full support for Dolby Atmos, DTS:X and Windows Sonic, Xbox Series X has custom audio hardware to offload audio processing from the CPU, dramatically improving the accessibility, quality and performance of these immersive experiences.
Project Acoustics is a different thing.
 
Last edited:

-kb-

Member
With all those Instagram posts by Halo's audio team, I can't wait to see the application of Audio Ray Tracing and Spatial Audio in Halo Infinite.

I am confused, you keep mentioning audio raytracing but the project acoustics page specifically says that it isn't raytracing.


Project Acoustics is a wave acoustics engine for 3D interactive experiences. It models wave effects like occlusion, obstruction, portaling and reverberation effects in complex scenes without requiring manual zone markup or CPU intensive raytracing. It also includes game engine and audio middleware integration. Project Acoustics' philosophy is similar to static lighting: bake detailed physics offline to provide a physical baseline, and use a lightweight runtime with expressive design controls to meet your artistic goals for the acoustics of your virtual world.

From that quote, this seems like the audio version of static lighting: you do offline, precanned computation of some expensive effects and then play it back when appropriate. This avoids the need to do expensive calculations in real time, but it's not going to change with the environment like a dynamic solution would.
 
Last edited:

Bernkastel

Ask me about my fanboy energy!
I am confused, you keep mentioning audio raytracing but the project acoustics page specifically says that it isn't raytracing.




From that quote, this seems like the audio version of static lighting: you do offline, precanned computation of some expensive effects and then play it back when appropriate. This avoids the need to do expensive calculations in real time, but it's not going to change with the environment like a dynamic solution would.
From the same link
Ray-based acoustics methods can check for occlusion using a single source-to-listener ray cast, or drive reverb by estimating local scene volume with a few rays. But these techniques can be unreliable because a pebble occludes as much as a boulder. Rays don't account for the way sound bends around objects, a phenomenon known as diffraction. Project Acoustics' simulation captures these effects using a wave-based simulation. The acoustics are more predictable, accurate and seamless.


Project Acoustics' key innovation is to couple real sound wave based acoustic simulation with traditional sound design concepts. It translates simulation results into traditional audio DSP parameters for occlusion, portaling and reverb. The designer retains control over this translation process. For more details on the core technologies behind Project Acoustics, visit the research project page.
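To picture the "translate simulation results into traditional DSP parameters, with designer control over the translation" idea from the quote above, here is a rough sketch. The parameter names and the scale/offset controls are my own illustrative assumptions, not the actual Project Acoustics plug-in interface.

```python
# Hypothetical sketch of a designer-control layer applied on top of baked
# wave-simulation results before they reach the audio DSP. Illustrative only.
from dataclasses import dataclass

@dataclass
class SimResult:              # what the offline wave simulation might report
    occlusion_db: float       # attenuation of the (diffracted) direct path
    decay_time_s: float       # reverb decay time at the listener
    wet_level_db: float       # reverb loudness

@dataclass
class DesignerControls:       # expressive knobs applied after the physics
    occlusion_scale: float = 1.0    # e.g. soften occlusion for gameplay readability
    decay_multiplier: float = 1.0   # e.g. > 1 to make a cave feel spookier
    wet_offset_db: float = 0.0      # e.g. negative to keep dialogue intelligible

def to_dsp_params(sim: SimResult, ctl: DesignerControls) -> dict:
    """Translate physical results into parameters an ordinary DSP chain understands."""
    return {
        "direct_gain_db": -sim.occlusion_db * ctl.occlusion_scale,
        "reverb_decay_s": sim.decay_time_s * ctl.decay_multiplier,
        "reverb_gain_db": sim.wet_level_db + ctl.wet_offset_db,
    }

print(to_dsp_params(SimResult(6.0, 1.4, -10.0), DesignerControls(decay_multiplier=1.5)))
```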
There is also this podcast from Major Nelson


Jason Ronald:
And now with the introduction of hardware accelerated Ray tracing with the Xbox series X, we're actually able to enable a whole new set of scenarios, whether that's more realistic lighting, better reflections, we can even use it for things like spatial audio and have Ray trace audio so that.

Larry Hryb:
Wait a minute,

Jason Ronald:
You are much more immersed.

Larry Hryb:
Ray tracing audio is the first time I've heard that's interesting.

Jason Ronald:
Absolutely. Absolutely. And that's the thing is what we're really focused on is really driving that next level of immersion.

Larry Hryb:
Yeah.

Jason Ronald:
In your gaming experiences. And that imply, that applies to both the visuals as well as the audio experience that you have.

Larry Hryb:
Yeah, right. I mean Ray tracing is one of those things where people, it's tough to, I mean obviously we're in an audio format, here's a podcast. It's tough to describe but you need to really see it and the texture and just makes scenes and areas just come to life. Right,

Jason Ronald:
Exactly, exactly. And with spatial audio a huge part of it as well as just really putting you in the Play space environment really understanding where the enemies are or being just that much more immersed.

You should also watch the Triton demo. Project Acoustics is just a different approach to Audio Ray Tracing.
 
Last edited:

-kb-

Member
From the same link

There is also this podcast from Major Nelson




You should also watch the Triton demo. Audio Ray Tracing is the term the Xbox team used for Project Acoustics.


And I'm sure they will be raytracing audio on the GPU using Radeon Rays, but that's unrelated to the Project Acoustics project, as confirmed by the project's own page.
 

Bernkastel

Ask me about my fanboy energy!
Making games and augmented/virtual reality (AR/VR) feel immersive requires realistic-sounding audio, as characters, events, and users move through virtual rooms, environments, and ecosystems. Project Triton is a physics-based audio design system that creates an acoustic rendering that sounds natural, while keeping CPU usage modest and providing designer control. A product of a decade of sustained research into efficient wave propagation, it is the first demonstration that accurate wave acoustics can meet the demanding needs of AAA games like Gears of War.

Project Acoustics is the larger product effort that makes the internal Triton technology available externally for any game, via Unity and Unreal game engine plugins, with offline computation performed in Azure.
Project Triton automatically renders believable environmental effects that transition smoothly as the player moves through the world, illustrated above. These wave effects, such as obstruction and portaling, involve wave diffraction, making them challenging to compute within the tight CPU budget for games. Because of this, audio designers have traditionally built such auditory experiences manually, a tedious and expensive process. Project Triton automates this tedium while retaining designer control. The designer can then modify acoustics for storytelling goals, for example, reduce reverberance to improve speech intelligibility, or increase reverb decay time to make a cave feel spookier. It all fits within ~10 percent of a CPU core.
 
Last edited:

Bernkastel

Ask me about my fanboy energy!
And I'm sure they will be raytracing audio on the GPU using Radeon Rays, but that's unrelated to the Project Acoustics project, as confirmed by the project's own page.
So, there’s a lot of work in room acoustics where people are thinking about, okay, what makes a concert hall sound great? Can you simulate a concert hall before you build it, so you know how it’s going to sound? And, based on the constraints on those areas, people have used a lot of ray tracing approaches which borrow on a lot of literature in graphics. And for graphics, ray tracing is the main technique, and it works really well, because the idea is you’re using a short wavelength approximation. So, light wavelengths are submicron and if they hit something, they get blocked. But the analogy I like to use is sound is very different, the wavelengths are much bigger. So, you can hold your thumb out in front of you and blot out the sun, but you are going to have a hard time blocking out the sound of thunder with a thumb held out in front of your ear because the waves will just wrap around. And, that’s what motivates our approach which is to actually go back to the physical laws and say, instead of doing the short wave length approximation for sound, we revisit and say, maybe sounds needs the more fundamental wave equation to be solved, to actually model these diffraction effects for us. The usual thinking is that, you know, in games, you are thinking about we want a certain set of perceptual cues. We want walls to occlude sound, we want a small room to reverberate less. We want a large hall to reverberate more. And the thought is, why are we solving this expensive partial differential equation again? Can’t we just find some shortcut to jump straight to the answer instead of going through this long-winded route of physics? And our answer has been that you really have to do all the hard work because there’s a ton of information that’s folded in and what seems easy to us as humans isn’t quite so easy for a computer and and there’s no neat trick to get you straight to the perceptual answer you care about.
...
I mean, the basic ideas have been around. People know that, perceptually, this is important, and there are approaches to try to tackle this, but I’d say, because we’re using wave physics, this problem becomes much easier because you just have the waves diffract around the edge. With ray tracing you face the difficult problem that you have to trace out the rays “intelligently” somehow to hit an edge, which is like hitting a bullseye, right?
...
So, the ray can wrap around the edge. So, it becomes really difficult. Most practical ray tracing systems don’t try to deal with this edge diffraction effect because of that. Although there are academic approaches to it, in practice it becomes difficult. But as I worked on this over the years, I’ve kind of realized, these are the real advantages of this. Disadvantages are pretty clear: it’s slow, right? So, you have to precompute. But we’re realizing, over time, that going to physics has these advantages.
It's just different from traditional methods.
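As a rough picture of what "solving the wave equation instead of tracing rays" means, here is a tiny 2D finite-difference time-domain (FDTD) sketch, assuming a simple grid with a wall and a doorway. It is only a toy to show that a simulated wavefront naturally bends (diffracts) around an opening; the actual Triton/Project Acoustics solver is far more sophisticated and runs offline over the whole scene.

```python
# Toy 2D FDTD solver for the scalar wave equation, purely to illustrate diffraction.
# Not the Triton/Project Acoustics solver.
import numpy as np

N = 200                        # grid size in cells
c, dx = 343.0, 0.1             # speed of sound (m/s), cell size (m)
dt = dx / (c * 1.5)            # time step satisfying the 2D CFL stability limit

p_prev = np.zeros((N, N))      # pressure field at t - dt
p_curr = np.zeros((N, N))      # pressure field at t
wall = np.zeros((N, N), bool)
wall[:, 100] = True            # a wall down the middle of the grid...
wall[90:110, 100] = False      # ...with a doorway; waves diffract around its edges

p_curr[100, 50] = 1.0          # impulsive source on the left side of the wall

courant_sq = (c * dt / dx) ** 2
for _ in range(160):
    # discrete Laplacian of the current pressure field
    # (np.roll makes the grid edges periodic; the run is short enough that
    # wrap-around never reaches the region measured below)
    lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
           np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4.0 * p_curr)
    p_next = 2.0 * p_curr - p_prev + courant_sq * lap
    p_next[wall] = 0.0         # crude boundary: the wall blocks transmission
    p_prev, p_curr = p_curr, p_next

# This region is behind the wall and has no straight line of sight through the
# doorway, yet it receives energy: the wave wrapped around the opening, which a
# single source-to-listener ray cast would report as fully occluded.
print(float(np.abs(p_curr[140:160, 120:140]).max()))
```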
 
Last edited:

-kb-

Member
I swear you didn't even read my previous post, so I'll quote it for you so you can read it again, and I'll put it in nice big bold font for you.

I am confused, you keep mentioning audio raytracing but the project acoustics page specifically says that it isn't raytracing.


From that quote, this seems like the audio version of static lighting: you do offline, precanned computation of some expensive effects and then play it back when appropriate. This avoids the need to do expensive calculations in real time, but it's not going to change with the environment like a dynamic solution would.
Like I said, Project Acoustics is Audio Ray Tracing.

Project Acoustics is a wave acoustics engine for 3D interactive experiences. It models wave effects like occlusion, obstruction, portaling and reverberation effects in complex scenes without requiring manual zone markup or CPU intensive raytracing. It also includes game engine and audio middleware integration. Project Acoustics' philosophy is similar to static lighting: bake detailed physics offline to provide a physical baseline, and use a lightweight runtime with expressive design controls to meet your artistic goals for the acoustics of your virtual world.

The paragraphs you quoted are talking about the reasons they aren't using raytracing. Did you even read them?
 
This user has been reply banned for derailing an honest attempt at a thread to discuss MS approach to audio.
It’s funny how Xbox fans downplay every aspect that is better on PS5 but then make topics about the same aspects saying: “You see, Xbox does it better!”

Xbox won the Teraflop battle 12 vs 10, isn’t that all that matters?

SSD speed doesn’t matter
Audio doesn’t matter
RAM bottleneck doesn’t matter

Or maybe:

Are they afraid that those aspects might matter?
 
Last edited:

Bernkastel

Ask me about my fanboy energy!
I swear you didn't even read my previous post, so I'll quote it for you so you can read it again, and I'll put it in nice big bold font for you.

The paragraphs you quoted are talking about the reasons they aren't using raytracing. Did you even read them?
And my previous post was about how Project Acoustics and Triton are related to audio ray tracing. They said they are not using CPU-intensive ray tracing because they are using other methods to remove the CPU load.
The Triton page talks about reducing the CPU load to about 10% of a core. Also, in Nikunj Raghuvanshi's interview (which I already posted) he repeatedly calls Project Acoustics a form of ray tracing, but different from traditional methods.
 
Last edited:

Bernkastel

Ask me about my fanboy energy!
It’s funny how Xbox fans downplay every aspect that is better on PS5 but then make topics about the same aspects saying: “You see, Xbox does it better!”

Xbox won the Teraflop battle 12 vs 10, isn’t that all that matters?

SSD speed doesn’t matter
Audio doesn’t matter
RAM bottleneck doesn’t matter

Or maybe:

Are they afraid that those aspects might matter?
Weird that you are posting this in a thread about XSX's Audio Ray Tracing. Unlike PS5's 3D Audio, Xbox's Spatial Audio also uses Dolby Atmos, DTS:X and Windows Sonic.
Spatial Audio – Spatial Audio delivers deeply immersive audio which enables the player to more accurately pinpoint objects in a 3D play space. With full support for Dolby Atmos, DTS:X and Windows Sonic, Xbox Series X has custom audio hardware to offload audio processing from the CPU, dramatically improving the accessibility, quality and performance of these immersive experiences.
But Sony's Audio Ray Tracing efforts are almost non-existent.
 

ZehDon

Member
Ray Traced audio is actually a very exciting use for ray tracing hardware that doesn’t get the kind of attention I think it deserves. Being able to have an entire soundscape of effects that accurately map sound change across size, distance, material and direction means not just more accurate audio, but significantly more interesting audio, too. Very excited to see this kind of tech develop.

Side note, since console wars are killing every thread: the PS5 might pack less RT hardware, but this is one area where I expect RT across both consoles to really shine, especially with the extra efforts into dedicated audio processing in both next gen systems.
 

-kb-

Member
And my previous post was about how Project Acoustics and Triton are audio ray tracing. They said they are not using CPU-intensive ray tracing because they are using other methods to remove the CPU load.
The Triton page talks about reducing the CPU load to about 10% of a core. Also, in Nikunj Raghuvanshi's interview (which I already posted) he repeatedly calls Project Acoustics a form of ray tracing, but different from traditional methods.

Can you quote where he specifically says that Project Acoustics is a form of raytracing and doesn't just compare it to raytracing?
 

Bernkastel

Ask me about my fanboy energy!
Can you quote where he specifically says that Project Acoustics is a form of raytracing and doesn't just compare it to raytracing?
So, there’s a lot of work in room acoustics where people are thinking about, okay, what makes a concert hall sound great? Can you simulate a concert hall before you build it, so you know how it’s going to sound? And, based on the constraints on those areas, people have used a lot of ray tracing approaches which borrow on a lot of literature in graphics. And for graphics, ray tracing is the main technique, and it works really well, because the idea is you’re using a short wavelength approximation. So, light wavelengths are submicron and if they hit something, they get blocked. But the analogy I like to use is sound is very different, the wavelengths are much bigger. So, you can hold your thumb out in front of you and blot out the sun, but you are going to have a hard time blocking out the sound of thunder with a thumb held out in front of your ear because the waves will just wrap around. And, that’s what motivates our approach which is to actually go back to the physical laws and say, instead of doing the short wave length approximation for sound, we revisit and say, maybe sounds needs the more fundamental wave equation to be solved, to actually model these diffraction effects for us. The usual thinking is that, you know, in games, you are thinking about we want a certain set of perceptual cues. We want walls to occlude sound, we want a small room to reverberate less. We want a large hall to reverberate more. And the thought is, why are we solving this expensive partial differential equation again? Can’t we just find some shortcut to jump straight to the answer instead of going through this long-winded route of physics? And our answer has been that you really have to do all the hard work because there’s a ton of information that’s folded in and what seems easy to us as humans isn’t quite so easy for a computer and and there’s no neat trick to get you straight to the perceptual answer you care about.
  • In room acoustics, people have used a lot of ray tracing approaches
  • It works really well for graphics, because the idea is you’re using a short wavelength approximation.
  • Light waves are submicron and if they hit something they get blocked
  • Sound is very different, the wavelengths are much bigger. So, you can hold your thumb out in front of you and blot out the sun, but you are going to have a hard time blocking out the sound of thunder with a thumb held out in front of your ear because the waves will just wrap around.
  • So, instead of going for the same approach as ray tracing for light waves, they actually go back to the physical laws and say, instead of doing the short wave length approximation for sound, they revisit and say, maybe sounds needs the more fundamental wave equation to be solved, to actually model these diffraction effects for them.
They are not saying that it is not Audio Ray Tracing. They are saying that you can't use the traditional techniques used for light ray tracing on sound waves. That's what he meant.
 
Last edited:

-kb-

Member
  • In room acoustics, people have used a lot of ray tracing approaches
  • It works really well for graphics, because the idea is you’re using a short wavelength approximation.
  • Light waves are submicron and if they hit something they get blocked
  • Sound is very different, the wavelengths are much bigger. So, you can hold your thumb out in front of you and blot out the sun, but you are going to have a hard time blocking out the sound of thunder with a thumb held out in front of your ear because the waves will just wrap around.
  • So, instead of going for the same approach as ray tracing for light waves, they actually go back to the physical laws and say, instead of doing the short wave length approximation for sound, they revisit and say, maybe sounds needs the more fundamental wave equation to be solved, to actually model these diffraction effects for them.
They are not saying that it is not Audio Ray Tracing. They are saying that you can't use the traditional techniques used for light ray tracing on sound waves. That's what he meant.

The approach isn't raytracing-based at all; as it says on the project page, it uses the same approach as baked lighting in GPU graphics. It is a neat approach, but it is not based around real-time raytracing in any way, shape or form, except if you maybe baked the raytracing and then played it back during gameplay, which is not as good as a dynamic real-time approach.

You have misunderstood what he is saying and, for some reason, refuse to believe even the project's own page about how it approaches the problem.
 

Bernkastel

Ask me about my fanboy energy!
The approach isn't raytracing-based at all; as it says on the project page, it uses the same approach as baked lighting in GPU graphics. It is a neat approach, but it is not based around real-time raytracing in any way, shape or form, except if you maybe baked the raytracing and then played it back during gameplay, which is not as good as a dynamic real-time approach.

You have misunderstood what he is saying and, for some reason, refuse to believe even the project's own page about how it approaches the problem.
Because you can't use the techniques for light ray tracing on audio ray tracing. That's what he says. He is not saying it's not ray tracing, but that sound waves don't work like light waves. You can't block sound waves with your hand. The same path tracing techniques don't apply to sound.
 

Herr Edgy

Member
The paragraphs you quoted are talking about the reasons they aren't using raytracing. Did you even read them?
No clue if someone has pointed that out to you or not, but it says CPU (!) raytracing, which is extremely expensive. Raytracing isn't new at all; what's new is that we can use it at runtime to do whatever we want with it.
Saying it doesn't use raytracing on the CPU to avoid the performance cost doesn't mean they aren't using raytracing elsewhere.
 
Last edited:

Captain Hero

The Spoiler Soldier
Project Acoustics is audio raytracing using a dedicated audio chip. Sony and MS are bringing the same approach but the concept is different. I believe, and it's only an opinion, that the one Sony has is going to be better and almost real time, but both companies will do something great with 3D audio in gaming.
 

-kb-

Member
Because you can't use the techniques for light ray tracing on audio ray tracing. That's what he says. He is not saying it's not ray tracing, but that sound waves don't work like light waves. You can't block sound waves with your hand. The same path tracing techniques don't apply to sound.

Do you have some details about Project Acoustics that not even the researchers have? Because I cannot understand why this quote is on their page.

Project Acoustics is a wave acoustics engine for 3D interactive experiences. It models wave effects like occlusion, obstruction, portaling and reverberation effects in complex scenes without requiring manual zone markup or CPU intensive raytracing. It also includes game engine and audio middleware integration. Project Acoustics' philosophy is similar to static lighting: bake detailed physics offline to provide a physical baseline, and use a lightweight runtime with expressive design controls to meet your artistic goals for the acoustics of your virtual world.

No clue if someone has pointed that out to you or not, but it says CPU (!) raytracing, which is extremely expensive. Raytracing isn't new at all; what's new is that we can use it at runtime to do whatever we want with it.
Saying it doesn't use raytracing on the CPU to avoid the performance cost doesn't mean they aren't using raytracing elsewhere.

It says CPU raytracing because traditionally GPUs had too high a latency to perform a lot of audio-based effects; there's no difference between the two outside of speed and latency.
 

Bernkastel

Ask me about my fanboy energy!
Project Acoustics is audio raytracing using a dedicated audio chip. Sony and MS are bringing the same approach but the concept is different. I believe, and it's only an opinion, that the one Sony has is going to be better and almost real time, but both companies will do something great with 3D audio in gaming.
And I am getting tired of explaining it to everyone.
If you are talking about Sony's 3D audio, then Microsoft has Spatial Audio.
Spatial Audio – Spatial Audio delivers deeply immersive audio which enables the player to more accurately pinpoint objects in a 3D play space. With full support for Dolby Atmos, DTS:X and Windows Sonic, Xbox Series X has custom audio hardware to offload audio processing from the CPU, dramatically improving the accessibility, quality and performance of these immersive experiences.
Sony's 3D Audio is like Microsoft's Spatial Audio. Unlike Sony's, it also fully supports Dolby Atmos, DTS:X and Windows Sonic. They also used it on Xbox One X, but now they have a dedicated audio chip for that. We don't know whose audio chip is better, so let's stop having opinions (people also had opinions about PS5 being 13 TF).
Audio Ray Tracing is something Mark Cerny lightly brushed aside in both the Wired article and the GDC spec reveal. On the other hand, Microsoft did the Triton demo way before the Wired article and has been working on Project Acoustics since 2012.
Project Acoustics in fact is a bit different from traditional audio ray tracing.
 
Last edited:

Bernkastel

Ask me about my fanboy energy!
Do you have some details about Project Acoustics that not even the researchers have? Because I cannot understand why this quote is on their page.
I already explained it to you (making it as simple as possible) that they are not denying Audio Ray Tracing, but that Light Ray Tracing is not the same as Audio Ray Tracing, so they can't use the same rules.
 

Bernkastel

Ask me about my fanboy energy!
But it literally says it's not raytracing, in any form, audio or not. I cannot put this any simpler for you.
But this does
So, there’s a lot of work in room acoustics where people are thinking about, okay, what makes a concert hall sound great? Can you simulate a concert hall before you build it, so you know how it’s going to sound? And, based on the constraints on those areas, people have used a lot of ray tracing approaches which borrow on a lot of literature in graphics. And for graphics, ray tracing is the main technique, and it works really well, because the idea is you’re using a short wavelength approximation. So, light wavelengths are submicron and if they hit something, they get blocked. But the analogy I like to use is sound is very different, the wavelengths are much bigger. So, you can hold your thumb out in front of you and blot out the sun, but you are going to have a hard time blocking out the sound of thunder with a thumb held out in front of your ear because the waves will just wrap around. And, that’s what motivates our approach which is to actually go back to the physical laws and say, instead of doing the short wave length approximation for sound, we revisit and say, maybe sounds needs the more fundamental wave equation to be solved, to actually model these diffraction effects for us. The usual thinking is that, you know, in games, you are thinking about we want a certain set of perceptual cues. We want walls to occlude sound, we want a small room to reverberate less. We want a large hall to reverberate more. And the thought is, why are we solving this expensive partial differential equation again? Can’t we just find some shortcut to jump straight to the answer instead of going through this long-winded route of physics? And our answer has been that you really have to do all the hard work because there’s a ton of information that’s folded in and what seems easy to us as humans isn’t quite so easy for a computer and and there’s no neat trick to get you straight to the perceptual answer you care about.
  • In room acoustics, people have used a lot of ray tracing approaches
  • It works really well for graphics, because the idea is you’re using a short wavelength approximation.
  • Light waves are submicron and if they hit something they get blocked
  • Sound is very different, the wavelengths are much bigger. So, you can hold your thumb out in front of you and blot out the sun, but you are going to have a hard time blocking out the sound of thunder with a thumb held out in front of your ear because the waves will just wrap around.
  • So, instead of going for the same approach as ray tracing for light waves, they actually go back to the physical laws and say, instead of doing the short wave length approximation for sound, they revisit and say, maybe sounds needs the more fundamental wave equation to be solved, to actually model these diffraction effects for them.
They are not saying that it is not Audio Ray Tracing. They are saying that you can't use the traditional techniques used for light ray tracing on sound waves. That's what he meant.
 

-kb-

Member
But this does

No it doesn't; that talks about the differences between audio and graphics raytracing. This project, as the project description says, is not about audio raytracing; it is akin to pre-baked lighting, once again per the project's page. It's a cool project, but it's nothing to do with what you think it is.
 
Last edited:

Bernkastel

Ask me about my fanboy energy!
No it doesn't; that talks about the differences between audio and graphics raytracing. This project, as the project description says, is not about audio raytracing; it is akin to pre-baked lighting, once again per the project's page. It's a cool project, but it's nothing to do with what you think it is.
Yes it does. At this point I have explained everything I need to. You are either trolling or in severe denial.
 

-kb-

Member
Yes it does. At this point I have explained everything I need to. You are either trolling or in severe denial.

You're the one in severe denial.

You literally disagree with the project's own description.

I made it bigger in case you were missing it.

It literally says there is no raytracing used.

It literally says its pre-baked and similar to pre-baked lighting.

Project Acoustics is a wave acoustics engine for 3D interactive experiences. It models wave effects like occlusion, obstruction, portaling and reverberation effects in complex scenes without requiring manual zone markup or CPU intensive raytracing. It also includes game engine and audio middleware integration. Project Acoustics' philosophy is similar to static lighting: bake detailed physics offline to provide a physical baseline, and use a lightweight runtime with expressive design controls to meet your artistic goals for the acoustics of your virtual world.
 

rnlval

Member




If you want to read more about audio ray tracing

Read https://gpuopen.com/gaming-product/true-audio-next/
AMD's True Audio Next includes audio raytracing via RadeonRays. RDNA 2 has hardware accelerated raytracing and XSX has 52 CUs with RT cores.
 

SrBelmont

Neo Member
I saw the video from GDC 2019, and stop it. It's not about ray tracing. It clearly uses precomputed data in a baked way.
 

Bernkastel

Ask me about my fanboy energy!
I saw the video from GDC 2019, and stop it. It's not about ray tracing. It clearly uses precomputed data in a baked way.
As explained in this
It's not like light ray tracing. Unlike traditional audio ray tracing solutions, they are treating audio as actual sound waves. Everyone else treats sound waves like light waves in path tracing. They are doing it a bit differently and are more accurate.
Read https://gpuopen.com/gaming-product/true-audio-next/
AMD's True Audio Next includes audio raytracing via RadeonRays. RDNA 2 has hardware accelerated raytracing and XSX has 52 CUs with RT cores.
That's the hardware. DirectX 12 Ultimate is the software that will power this hardware.
 

Captain Hero

The Spoiler Soldier
You don't even know what you are talking about. And I am getting tired of explaining it to everyone.

I don’t need you to explain it to me boy .. and don’t care about you being tired or not .. I didn’t even ask you

Man .. fanboys these days are just unbearable
 

Bernkastel

Ask me about my fanboy energy!
I don’t need you to explain it to me boy .. and don’t care about you being tired or not .. I didn’t even ask you

Man .. fanboys these days are just unbearable
If you didn't need explaining, then you wouldn't be comparing 3D Audio to Audio Ray Tracing.
 

MDADigital

Neo Member
Hey guys, thought I would chime in since my studio is currently implementing Project Acoustics in our game. Here is a devlog where you can listen to how it sounds (use headphones).



Project Acoustics is not raycast-based in any way. At bake time they simulate how waves propagate through the scene; this is much more complex than raycasting, where a light ray survives a few bounces and cannot bend around corners.

Thanks to a physics phenomenon called diffraction, a sound wave that hits a corner will create a new source for vibration to propagate around said corner.

This would be very expensive to do at runtime and also impossible to do with the same fidelity; this is why raycasting solutions take shortcuts. Here is an example with Steam Audio: when I walk back and forth in that doorway, the propagation is very uneven and the sound changes a lot in character just by moving a little.

Project Acoustics simulates how a sound propagates from predefined points placed by the framework. This simulation takes days or weeks on a single machine (it took 10 days on my AMD 3950X for a small/medium-sized scene), so you offload it to the cloud. It costs about 50 to 200 USD depending on the scene. Anyway, this produces a dataset of a few hundred megabytes that is streamed into memory depending on where the listener is; they then interpolate between points to get a perfect dataset for exactly the acoustics between the listener and audio source, at very little CPU cost.
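To picture the "interpolate between probe points" step described here, a rough sketch follows. The probe layout, parameter names and the inverse-distance weighting are my own illustrative assumptions, not the actual Project Acoustics runtime.

```python
# Hypothetical sketch of blending baked acoustic parameters from the probe points
# nearest to the listener, as described in the post above. Illustrative only.
import math

# Each probe stores parameters baked toward one particular sound source.
# The positions and values are placeholders.
probes = [
    {"pos": (0.0, 0.0, 0.0),  "params": {"occlusion_db": 3.0, "decay_s": 1.2}},
    {"pos": (10.0, 0.0, 0.0), "params": {"occlusion_db": 9.0, "decay_s": 0.8}},
    {"pos": (0.0, 0.0, 10.0), "params": {"occlusion_db": 1.0, "decay_s": 1.5}},
]

def interpolate(listener_pos, probes, k=2):
    """Inverse-distance weighting over the k nearest probes (one possible scheme)."""
    def dist(probe):
        return math.dist(listener_pos, probe["pos"])
    nearest = sorted(probes, key=dist)[:k]
    weights = [1.0 / (dist(p) + 1e-6) for p in nearest]
    total = sum(weights)
    return {
        key: sum(w * p["params"][key] for w, p in zip(weights, nearest)) / total
        for key in nearest[0]["params"]
    }

# The listener stands between the first two probes, so the result is a blend.
print(interpolate((4.0, 0.0, 1.0), probes))
```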
 

Bernkastel

Ask me about my fanboy energy!
Hey guys, thought I would chime in since my studio is currently implementing Project Acoustics in our game. Here is a devlog where you can listen to how it sounds (use headphones).



Project Acoustics is not raycast-based in any way. At bake time they simulate how waves propagate through the scene; this is much more complex than raycasting, where a light ray survives a few bounces and cannot bend around corners.

Thanks to a physics phenomenon called diffraction, a sound wave that hits a corner will create a new source for vibration to propagate around said corner.

This would be very expensive to do at runtime and also impossible to do with the same fidelity; this is why raycasting solutions take shortcuts. Here is an example with Steam Audio: when I walk back and forth in that doorway, the propagation is very uneven and the sound changes a lot in character just by moving a little.

Project Acoustics simulates how a sound propagates from predefined points placed by the framework. This simulation takes days or weeks on a single machine (it took 10 days on my AMD 3950X for a small/medium-sized scene), so you offload it to the cloud. It costs about 50 to 200 USD depending on the scene. Anyway, this produces a dataset of a few hundred megabytes that is streamed into memory depending on where the listener is; they then interpolate between points to get a perfect dataset for exactly the acoustics between the listener and audio source, at very little CPU cost.

So, in your opinion it's better than Audio Ray Tracing?
 

Shin

Banned
Lovely GDC presentation, crystal clear explanation and the best take I've seen/heard so far to get every nook and cranny of a space/scene.
It totally makes sense to deploy more "cubes" where needed to get more accurate sound, and the voxel approach speaks for itself.
I'm not an audiophile (otherwise I wouldn't be on this forum), but I do like a good sound that gives a sense of depth (for the lack of a better word).

Audio is complex, more so than graphics I'd say: different ears and listening abilities, and so many takes from so many vendors over the past decades.
There's no one-size-fits-all, especially with this; both Microsoft and Sony know what they are doing and both have experience in that department.
 

MDADigital

Neo Member
So, in your opinion it's better than Audio Ray Tracing?

No tech is the silver bullet. PA has its drawbacks; for example, each probe only simulates 45x45 meters around it by default. In many games that is fine, but our game is a shooter and our sounds travel a lot further than 45 meters. This means that if, say, the shooter is in a bunker 500 meters away but the listener is in open space, then, since the simulation only takes 45 meters around the listener into account and he is in open space, the sound will be completely unoccluded. We are working on our own solution for this on top of PA.

Also, larger maps like open worlds will suffer from a lot of RAM and disk usage, and long turnaround times at high cost when baking.
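Purely as an illustration of the limitation described above (and not the studio's actual workaround), one way a game might guard against it is to detect when a source lies outside the simulated region and fall back to a cheap heuristic. A minimal, hypothetical sketch:

```python
# Hypothetical fallback when a source is outside the ~45 m baked region around
# the listener, as described above. Not the studio's solution; illustrative only.
import math

SIM_RADIUS_M = 45.0  # default per-probe simulation extent mentioned in the post

def occlusion_db(listener, source, baked_lookup, coarse_raycast_occluded):
    """Use the baked wave-simulated result inside the region, a crude check outside it."""
    if math.dist(listener, source) <= SIM_RADIUS_M:
        return baked_lookup(listener, source)   # wave-simulated occlusion in dB
    # Outside the baked region the dataset says nothing, so fall back to a cheap
    # heuristic such as a single line-of-sight ray (the kind of shortcut the baked
    # simulation normally avoids), applying a fixed attenuation when blocked.
    return 12.0 if coarse_raycast_occluded(listener, source) else 0.0
```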
 

Goliathy

Banned
Hey guys, thought I would chime in since my studio is currently implementing Project Acoustics in our game. Here is a devlog where you can listen to how it sounds (use headphones).



Project Acoustics is not raycast-based in any way. At bake time they simulate how waves propagate through the scene; this is much more complex than raycasting, where a light ray survives a few bounces and cannot bend around corners.

Thanks to a physics phenomenon called diffraction, a sound wave that hits a corner will create a new source for vibration to propagate around said corner.

This would be very expensive to do at runtime and also impossible to do with the same fidelity; this is why raycasting solutions take shortcuts. Here is an example with Steam Audio: when I walk back and forth in that doorway, the propagation is very uneven and the sound changes a lot in character just by moving a little.

Project Acoustics simulates how a sound propagates from predefined points placed by the framework. This simulation takes days or weeks on a single machine (it took 10 days on my AMD 3950X for a small/medium-sized scene), so you offload it to the cloud. It costs about 50 to 200 USD depending on the scene. Anyway, this produces a dataset of a few hundred megabytes that is streamed into memory depending on where the listener is; they then interpolate between points to get a perfect dataset for exactly the acoustics between the listener and audio source, at very little CPU cost.


thanks! sounds amazing! can't wait.

What's the impact on whole-system performance using that tech, though? And does your studio also do the same with PS5? How do these compare?!
 

MDADigital

Neo Member
thanks! sounds amazing! can't wait.

What's the impact on whole-system performance using that tech, though? And does your studio also do the same with PS5? How do these compare?!

We are only shipping on Windows as of now, but are looking into PSVR to reach more VR players. Right now there is no indication PA will be ported to PS5; it works on Android (Oculus Quest), Xbox and Windows currently.

Mainly it's storage and RAM that see an impact; we are talking a few hundred megabytes of data depending on the size of the scene. Also, if you need to increase the simulation area you will see increased RAM usage. CPU use is really low; I wouldn't say it's more than any other spatial DSP, and those don't have close to the same acoustic fidelity. For example, listen here with headphones: when I walk down into the subway my footsteps change. I don't think there is another solution that can do that fidelity.

One thing that PA does not have yet is directional sounds; you can hear that in my devlog above when my fellow dev turns around and it does not change his speech. They are working on this, and that will increase CPU use at runtime a bit. But right now I would say there is more or less no CPU impact compared to other solutions.
 