
Why isn't OpenAL more widespread? (Headphones and HRTF inside)

It's so weird to me that OpenAL is still a thing. I created the base API model when at Loki, Bernd wrote the spec, my best friend Joe wrote the original Linux implementation, and Creative contributed their implementation on Windows. I remember going up there and meeting Keith Charley and the EAX team and what not. So long ago.

Oculus is pushing research into 3D audio once again, HRTFs, etc.
 

Tetranet

Member
There is hope for more attention if VR headsets catch on. 3D audio is an extremely important component for the level of immersion that the companies are trying to achieve.


Either way, this topic needs more attention. It's a shame to leave technology like this forgotten.
 
Dolby Headphone works very well for this and it's compatible with any multi-channel sound output. A lot of people who demand this kind of thing buy an outboard box: you connect your sound output to it, it encodes the multi-channel signal to Dolby Headphone, and then you plug your headphones into the box.
 
Third page, one view.
Seems no one cares; I think the question is answered.

I have cared for years. I miss the days of EAX and real-time 3D positional audio, which seems to be missing from all games now. It seems like everything is made for onboard sound chips... ugh.

I remember playing Thief 1 and 2, Deus Ex, and hell, even Battlefield Vietnam, which had built-in schemes for Creative cards. The cards themselves used to come with 8-10 PC games, all with great sound. The Sound Blaster Live! card did 3D audio so well. It even gave my PC better FPS back in those days, since it took audio processing completely off the CPU. I guess today that's not an issue as CPUs have multiple cores, but the sound quality has turned to crap since then.

When Windows Vista hit, all that stuff went out the door. I had to get rid of my Sound Blaster Live! as it started causing severe system stability issues. I then noticed that no games from 2006/7 onward were really using any 3D sound or capitalizing on it. It's such a shame!

I remember how amazing just a quad speaker setup with a woofer was with real 3D sound. Or hell, a pair of 5.1 headphones. Great sound takes a game like Thief from a good experience to an amazing one. It's crazy how many sounds you miss out on without good drivers for 3D sound and a good hardware card.

I would love to invest in a new sound card, but what is supported nowadays hardware-wise? Didn't Microsoft bork all the 3D sound options, or was this fixed in Windows 7 (which I use)? If you have suggestions on a sub-$200 sound card, I am all ears, but if there is no software that takes advantage of it... what's the point?
 
Bumping this solely for its worthy cause. Audio is the one thing in computing that has seen a huge decline over the past 15 years. I had an Aureal Vortex 2 back in 1998 that was deprecated going from Win9x to the NT kernel. It was the best (by far) audio that I'd ever heard in gaming in the sense that it was accurate. EAX5 was then getting kinda close until Microsoft deprecated the audio stack in Vista.

Now, we're pretty much left with shit. Just contrast this with this. And BF4 has amazing audio next to its contemporaries.
 
With all the people who go out and buy an expensive TV and then use its weak, shitty onboard speakers for audio, it's no surprise audio tech in games has gone backwards.
 

Arkanius

Member
It's so weird to me that OpenAL is still a thing. I created the base API model when at Loki, Bernd wrote the spec, my best friend Joe wrote the original Linux implementation, and Creative contributed their implementation on Windows. I remember going up there and meeting Keith Charley and the EAX team and what not. So long ago.

Oculus is pushing research into 3D audio once again, HRTFs, etc.

Holy shit
Tell us more, I am interested in this.

I have cared for years. I miss the days of EAX and real-time 3D positional audio, which seems to be missing from all games now. It seems like everything is made for onboard sound chips... ugh.

I remember playing Thief 1 and 2, Deus Ex, and hell, even Battlefield Vietnam, which had built-in schemes for Creative cards. The cards themselves used to come with 8-10 PC games, all with great sound. The Sound Blaster Live! card did 3D audio so well. It even gave my PC better FPS back in those days, since it took audio processing completely off the CPU. I guess today that's not an issue as CPUs have multiple cores, but the sound quality has turned to crap since then.

When Windows Vista hit, all that stuff went out the door. I had to get rid of my Sound Blaster Live! as it started causing severe system stability issues. I then noticed that no games from 2006/7 onward were really using any 3D sound or capitalizing on it. It's such a shame!

I remember how amazing just a quad speaker setup with a woofer was with real 3D sound. Or hell, a pair of 5.1 headphones. Great sound takes a game like Thief from a good experience to an amazing one. It's crazy how many sounds you miss out on without good drivers for 3D sound and a good hardware card.

I would love to invest in a new sound card, but what is supported nowadays hardware-wise? Didn't Microsoft bork all the 3D sound options, or was this fixed in Windows 7 (which I use)? If you have suggestions on a sub-$200 sound card, I am all ears, but if there is no software that takes advantage of it... what's the point?

Microsoft is one of the killers of good audio. Creative was the other, with the death of Aureal.

Bumping this solely for its worthy cause. Audio is the one thing in computing that has seen a huge decline over the past 15 years. I had an Aureal Vortex 2 back in 1998 that was deprecated going from Win9x to the NT kernel. It was the best (by far) audio that I'd ever heard in gaming in the sense that it was accurate. EAX5 was then getting kinda close until Microsoft deprecated the audio stack in Vista.

Now, we're pretty much left with shit. Just contrast this with this. And BF4 has amazing audio next to its contemporaries.

RIP Aureal

Lots of companies won't use GPL/LGPL software outside of linking to dynamic system libraries on Linux or whatever.

This is one reason why OpenAL Soft hasn't gained traction.

Additionally, many companies prefer a higher-level API for audio. Something like FMOD, which contains resource management, real-time tuning, parameter-based mixing, etc.


I also miss my Aureal sound card from ages ago :(

So, what you are saying is, the other libraries are better for resources, but not so good for the audio quality per se?
 

Waikis

Member
I was using this thing for a while: http://www.smyth-research.com/technology.html. It measures and records your HRTF.

The original purpose of the device is to record the sound characteristics of a speaker setup and apply that HRTF to your headphones so that it sounds exactly like the speaker setup.

If you really want your own HRTF, I guess this can be a solution.
 

Mindlog

Member
Aureal owners support thread :]
Dolby Headphone works very well for this and it's compatible with any multi-channel sound output. A lot of people who demand this kind of thing buy an outboard box: you connect your sound output to it, it encodes the multi-channel signal to Dolby Headphone, and then you plug your headphones into the box.
I own one and use it almost 100% of the time when gaming with headphones, but it's still not as good as what once was.
 
It's so weird to me that OpenAL is still a thing. I created the base API model when at Loki, Bernd wrote the spec, my best friend Joe wrote the original Linux implementation, and Creative contributed their implementation on Windows. I remember going up there and meeting Keith Charley and the EAX team and what not. So long ago.

Oculus is pushing research into 3D audio once again, HRTFs, etc.

Oh, neat! Guess I have you and Joe to thank for getting bonus marks at uni for my 3D graphics assignments. There were always extra points up for having sound, and since they had to run on Linux boxes and I wanted them to be as platform-independent as possible, OpenAL seemed an obvious choice. It was also a piece of cake to get up and running. So thanks!

So, what you are saying is, the other libraries are better for resources, but not so good for the audio quality per se?

Not exactly what he's saying. Being higher-level essentially means they hide some of the more complicated details and also have ready-made ways of, e.g., loading and managing all your sounds. With OpenAL, for instance, you have to write the methods that load a compressed .OGG file into memory and decode it into raw PCM for OpenAL to play back. What OpenAL provides is basically just the means of playing back a sound in stereo or with a 3D position, so you have to build the extra management features on top of that yourself. Note that this is not exclusive to OpenAL; APIs like this (e.g. OpenGL too) generally don't provide data structures or file and memory management either. That's why you find higher-level APIs or engines built on top of them.
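To make that concrete, once you've decoded the .OGG into raw PCM yourself, the "bare" OpenAL layer looks roughly like this (just a sketch; the pcm/pcm_bytes/sample_rate values stand in for whatever your own loader produced):

#include <AL/al.h>
#include <AL/alc.h>

// pcm/pcm_bytes/sample_rate come from your own decoder --
// OpenAL itself won't load or decode files for you.
void play_decoded_sound(const short *pcm, int pcm_bytes, int sample_rate)
{
    ALCdevice  *dev = alcOpenDevice(NULL);           // default output device
    ALCcontext *ctx = alcCreateContext(dev, NULL);
    alcMakeContextCurrent(ctx);

    ALuint buf, src;
    alGenBuffers(1, &buf);
    alBufferData(buf, AL_FORMAT_MONO16, pcm, pcm_bytes, sample_rate);

    alGenSources(1, &src);
    alSourcei(src, AL_BUFFER, buf);
    alSource3f(src, AL_POSITION, 2.0f, 0.0f, -1.0f); // position it in 3D space
    alSourcePlay(src);                               // OpenAL handles mixing/spatialization
    // A real program would poll AL_SOURCE_STATE and clean up afterwards.
}

Everything above that -- streaming, sound banks, prioritization -- is on you, which is exactly what FMOD and friends sell.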
 

Arkanius

Member
Oh, neat! Guess I have you and Joe to thank for getting bonus marks at uni for my 3D graphics assignments. There were always extra points up for having sound, and since they had to run on Linux boxes and I wanted them to be as platform-independent as possible, OpenAL seemed an obvious choice. It was also a piece of cake to get up and running. So thanks!



Not exactly what he's saying. Being higher-level essentially means they hide some of the more complicated details and also have ready-made ways of, e.g., loading and managing all your sounds. With OpenAL, for instance, you have to write the methods that load a compressed .OGG file into memory and decode it into raw PCM for OpenAL to play back. What OpenAL provides is basically just the means of playing back a sound in stereo or with a 3D position, so you have to build the extra management features on top of that yourself. Note that this is not exclusive to OpenAL; APIs like this (e.g. OpenGL too) generally don't provide data structures or file and memory management either. That's why you find higher-level APIs or engines built on top of them.

Developers also complain about OpenGL compared to DirectX as well, but I think that's for a different reason.
Yeah, what you said makes sense, but it's funny watching indies adopt OpenAL more than studios, probably because it's open source and cuts down on the work.
 

NekoFever

Member
With all the people who go out and buy an expensive TV and then use its weak, shitty onboard speakers for audio, it's no surprise audio tech in games has gone backwards.

And as TVs get thinner, with smaller bezels, there's less space for speakers, which will mean they get even worse.
 
Bad audio makes me sad. I don't have high-end gear -- just $100 headphones and onboard sound -- but it's enough to tell that a lot of people are doing it wrong. A friend of mine who went to film school told me that in film, you always save money for the audio, since it's extremely evocative, and very cost-efficient, when done right. I wish more people were like him and cared about this stuff.

Isn't everyone's HRTF different? Is there any software that will let users calibrate to their specific head, perhaps by playing test tones and letting the user adjust perceived direction with a slider or something? Then the games could use that created profile to deliver true positional audio?

Yep, everyone's function is different, but you can get good enough with a few common presets. Good enough until you try and pass off the illusion in VR. Related -- a friend of mine did some research on using depth cameras to determine ear & head shape (which determine the HRTF function & tables). Yep -- an actual use for Kinect.

It's something that sounds so weird. For example, Unreal 4 supports OpenAL on Linux/OSX (and that is a very, very recent engine) but not on Windows. u w0t m8.

Unity uses FMOD, which is a shame; Unity is pretty popular with indies nowadays.

Hearing that Unreal uses the better API makes me like it even more. Why aren't they using OpenAL on Windows? What are they using instead? Does that do positional audio OK?
 

Arkanius

Member
Bad audio makes me sad. I don't have high-end gear -- just $100 headphones and onboard sound -- but it's enough to tell that a lot of people are doing it wrong. A friend of mine who went to film school told me that in film, you always save money for the audio, since it's extremely evocative, and very cost-efficient, when done right. I wish more people were like him and cared about this stuff.



Yep, everyone's function is different, but you can get good enough with a few common presets. Good enough until you try and pass off the illusion in VR. Related -- a friend of mine did some research on using depth cameras to determine ear & head shape (which determine the HRTF function & tables). Yep -- an actual use for Kinect.



Hearing that Unreal uses the better API makes me like it even more. Why aren't they using OpenAL on Windows? What are they using instead? Does that do positional audio OK?

Have no idea. Is it some contractual agreement?
https://answers.unrealengine.com/questions/64480/a-question-about-sound-class-management.html

XAudio2 is bad in my opinion. It's very barebones in every game I have played. Most games that use it are console ports as well. XAudio2 looks at your Windows speaker setup and gives you 2.0/5.1/7.1 audio streams depending on the setup, and that's it.

Sony, at least with their first-party titles, has fantastic audio support, with dynamic range options and proper speaker setup.
 

Lazaro

Member
It's a shame because BioShock 1 on PC was the last PC game I recall putting emphasis not just on the next-gen DX10 graphics but on the amazing sound technology as well. It sounded quite good too.

From what I understand, because most PC games are console ports and because OpenAL is a pain to implement, most devs just slap XAudio/XAudio2 into their games because it's easier to implement and much more stable on Windows.

I found this on MSDN and it seems to be Microsoft's implementation of HRTF.
It's called X3DAudio. I'm not sure if it's inferior to OpenAL since apparently it uses a Doppler effect as opposed to calculations (I'm not a programmer/dev in any way :p), but apparently it's new from Microsoft.
 

CTLance

Member
RIP Aureal. :(

Anyway, I also blame Microsoft for their move to a userspace audio stack in Vista, which positively obliterated legacy surround sound options and required complete driver rewrites from audio chip makers. The end result was barebones stereo support even for chips that could support positional audio, because anything more was way too much of a hassle. It greatly impeded the already faltering adoption of 3D sound (caused by Creative buying, suing, or otherwise hindering any competition) and made a whole bunch of otherwise perfectly fine legacy hardware worthless in one fell swoop.

Also, kinda sorta related: remember NVIDIA's SoundStorm in the nForce1/2 chipsets? That was a nice onboard feature; 5.1 upmixing was unheard of for a shoddy onboard chip back in the day.
 

Arkanius

Member
It's a shame because BioShock 1 on PC was the last PC game I recall putting emphasis not just on the next-gen DX10 graphics but on the amazing sound technology as well. It sounded quite good too.

From what I understand, because most PC games are console ports and because OpenAL is a pain to implement, most devs just slap XAudio/XAudio2 into their games because it's easier to implement and much more stable on Windows.

I found this on MSDN and it seems to be Microsoft's implementation of HRTF.
It's called X3DAudio. I'm not sure if it's inferior to OpenAL since apparently it uses a Doppler effect as opposed to calculations (I'm not a programmer/dev in any way :p), but apparently it's new from Microsoft.

BioShock 1 used EAX effects for Creative users, and I think it used OpenAL.
 

Calabi

Member
This annoys me as well. It's why I always bought sound cards: I want decent sound. It's great when a game has decent directional sound; it raises it to the next level. Everything sounds so flat now.

I don't bother with sound cards anymore because none of them are that good, and very few games do anything with them. I don't even understand how a single company can hold the industry back that much. Even if no one else can do proper 3D positional sound (which is ridiculous in my opinion), we barely even have the environment affecting the sounds, or your distance and position affecting how things sound (can't remember the exact terms).

Why don't even the consoles do it? Surely Sony can implement their own audio API that does all this; every console game should have 3D sound with a headphone option. I'm playing Demon's Souls and the audio in that sounds awful.
 

efyu_lemonardo

May I have a cookie?
Third page, one view.
Seems no one cares; I think the question is answered.

I still care. I just don't even know which company has the desire or the ability to pick up the gauntlet and bring digital audio and proper audio acceleration on PC to the 21st century.

The problem isn't exclusive to gaming either, or even to PC. There are no universal standards to replace MIDI on the protocol side, there's a lack of new hardware standards for low-latency device connectivity, and I'm sure the list goes on.

It's terribly sad, having to rely on 20 year old hacks and hardware that is no longer supported. We have so much more computing power than we used to have, yet practically no one is interested in harnessing it to try to make any big leaps in audio anymore.
 

Arkanius

Member
I still care. I just don't even know which company has the desire or the ability to pick up the gauntlet and bring digital audio and proper audio acceleration on PC to the 21st century.

The problem isn't exclusive to gaming either, or even to PC. There are no universal standards to replace MIDI on the protocol side, there's a lack of new hardware standards for low-latency device connectivity, and I'm sure the list goes on.

It's terribly sad, having to rely on 20 year old hacks and hardware that is no longer supported.

It's sad that only the movie industry focuses on sound, because they know it's half of what grips you in the medium.
Meanwhile, in gaming...

But yes, trying to get good audio quality in games nowadays feels like trying to unearth Egyptian artifacts.
 
Yep, everyone's function is different, but you can get good enough with a few common presets. Good enough until you try and pass off the illusion in VR. Related -- a friend of mine did some research on using depth cameras to determine ear & head shape (which determine the HRTF function & tables). Yep -- an actual use for Kinect.

Not just the personal HRTF, but also the headphones' relation to it. For example, in-ear headphones and big, ear-enclosing headphones would need different functions to be perfect.

But yeah, I did a very basic HRTF implementation (with a generic HRTF) in Matlab for a university course and it seemed to work reasonably well for most people who tried it.
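For anyone curious, the core of that kind of basic implementation is just convolving the mono input with a measured left/right impulse response pair (HRIR) for the desired direction. A toy version of the same idea in C (the HRIR arrays would come from some measured dataset such as the MIT KEMAR set; that loading part is left out, and this is one fixed direction with no interpolation):

// Toy binaural renderer: convolve a mono signal with left/right
// head-related impulse responses (HRIRs) for one fixed direction.
void render_binaural(const float *mono, int num_samples,
                     const float *hrir_l, const float *hrir_r, int taps,
                     float *out_l, float *out_r)
{
    for (int i = 0; i < num_samples; i++) {
        float l = 0.0f, r = 0.0f;
        for (int k = 0; k < taps && k <= i; k++) {
            l += hrir_l[k] * mono[i - k];   // plain time-domain FIR convolution,
            r += hrir_r[k] * mono[i - k];   // one impulse response per ear
        }
        out_l[i] = l;
        out_r[i] = r;
    }
}

Real implementations do this per source, switch or interpolate HRIRs as the direction changes, and usually run the convolution in the frequency domain for speed.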
 
Holy shit
Tell us more, I am interested in this.

Not sure what would be interesting to the thread. For whatever reason Loki was committed to cross-platform standards, which created some amusing arguments internally that the entire point of porting was leveraging native platform functionality, and why were we putting all this energy into making cross-platform libraries like SDL (my old boss was Sam Lantinga, who went on to Blizzard to design the UI for World of Warcraft and still maintains SDL), OpenAL, etc.

The base object model for OpenAL is obviously highly influenced by OpenGL in terms of the integer/handle based opacity of the C-style API, but trying to reduce surface area in terms of actual function entrypoints by coalescing common functionality like object creation/deletion/attribution management. We also kept the context and utility layers in a separate namespace wart, but mandatory and cross-platform to avoid the wgl/glx messiness. Similar with a formal extension query API. I wanted the spatialization to be 'ideal' in a sense, that you would specify a virtual 3D audio space and let the stack/hardware realize the spatialization, but to a large extent that was naivete on my part (I was really young!) given how important multi-channel background audio, etc., is to games, and how it doesn't really fit into the spatialization model.
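In practice the handle/attribute pattern I'm describing looks like this (nothing exotic, just standard AL calls; the extension-string check is the sort of thing games did for EAX, IIRC):

#include <AL/al.h>

// The handle/attribute pattern: a few Gen/Delete entrypoints plus generic
// typed setters, instead of one function per property.
static void object_model_example(void)
{
    ALuint src;
    alGenSources(1, &src);              // creation is coalesced: one entrypoint, N handles
    alSourcef(src, AL_GAIN, 0.5f);      // attribution goes through generic typed setters
    alSourcei(src, AL_LOOPING, AL_TRUE);
    alDeleteSources(1, &src);           // deletion mirrors creation

    // Formal extension query instead of per-platform glue a la wgl/glx:
    if (alIsExtensionPresent("EAX2.0")) {
        /* Creative's EAX path, where available */
    }
}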

Mr_Appleby said:
Oh, neat! Guess I have you and Joe to thank for getting bonus marks at uni for my 3D graphics assignments.

Well, thank Joe (Valenzuela, who later went on to work at Treyarch with me and has been at Insomniac for many years now) and then later of course Ryan (Gordon, of icculus.org fame, and who has been almost single-handedly carrying the Loki/Linux games torch). I didn't do terribly much implementation work, and it's still weird to me, as I mentioned, when I find OpenAL on OS X, etc.
 

efyu_lemonardo

May I have a cookie?
I wanted the spatialization to be 'ideal' in a sense, that you would specify a virtual 3D audio space and let the stack/hardware realize the spatialization, but to a large extent that was naivete on my part (I was really young!) given how important multi-channel background audio, etc., is to games, and how it doesn't really fit into the spatialization model.

Man, that could've been amazing for synthetic audio nowadays, as well as sound acceleration of course.
 
Man, that could've been amazing for synthetic audio nowadays, as well as sound acceleration of course.

Well, that's how the API does work; it just was problematic and there were lots of friction points. You specify a listener position/orientation, and then each audio object has similar attributes plus a mono PCM encoding representing that discrete audio event, and the stack needs to sort out the spatialization for the output configuration and mixing. But the problem was how you fit "I want to play a 5.1 BGM track" into this API model, and "also, I want to stream it since it's not an explosion or a piece of dialogue and it's enormous". Thus alBufferData and the implicit format non-spatialization bits, etc. (actually maybe we had a spatialization attribute and otherwise we downmixed multi-channel to mono and then re-spatialized it? can't remember).

HRTFs are fairly straightforward and easily generalizable so that's not a super crazy area that's hard to predict. More difficult stuff is sound tracing in a virtualized 3D collision environment. That stuff is hard to abstract, find a good sense of geometric density, come up with a sane translation layer between game and library acceleration structures for collision, etc. Microsoft did some interesting research recently in pre-generating convolutions for static environments offline.
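For reference, the listener/source model described a couple of paragraphs up boils down to roughly this (hand-wavy sketch; the source handles and position values are assumed to exist elsewhere, and the BGM workaround shown is just one common approach, not necessarily what the spec settled on):

#include <AL/al.h>

// Hand-wavy sketch: one listener, mono events positioned in 3D,
// and the stack sorts out spatialization and mixing.
static void update_audio_frame(float px, float py, float pz,
                               ALuint explosion_src, float ex, float ey, float ez,
                               ALuint music_src)
{
    ALfloat orientation[6] = { 0.0f, 0.0f, -1.0f,   // "at" vector
                               0.0f, 1.0f,  0.0f }; // "up" vector
    alListener3f(AL_POSITION, px, py, pz);
    alListenerfv(AL_ORIENTATION, orientation);

    alSource3f(explosion_src, AL_POSITION, ex, ey, ez); // a discrete mono event in 3D
    alSourcePlay(explosion_src);                        // the stack/hardware spatializes and mixes it

    // The "BGM track" friction point: one common workaround is to opt a source
    // out of spatialization and treat its buffer as pre-mixed output.
    alSourcei(music_src, AL_SOURCE_RELATIVE, AL_TRUE);
    alSource3f(music_src, AL_POSITION, 0.0f, 0.0f, 0.0f); // pinned to the listener
    alSourcePlay(music_src);
}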
 

efyu_lemonardo

May I have a cookie?
More difficult stuff is sound tracing in a virtualized 3D collision environment. That stuff is hard to abstract, find a good sense of geometric density, come up with a sane translation layer between game and library acceleration structures for collision, etc. Microsoft did some interesting research recently in pre-generating convolutions for static environments offline.

My bad, I thought you were referring to a level of abstraction closer to this. Basically a kind of built in 'ray-tracing' simulation for audio where you'd have a high degree of control over the environmental parameters.
 
My bad, I thought you were referring to a level of abstraction closer to this. Basically a kind of built in 'ray-tracing' simulation for audio where you'd have a high degree of control over the environmental parameters.

Aureal attempted this IIRC, but they ran into all of the same problems I mentioned above in terms of finding the right API abstraction to be performant, not to mention the enormous work of bootstrapping a software stack alongside a new piece of non-essential hardware for a niche market. Also, hilariously, I believe their hardware didn't accelerate the occlusion/reflection tracing; that was actually entirely software-based, and only the application of the final calculated factors as part of the DSP/mix was handled in hardware. Cf. http://members.optushome.com.au/kirben/A3D2_0.pdf for a trip down memory lane.
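To illustrate that split: the tracing against geometry ran on the CPU, and what the DSP got handed was basically per-source gain/filter factors. A crude sketch of the software side (scene_raycast is a made-up stand-in for the game's own collision query, and the numbers are arbitrary):

#include <AL/al.h>

typedef struct { float x, y, z; } vec3;

// Hypothetical hook into the game's collision system: returns nonzero
// if the straight line from a to b is blocked by geometry.
int scene_raycast(vec3 a, vec3 b);

// Crude occlusion pass: trace source -> listener, turn the result into a
// gain factor, and let the mixer/DSP apply it.
void update_occlusion(vec3 listener, const ALuint *sources,
                      const vec3 *positions, int count)
{
    for (int i = 0; i < count; i++) {
        float gain = scene_raycast(positions[i], listener) ? 0.35f : 1.0f;
        alSourcef(sources[i], AL_GAIN, gain);  // real systems also adjust filtering, reflections, etc.
    }
}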

If you're interested in this I would watch very closely how some of the rendering acceleration work right now is developing:

https://en.wikipedia.org/wiki/OpenRL

I don't see a way to get the API docs publicly though.
 

Arkanius

Member
Aureal attempted this IIRC, but they ran into all of the same problems I mentioned above in terms of finding the right API abstraction to be performant, not to mention the enormous work of bootstrapping a software stack alongside a new piece of non-essential hardware for a niche market. Also, hilariously, I believe their hardware didn't accelerate the occlusion/reflection tracing; that was actually entirely software-based, and only the application of the final calculated factors as part of the DSP/mix was handled in hardware. Cf. http://members.optushome.com.au/kirben/A3D2_0.pdf for a trip down memory lane.

If you're interested in this I would watch very closely how some of the rendering acceleration work right now is developing:

https://en.wikipedia.org/wiki/OpenRL

I don't see a way to get the API docs publicly though.

Is anyone in the industry considering applying ray tracing to real-time graphics?
 

Mindlog

Member
Some really interesting reads on sound up above. Thanks for posting it!
RIP Aureal. :(

Anyway, I also blame Microsoft for their move to a userspace audio stack in Vista, which positively obliterated legacy surround sound options and required complete driver rewrites from audio chip makers. The end result was barebones stereo support even for chips that could support positional audio, because anything more was way too much of a hassle. It greatly impeded the already faltering adoption of 3D sound (caused by Creative buying, suing, or otherwise hindering any competition) and made a whole bunch of otherwise perfectly fine legacy hardware worthless in one fell swoop.

Also, kinda sorta related: remember NVIDIA's SoundStorm in the nForce1/2 chipsets? That was a nice onboard feature; 5.1 upmixing was unheard of for a shoddy onboard chip back in the day.
Also a lot of stuff I agree with. Microsoft's move back then struck a pretty big blow to the segment. NVIDIA's SoundStorm was great and I held onto that mobo for way longer than I would have otherwise. I'm sort of hoping that sound will get rolled into the GPU side of things; AMD/NVIDIA have a little more interest in pushing tech forward than the traditional sound hardware outlets.
 

TheD

The Detective
I still care. I just don't even know which company has the desire or the ability to pick up the gauntlet and bring digital audio and proper audio acceleration on PC to the 21st century.

The problem isn't exclusive to gaming either, or even to PC. There are no universal standards to replace MIDI on the protocol side, there's a lack of new hardware standards for low-latency device connectivity, and I'm sure the list goes on.

It's terribly sad, having to rely on 20 year old hacks and hardware that is no longer supported. We have so much more computing power than we used to have, yet practically no one is interested in harnessing it to try to make any big leaps in audio anymore.

Uhh, digital audio is what the PC deals with; it cannot be anything else, and hardware audio acceleration is pointless in this day and age.
 

Arkanius

Member
Bumping this thread. Found it on Google as 2nd hit when searching on advancements in the area...

It's now 2017 and we are still dealing with badly mixed games, such as Bethesda's most recent titles (Doom, Prey, Wolfenstein, etc.).

Binaural audio is still fucked up. It's a little better since some games started implementing HRTF algorithms (CS:GO, Overwatch), but we are still in the Medieval age.
 

Karak

Member
Well, that's how the API does work; it just was problematic and there were lots of friction points. You specify a listener position/orientation, and then each audio object has similar attributes plus a mono PCM encoding representing that discrete audio event, and the stack needs to sort out the spatialization for the output configuration and mixing. But the problem was how you fit "I want to play a 5.1 BGM track" into this API model, and "also, I want to stream it since it's not an explosion or a piece of dialogue and it's enormous". Thus alBufferData and the implicit format non-spatialization bits, etc. (actually maybe we had a spatialization attribute and otherwise we downmixed multi-channel to mono and then re-spatialized it? can't remember).

HRTFs are fairly straightforward and easily generalizable so that's not a super crazy area that's hard to predict. More difficult stuff is sound tracing in a virtualized 3D collision environment. That stuff is hard to abstract, find a good sense of geometric density, come up with a sane translation layer between game and library acceleration structures for collision, etc. Microsoft did some interesting research recently in pre-generating convolutions for static environments offline.

mistaken post from bump. Ignore
 

cragarmi

Member
Ninja Theory have been talking up how good their 3D audio is and how it adds to their game Hellblade. Looking forward to seeing if it does what it says on the tin.
 
As of the Creators Update for Win10, HRTFs / spatial audio are now built into Windows. Right-click the speaker icon -> Spatial Sound and select Windows Sonic for Headphones.

The only "trick" you need to do in games is to generally switch them to 5.1 or 7.1 mixes. The way Sonic works is that if the audio engine is given a stereo (or mono) signal, it doesn't run it through HRTF filters since it assumes the audio is given in that format for a reason and doesn't want to make it seem unnecessarily artificial (examples: voice calls, stereo music, etc).

There's also the new Spatial Audio API so developers can add native support for spatial audio without worrying about underlying implementation, that way users can switch from Windows Sonic to Dolby Atmos and all of the software using Spatial Audio works just as well. It's the same API which is probably used on Xbox as well, so it's likely it will start picking up steam.
 

Paragon

Member
As of the Creators Update for Win10, HRTFs / spatial audio are now built into Windows. Right-click the speaker icon -> Spatial Sound and select Windows Sonic for Headphones.
The only "trick" you need to do in games is to generally switch them to 5.1 or 7.1 mixes. The way Sonic works is that if the audio engine is given a stereo (or mono) signal, it doesn't run it through HRTF filters since it assumes the audio is given in that format for a reason and doesn't want to make it seem unnecessarily artificial (examples: voice calls, stereo music, etc).
There's also the new Spatial Audio API so developers can add native support for spatial audio without worrying about underlying implementation, that way users can switch from Windows Sonic to Dolby Atmos and all of the software using Spatial Audio works just as well. It's the same API which is probably used on Xbox as well, so it's likely it will start picking up steam.
Is there a list of games which support this?
I'm also curious if anyone knows what differentiates Dolby Atmos from Windows Sonic, considering that Windows Sonic is free, while Atmos is not.
Windows Sonic appears to handle significantly more objects than Atmos can: 128 objects vs 32.

As for the virtual surround component, I wish there was some sort of indicator to show that it is working.
Most games that actually list the number of channels they are using seem to detect it as a stereo device, not 7.1
Same thing for Atmos - there's no indicator to show that it's working.
 

Arkanius

Member
Is there a list of games which support this?
I'm also curious if anyone knows what differentiates Dolby Atmos from Windows Sonic, considering that Windows Sonic is free, while Atmos is not.
Windows Sonic appears to handle significantly more objects than Atmos can: 128 objects vs 32.

As for the virtual surround component, I wish there was some sort of indicator to show that it is working.
Most games that actually list the number of channels they are using seem to detect it as a stereo device, not 7.1
Same thing for Atmos - there's no indicator to show that it's working.

I still haven't found a good game to test Windows Sonic with.
I always got 2.1 sound instead of virtual 5.1/7.1.
 

OuiOuiBa

Member
Has anyone successfully enabled HRTF in Linux games using the instructions from the Reddit thread linked in the OP?

I followed the instructions and also overrode the Steam runtime's libopenal with mine (1.17.2): I installed both the amd64 and i386 libopenal packages using apt and then changed the symlinks Steam was using (for both). I can confirm Half-Life 2 has loaded the more recent OpenAL Soft library:
me@debian:~$ cat /proc/`pidof hl2_linux`/maps | grep libopenal
e38d4000-e3956000 r-xp 00000000 08:13 533762 /usr/lib/i386-linux-gnu/libopenal.so.1.17.2
e3956000-e3957000 ---p 00082000 08:13 533762 /usr/lib/i386-linux-gnu/libopenal.so.1.17.2
e3957000-e3959000 r--p 00082000 08:13 533762 /usr/lib/i386-linux-gnu/libopenal.so.1.17.2
e3959000-e395a000 rw-p 00084000 08:13 533762 /usr/lib/i386-linux-gnu/libopenal.so.1.17.2

I executed the recommended in-game console commands prior to playing, but the effect is pretty much not there.
To rule out any bias, I recorded gameplay sound (using record/playdemo; I deliberately rotated the camera, nodded the head, etc. around sound sources) to compare both the sound and the waveform inside Audacity, only to confirm they were pretty much the same, obviously.
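A more direct check than comparing waveforms would be asking OpenAL Soft itself whether HRTF is active on the device, via its ALC_SOFT_HRTF extension (it should exist in 1.17.x; the tokens come from OpenAL Soft's alext.h, and I haven't verified this exact snippet, so treat it as a sketch):

#include <stdio.h>
#include <AL/alc.h>
#include <AL/alext.h>   // OpenAL Soft extension tokens, including ALC_HRTF_SOFT

int main(void)
{
    ALCdevice *dev = alcOpenDevice(NULL);
    if (!dev) return 1;

    if (alcIsExtensionPresent(dev, "ALC_SOFT_HRTF")) {
        ALCint hrtf_enabled = 0;
        alcGetIntegerv(dev, ALC_HRTF_SOFT, 1, &hrtf_enabled);   // ALC_TRUE if HRTF mixing is on
        printf("HRTF mixing enabled: %s\n", hrtf_enabled ? "yes" : "no");
    } else {
        printf("ALC_SOFT_HRTF extension not available on this device\n");
    }

    alcCloseDevice(dev);
    return 0;
}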
Is it supposed to work with HL2 at all? While the Reddit post author obviously seems more knowledgeable about Linux than me, I have some doubts about the accuracy of the "more or less ANYTHING which uses 3D sound" statement. Aren't there games that use FMOD, Miles or something else, provide bad/poor information to OpenAL, or need tweaking (like these console commands for Source games)?

Have you tried it, and which games worked for you? Thanks :)

PS. happy new binaural year
 

ancelotti

Member
Another option for PC gamers is this virtual sound card: https://spatialsoundcard.com - it's tuned by Tom Ammermann, so I imagine it's roughly on par with Windows Sonic and Atmos. Razer also has their own solution that they were giving away for a while. Basically, there are plenty of options, some of them free, and ever since onboard audio became good enough, the mainstream market has moved on.
 