Google today released a new spatial audio software development kit called ‘Resonance Audio’, a cross-platform tool based on technology from their existing VR Audio SDK. Resonance Audio aims to make VR and AR development easier across mobile and desktop platforms.
Google’s spatial audio support for VR is well established: the company introduced the technology to the Cardboard SDK in January 2016 and brought their audio rendering engine to the main Google VR SDK in May 2016, which then saw several improvements in the Daydream 2.0 update earlier this year. Google’s existing VR SDK audio engine already supported multiple platforms, but relied on platform-specific documentation to implement its features. In February, a post on Google’s official blog recognised the “confusing and time-consuming” battle of working with various audio tools, and described the development of streamlined FMOD and Wwise plugins for multiple platforms on both Unity and Unreal Engine.
The new Resonance Audio SDK consolidates these efforts, working ‘at scale’ across mobile and desktop platforms, which should simplify development workflows for spatial audio in any VR/AR game or experience. According to the press release provided to Road to VR, the new SDK supports “the most popular game engines, audio engines, and digital audio workstations” running on Android, iOS, Windows, MacOS, and Linux. Google are providing integrations for “Unity, Unreal Engine, FMOD, Wwise, and DAWs,” along with “native APIs for C/C++, Java, Objective-C, and the web.”
This broader cross-platform support means developers can implement one sound design for an experience and expect it to perform consistently on both mobile and desktop. To achieve this on mobile, where CPU resources for audio are often very limited, Resonance Audio scales its performance using “highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources, without compromising audio quality.” A new Unity feature for precomputing reverb effects for a given environment also ‘significantly reduces’ CPU usage during playback.
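The announcement doesn’t detail the SDK’s internals, but the Ambisonics principle it builds on can be sketched: a mono source at a given direction is encoded into spherical-harmonic channels, and those channels (not the individual sources) are later decoded for the listener’s head orientation. Below is a minimal first-order illustration (ACN channel order, SN3D normalization), purely for intuition and not Resonance Audio’s actual code:

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode a mono sample into first-order Ambisonics channels
    (ACN order: W, Y, Z, X; SN3D normalization).

    azimuth/elevation are in radians; azimuth 0 is straight ahead.
    """
    w = sample  # omnidirectional component
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    x = sample * math.cos(azimuth) * math.cos(elevation)
    return [w, y, z, x]

# A source directly ahead lands entirely in the W and X channels:
print(encode_first_order(1.0, 0.0, 0.0))  # [1.0, 0.0, 0.0, 1.0]
```

Because every source is mixed into the same fixed set of channels, the per-source cost is just these weight multiplies; higher orders add channels (and spatial sharpness) rather than per-source expense, which is why this approach scales to “hundreds of simultaneous 3D sound sources.”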
Much like the existing VR Audio SDK, Resonance Audio can model complex sound environments, allowing control over the direction of acoustic wave propagation from individual sound sources. The width of each source can be specified, from a single point to a wall of sound. The SDK also automatically renders near-field effects for sound sources within arm’s reach of the user, accounting for acoustic diffraction as sound waves travel across the head; combined with precise HRTFs, this improves the positional accuracy of close sound sources. The team have also released an ‘Ambisonic recording tool’ that spatially captures sound design directly within Unity and saves it to a file for use elsewhere, such as game engines or YouTube videos.
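The announcement doesn’t specify the SDK’s directivity math. As an illustration only, a common parametric model for source directivity blends an omnidirectional pattern with a dipole and raises the result to a sharpening exponent; the parameter names below are assumptions for the sketch, not Resonance Audio’s API:

```python
import math

def directivity_gain(theta, alpha=0.5, order=1.0):
    """Gain of a parametric directivity pattern at angle theta
    (radians off the source's forward axis).

    alpha = 0   -> omnidirectional (constant gain in all directions)
    alpha = 0.5 -> cardioid (silent directly behind the source)
    alpha = 1   -> figure-eight (dipole)
    order sharpens the lobe toward the forward axis.
    """
    return abs((1.0 - alpha) + alpha * math.cos(theta)) ** order

# Cardioid: full gain straight ahead, zero directly behind.
print(directivity_gain(0.0))      # 1.0
print(directivity_gain(math.pi))  # 0.0
```

Two scalar controls of this kind are enough to sweep a source from a point radiator to a strongly beamed one, which is the sort of per-source shaping the article describes.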
Resonance Audio documentation is now available on the new developer site.
For PC VR users, Google just dropped Audio Factory on Steam, letting Rift and Vive owners get a taste of an experience built with the new Resonance Audio SDK. A Daydream version is also available.
The post Google Releases ‘Resonance Audio’, a New Multi-Platform Spatial Audio SDK appeared first on Road to VR.