Google's Realistic Resonance Audio Integrates With Unity


Web giant Google and game-engine maker Unity on Monday announced full integration for Resonance Audio, along with a new SDK and accompanying libraries meant to bring the characteristics of real-world sound to VR and other 3D use cases that benefit from realistic, atmospheric audio. Resonance Audio can render 3D sound from many simultaneous sources at a low CPU cost, and it lets those sources react to the environment in real time: sounds can be muffled or occluded by objects, take on the mapped-out reverberation of different spaces, and cut out or kick in abruptly in response to what is happening around the listener.

Resonance is now available to all developers, and the technology behind it already powers a number of top Unity applications, as well as 3D YouTube videos that use spatial audio. Because Resonance Audio plugs into Unity's core audio pipeline, it is available on all Unity-compatible platforms, including Android, iOS, Vive, Rift, and even game consoles.

Resonance Audio works by encoding conventional sound sources into a compact ambisonic representation that is far cheaper to manipulate computationally. Because every source ends up in the same representation, the engine can apply one universal set of tools to all sounds, treating them uniformly the way real-world acoustics does. Developers pick where sounds come from and assign resonance characteristics to objects and set pieces in the environment, and the SDK does the rest. Implemented well, the result is a smooth, realistic audio experience, Google and Unity suggested. Developers can also deliberately apply unnatural resonance and environmental characteristics to create surreal experiences that play with users' perception; the effect is usually jarring in VR and 3D applications, but it remains a legitimate creative option.
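The ambisonic encoding step described above can be sketched in plain Python. The first-order B-format formulas below (channels W, X, Y, Z) are standard ambisonics math rather than code from the Resonance SDK, and the function names are illustrative; the sketch shows why the representation is cheap to manipulate, since re-aiming the entire soundfield for a head turn is a single small rotation, no matter how many sources were encoded.

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode one mono sample into first-order ambisonic B-format.

    azimuth: radians, counterclockwise from straight ahead.
    elevation: radians above the horizontal plane.
    Returns the four B-format channels (W, X, Y, Z).
    """
    w = sample * math.sqrt(0.5)                           # omnidirectional
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    return w, x, y, z

def rotate_yaw(w, x, y, z, angle):
    """Rotate the whole soundfield when the listener turns their head.

    One small rotation re-aims every encoded source at once, which is
    why the ambisonic form is computationally cheap to manipulate.
    """
    x2 = x * math.cos(angle) + y * math.sin(angle)
    y2 = -x * math.sin(angle) + y * math.cos(angle)
    return w, x2, y2, z
```

Encoding a source dead ahead (`encode_first_order(1.0, 0.0, 0.0)`) puts all of the directional energy into the front-back channel; rotating the field by 90 degrees moves that energy into the left-right channel, exactly as if the listener had turned their head.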

As a bonus, the SDK includes an Ambisonic Soundfield authoring tool and an Ambisonic Decoder, all but eliminating the barrier to creating and using ambisonic sound resources. Developers can author their own ambisonic clips, export them, or convert ordinary sound clips into ambisonic ones. These can then be layered with spatialized and static audio clips for a multi-layered listening experience: a static layer for a game's background music, a spatialized layer for speech coming from a character's cell phone, and ambisonic audio through the Resonance SDK for environmental and gameplay sounds, to highlight just one possible use case.
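A decoder's job is the inverse of encoding: projecting the B-format channels onto actual output channels. The sketch below is a minimal first-order decode for one horizontal speaker (or virtual speaker); it is textbook ambisonics math for illustration, not the Resonance decoder's actual algorithm, and the final lines show how a decoded ambisonic bed can simply be summed with a static music layer, in the spirit of the layering use case above.

```python
import math

def decode_first_order(w, x, y, z, speaker_azimuth):
    """Feed for one horizontal speaker, derived from B-format channels.

    speaker_azimuth: radians, counterclockwise from straight ahead.
    Uses a simple cardioid-weighted decode; production decoders (and
    binaural renderers like Resonance's) are considerably fancier.
    """
    return 0.5 * (math.sqrt(2.0) * w
                  + x * math.cos(speaker_azimuth)
                  + y * math.sin(speaker_azimuth))

# A source encoded dead ahead: W = sqrt(1/2), X = 1, Y = Z = 0.
front_source = (math.sqrt(0.5), 1.0, 0.0, 0.0)

front = decode_first_order(*front_source, 0.0)     # speaker in front
rear = decode_first_order(*front_source, math.pi)  # speaker behind

# Layering: sum the decoded ambisonic bed with a static
# (non-spatialized) music sample before output.
music_sample = 0.2
front_out = music_sample + front
```

With this decode, the front speaker receives the full signal while the rear speaker receives none, matching the encoded direction; the static music layer is mixed in identically on every output channel.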

Download Resonance Audio SDK For Unity

Copyright ©2017 Android Headlines. All Rights Reserved.

Senior Staff Writer

Daniel has been writing for Android Headlines since 2015, and is one of the site's Senior Staff Writers. He's been living the Android life since 2010, and has been interested in technology of all sorts since childhood. His personal, educational, and professional backgrounds in computer science, gaming, literature, and music leave him uniquely equipped to handle a wide range of news topics for the site, including machine learning, voice assistants, and AI development in the Android world. Contact him at [email protected]
