Native Audio v5.0.0 (Latest version)
Native Audio – Lower audio latency via direct unmixed audio stream at native side.
For iOS and Android.
Official Website | Ask questions in the forum | Discord
So your Unity game outputs audio WAY later than other apps, even on the same device? It turns out Unity can add as much as 79% of the total latency you hear. Your device can do better.
Unity won’t let you use only a subset of its hard-wired audio pipeline. But with Native Audio we can go native and make as many compromises as we want: simplify, take dirty shortcuts, and aim for the lowest latency, sacrificing the functions and conveniences Unity provides as a friendly game engine.
✴️How is this possible?
► Unity doesn’t initialize the native source with the lowest-latency settings, but with the safest, most defensive ones.
► Unity needs to mix audio internally with FMOD to enable any concurrent audio at all. Otherwise you could only hear one imported AudioClip at a time. (That would suck for a game; we aren’t here to make a music player, and we don’t want a single bullet sound to mute the entire music, either.)
► Unity provides many conveniences worthy of a game engine: AudioMixerGroup, volume faders, effects, limiting, AudioSource priority and spatial positioning, flexible AudioClip import settings that all “just work”, etc. With all of these combined, having to wait at some point is inevitable.
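To put a rough number on the mixing cost: each mixer buffer must be filled before it is handed to the OS, so the buffer length alone sets a latency floor. A back-of-the-envelope sketch (the buffer sizes below are common Unity DSP buffer settings, not measurements from this asset; real pipelines add more stages on top):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int, num_buffers: int = 2) -> float:
    """Latency floor of a double-buffered mixer: each buffer of N samples
    at rate R takes N / R seconds to play out, and num_buffers of them
    are queued ahead of the newest audio."""
    return num_buffers * buffer_samples / sample_rate_hz * 1000.0

# Typical Unity DSP buffer sizes ("Best Latency" = 256, "Best Performance" = 1024)
for size in (256, 512, 1024):
    print(f"{size} samples @ 48 kHz -> {buffer_latency_ms(size, 48000):.1f} ms")
# 256 samples @ 48 kHz -> 10.7 ms
# 512 samples @ 48 kHz -> 21.3 ms
# 1024 samples @ 48 kHz -> 42.7 ms
```

This is only the mixer’s share; input handling, the engine’s frame loop, and the OS output path each add their own slice on top.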
Native Audio allows us to break the norm and go for what would normally be considered crippled for a game engine:
► Request more native sources from the OS for ourselves, completely independent of the one Unity requested and is pumping mixed audio through, with all its bells and whistles.
► Export an AudioClip’s data to a new memory area the native side can read, and give the native side a pointer to that memory. Warm up and ready the source as much as possible, like a runner on the starting block ready to sprint.
► At the critical moment, signal from C# to the native side for our exclusive native source to play that memory. Just run through it. No mixing. We just want to hear this AudioClip with nothing else, the way it sounds when you double-click the file. It plays instantly at this line of code, without even waiting for the end of the frame. The Play call is now much closer to the HAL.
► With multiple instances of native sources requested, we get limited concurrency back while staying unmixed.
► Still keep the Unity way around for audio that isn’t critical but requires higher concurrency. Let it take its time to mix. Selectively play the critical audio in Native Audio’s express lane.
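The “limited concurrency from multiple unmixed sources” idea above can be modeled with a toy round-robin allocator. This is only an illustration of the concept, not Native Audio’s actual allocation code; all names here are made up:

```python
class NativeSourcePool:
    """Toy model: a fixed pool of unmixed native sources.

    Each play claims the next source in round-robin order, so at most
    `size` clips can sound at once; the oldest clip is cut off when its
    source is reused."""

    def __init__(self, size: int = 3):
        self.size = size
        self.next_index = 0
        self.playing = {}  # source index -> clip name

    def play(self, clip: str) -> int:
        index = self.next_index
        self.playing[index] = clip  # reusing a source stops whatever it played before
        self.next_index = (index + 1) % self.size
        return index

pool = NativeSourcePool(size=3)
for clip in ("hit", "coin", "jump", "hit"):
    pool.play(clip)
print(pool.playing)  # the 4th play reused source 0, cutting off the first "hit"
# -> {0: 'hit', 1: 'coin', 2: 'jump'}
```

The trade-off is visible in the model: no mixing means a hard cap on simultaneous sounds, which is why the Unity pipeline is kept around for non-critical, high-concurrency audio.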
Native Audio helps you do this on both iOS and Android, through the same common C# interface.
✴️Various use cases
While being unable to mix greatly reduces flexibility, and the preparation for native playback can be messy, this is very useful for small but critical audio.
Imagine a drumming game where you have to hit at the correct moment. If you hit perfectly and the game says so, the sound still arrives late. If you hit early, the game punishes you, but the sound instead lands exactly on the beat.
Even with something generic like a UI response sound, hearing feedback later than a certain threshold after the finger touches the glass (which has already cost considerable input latency) greatly deteriorates the user experience.
The key is the response sound. If you can predict the future and know precisely when a sound must play, you can always use PlayScheduled. But what if you must wait for input? Wait for the performance judgement? Wait until the character collects that coin before you can decide which sound to play, or whether to play one at all? The objective is to play as fast as possible after that moment of realization. The only solution is Native Audio.
✴️Android High-Performance Audio Ready
It improves latency on iOS too, but I guess many of you came here hoping to fix the horrible Android latency, which was bad even before Unity added its own overhead on top.
I am proud to present that Native Audio follows all of Google’s official best practices for achieving High-Performance Audio on Android. For one thing, Native Audio uses the NDK/C-based OpenSL ES rather than the Java-based MediaPlayer, SoundPool, or AudioTrack. And because Native Audio is not as versatile, I can additionally avoid all the latency-related tradeoffs that Unity has to pay for its more versatile audio engine.
All of this is backed by publicly available, thorough research and confirmation. It means Native Audio can perform even better than a native Android app whose audio was coded naively or lazily; even pure Android developers may not want to leave Java for NDK/C.
✴️How much faster?
This is not a standard latency measurement approach like the loopback-cable method, and the absolute numbers are not comparable between devices. However, the before/after difference on the same device truly shows how much latency Native Audio removes.
XIAOMI MI A2 (2018 / 8.1.0 OREO) 321.2 ms ➡ 78.2 ms (-75.65%)
XPERIA Z5 (2015 / 7.1.1 NOUGAT) 120.6 ms ➡ 69 ms (-42.79%)
LENOVO A1000 (2015 / 5.1 LOLLIPOP) 366.2 ms ➡ 217.4 ms (-40.63%)
ONEPLUS ONE (2014 / 9.0 PIE POSP ROM) 102.6 ms ➡ 59.4 ms (-42.11%)
SAMSUNG GALAXY NOTE 5 (2015 / 7.0 NOUGAT) 260.4 ms ➡ 85.8 ms (-67.05%)
SAMSUNG GALAXY NOTE 8 (2017 / 8.0.0 OREO) 263.6 ms ➡ 65.6 ms (-75.11%)
SAMSUNG GALAXY S7 EDGE (2016 / 8.0.0 OREO) 243.4 ms ➡ 67.4 ms (-72.31%)
SAMSUNG GALAXY S9 PLUS (2018 / 8.0.0 OREO) 98 ms ➡ 56.6 ms (-42.24%)
IPOD TOUCH GEN 5 (2012 / 9.3.5) 94.6 ms ➡ 58.8 ms (-37.84%)
IPAD 3 (2012 / 9.3.5) 115.6 ms ➡ 67.4 ms (-41.7%)
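The percentage column is simply (before − after) / before; a quick check against a few rows of the table:

```python
# A few of the before/after measurements (ms) from the table above
measurements = {
    "Xiaomi Mi A2": (321.2, 78.2),
    "Xperia Z5": (120.6, 69.0),
    "Samsung Galaxy Note 8": (263.6, 65.6),
    "iPad 3": (115.6, 67.4),
}
for device, (before, after) in measurements.items():
    reduction = (before - after) / before * 100
    print(f"{device}: {before} ms -> {after} ms (-{reduction:.2f}%)")
# Xiaomi Mi A2: 321.2 ms -> 78.2 ms (-75.65%)
# Xperia Z5: 120.6 ms -> 69.0 ms (-42.79%)
# Samsung Galaxy Note 8: 263.6 ms -> 65.6 ms (-75.11%)
# iPad 3: 115.6 ms -> 67.4 ms (-41.70%)
```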