Documentation

1. Getting Started

The extracted package can live in the Assets folder, or in the Packages folder if you want to repackage it with your own UPM setup. It already ships with a configured package.json file, and all the code has its own assembly definitions.

Your game assembly should add a reference to the ForceMagic’s Toolbox Audio assembly definition, located in the Runtime folder under the name “ForceMagic.Toolbox.Audio.asmdef”, in its Assembly Definition References list.

2. Initialization

In your game entry point (your first scene, your bootstrap, or whatever you call it), you must call the following line, which initializes the AudioManager.

var audioManager = AudioManager.InstantiateGameObject(true);

You can then store that instance, which is marked as DontDestroyOnLoad (because true was passed as the parameter), in your own system so you can fetch it later on.
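
If it helps, here is a minimal bootstrap sketch; only AudioManager.InstantiateGameObject(true) comes from the package, while the AudioBootstrap class name and its static accessor are placeholders for whatever your project already uses.

using UnityEngine;

public class AudioBootstrap : MonoBehaviour
{
    // Placeholder accessor so your other systems can reach the manager later on.
    public static AudioManager Audio { get; private set; }

    private void Awake()
    {
        // Passing true marks the AudioManager GameObject as DontDestroyOnLoad.
        Audio = AudioManager.InstantiateGameObject(true);
    }
}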

If you don’t have anything in mind, a simple Binder is provided that follows a lightweight injection pattern. If you decide to use it, you will want to Bind the AudioManager to each of the interfaces it implements.

Binder.Bind<IAudioProvider, AudioManager>(audioManager);
Binder.Bind<IAudioLayerDispatcher, AudioManager>(audioManager);

The benefit of using the Binder is that you can easily retrieve the interface you need later on using Fetch, from any of your custom MonoBehaviours.

AudioLayerDispatcher = Binder.Fetch<IAudioLayerDispatcher>();

You can also see an example of this in the SampleAudioClipPlayer2D class.
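
Putting the two previous snippets together, a minimal sketch could look like the following; MyAudioClient is a placeholder name, and the Bind calls are assumed to have already run in your bootstrap code.

using UnityEngine;

public class MyAudioClient : MonoBehaviour
{
    private IAudioLayerDispatcher AudioLayerDispatcher;

    private void Awake()
    {
        // Fetch the interface that was bound at startup; no reference to the
        // concrete AudioManager is needed here.
        AudioLayerDispatcher = Binder.Fetch<IAudioLayerDispatcher>();
    }
}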

3. Code mindset

All the code has been written with the mindset of being extended, overridden, mocked, or even replaced with a different implementation of a given interface.

For example, when you Bind the IAudioProvider, feel free to inject another instance of your own if required. The IAudioProvider could be replaced with any other object implementing its interface functions:

AudioSource GetAudioSource();
void ReleaseAudioSource(AudioSource audioSource);

By doing so, you could inject an AudioSourcePool that you have already implemented in your project.
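
As an illustration, a pool-backed provider could be bound in place of the AudioManager. Everything in this sketch except the IAudioProvider interface, its two functions listed above, and the Binder call is hypothetical, and it assumes the Binder accepts any object implementing the bound interface.

using System.Collections.Generic;
using UnityEngine;

public class MyAudioSourcePool : IAudioProvider
{
    private readonly GameObject host = new GameObject("MyAudioSourcePool");
    private readonly Stack<AudioSource> freeSources = new Stack<AudioSource>();

    public AudioSource GetAudioSource()
    {
        // Reuse a pooled AudioSource when one is available, otherwise add a new one.
        return freeSources.Count > 0 ? freeSources.Pop() : host.AddComponent<AudioSource>();
    }

    public void ReleaseAudioSource(AudioSource audioSource)
    {
        // Stop the source and keep it for the next caller instead of destroying it.
        audioSource.Stop();
        freeSources.Push(audioSource);
    }
}

Binder.Bind<IAudioProvider, MyAudioSourcePool>(new MyAudioSourcePool());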

4. Feature usages

Instead of adding AudioSources to your GameObjects, the AudioManager is meant to be used with AudioDataScriptable, AudioData, or CrossFadeAudioData; use the one you prefer. Those objects replicate all the AudioSource default values and properties.

This way, the AudioSources are actually allocated within the audio system itself. They are pooled by default and will be reused throughout the lifecycle of your game or app.

The AudioSourceWrapper class embedded in any AudioLayer takes care of this AudioSource management and also detects when the clip has finished playing. If the clip is looping, it will never finish.

Once you have that in mind, you can start using the IAudioLayerDispatcher interface to play clips and interact with the various layers you define.

A layer has its own volume, which affects the volume of every sound clip played on it. Note that if your AudioData volume is set to 0.6 and the layer volume is set to 0.5, the final output volume will be 0.3. This is a neat way to make sure all audio levels remain proportional to their initial settings.

This also allows sound smoothing and crossfades to be cleaner.
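
As a rough usage sketch only, interacting with a layer could look like the lines below; musicLayer and musicAudioData are hypothetical placeholders, and the Play and SetLayerVolume parameter lists are assumptions, so check the IAudioLayerDispatcher interface for the exact signatures.

// Assumed signatures, shown for illustration only; the real overloads may differ.
AudioLayerDispatcher.SetLayerVolume(musicLayer, 0.5f);
AudioLayerDispatcher.Play(musicAudioData, musicLayer);
// With the AudioData volume at 0.6, the clip plays at a final volume of 0.3.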

Please look at the Samples folder, which contains a SampleScene you can rely on to see some of the features in action.

5. Performance Considerations

The second goal of the package is to be lightweight memory-wise: you will see that most calls, such as Play, Stop, SetLayerVolume, and PauseLayer, do not allocate any memory. The same goes for the AudioSource management when a source finishes playing and is reused.

This isn’t luck: the memory footprint is deliberately kept to a minimum. In some cases, such as SmoothLayerVolumeTo and CrossFade, allocation was necessary because StartCoroutine is called, but this should not hurt your fragmentation, GC, or performance too much.

The default settings for the list sizes and initial element counts of any pooled object in the audio system are located in the AudioUtils class. Feel free to tweak them; their default value is 16, so if you go over 16 active AudioSources, the new ones will be allocated on the fly, in which case there will be an allocation on Play.
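
Purely as an illustration (the actual member names inside AudioUtils may differ, so check the class in your copy of the package), the idea is to raise that default above your expected peak of simultaneous sounds so that Play stays allocation free.

// Hypothetical name shown for illustration; look for the equivalent constant in AudioUtils.
public const int DefaultPoolSize = 16;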