Audio Design Documentation

From FIFE development wiki

This article is part of design documentation.

Design documentation describes how software is implemented or is about to be implemented. It focuses on system structure (e.g. dependencies), module interactions and relevant algorithms. Concepts described in these articles should form the terminology that is used when discussing about the software that forms FIFE.

This article is outdated and needs to be reviewed!

The content of this article is outdated and should be treated as such. We cannot guarantee the accuracy of the information presented here.

Introduction

This document provides general information about the functionality and usage of the Audio Module.

Overview & Current Status

The redesign of the audio module is finished and the basic functionality is now guaranteed. The next steps are to improve the fundamental code, add "special" features and implement model-audio bindings. The main part of the new module has been rewritten from scratch; only some parts of the Ogg decoder were taken from the old module.

The structure can be simplified to SoundDecoders, SoundClips, SoundEmitters and one SoundManager. The SoundManager is responsible for the initialization of OpenAL and for managing the SoundEmitters, each of which represents an instance of an audio file played at a specific position in virtual audio space. SoundClips hold the buffers of the audio files; there is one SoundClip instance per file. The SoundClip instances, which are also responsible for multiple streams of the same file, retrieve their buffer data from the SoundDecoders.
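The ownership relationships described above can be sketched in Python. This is a simplified illustrative model only, not FIFE's actual implementation; all class internals here are assumptions made for the sketch:

```python
# Illustrative sketch of the module structure: SoundManager owns the
# emitters and shares one SoundClip (buffer) per file between them.
# These classes are simplified stand-ins, not FIFE's real API.

class SoundClip:
    """Holds the decoded buffer for one audio file (one instance per file)."""
    def __init__(self, filename):
        self.filename = filename
        self.buffer = b"<decoded pcm data>"  # filled by a SoundDecoder in FIFE

class SoundEmitter:
    """One playing instance of a clip at a position in virtual audio space."""
    def __init__(self, emitter_id):
        self.emitter_id = emitter_id
        self.clip = None
    def load(self, clip):
        self.clip = clip

class SoundManager:
    """Initializes the audio backend and owns all emitters and clips."""
    def __init__(self):
        self._emitters = {}
        self._clips = {}   # filename -> SoundClip (memory-saving buffer sharing)
        self._next_id = 0
    def create_emitter(self):
        emitter = SoundEmitter(self._next_id)
        self._emitters[self._next_id] = emitter
        self._next_id += 1
        return emitter
    def get_clip(self, filename):
        # Reuse the existing SoundClip so two emitters share one buffer.
        if filename not in self._clips:
            self._clips[filename] = SoundClip(filename)
        return self._clips[filename]
```

In this model, two emitters loading the same file end up referencing the same SoundClip instance, which is the buffer-sharing behaviour listed under Features.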


[Image: Audio module.jpg — diagram of the audio module structure]

A grey background indicates that the corresponding component is using the OpenAL API.

Feature List

  • Support for Ogg files
  • Streaming of big files
  • Memory-saving buffer sharing
  • Support for both positional and non-positional sounds
  • Macro-based OpenAL Error handling
  • Seeking to specific position in file possible
  • SoundEmitters are globally accessible by emitter-id
  • Master Volume / Mute feature
  • EAX surround sound on Windows platforms

Planned Features

  • Support for FLAC and ACM (Fallout) files
  • Device Enumeration / Selection
  • Model/Audio interaction
  • EventManager-Bindings
  • SoundGroups - Please have a look at our forums for more information
  • SoundEffects - Perhaps we could adopt some code from the Allacrost engine and/or use EFX technology
  • Improved memory sharing for streams

Known Issues

  • Due to missing functionality in OpenAL, setCursor() calls on *nix platforms might not work with non-streamed files.

Quick Manual

First we need to initialize the audio system:

self.soundmanager = self.engine.getSoundManager()
self.soundmanager.init()

Now we can create several SoundEmitters to play some audio files...

# The SoundManager is always the owner of the SoundEmitter
soundemitter = self.soundmanager.createEmitter()

# Now it is possible to access the emitter directly via the soundemitter-variable, but
# also access via emitter-id is possible. E.g. we can load a file either with

soundemitter.load('filename.ogg')
# or with

self.soundmanager.getEmitter(soundemitter.getID()).load('filename.ogg')

# This method can be useful if you have to handle a large number of emitters:
# you can simply store the emitter ids in an array.

# If we don't need the emitter any longer, we can release it with
self.soundmanager.releaseEmitter(soundemitter.getID())

#or simply
soundemitter.release()
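The comments above suggest storing emitter ids rather than object references when juggling many emitters. A sketch of that pattern follows; the stub classes are stand-ins written for this example (method names mirror the snippets above, but the internals are assumptions):

```python
# Stand-in manager/emitter to illustrate id-based access; in FIFE you
# would use the real soundmanager obtained from the engine instead.

class StubEmitter:
    def __init__(self, emitter_id):
        self._id = emitter_id
        self.volume = 1.0
    def getID(self):
        return self._id

class StubManager:
    def __init__(self):
        self._emitters = {}
        self._next = 0
    def createEmitter(self):
        e = StubEmitter(self._next)
        self._emitters[self._next] = e
        self._next += 1
        return e
    def getEmitter(self, emitter_id):
        return self._emitters[emitter_id]
    def releaseEmitter(self, emitter_id):
        del self._emitters[emitter_id]

# Keep only the ids (cheap to store in a list) and look emitters up on demand.
manager = StubManager()
footstep_ids = [manager.createEmitter().getID() for _ in range(4)]
for eid in footstep_ids:
    manager.getEmitter(eid).volume = 0.5
```

Storing plain ids avoids keeping object references alive in script code and matches the "globally accessible by emitter-id" feature listed above.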

NOTICE: To hear the sound change when the listener (or the object) moves, mono files need to be used. Stereo files suppress the spatialization effect. See the comment on page 18 of the OpenAL_Programmers_Guide.pdf:

8-bit PCM data is expressed as an unsigned value over the range 0 to 255, 128 being an audio
output level of zero. 16-bit PCM data is expressed as a signed value over the range -32768 to
32767, 0 being an audio output level of zero. Stereo data is expressed in interleaved format,
left channel first. Buffers containing more than one channel of data will be played without 3D
spatialization.
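The quoted format rules can be illustrated with two small conversion helpers: widening 8-bit unsigned samples to 16-bit signed, and downmixing interleaved stereo to mono so that OpenAL will spatialize the buffer. These helpers are written for this example and are not part of FIFE:

```python
def u8_to_s16(samples):
    """Convert 8-bit unsigned PCM (0..255, 128 = silence) to
    16-bit signed PCM (-32768..32767, 0 = silence)."""
    return [(s - 128) * 256 for s in samples]

def stereo_to_mono(interleaved):
    """Average interleaved stereo frames (left channel first) into mono,
    since buffers with more than one channel are played without 3D
    spatialization."""
    left = interleaved[0::2]
    right = interleaved[1::2]
    return [(l + r) // 2 for l, r in zip(left, right)]
```

For instance, the unsigned silence value 128 maps to signed 0, and a stereo frame (100, 200) downmixes to the mono sample 150.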

Questions

  • I think it would be better to support FLAC instead of WAV, as FLAC files are often smaller than WAVs and it's a free audio codec. This would also solve the problem with hundreds of unsupported WAV formats. more...

Proposals

Sound Effects

The implementation of SoundEffects (like fade, reverb, delay, ...) consists of a SoundEffect interface class, the parent of all sound effects, and the effect classes themselves, which inherit from SoundEffect.

/** The SoundEffect interface class
 */
class SoundEffect {
public:
	virtual ~SoundEffect() {}

	/** Called after the effect has been added
	 * to a SoundEmitter instance
	 */
	virtual void add(ALuint source);

	/** Called before the effect is removed
	 * from a SoundEmitter instance
	 */
	virtual void remove();

	/** Activates/deactivates the effect
	 */
	virtual void setEnabled(bool value);

	/** Returns the id of the sound effect
	 */
	unsigned int getID() {
		return m_effectid;
	}

protected:
	unsigned int m_effectid;
	bool m_enabled;
	ALuint m_source;
};

To add a reverb effect to a SoundEmitter, we first have to create an instance of the ReverbEffect class (which inherits from SoundEffect) and then apply this instance to our emitter:

reverb = ReverbEffect()
reverb.size = 0.1
reverb.amplitude  = 0.5 # set some attributes
emitter.addSoundEffect(reverb)
reverb.setEnabled(True) # activate the effect

When addSoundEffect() is called, the SoundEmitter class internally calls the add() method of the sound effect and passes over the OpenAL source. In the add() method the sound effect then applies its changes to the OpenAL source. When we later remove the sound effect from the SoundEmitter, the remove() method is called and the sound effect undoes all its changes on the OpenAL source. The setEnabled() method can be used to temporarily deactivate the effect. Every sound effect attached to a SoundEmitter has a unique id, given by its index in the emitter's vector (the vector that stores the pointers to the attached sound effects).
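The id scheme described above (effect id = index into the emitter's effect vector) can be sketched in Python. Names mirror the proposed interface, but the bookkeeping details, and the choice to null out removed slots so later ids stay stable, are assumptions made for this sketch:

```python
class Emitter:
    """Minimal sketch of the proposed effect bookkeeping on the emitter."""
    def __init__(self):
        self._effects = []  # index in this list serves as the effect id

    def addSoundEffect(self, effect):
        self._effects.append(effect)
        effect.add("<OpenAL source handle>")  # emitter hands over its source
        return len(self._effects) - 1         # id = position in the vector

    def removeSoundEffect(self, effect_id):
        effect = self._effects[effect_id]
        effect.remove()                 # effect undoes its changes
        self._effects[effect_id] = None # keep ids of remaining effects stable

class ReverbEffect:
    """Toy effect implementing the add/remove/setEnabled protocol."""
    def __init__(self):
        self.source = None
        self.enabled = False
    def add(self, source):
        self.source = source
    def remove(self):
        self.source = None
    def setEnabled(self, value):
        self.enabled = value
```

The index-as-id approach is cheap, but it also hints at the open question listed below: an index into one emitter's vector cannot identify an effect shared by several emitters.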

The changes in the header file of the soundemitter would look like this:

public:
	/** Adds a SoundEffect
	 * and calls effect->add() on success
	 *
	 * @return The effect id
	 */
	int addSoundEffect(SoundEffect* effect);

	/** Removes a SoundEffect,
	 * calls effect->remove()
	 *
	 * @param effectid The effect id
	 */
	void removeSoundEffect(unsigned int effectid);
private:
	std::vector<SoundEffect*> m_effectvec;  // the vector to store all applied sound effects

Open questions about this proposal:

  • How do we solve the memory management problem with Python? I'm not a Python pro, but as far as I know an instance created in Python (reverb = ReverbEffect()) has no global scope. We could solve this by adding a getInstance() method to the sound effects, but then the effect would finally have to be released by the user...
  • I would like to have the possibility to add an effect to more than one SoundEmitter (especially with regard to sound groups), but I currently have no idea how to handle this (as we cannot use simple effect ids then).

Links