Using CMdaAudioOutputStream rather than CMMFDevSound to implement QAudioOutput would not solve the problem: CMdaAudioOutputStream (header, implementation) is itself simply a wrapper over CMMFDevSound.
There are three levels at which buffering can potentially happen:
- In the application, above QAudioOutput. This is up to the application developer to worry about.
- In the QAudioOutput implementation, above the native audio rendering API. The Symbian implementation does not buffer data here.
- Below the native audio rendering API.
There must be some buffering at level 3, and the amount of buffering is one factor that determines the playback latency, i.e. the delay between the QAudioOutput implementation passing data from the application to the native API and the sound coming out of the speaker. For uncompressed audio such as WAV files, we can assume that the buffer size completely determines this latency; for compressed audio, the decoder adds some further overhead.
So, to minimise latency (at the cost of increasing the risk of underflows close to the hardware), we need to reduce the size of the buffer at level 3. The problem on Symbian is that this isn't possible: CMMFDevSound::SetConfigL() takes a data structure which includes a buffer size, but that field is ignored, meaning that the audio driver always allocates a buffer of fixed size. The size varies between devices, but on most it is 4096 bytes, i.e. a latency of 0.256s for 8kHz 16-bit mono PCM.