Getting Started.
The complete running example is there.

To play back something, you must instantiate a player. Most of the time you will need just one player, and you can place this instantiation among the variable initializations of your class:
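For instance, a minimal sketch (the field name _myPlayer is illustrative, not imposed by the API):

```dart
// One player for the whole widget, created with the class's variable initializations.
final FlutterSoundPlayer _myPlayer = FlutterSoundPlayer();
```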
Before calling startPlayer(), you must open the session. When you have finished with it, you must close the session. Good places to put those verbs are the procedures initState() and dispose().
To play a sound, you call startPlayer(). To stop a sound, you call stopPlayer().
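A minimal sketch of this lifecycle inside a State class, with a hypothetical asset path:

```dart
@override
void initState() {
  super.initState();
  _myPlayer.openAudioSession(); // open the session before any startPlayer()
}

@override
void dispose() {
  _myPlayer.closeAudioSession(); // close the session when finished
  super.dispose();
}

Future<void> play() async {
  await _myPlayer.startPlayer(
    fromURI: 'assets/sample.aac', // hypothetical file
    whenFinished: () => setState(() {}),
  );
}

Future<void> stop() async {
  await _myPlayer.stopPlayer();
}
```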
The complete running example is there.

To record something, you must instantiate a recorder. Most of the time you will need just one recorder, and you can place this instantiation among the variable initializations of your class:
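As with the player, a minimal sketch (the field name _myRecorder is illustrative):

```dart
// One recorder for the whole widget, created with the class's variable initializations.
final FlutterSoundRecorder _myRecorder = FlutterSoundRecorder();
```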
Before calling startRecorder(), you must open the session. When you have finished with it, you must close the session. Good places to put those verbs are the procedures initState() and dispose().
To record something, you call startRecorder(). To stop the recorder, you call stopRecorder().
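A minimal sketch mirroring the player lifecycle above; the output file name is hypothetical:

```dart
@override
void initState() {
  super.initState();
  _myRecorder.openAudioSession(); // open the session before any startRecorder()
}

@override
void dispose() {
  _myRecorder.closeAudioSession(); // close the session when finished
  super.dispose();
}

Future<void> record() async {
  await _myRecorder.startRecorder(
    toFile: 'my_recording.aac', // hypothetical file name
    codec: Codec.aacADTS,
  );
}

Future<void> stop() async {
  await _myRecorder.stopRecorder();
}
```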
Recording PCM-16 to a Dart Stream.
Please remember that, at present, Flutter Sound does not support floating-point PCM data, nor recording with more than one audio channel. In Flutter Sound, Raw PCM means PCM Linear 16, mono only.
To record to a live PCM stream, when calling the verb startRecorder() you specify the parameter toStream: with your stream sink, instead of the parameter toFile:. This parameter is a StreamSink whose stream you can listen to, for processing the input data.
This functionality needs at least Android SDK >= 21. It works better with Android minSdk >= 23, because earlier SDKs were not able to do non-blocking writes.
Example

You can look at the example provided with Flutter Sound.
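A minimal sketch, assuming the Food/FoodData types shipped with Flutter Sound and an illustrative 16 kHz mono format:

```dart
import 'dart:async';
import 'dart:typed_data';
import 'package:flutter_sound/flutter_sound.dart';

Future<StreamSubscription> recordToStream(FlutterSoundRecorder recorder) async {
  // The sink handed to toStream:. We listen to the other end of the stream.
  var controller = StreamController<Food>();
  var subscription = controller.stream.listen((food) {
    if (food is FoodData) {
      processPcmBuffer(food.data); // raw PCM-16 samples as a Uint8List
    }
  });
  await recorder.startRecorder(
    toStream: controller.sink,
    codec: Codec.pcm16,
    numChannels: 1,    // Flutter Sound records PCM in mono only
    sampleRate: 16000, // illustrative value
  );
  return subscription;
}

void processPcmBuffer(Uint8List? data) {
  // Process the incoming PCM data here.
}
```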
Recording PCM.
Please remember that, at present, Flutter Sound does not support floating-point PCM data, nor recording with more than one audio channel.
To record a Raw PCM16 file, you use the regular startRecorder() API verb. To play a Raw PCM16 file, you can either add a Wave header in front of the file with the pcm16ToWave() verb, or call the regular startPlayer() API verb. If you do the latter, you must provide the sampleRate and numChannels parameters during the call. You can look at the simple example provided with Flutter Sound. [TODO]
Example
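A minimal sketch of recording Raw PCM16 and playing it back directly; file names and the 16 kHz mono format are illustrative:

```dart
// Record raw PCM-16 into a file.
await _myRecorder.startRecorder(
  toFile: 'sound.pcm', // hypothetical file name
  codec: Codec.pcm16,
  numChannels: 1,
  sampleRate: 16000,
);
// ... later ...
await _myRecorder.stopRecorder();

// Play it back directly: the format cannot be guessed from a raw file,
// so sampleRate and numChannels must be given explicitly.
await _myPlayer.startPlayer(
  fromURI: 'sound.pcm',
  codec: Codec.pcm16,
  numChannels: 1,
  sampleRate: 16000,
);
```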
Raw PCM and Wave files.
Raw PCM is not an audio format. Raw PCM files store the raw data without any envelope. A simple way to play a Raw PCM file is to add a Wave header in front of the data before playing it. For that, the helper verb pcmToWave() is convenient. You can also call the startPlayer() verb directly. If you do that, do not forget to provide the sampleRate and numChannels parameters.
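A minimal sketch of the conversion route, assuming FlutterSoundHelper's pcmToWave() takes input/output paths plus the format parameters (file names are illustrative):

```dart
var helper = FlutterSoundHelper();
// Add a Wave header in front of the raw PCM data...
await helper.pcmToWave(
  inputFile: 'sound.pcm',  // hypothetical raw PCM-16 file
  outputFile: 'sound.wav', // hypothetical output
  numChannels: 1,
  sampleRate: 16000,
);
// ...then play the result as an ordinary Wave file.
await _myPlayer.startPlayer(fromURI: 'sound.wav', codec: Codec.pcmWave);
```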
A Wave file is just PCM data in a specific file format.
The Wave audio file format has a terrible drawback: it cannot be streamed. A Wave file is considered invalid until it is closed; during its construction, it is regarded as corrupted because the Wave header has not yet been written.
Note the following limitations in the current Flutter Sound version:

- The stream is PCM-Integer Linear 16 with just one channel. At present, Flutter Sound does not handle Raw PCM with floating-point data, nor with more than one audio channel.
- FlutterSoundHelper duration() does not work with Raw PCM files.
- startPlayer() does not return the record duration.
- The withUI parameter in openAudioSession() is currently incompatible with Raw PCM files.
Playing PCM-16 from a Dart Stream.
Please remember that, at present, Flutter Sound does not support floating-point PCM data, nor recording with more than one audio channel.
To play a live stream, you start playing with the verb startPlayerFromStream() instead of the regular startPlayer() verb:
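A minimal sketch, with an illustrative 48 kHz mono format:

```dart
await myPlayer.startPlayerFromStream(
  codec: Codec.pcm16, // raw PCM-Linear 16 is the only codec supported for streams
  numChannels: 1,
  sampleRate: 48000, // illustrative value
);
```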
The first thing you have to do if you want to play live audio is to answer this question: do I need back pressure from Flutter Sound, or not?
Without back pressure, the App just does myPlayer.foodSink.add(FoodData(aBuffer)) each time it wants to play some data. There is no need to await, and no need to verify that the previous buffers have finished playing. All the buffers added to foodSink are queued and played sequentially. The App continues to work without knowing when the buffers are actually played.
This means two things:

- If the App adds buffers to foodSink very quickly, it can consume a lot of memory for the waiting buffers.
- When the App has finished feeding the sink, it cannot just do myPlayer.stopPlayer(), because there may still be many buffers not yet played. If it does a stopPlayer(), all the waiting buffers will be flushed, which is probably not what it wants.
But there is a mechanism if the App wants to resynchronize with the output stream. To resynchronize with the current playback, the App does myPlayer.foodSink.add(FoodEvent(aCallback));
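For instance, a minimal sketch where the callback stops the player only after every buffer queued before it has been played:

```dart
myPlayer.foodSink.add(FoodEvent(() async {
  // Runs after all previously queued FoodData buffers have been played.
  await myPlayer.stopPlayer();
}));
```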
Example:
You can look at this simple example provided with Flutter Sound.
If the App wants to keep synchronized with what is played, it uses the verb feedFromStream() to play data. It is really very important not to call feedFromStream() again before the previous Future has completed. When each Future completes, the App can be sure that the provided data have been either played, or at least copied into low-level internal buffers, and it knows that it is safe to issue another call.
Example:
You can look at this example and this example.
You will probably await each call to feedFromStream(), or use then().
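A minimal sketch of a feeding loop with back pressure, assuming the PCM data is already available as a list of Uint8List blocks:

```dart
import 'dart:typed_data';

Future<void> feedAll(FlutterSoundPlayer player, List<Uint8List> blocks) async {
  for (var block in blocks) {
    // Wait for each Future: never start a feed before the previous one completes.
    await player.feedFromStream(block);
  }
}
```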
This functionality needs at least Android SDK >= 21. It works better with Android minSdk >= 23, because earlier SDKs were not able to do non-blocking writes.
Examples

You can look at the provided examples:

- This example shows how to play Live data, with Back Pressure from Flutter Sound.
- This example shows how to play Live data, without Back Pressure from Flutter Sound.
- This example shows how to play some real-time sound effects.
- This example plays a live stream recorded from the microphone.
Controls on the lock-screen.
A number of platforms (Android/iOS) support the concept of a 'Shade' or 'notification' area, with the ability to control audio playback via the Shade.
When using a Shade, a platform may also allow the user to control the media playback from the platform's 'Lock' screen.

Using a Shade does not stop you from also displaying an in-app widget to control audio. The SoundPlayerUI widget will work in conjunction with the Shade.

A Shade often allows the user to pause and resume audio, as well as skip forward a track and skip backward to the prior track.
τ allows you to enable the Shade controls when you start playback. It also allows you (where the platform supports it) to control which of the media buttons are displayed (pause, resume, skip forward, skip backward).

To start audio playback using the Shade, use:
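A minimal sketch, assuming a Sounds-style SoundPlayer.withShadeUI constructor and Track class; the exact names and callback signatures here are assumptions and may differ in your version:

```dart
// Build a player whose playback can be controlled from the Shade / lock screen.
var player = SoundPlayer.withShadeUI(
  canPause: true,
  canSkipForward: true,
  canSkipBackward: true,
);
// Mandatory when the skip buttons are displayed (see below).
player.onSkipForward = () {/* move to the next track */};
player.onSkipBackward = () {/* move to the prior track */};
player.play(Track.fromFile('my_track.aac')); // hypothetical track
```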
The withShadeUI constructor allows you to control which of the Shade buttons are displayed. The platform MAY choose to ignore any of the button choices you make.

If you allow the Shade to display the Skip Forward and Skip Back buttons, you must provide callbacks for the onSkipForward and onSkipBackward methods. When the user clicks the respective buttons, you will receive the relevant callback.

The Shade may also display information contained in the Track, such as Album, Artist, or artwork.
Supported codecs.
At present, the following codecs are supported by flutter_sound:

This table will be updated as more codecs are added.

✅ (*) : The codec is supported by Flutter Sound, but with a file format conversion. This has several drawbacks:

- It needs FFmpeg, and FFmpeg is not included in the LITE flavor of Flutter Sound.
- It can add some delay before playing back the file, or after stopping the recording. This delay can be substantial for very large recordings.

✅ (1) : Needs minSdk >= 23.

Webkit is bullshit: you cannot record anything with Safari, or even with Firefox/Chrome on iOS.

Opus WEBM is a great codec. It works on everything (mobile and web browsers), except Apple.

Edge is the same as Chrome.
| Codec | iOS encoder | iOS decoder | Android encoder | Android decoder |
| :--- | :---: | :---: | :---: | :---: |
| AAC ADTS | ✅ | ✅ | ✅ (1) | ✅ |
| Opus OGG | ✅ (*) | ✅ (*) | ❌ | ✅ (1) |
| Opus CAF | ✅ | ✅ | ❌ | ✅ (*) (1) |
| MP3 | ❌ | ✅ | ❌ | ✅ |
| Vorbis OGG | ❌ | ❌ | ❌ | ✅ |
| PCM16 | ✅ | ✅ | ✅ (1) | ✅ |
| PCM Wave | ✅ | ✅ | ✅ (1) | ✅ |
| PCM AIFF | ❌ | ✅ | ❌ | ✅ (*) |
| PCM CAF | ✅ | ✅ | ❌ | ✅ (*) |
| FLAC | ✅ | ✅ | ❌ | ✅ |
| AAC MP4 | ✅ | ✅ | ✅ (1) | ✅ |
| AMR NB | ❌ | ❌ | ✅ (1) | ✅ |
| AMR WB | ❌ | ❌ | ✅ (1) | ✅ |
| PCM8 | ❌ | ❌ | ❌ | ❌ |
| PCM F32 | ❌ | ❌ | ❌ | ❌ |
| PCM WEBM | ❌ | ❌ | ❌ | ❌ |
| Opus WEBM | ❌ | ❌ | ✅ | ✅ |
| Vorbis WEBM | ❌ | ❌ | ❌ | ✅ |
| Codec | Chrome encoder | Chrome decoder | Firefox encoder | Firefox decoder | Webkit encoder (Safari) | Webkit decoder (Safari) |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| AAC ADTS | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ |
| Opus OGG | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ |
| Opus CAF | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| MP3 | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ |
| Vorbis OGG | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ |
| PCM16 | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ (must be verified) |
| PCM Wave | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ |
| PCM AIFF | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| PCM CAF | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| FLAC | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ |
| AAC MP4 | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ |
| AMR NB | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| AMR WB | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| PCM8 | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| PCM F32 | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| PCM WEBM | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Opus WEBM | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ |
| Vorbis WEBM | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ |
The τ built-in widgets.
The easiest way to start with Sounds is to use one of the built-in widgets:

- SoundPlayerUI
- SoundRecorderUI
- RecorderPlaybackController

If you don't like any of the provided widgets, you can build your own from scratch. The Sounds widgets are all built using the public Sounds API, and they also provide working examples for building your own widget.
The SoundPlayerUI widget provides a playback widget styled after the HTML 5 audio player. The player displays a loading indicator and allows the user to pause/resume/skip via the progress bar.

You can also pause/resume the player via an API call to SoundPlayerUI's state, using a GlobalKey.

The SoundPlayerUI API documentation provides examples of using the SoundPlayerUI widget.
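A minimal sketch, assuming the SoundPlayerUI.fromLoader constructor and a Sounds-style Track factory; the file name is hypothetical:

```dart
// Embed a ready-made player widget; the loader supplies the Track to play.
SoundPlayerUI.fromLoader(
  (context) async => Track.fromFile('sample.aac'), // hypothetical track
);
```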
The SoundRecorderUI widget provides a simple UI for recording audio.
The audio is recorded to a Track.
TODO: add image here.
The SoundRecorderUI API documentation provides examples of using the SoundRecorderUI widget.
The RecorderPlaybackController is a specialised widget used to co-ordinate paired SoundPlayerUI and SoundRecorderUI widgets.

Often, when providing an interface to record audio, you will want to allow the user to play back the audio after recording it. However, you don't want the user to try to start playback before the recording is complete.

The RecorderPlaybackController widget does not have a UI (it is actually an InheritedWidget); rather, it is used as a bridge that allows the paired SoundPlayerUI and SoundRecorderUI to communicate with each other.
The RecorderPlaybackController co-ordinates the UI state between the two components so that playback and recording cannot happen at the same time.
See the API documentation on RecorderPlaybackController for examples of how to use it.
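A minimal sketch of the pairing, assuming Sounds-style SoundRecorderUI and SoundPlayerUI.fromTrack constructors; track is a Track shared by both widgets:

```dart
// The controller has no UI of its own: it only bridges the paired widgets
// so that recording and playback cannot run at the same time.
RecorderPlaybackController(
  child: Column(
    children: [
      SoundRecorderUI(track),
      SoundPlayerUI.fromTrack(track),
    ],
  ),
);
```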