setUIProgressBar()
Dart API: setUIProgressBar().
Use this verb if the App wants to control the progress bar on the lock screen itself. By default, this progress bar is handled automatically by Flutter Sound. Remark: setUIProgressBar() is implemented only on iOS.
Example:
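A minimal sketch, assuming a player myPlayer opened with withUI: true and a playback started with startPlayerFromTrack() (the variable name and durations are hypothetical):

```dart
// Drive the lock-screen progress bar ourselves (iOS only).
await myPlayer.setUIProgressBar(
  duration: Duration(minutes: 3),   // total duration shown on the lock screen
  progress: Duration(seconds: 30),  // current position shown on the lock screen
);
```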
pausePlayer()
Dart API: pausePlayer().
Use this verb to pause the current playback. An exception is thrown if the player is not in the "playing" state.
Example:
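A minimal sketch, assuming an open player myPlayer (hypothetical) that is currently playing:

```dart
await myPlayer.pausePlayer(); // throws if the player is not in the "playing" state
```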
setAudioFocus()
Dart API: setAudioFocus().
The focus: parameter can have the following values :
AudioFocus.requestFocus (request the focus, but do nothing special with other Apps)
AudioFocus.requestFocusAndStopOthers (your App will have exclusive use of the output audio)
AudioFocus.requestFocusAndDuckOthers (if another App like Spotify uses the output audio, its volume will be lowered)
AudioFocus.requestFocusAndKeepOthers (your App will play sound above other Apps)
AudioFocus.requestFocusAndInterruptSpokenAudioAndMixWithOthers
AudioFocus.requestFocusTransient (for Android)
AudioFocus.requestFocusTransientExclusive (for Android)
AudioFocus.abandonFocus (your App will no longer have the audio focus)
Please look to openAudioSession() to understand the meaning of the other parameters.
Example:
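A minimal sketch, assuming an open player myPlayer (hypothetical):

```dart
// Lower the volume of other apps while we play.
await myPlayer.setAudioFocus(focus: AudioFocus.requestFocusAndDuckOthers);
```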
nowPlaying()
Dart API: nowPlaying().
This verb is used to set the lock-screen fields without starting a new playback. The fields 'dataBuffer' and 'trackPath' of the Track parameter are not used. Please refer to startPlayerFromTrack() for the meaning of the other parameters. Remark: nowPlaying() is implemented only on iOS.
Example:
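A minimal sketch, assuming a player opened with withUI: true (the metadata values are hypothetical):

```dart
// Refresh the lock-screen fields without starting a playback.
Track track = Track(trackTitle: 'My Song', trackAuthor: 'Me');
await myPlayer.nowPlaying(track);
```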
onProgress
Dart API: onProgress.
The stream side of the Food Controller: this is a stream on which Flutter Sound will post the player progression. You may listen to this stream to have feedback on the current playback.
PlaybackDisposition has two fields :
Duration duration (the total playback duration)
Duration position (the current playback position)
Example:
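A minimal sketch, assuming an open player myPlayer and a subscription duration greater than zero (see setSubscriptionDuration()):

```dart
var subscription = myPlayer.onProgress.listen((e) {
  print('position: ${e.position} / ${e.duration}');
});
```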
seekToPlayer()
Dart API: seekToPlayer().
Use this verb to seek to a new location. The player must already be playing or paused. If not, an exception is thrown.
Example:
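A minimal sketch, assuming an open player myPlayer that is playing or paused:

```dart
await myPlayer.seekToPlayer(Duration(seconds: 12));
```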
Food
These are the objects that you can add to foodSink.
The Food class has two inherited classes :
FoodData (the buffers that you want to play)
FoodEvent (a callback to be called after a resynchronisation)
Example:
This example shows how to play Live data, without Back Pressure from Flutter Sound
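A minimal sketch, assuming a stream player started with startPlayerFromStream() (myPlayer and the PCM buffers are hypothetical):

```dart
// Add buffers without waiting (no back pressure), then get a callback
// once everything added before the FoodEvent has been played.
myPlayer.foodSink.add(FoodData(pcmBuffer1));
myPlayer.foodSink.add(FoodData(pcmBuffer2));
myPlayer.foodSink.add(FoodEvent(() async {
  await myPlayer.stopPlayer();
}));
```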
startPlayerFromTrack()
Dart API: startPlayerFromTrack().
Use this verb to play data from a track specification and display controls on the lock screen or an Apple Watch. The Audio Session must have been opened with the parameter withUI.
track : a simple structure which describes the sound to play. Please see here the Track structure specification.
whenFinished: : a function specifying what to do when the playback is finished.
onPaused: : this parameter can be :
a callback function to call when the user hits the Pause button on the lock screen
null : the Pause button will be handled by Flutter Sound internally
onSkipForward: : this parameter can be :
a callback function to call when the user hits the Skip Forward button on the lock screen
null : the Skip Forward button will be disabled
onSkipBackward: : this parameter can be :
a callback function to call when the user hits the Skip Backward button on the lock screen
null : the Skip Backward button will be disabled
removeUIWhenStopped : a boolean specifying whether the UI on the lock screen must be removed when the sound is finished or when the App does a stopPlayer(). Most of the time this parameter should be true. It is used only for the rare cases where the App wants to control the lock screen between two playbacks. Be aware that if the UI is not removed, the Pause/Resume, Skip Backward and Skip Forward buttons remain active between two playbacks. If you want to disable those buttons, use the API verb nowPlaying(). Remark: this parameter is currently implemented only on iOS.
defaultPauseResume : a boolean specifying whether Flutter Sound must pause/resume the playback by itself when the user hits the Pause/Resume button. Set this parameter to FALSE if the App wants to manage the Pause/Resume button itself. If you do not specify this parameter and the onPaused parameter is specified, then Flutter Sound will assume FALSE. If you do not specify this parameter and the onPaused parameter is not specified, then Flutter Sound will assume TRUE. Remark: this parameter is currently implemented only on iOS.
startPlayerFromTrack() returns a Duration Future, which is the duration of the sound.
Example:
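A minimal sketch, assuming a player opened with withUI: true (paths and metadata are hypothetical):

```dart
Track track = Track(
  trackPath: '/path/to/song.mp3',
  trackTitle: 'My Song',
  trackAuthor: 'Me',
  codec: Codec.mp3,
);
Duration d = await myPlayer.startPlayerFromTrack(
  track,
  whenFinished: () { print('Finished'); },
  onSkipForward: () { /* start the next track */ },
  onSkipBackward: () { /* start the previous track */ },
);
```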
isDecoderSupported()
Dart API: isDecoderSupported().
This verb is useful to know if a particular codec is supported on the current platform. It returns a Future&lt;bool&gt;.
Example:
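A minimal sketch, assuming an open player myPlayer:

```dart
if (await myPlayer.isDecoderSupported(Codec.opusOGG)) {
  print('opusOGG can be played on this platform');
}
```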
stopPlayer()
Dart API: stopPlayer().
Use this verb to stop a playback. This verb never throws any exception. It is safe to call it anywhere, for example when the App is not sure of the current audio state and wants to recover a clean, reset state.
Example:
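A minimal sketch, assuming an open player myPlayer:

```dart
await myPlayer.stopPlayer(); // safe even if the player is already stopped
```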
The τ player API.
Player instance
Dart API: constructor.
This is the first thing to do if you want to deal with playbacks. The instantiation of a new player does not do much. You are safe if you put this instantiation inside a global or instance variable initialization.
Example:
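A minimal sketch:

```dart
FlutterSoundPlayer myPlayer = FlutterSoundPlayer(); // cheap: no OS resources taken yet
```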
resumePlayer()
Dart API: resumePlayer().
Use this verb to resume the current playback. An exception is thrown if the player is not in the "paused" state.
Example:
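A minimal sketch, assuming a paused player myPlayer:

```dart
await myPlayer.resumePlayer(); // throws if the player is not in the "paused" state
```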
getProgress()
Dart API: getProgress().
This verb is used to get the current progress of a playback. It returns a Map with two Duration entries : 'progress' and 'duration'. Remark: currently only implemented on iOS.
Example:
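A minimal sketch, assuming an open player myPlayer (the map keys follow the description above):

```dart
Map<String, Duration> m = await myPlayer.getProgress();
print("${m['progress']} / ${m['duration']}");
```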
playerState, isPlaying, isPaused, isStopped, getPlayerState()
Dart API: getPlayerState().
Dart API: isPlaying.
Dart API: isPaused.
Dart API: isStopped.
Dart API: playerState.
These verbs are used when the app wants to get the current audio state of the player.
playerState is an attribute which can have the following values :
isStopped /// Player is stopped
isPlaying /// Player is playing
isPaused /// Player is paused
isPlaying is a boolean attribute which is true when the player is in the "Playing" mode.
isPaused is a boolean attribute which is true when the player is in the "Paused" mode.
isStopped is a boolean attribute which is true when the player is in the "Stopped" mode.
Flutter Sound shows in the playerState attribute the last known state. When the audio state of the background OS engine changes, the playerState attribute is not updated at exactly the same time. If you want the exact background OS engine state, you must use PlayerState theState = await myPlayer.getPlayerState(). Currently, getPlayerState() is only implemented on iOS.
Example:
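A minimal sketch, assuming an open player myPlayer:

```dart
if (myPlayer.isPlaying) {
  await myPlayer.pausePlayer();
}
PlayerState theState = await myPlayer.getPlayerState(); // exact OS engine state (iOS only)
```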
startPlayer()
Dart API: startPlayer().
You can use startPlayer() to play a sound.
startPlayer() has three optional parameters, depending on your sound source :
fromURI: (if you want to play a file or a remote URI)
fromDataBuffer: (if you want to play from a data buffer)
fromStream: (if you want to play from a Dart Stream)
You must specify one, and only one, of those three parameters.
sampleRate is mandatory if codec == Codec.pcm16. It is not used for other codecs.
You use the optional parameter codec: to specify the audio and file format of the file. Please refer to the Codec compatibility Table to know which codecs are currently supported.
whenFinished: : a lambda function specifying what to do when the playback is finished.
Very often, the codec: parameter is not useful. Flutter Sound will adapt itself depending on the real format of the file provided. But this parameter is necessary when Flutter Sound must do a format conversion (for example to play opusOGG on iOS).
startPlayer() returns a Duration Future, which is the duration of the sound.
Hint: path_provider can be useful if you want access to some directories on your device.
Example:
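A minimal sketch, assuming an open player myPlayer (the URI is hypothetical):

```dart
Duration d = await myPlayer.startPlayer(
  fromURI: 'https://example.com/song.mp3',
  codec: Codec.mp3,
  whenFinished: () { print('Finished'); },
);
```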
foodSink
Dart API: foodSink.
The sink side of the Food Controller that you use when you want to play live data asynchronously. This StreamSink accepts two kinds of objects :
FoodData (the buffers that you want to play)
FoodEvent (a callback to be called after a resynchronisation)
Example:
This example shows how to play Live data, without Back Pressure from Flutter Sound
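A minimal sketch, assuming a stream player started with startPlayerFromStream() (aBuffer is a hypothetical PCM buffer):

```dart
myPlayer.foodSink.add(FoodData(aBuffer));                       // queue a buffer
myPlayer.foodSink.add(FoodEvent(() { print('resynchronised'); })); // callback after resync
```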
setVolume()
Dart API: setVolume().
The parameter is a floating-point number between 0 and 1. Volume can be changed while the player is running; call this verb after the player has started.
Example:
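A minimal sketch, assuming a playing player myPlayer:

```dart
await myPlayer.setVolume(0.5); // half volume
```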
`recorderState`, `isRecording`, `isPaused`, `isStopped`
Dart API: recorderState.
Dart API: isRecording.
Dart API: isPaused.
Dart API: isStopped.
These four attributes are used when the app wants to get the current audio state of the recorder.
recorderState is an attribute which can have the following values :
isStopped /// Recorder is stopped
isRecording /// Recorder is recording
isPaused /// Recorder is paused
isRecording is a boolean attribute which is true when the recorder is in the "Recording" mode.
isPaused is a boolean attribute which is true when the recorder is in the "Paused" mode.
isStopped is a boolean attribute which is true when the recorder is in the "Stopped" mode.
Example:
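A minimal sketch, assuming an open recorder myRecorder (hypothetical):

```dart
if (myRecorder.isRecording) {
  await myRecorder.pauseRecorder();
}
```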
feedFromStream()
Dart API: feedFromStream().
This is the verb that you use when you want to play live PCM data synchronously. This procedure returns a Future. It is very important to wait until this Future completes before trying to feed another buffer.
Example:
This example shows how to play Live data, with Back Pressure from Flutter Sound
This example shows how to play some real time sound effects synchronously.
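A minimal sketch, assuming an open player myPlayer and hypothetical PCM buffers:

```dart
await myPlayer.startPlayerFromStream(codec: Codec.pcm16, numChannels: 1, sampleRate: 48000);
await myPlayer.feedFromStream(aPcmBuffer);       // completes when the buffer is consumed
await myPlayer.feedFromStream(anotherPcmBuffer); // only feed after the previous Future completes
```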
setSubscriptionDuration()
Dart API: setSubscriptionDuration().
This verb is used to change the default interval between two posts on the "Update Progress" stream. (The default interval is 0 (zero), which means "no post".)
Example:
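A minimal sketch, assuming an open player myPlayer (the same verb exists on the recorder):

```dart
await myPlayer.setSubscriptionDuration(Duration(milliseconds: 100)); // 10 posts per second
```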
`openAudioSession()` and `closeAudioSession()`
Dart API: openAudioSession.
Dart API: closeAudioSession.
A player must be opened before it is used. A player corresponds to an Audio Session. In other words, you must open the Audio Session before using it, and when you have finished with a player, you must close it, that is, close your Audio Session. Opening a player takes resources inside the OS. Those resources are freed with the verb closeAudioSession(). It is safe to call this procedure at any time :
If the player is not open, this verb does nothing.
If the player is currently playing or paused, it is stopped first.
focus:
focus is an optional parameter that can be specified during the opening : the Audio Focus. This parameter can have the following values :
AudioFocus.requestFocusAndStopOthers (your app will have exclusive use of the output audio)
AudioFocus.requestFocusAndDuckOthers (if another App like Spotify uses the output audio, its volume will be lowered)
AudioFocus.requestFocusAndKeepOthers (your App will play sound above other Apps)
AudioFocus.requestFocusAndInterruptSpokenAudioAndMixWithOthers (for Android)
AudioFocus.requestFocusTransient (for Android)
AudioFocus.requestFocusTransientExclusive (for Android)
AudioFocus.doNotRequestFocus (useful if you want to manage the Audio Focus yourself with the verb setAudioFocus())
The Audio Focus is abandoned when you close your player. If your App must play several sounds, you will probably open your player just once and close it when you have finished with the last sound. If you close and reopen an Audio Session for each sound, the resulting Audio Focus changes will probably be unpleasant for the ears.
category
category is an optional parameter used only on iOS. This parameter can have the following values :
ambient
multiRoute
playAndRecord
playback
record
soloAmbient
audioProcessing
See iOS documentation to understand the meaning of this parameter.
mode
mode is an optional parameter used only on iOS. This parameter can have the following values :
modeDefault
modeGameChat
modeMeasurement
modeMoviePlayback
modeSpokenAudio
modeVideoChat
modeVideoRecording
modeVoiceChat
modeVoicePrompt
See iOS documentation to understand the meaning of this parameter.
audioFlags
audioFlags is a set of optional flags (used on iOS) :
outputToSpeaker
allowHeadset
allowEarPiece
allowBlueTooth
allowAirPlay
allowBlueToothA2DP
device
device is the output device (used on Android) :
speaker
headset
earPiece
blueTooth
blueToothA2DP
airPlay
withUI
withUI is a boolean that you set to true if you want to control your App from the lock screen (using startPlayerFromTrack() during your Audio Session).
You MUST ensure that the player has been closed when your widget is detached from the UI. Overload your widget's dispose() method to close the player when your widget is disposed. In this way you will reset the player and clean up the device resources, but the player will no longer be usable.
You must not open many Audio Sessions without closing them: do not open a new Audio Session for each sound you play.
openAudioSession() and closeAudioSession() return Futures. You may not use your player before the end of the initialization, so you will probably await the result of openAudioSession(). This result is the player itself, so that you can collapse instantiation and initialization together with myPlayer = await FlutterSoundPlayer().openAudioSession();
Example:
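A minimal sketch of the open/close lifecycle inside a State object (init() is a hypothetical initialization hook):

```dart
FlutterSoundPlayer myPlayer;

Future<void> init() async {
  // The Future completes with the player itself.
  myPlayer = await FlutterSoundPlayer().openAudioSession(
    focus: AudioFocus.requestFocusAndDuckOthers,
  );
}

@override
void dispose() {
  myPlayer.closeAudioSession(); // free the OS resources
  myPlayer = null;
  super.dispose();
}
```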
setAudioFocus()
Dart API: setAudioFocus().
The focus: parameter can have the following values :
AudioFocus.requestFocus (request the focus, but do nothing special with other Apps)
AudioFocus.requestFocusAndStopOthers (your app will have exclusive use of the output audio)
AudioFocus.requestFocusAndDuckOthers (if another App like Spotify uses the output audio, its volume will be lowered)
AudioFocus.requestFocusAndKeepOthers (your App will play sound above other Apps)
AudioFocus.requestFocusAndInterruptSpokenAudioAndMixWithOthers
AudioFocus.requestFocusTransient (for Android)
AudioFocus.requestFocusTransientExclusive (for Android)
AudioFocus.abandonFocus (your App will no longer have the audio focus)
Please look to openAudioSession() to understand the meaning of the other parameters.
Example:
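A minimal sketch, assuming an open recorder myRecorder:

```dart
await myRecorder.setAudioFocus(focus: AudioFocus.requestFocusAndStopOthers);
```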
onProgress
Dart API: onProgress.
The attribute onProgress is a stream on which Flutter Sound will post the recorder progression. You may listen to this stream to have feedback on the current recording.
Example:
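A minimal sketch, assuming an open recorder myRecorder and a subscription duration greater than zero (the duration and decibels fields are the usual RecordingDisposition fields):

```dart
var subscription = myRecorder.onProgress.listen((e) {
  print('recorded: ${e.duration}, level: ${e.decibels} dB');
});
```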
resumeRecorder()
Dart API: resumeRecorder().
On Android this API verb needs at least SDK-24. An exception is thrown if the recorder is not currently paused.
Example:
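A minimal sketch, assuming a paused recorder myRecorder:

```dart
await myRecorder.resumeRecorder(); // throws if the recorder is not paused
```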
instantiation
Dart API: constructor.
You do not need to instantiate the Flutter Sound Helper module. To use this module, you can just use the singleton offered by the module : flutterSoundHelper.
Example:
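A minimal sketch of using the singleton (the file path is hypothetical):

```dart
Duration d = await flutterSoundHelper.duration('/path/to/sound.aac');
```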
startPlayerFromStream()
Dart API: startPlayerFromStream().
This functionality needs, at least, an Android SDK >= 21.
The only codec currently supported is Codec.pcm16.
The only value currently possible for numChannels is 1.
sampleRate is the sample rate of the data you want to play.
Please look to the following notice.
Examples: you can look to the three provided examples :
This example shows how to play Live data, with Back Pressure from Flutter Sound
This example shows how to play Live data, without Back Pressure from Flutter Sound
This example shows how to play some real time sound effects.
Example 1:
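A minimal sketch of playing without back pressure, via foodSink (the PCM buffer is hypothetical):

```dart
await myPlayer.startPlayerFromStream(
  codec: Codec.pcm16,
  numChannels: 1,
  sampleRate: 48000,
);
myPlayer.foodSink.add(FoodData(aPcmBuffer)); // returns immediately, no back pressure
```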
Example 2:
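A minimal sketch of playing with back pressure, via feedFromStream():

```dart
await myPlayer.startPlayerFromStream(
  codec: Codec.pcm16,
  numChannels: 1,
  sampleRate: 48000,
);
await myPlayer.feedFromStream(aPcmBuffer); // waits until the buffer is consumed
```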
pauseRecorder()
Dart API: pauseRecorder().
On Android this API verb needs at least SDK-24. An exception is thrown if the recorder is not currently recording.
Example:
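A minimal sketch, assuming a recorder myRecorder that is currently recording:

```dart
await myRecorder.pauseRecorder(); // needs at least SDK-24 on Android
```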
isFFmpegAvailable()
Dart API: isFFmpegAvailable().
This verb is used to know at runtime whether FFmpeg is linked with the App.
Example:
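A minimal sketch:

```dart
bool ffmpegAvailable = await flutterSoundHelper.isFFmpegAvailable();
```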
getLastFFmpegReturnCode()
Dart API: getLastFFmpegReturnCode().
This simple verb is used to get the result of the last FFmpeg command.
Example:
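A minimal sketch:

```dart
int rc = await flutterSoundHelper.getLastFFmpegReturnCode(); // 0 means success
```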
stopRecorder()
Dart API: stopRecorder().
Use this verb to stop a recording. This verb never throws any exception. It is safe to call it anywhere, for example when the App is not sure of the current audio state and wants to recover a clean, reset state.
Example:
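A minimal sketch, assuming an open recorder myRecorder:

```dart
await myRecorder.stopRecorder(); // safe even if the recorder is already stopped
```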
waveToPCMBuffer()
Dart API: waveToPCMBuffer().
This verb is useful to convert a Wave buffer to a raw PCM buffer. Note that this verb is not asynchronous and does not return a Future.
It removes the Wave envelope from the PCM buffer.
Example:
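A minimal sketch, assuming dart:typed_data is imported and waveBuffer is a hypothetical Uint8List holding a Wave file:

```dart
Uint8List pcmBuffer = flutterSoundHelper.waveToPCMBuffer(inputBuffer: waveBuffer); // synchronous
```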
duration()
Dart API: duration().
This verb is used to get an estimation of the duration of a sound file. Be aware that it is just an estimation, based on the codec used and the sample rate.
Note : this verb uses FFmpeg and is not available in the LITE flavor of Flutter Sound.
Example:
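A minimal sketch (the file path is hypothetical):

```dart
Duration d = await flutterSoundHelper.duration('/path/to/sound.aac'); // just an estimation
```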
getLastFFmpegCommandOutput()
Dart API: getLastFFmpegCommandOutput().
This simple verb is used to get the output of the last FFmpeg command.
Example:
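A minimal sketch:

```dart
String output = await flutterSoundHelper.getLastFFmpegCommandOutput();
```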
startRecorder()
Dart API: startRecorder().
You use startRecorder() to start recording in an open session. startRecorder() has 7 optional parameters :
codec: the codec to be used. Please refer to the Codec compatibility Table to know which codecs are currently supported.
toFile: a path to the file being recorded
toStream: if you want to record to a Dart Stream. This new functionality needs, at least, an Android SDK >= 21 (23 is better).
sampleRate: the sample rate in Hertz
numChannels: the number of channels (1=monophony, 2=stereophony)
bitRate: the bit rate in bits per second
audioSource : possible values are :
defaultSource
microphone
voiceDownlink (if someone can explain to me what it is, I will be grateful ;-) )
path_provider can be useful if you want access to some directories on your device.
Flutter Sound does not take care of the recording permission. It is the App's responsibility to check or request the recording permission before recording.
Example:
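A minimal sketch, assuming an open recorder myRecorder and the recording permission already granted (the file name is hypothetical):

```dart
await myRecorder.startRecorder(
  toFile: 'myRecord.aac',
  codec: Codec.aacADTS,
);
```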
`openAudioSession()` and `closeAudioSession()`
Dart API: openAudioSession.
Dart API: closeAudioSession.
A recorder must be opened before it is used. A recorder corresponds to an Audio Session. In other words, you must open the Audio Session before using it, and when you have finished with a recorder, you must close it, that is, close your Audio Session. Opening a recorder takes resources inside the OS. Those resources are freed with the verb closeAudioSession().
You MUST ensure that the recorder has been closed when your widget is detached from the UI. Overload your widget's dispose() method to close the recorder when your widget is disposed. In this way you will reset the recorder and clean up the device resources, but the recorder will no longer be usable.
You must not open many recorders without releasing them: do not open a new Audio Session for each recording.
openAudioSession() and closeAudioSession() return Futures. You may not use your recorder before the end of the initialization, so you will probably await the result of openAudioSession(). This result is the recorder itself, so that you can collapse instantiation and initialization together with myRecorder = await FlutterSoundRecorder().openAudioSession();
The four optional parameters are used if you want to control the Audio Focus. Please look to FlutterSoundPlayer openAudioSession() to understand the meaning of those parameters.
Example:
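A minimal sketch of the open/close lifecycle inside a State object (init() is a hypothetical initialization hook):

```dart
FlutterSoundRecorder myRecorder;

Future<void> init() async {
  myRecorder = await FlutterSoundRecorder().openAudioSession();
}

@override
void dispose() {
  myRecorder.closeAudioSession(); // free the OS resources
  myRecorder = null;
  super.dispose();
}
```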
ffMpegGetMediaInformation()
Dart API: ffMpegGetMediaInformation().
This verb is used to get various information on a file.
The information obtained with ffMpegGetMediaInformation() is documented here.
Example:
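A minimal sketch; the Map return type is an assumption, check the API reference for the exact shape of the result:

```dart
Map<dynamic, dynamic> info =
    await flutterSoundHelper.ffMpegGetMediaInformation('/path/to/sound.mp3');
```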
waveToPCM()
Dart API: waveToPCM().
This verb is useful to convert a Wave file to a raw PCM file.
It removes the Wave envelope from the PCM file.
Example:
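A minimal sketch (file names are hypothetical):

```dart
await flutterSoundHelper.waveToPCM(inputFile: 'foo.wav', outputFile: 'foo.pcm');
```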
pcmToWave()
Dart API: pcmToWave().
This verb is useful to convert a raw PCM file to a Wave file.
It adds a Wave envelope to the PCM file, so that the file can be played back with startPlayer().
Note: the parameters numChannels and sampleRate are mandatory, and must match the actual PCM data. See here a discussion about raw PCM and WAVE file formats.
Example:
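A minimal sketch (file names and PCM parameters are hypothetical):

```dart
await flutterSoundHelper.pcmToWave(
  inputFile: 'foo.pcm',
  outputFile: 'foo.wav',
  numChannels: 1,
  sampleRate: 16000, // must match the actual PCM data
);
```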
UIPlayer
First import the module : import 'flutter_sound.dart'.
The SoundPlayerUI provides a playback widget styled after the HTML 5 audio player.
The player displays a loading indicator and allows the user to pause/resume/skip via the progress bar.
You can also pause/resume the player via an API call to SoundPlayerUI's state, using a GlobalKey.
The SoundPlayerUI widget allows you to play back audio from multiple sources:
File
Asset
URL
Buffer
When using the SoundPlayerUI you MUST pass a Track that has been initialised with a supported MediaFormat.
The widget needs to obtain the duration of the audio that it is playing, and that can only be done if we know the MediaFormat of the track.
If you pass a Track that wasn't constructed with a MediaFormat, then a MediaFormatException will be thrown.
The MediaFormat must also be natively supported by the OS. See mediaformat.md for additional details on checking for a supported format.
Sounds uses Track as the primary method of handing around audio data.
You can also dynamically load a Track when the user clicks the 'Play' button on the SoundPlayerUI widget. This allows you to delay the decision on what Track is going to be played until the user clicks the 'Play' button.
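A minimal sketch of embedding the widget; Track.fromAsset and the MediaFormat class name are assumptions, so check the package documentation for the exact constructors:

```dart
// Build a SoundPlayerUI for a track whose MediaFormat is known.
Widget build(BuildContext context) {
  var track = Track.fromAsset('assets/rock.mp3',
      mediaFormat: Mp3MediaFormat()); // hypothetical asset and format class
  return SoundPlayerUI.fromTrack(track);
}
```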
convertFile()
Dart API: convertFile().
This verb is useful to convert a sound file to a new format.
infile is the file path of the file you want to convert
codecin is the actual file format
outfile is the path of the file you want to create
codecout is the new file format
Be careful : outfile and codecout must be compatible. The output file extension must be a correct file extension for the new format.
Note : this verb uses FFmpeg and is not available in the LITE flavor of Flutter Sound.
Example:
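A minimal sketch; the positional argument order follows the parameter list above (file names are hypothetical):

```dart
await flutterSoundHelper.convertFile('foo.aac', Codec.aacADTS, 'foo.mp3', Codec.mp3);
```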
pcmToWaveBuffer()
Dart API: pcmToWaveBuffer().
This verb is useful to convert a raw PCM buffer to a Wave buffer.
It adds a Wave envelope in front of the PCM buffer, so that the buffer can be played back with startPlayerFromBuffer().
Note: the parameters numChannels and sampleRate are mandatory, and must match the actual PCM data. See here a discussion about raw PCM and WAVE file formats.
Example:
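A minimal sketch, assuming dart:typed_data is imported and pcmBuffer is a hypothetical Uint8List of raw PCM data:

```dart
Uint8List waveBuffer = await flutterSoundHelper.pcmToWaveBuffer(
  inputBuffer: pcmBuffer,
  numChannels: 1,
  sampleRate: 16000, // must match the actual PCM data
);
```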
executeFFmpegWithArguments()
Dart API: executeFFmpegWithArguments().
This verb is a wrapper for the great FFmpeg application. The command "man ffmpeg" (if you have installed ffmpeg on your computer) will give you much information. If you do not have ffmpeg on your computer, you will easily find plenty of documentation on this great program on the internet.
Example:
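A minimal sketch (the command line is hypothetical; any regular ffmpeg argument list works):

```dart
int rc = await flutterSoundHelper.executeFFmpegWithArguments([
  '-i', 'foo.mp3',
  'foo.wav',
]);
```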
The τ API.
τ is composed of 4 modules :
FlutterSoundPlayer, which deals with everything about playback
FlutterSoundRecorder, which deals with everything about recording
FlutterSoundHelper, which offers some convenient tools
FlutterSoundUI, which offers some widgets ready to be used out of the box
To use Flutter Sound you just do :
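(the package URI below is the standard one; adjust it to your pubspec if your setup differs)

```dart
import 'package:flutter_sound/flutter_sound.dart';
```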
This will import all the necessary Dart interfaces.
Instantiate one or more players. A good place to do that is in your init() function. It is also possible to instantiate the players "on the fly", when needed.
Open it. You cannot do anything with a closed player. An audio session is then created.
Use the various verbs implemented by the players :
startPlayer()
startPlayerFromStream()
startPlayerFromBuffer()
setVolume()
stopPlayer()
...
Close your players.
It is important to close every open player to free the resources taken by the audio session. A good place to do that is in the dispose() procedure.
Instantiate your recorder. A good place to do that is in your init() function.
Open it. You cannot do anything with a closed recorder. An audio session is then created.
Use the various verbs implemented by the recorders :
startRecorder()
pauseRecorder()
resumeRecorder()
stopRecorder()
...
Close your recorder.
It is important to close it to free the resources taken by the audio session. A good place to do that is in the dispose() procedure. A lifecycle sketch tying these steps together follows.
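A minimal lifecycle sketch (names are hypothetical, error handling omitted):

```dart
FlutterSoundPlayer myPlayer = FlutterSoundPlayer();       // 1. instantiate
FlutterSoundRecorder myRecorder = FlutterSoundRecorder();

Future<void> init() async {
  await myPlayer.openAudioSession();                      // 2. open
  await myRecorder.openAudioSession();
}

Future<void> recordThenPlay() async {
  await myRecorder.startRecorder(toFile: 'foo.aac');      // 3. use the verbs
  await myRecorder.stopRecorder();
  await myPlayer.startPlayer(fromURI: 'foo.aac');
}

void dispose() {
  myPlayer.closeAudioSession();                           // 4. close
  myRecorder.closeAudioSession();
}
```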