Web Speech API

This is an experimental technology.
Check the Browser compatibility table carefully before using this in production.

The Web Speech API enables you to incorporate voice data into web apps. It has two parts: SpeechSynthesis (Text-to-Speech) and SpeechRecognition (Asynchronous Speech Recognition).

Web Speech Concepts and Usage

The Web Speech API enables web apps to handle voice data. There are two components to this API:

  • Speech recognition is accessed via the SpeechRecognition interface, which provides the ability to recognize voice context from an audio input (normally via the device's default speech recognition service) and respond appropriately. Generally you'll use the interface's constructor to create a new SpeechRecognition object, which has a number of event handlers available for detecting when speech is input through the device's microphone. The SpeechGrammar interface represents a container for a particular set of grammar that your app should recognize. Grammar is defined using the JSpeech Grammar Format (JSGF).
  • Speech synthesis is accessed via the SpeechSynthesis interface, a text-to-speech component that allows programs to read out their text content (normally via the device's default speech synthesizer). Different voice types are represented by SpeechSynthesisVoice objects, and different parts of text that you want to be spoken are represented by SpeechSynthesisUtterance objects. You can get these spoken by passing them to the SpeechSynthesis.speak() method. A minimal sketch combining both parts follows this list.
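
For a quick sense of how the two parts fit together, here is a minimal, illustrative sketch (not production code) that listens for a single phrase and then speaks it back; it assumes the page is served from a web server and that the user grants microphone access:

  // Fall back to the prefixed constructor used by Chromium-based browsers.
  const SpeechRecognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;

  const recognition = new SpeechRecognition();
  recognition.lang = 'en-US';

  recognition.onresult = (event) => {
    // Take the transcript of the first (and, by default, only) result.
    const transcript = event.results[0][0].transcript;
    const utterance = new SpeechSynthesisUtterance(`You said: ${transcript}`);
    window.speechSynthesis.speak(utterance);
  };

  recognition.start();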

For more details on using these features, see Using the Web Speech API.

Web Speech API Interfaces

Speech recognition

SpeechRecognition
The controller interface for the recognition service; this also handles the SpeechRecognitionEvent sent from the recognition service.
SpeechRecognitionAlternative
Represents a single word that has been recognized by the speech recognition service.
SpeechRecognitionError
Represents error messages from the recognition service.
SpeechRecognitionEvent
The event object for the result and nomatch events; it contains all the data associated with an interim or final speech recognition result.
SpeechGrammar
The words or patterns of words that we want the recognition service to recognize.
SpeechGrammarList
Represents a list of SpeechGrammar objects.
SpeechRecognitionResult
Represents a single recognition match, which may contain multiple SpeechRecognitionAlternative objects.
SpeechRecognitionResultList
Represents a list of SpeechRecognitionResult objects, or a single one if results are being captured in continuous mode.
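
As an illustration of how these interfaces relate, the sketch below (illustrative only; the grammar string and alternative count are arbitrary choices for the example) builds a SpeechGrammarList from a small JSGF grammar, starts recognition, and walks the SpeechRecognitionResultList delivered by the result event, where each SpeechRecognitionResult contains one or more SpeechRecognitionAlternative objects:

  const SpeechRecognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  const SpeechGrammarList =
    window.SpeechGrammarList || window.webkitSpeechGrammarList;

  // A small JSGF grammar restricting recognition to a few colour words.
  const grammar =
    '#JSGF V1.0; grammar colors; public <color> = red | green | blue;';
  const grammarList = new SpeechGrammarList();
  grammarList.addFromString(grammar, 1);

  const recognition = new SpeechRecognition();
  recognition.grammars = grammarList;
  recognition.maxAlternatives = 3;

  recognition.onresult = (event) => {
    // event.results is a SpeechRecognitionResultList; each entry is a
    // SpeechRecognitionResult holding SpeechRecognitionAlternative objects.
    for (let i = 0; i < event.results.length; i++) {
      const result = event.results[i];
      for (let j = 0; j < result.length; j++) {
        console.log(result[j].transcript, result[j].confidence);
      }
    }
  };

  recognition.onerror = (event) => {
    console.error('Recognition error:', event.error);
  };

  recognition.start();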

Speech synthesis

SpeechSynthesis
The controller interface for the speech service; this can be used to retrieve information about the synthesis voices available on the device, start and pause speech, and other commands besides.
SpeechSynthesisErrorEvent
Contains information about any errors that occur while processing SpeechSynthesisUtterance objects in the speech service.
SpeechSynthesisEvent
Contains information about the current state of SpeechSynthesisUtterance objects that have been processed in the speech service.
SpeechSynthesisUtterance
Represents a speech request. It contains the content the speech service should read and information about how to read it (e.g., language, pitch, and volume).
SpeechSynthesisVoice
Represents a voice that the system supports. Every SpeechSynthesisVoice belongs to a particular speech service and carries information about its language, name, and URI.
Window.speechSynthesis
Specced out as part of a [NoInterfaceObject] interface called SpeechSynthesisGetter, and implemented by the Window object, the speechSynthesis property provides access to the SpeechSynthesis controller, and is therefore the entry point to speech synthesis functionality.
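
The synthesis side can be exercised with a short, illustrative sketch (the chosen voice, pitch, and rate values are arbitrary): it lists the available SpeechSynthesisVoice objects, configures a SpeechSynthesisUtterance, and passes it to SpeechSynthesis.speak() via the Window.speechSynthesis property:

  const synth = window.speechSynthesis;

  function speakGreeting() {
    const voices = synth.getVoices(); // SpeechSynthesisVoice objects
    const utterance = new SpeechSynthesisUtterance('Hello from the Web Speech API');

    // Prefer an English voice if one is available (purely illustrative).
    const english = voices.find((voice) => voice.lang.startsWith('en'));
    if (english) {
      utterance.voice = english;
    }

    utterance.pitch = 1.2; // 0 to 2
    utterance.rate = 0.9;  // 0.1 to 10
    utterance.volume = 1;  // 0 to 1

    utterance.onend = () => console.log('Finished speaking.');
    utterance.onerror = (event) => console.error('Synthesis error:', event.error);

    synth.speak(utterance);
  }

  // Voices may load asynchronously, so wait for voiceschanged if the list is empty.
  if (synth.getVoices().length > 0) {
    speakGreeting();
  } else {
    synth.addEventListener('voiceschanged', speakGreeting, { once: true });
  }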

Examples

The Web Speech API repo on GitHub contains demos to illustrate speech recognition and synthesis.

Specifications

Specification: Web Speech API
Status: Draft
Comment: Initial definition

Browser compatibility

SpeechRecognition

SpeechRecognition is experimental; expect behavior to change in the future. In all supporting browsers it is implemented with the webkit vendor prefix, and you'll need to serve your code through a web server for recognition to work.

Desktop:

  • Chrome: full support 33 (prefixed)
  • Edge: full support ≤79 (prefixed)
  • Firefox: no support
  • Internet Explorer: no support
  • Opera: no support
  • Safari: no support

Mobile:

  • WebView Android: full support 4.4.3 (prefixed)
  • Chrome for Android: full support 33 (prefixed)
  • Firefox for Android: no support
  • Opera for Android: no support
  • Safari on iOS: no support
  • Samsung Internet: full support 2.0 (prefixed)
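
Because the supporting browsers above expose the recognition interfaces behind the webkit prefix, a common pattern is to fall back to the prefixed constructors before using them; the sketch below is illustrative only:

  // Use the unprefixed constructors when available, otherwise the webkit-prefixed ones.
  const SpeechRecognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  const SpeechGrammarList =
    window.SpeechGrammarList || window.webkitSpeechGrammarList;

  if (SpeechRecognition) {
    const recognition = new SpeechRecognition();
    recognition.lang = 'en-US';
    recognition.onresult = (event) => {
      console.log('Heard:', event.results[0][0].transcript);
    };
    recognition.start();
  } else {
    console.log('Speech recognition is not supported in this browser.');
  }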

SpeechSynthesis

SpeechSynthesis is experimental; expect behavior to change in the future.

Desktop:

  • Chrome: full support 33
  • Edge: full support ≤18
  • Firefox: full support 49
  • Internet Explorer: no support
  • Opera: full support 21
  • Safari: full support 7

Mobile:

  • WebView Android: full support 4.4.3
  • Chrome for Android: full support 33
  • Firefox for Android: full support 62 (from version 61 until version 62 (exclusive), this feature was disabled behind the media.webspeech.synth.enabled preference, which needs to be set to true; to change preferences in Firefox, visit about:config)
  • Opera for Android: no support
  • Safari on iOS: full support 7
  • Samsung Internet: full support 3.0

See also