Search completed in 1.15 seconds.
168 results for "voice":
SpeechSynthesisVoice.voiceURI - Web APIs
The voiceURI read-only property of the SpeechSynthesisVoice interface returns the type of URI and location of the speech synthesis service for this voice.
... Syntax var myVoiceURI = speechSynthesisVoiceInstance.voiceURI; Value A DOMString representing the URI of the voice.
... Examples for(i = 0; i < voices.length ; i++) { var option = document.createElement('option'); option.textContent = voices[i].name + ' (' + voices[i].lang + ')'; if(voices[i].default) { option.textContent += ' -- default'; } console.log(voices[i].voiceURI); // On Mac, this returns URNs, for example 'urn:moz-tts:osx:com.apple.speech.synthesis.voice.daniel' option.setAttribute('data-lang', voices[i].lang); option.setAttribute('data-name', voices[i].name); voiceSelect.appendChild(option); } Specifications Specification Status Comment Web Speech API The definition of 'voiceURI' in that specification.
SpeechSynthesisVoice - Web APIs
The SpeechSynthesisVoice interface of the Web Speech API represents a voice that the system supports.
... Every SpeechSynthesisVoice has its own relative speech service, including information about language, name and URI.
... Properties SpeechSynthesisVoice.default Read only A Boolean indicating whether the voice is the default voice for the current app language (true) or not (false). SpeechSynthesisVoice.lang Read only Returns a BCP 47 language tag indicating the language of the voice.
...And 4 more matches
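A minimal sketch of reading these properties, assuming a browser that implements the Web Speech API and that the voice list is already populated:
var voices = window.speechSynthesis.getVoices();
voices.forEach(function(voice) {
  // name, lang and default are the read-only properties described above
  console.log(voice.name + ' (' + voice.lang + ')' + (voice.default ? ' -- default' : ''));
});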
SpeechSynthesis.getVoices() - Web APIs
The getVoices() method of the SpeechSynthesis interface returns a list of SpeechSynthesisVoice objects representing all the available voices on the current device.
... Syntax speechSynthesisInstance.getVoices(); Parameters None.
... Return value A list (array) of SpeechSynthesisVoice objects.
...And 3 more matches
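Because some browsers populate the list asynchronously, a defensive sketch (assuming Web Speech API support) is to retry once the voiceschanged event fires:
var synth = window.speechSynthesis;
var voices = synth.getVoices();
if (voices.length === 0) {
  // The list may arrive later; see the voiceschanged event below.
  synth.addEventListener('voiceschanged', function() {
    voices = synth.getVoices();
  });
}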
SpeechSynthesisUtterance.voice - Web APIs
The voice property of the SpeechSynthesisUtterance interface gets and sets the voice that will be used to speak the utterance.
... This should be set to one of the SpeechSynthesisVoice objects returned by SpeechSynthesis.getVoices().
... If not set by the time the utterance is spoken, the voice used will be the most suitable default voice available for the utterance's lang setting.
...And 3 more matches
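A short sketch of selecting a voice for an utterance; the 'en-US' filter here is an arbitrary illustrative choice:
var synth = window.speechSynthesis;
var utterance = new SpeechSynthesisUtterance('Hello');
var voices = synth.getVoices();
// Fall back to the default voice (null) if no en-US voice exists.
utterance.voice = voices.find(function(v) { return v.lang === 'en-US'; }) || null;
synth.speak(utterance);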
RTCRtpSynchronizationSource.voiceActivityFlag - Web APIs
The read-only voiceActivityFlag property of the RTCRtpSynchronizationSource interface indicates whether or not the most recent RTP packet on the source includes voice activity.
... This is only present if the stream is using the voice activity detection feature; see the RTCOfferOptions flag voiceActivityDetection.
... Syntax var voiceActivity = rtcRtpSynchronizationSource.voiceActivityFlag Value A Boolean value which is true if voice activity is present in the most recently received RTP packet played by the associated source, or false if voice activity is not present.
...And 2 more matches
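A hedged sketch of reading the flag, assuming pc is an existing RTCPeerConnection with audio receivers:
pc.getReceivers().forEach(function(receiver) {
  receiver.getSynchronizationSources().forEach(function(source) {
    // voiceActivityFlag is omitted when voice activity detection is unavailable.
    if (source.voiceActivityFlag !== undefined) {
      console.log('Voice activity: ' + source.voiceActivityFlag);
    }
  });
});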
SpeechSynthesis.onvoiceschanged - Web APIs
The onvoiceschanged property of the SpeechSynthesis interface represents an event handler that will run when the list of SpeechSynthesisVoice objects that would be returned by the SpeechSynthesis.getVoices() method has changed (when the voiceschanged event fires). This may occur when speech synthesis is being done on the server side and the voices list is determined asynchronously, or when client-side voices are installed or uninstalled while a speech synthesis application is running.
... Syntax speechSynthesisInstance.onvoiceschanged = function() { ...
... }; Examples This could be used to populate a list of voices that the user can choose between when the event fires (see our Speak easy synthesis demo). Note that Firefox doesn't support it at present, and will just return a list of voices when SpeechSynthesis.getVoices() is called.
...And 2 more matches
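A minimal handler sketch, assuming Web Speech API support:
window.speechSynthesis.onvoiceschanged = function() {
  console.log(window.speechSynthesis.getVoices().length + ' voices now available');
};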
RTCOfferAnswerOptions.voiceActivityDetection - Web APIs
The voiceActivityDetection property of the RTCOfferAnswerOptions dictionary is used to specify whether or not to use automatic voice detection for the audio on an RTCPeerConnection.
... The default value, true, indicates that voice detection should be used and that, if possible, the user agent should automatically disable or mute outgoing audio when the audio source is not sensing a human voice.
... Syntax var options = { voiceActivityDetection: trueOrFalse }; Value A Boolean value indicating whether or not the connection should use voice detection once running.
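A sketch of passing the option when creating an offer, assuming pc is an existing RTCPeerConnection:
pc.createOffer({ voiceActivityDetection: false })
  .then(function(offer) { return pc.setLocalDescription(offer); })
  .catch(function(err) { console.error(err); });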
SpeechSynthesis: voiceschanged event - Web APIs
The voiceschanged event of the Web Speech API is fired when the list of SpeechSynthesisVoice objects that would be returned by the SpeechSynthesis.getVoices() method has changed. Bubbles No Cancelable No Interface Event Event handler property onvoiceschanged Examples This could be used to repopulate a list of voices that the user can choose between when the event fires.
... You can use the voiceschanged event in an addEventListener method: var synth = window.speechSynthesis; synth.addEventListener('voiceschanged', function() { var voices = synth.getVoices(); for(i = 0; i < voices.length ; i++) { var option = document.createElement('option'); option.textContent = voices[i].name + ' (' + voices[i].lang + ')'; option.setAttribute('data-lang', voices[i].lang); option.setAttribute('data-name', voices[i].name); voiceSelect.appendChild(option); } }); Or use the onvoiceschanged event handler property: synth.onvoiceschanged = function() { var voices = synth.getVoices(); for(i = 0; i < voices.length ; i++) { var option = document.createElement('option'); option.textContent = voices[i].name + ' (' + voices[i].lang + ')'; option.
...setAttribute('data-lang', voices[i].lang); option.setAttribute('data-name', voices[i].name); voiceSelect.appendChild(option); } } Specifications Specification Status Comment Web Speech API The definition of 'speech synthesis events' in that specification.
SpeechSynthesisVoice.default - Web APIs
The default read-only property of the SpeechSynthesisVoice interface returns a Boolean indicating whether the voice is the default voice for the current app (true) or not (false). Note: for some devices, it might be the default voice for the voice's language.
... Syntax var amIDefault = speechSynthesisVoiceInstance.default; Value A Boolean.
... Examples for(i = 0; i < voices.length ; i++) { var option = document.createElement('option'); option.textContent = voices[i].name + ' (' + voices[i].lang + ')'; if(voices[i].default) { option.textContent += ' -- default'; } option.setAttribute('data-lang', voices[i].lang); option.setAttribute('data-name', voices[i].name); voiceSelect.appendChild(option); } Specifications Specification Status Comment Web Speech API The definition of 'default' in that specification.
SpeechSynthesisVoice.lang - Web APIs
The lang read-only property of the SpeechSynthesisVoice interface returns a BCP 47 language tag indicating the language of the voice.
... Syntax var myLang = speechSynthesisVoiceInstance.lang; Value A DOMString representing the language of the voice.
... Examples for(i = 0; i < voices.length ; i++) { var option = document.createElement('option'); option.textContent = voices[i].name + ' (' + voices[i].lang + ')'; if(voices[i].default) { option.textContent += ' -- default'; } option.setAttribute('data-lang', voices[i].lang); option.setAttribute('data-name', voices[i].name); voiceSelect.appendChild(option); } Specifications Specification Status Comment Web Speech API The definition of 'lang' in that specification.
SpeechSynthesisVoice.localService - Web APIs
The localService read-only property of the SpeechSynthesisVoice interface returns a Boolean indicating whether the voice is supplied by a local speech synthesizer service (true) or a remote speech synthesizer service (false). This property is provided to allow differentiation in the case that some voice options are provided by a remote service; remote voices might have extra latency, bandwidth or cost associated with them, so the distinction may be useful.
... Syntax var amILocal = speechSynthesisVoiceInstance.localService; Value A Boolean.
... Examples for(i = 0; i < voices.length ; i++) { var option = document.createElement('option'); option.textContent = voices[i].name + ' (' + voices[i].lang + ')'; if(voices[i].default) { option.textContent += ' -- default'; } console.log(voices[i].localService); option.setAttribute('data-lang', voices[i].lang); option.setAttribute('data-name', voices[i].name); voiceSelect.appendChild(option); } Specifications Specification Status Comment Web Speech API The definition of 'localService' in that specification.
SpeechSynthesisVoice.name - Web APIs
The name read-only property of the SpeechSynthesisVoice interface returns a human-readable name that represents the voice.
... Syntax var voiceName = speechSynthesisVoiceInstance.name; Value A DOMString representing the name of the voice.
... Examples for(i = 0; i < voices.length ; i++) { var option = document.createElement('option'); option.textContent = voices[i].name + ' (' + voices[i].lang + ')'; if(voices[i].default) { option.textContent += ' -- default'; } option.setAttribute('data-lang', voices[i].lang); option.setAttribute('data-name', voices[i].name); voiceSelect.appendChild(option); } Specifications Specification Status Comment Web Speech API The definition of 'name' in that specification.
ARIA Test Cases - Accessibility
Tested UA/AT combinations: Dragon 10 with Firefox 3 and IE 8 beta 2; JAWS 9 & 10 with Firefox 3; JAWS 9 & 10 with IE beta 2; NVDA 0.6p2 with Firefox 3; Orca with Firefox 3; Window-Eyes 7 with IE 8 beta 2 and Firefox 3; VoiceOver (Leopard) with Safari 4.0.2; Zoom (Leopard) with Safari 4.0.2, Firefox 3.x and Opera 9.x; ZoomText 9.1 with Firefox 3 and IE 8 beta 2. Test case structure Test cases are organized as follows: test case links, test details, expected AT behavior, markup, notes, results table. AT Firefox IE Opera Safari JAWS 9 - - - ...
...fail - - VoiceOver (Leopard) n/a n/a - fail Window-Eyes - - - - NVDA - - - - Zoom (Leopard) pass n/a pass pass ZoomText - - - - Orca - - - - Table legend: - no info/test yet; n/a not applicable (not supported technically); pass expected behaviour met; fail expected behaviour not met. 1.
... Markup used: role="alert" Notes: Results: AT Firefox IE Opera Safari JAWS 9 passed fail n/a n/a JAWS 10 passed fail - - VoiceOver (Leopard) n/a n/a - fail Window-Eyes passed - not announced as "alert" fail - - NVDA passed n/a - - Zoom (Leopard) pass n/a pass pass ZoomText - - - - Orca - - - - FFD - an interesting thing to note is that, when focus moves to an alert, JAWS loses its place on the page...
...And 47 more matches
Using the Web Speech API - Web APIs
Generally, the default speech recognition system available on the device will be used for the speech recognition — most modern OSes have a speech recognition system for issuing voice commands.
... The Web Speech API has a main controller interface for this — SpeechSynthesis — plus a number of closely-related interfaces for representing text to be synthesised (known as utterances), voices to be used for the utterance, etc.
...This includes a set of form controls for entering text to be synthesised, and setting the pitch, rate, and voice to use when the text is uttered.
...And 12 more matches
Web audio codec guide - Web media technologies
Codec name (short) Full codec name Container support AAC Advanced Audio Coding MP4, ADTS, 3GP ALAC Apple Lossless Audio Codec MP4, QuickTime (MOV) AMR Adaptive Multi-Rate 3GP FLAC Free Lossless Audio Codec MP4, Ogg, FLAC G.711 Pulse Code Modulation (PCM) of voice frequencies RTP / WebRTC G.722 7 kHz audio coding within 64 kbps (for telephony/VoIP) RTP / WebRTC MP3 MPEG-1 Audio Layer III MP4, ADTS, MPEG-1, 3GP Opus Opus WebM, MP4, Ogg Vorbis Vorbis WebM, Ogg [1] When MPEG-1 Audio Layer III codec data is stored in an MPEG file, and there is no video track in the file, the file is typically...
...In addition to being used for real-time telephony, AMR audio may be used for voicemail and other short audio recordings.
... As a speech-specific codec, AMR is essentially useless for any other content, including audio containing only singing voices.
...And 12 more matches
Using COM from js-ctypes
Speech synthesis example Let's start with the following C++ code, which invokes the Microsoft Speech API, says "Hello, Firefox!" with the system default voice, then waits until the speaking is done.
... #include <sapi.h> int main(void) { if (SUCCEEDED(CoInitialize(NULL))) { ISpVoice* pVoice = NULL; HRESULT hr = CoCreateInstance(CLSID_SpVoice, NULL, CLSCTX_ALL, IID_ISpVoice, (void**)&pVoice); if (SUCCEEDED(hr)) { pVoice->Speak(L"Hello, Firefox!", SPF_DEFAULT, NULL); pVoice->Release(); } } // MSDN documentation says that even if CoInitialize fails, CoUninitialize // must be called CoUninitialize(); return 0; } To run the code, save it as test.cpp, and run the following command in the directory (needs Visual Studio).
... #include <sapi.h> int main(void) { if (SUCCEEDED(CoInitialize(NULL))) { struct ISpVoice* pVoice = NULL; HRESULT hr = CoCreateInstance(&CLSID_SpVoice, NULL, CLSCTX_ALL, &IID_ISpVoice, (void**)&pVoice); if (SUCCEEDED(hr)) { pVoice->lpVtbl->Speak(pVoice, L"Hello, Firefox!", 0, NULL); pVoice->lpVtbl->Release(pVoice); } } // MSDN documentation says that even if CoInitialize fails, CoUni...
...And 11 more matches
Index - Web APIs
3389 RTCOfferAnswerOptions.voiceActivityDetection Audio, Configuration, Offer, Options, Property, RTCOfferAnswerOptions, RTCPeerConnection, Reference, SDP, Voice, WebRTC, WebRTC API, Answer, Detection The voiceActivityDetection property of the RTCOfferAnswerOptions dictionary is used to specify whether or not to use automatic voice detection for the audio on an RTCPeerConnection.
... The default value, true, indicates that voice detection should be used and that, if possible, the user agent should automatically disable or mute outgoing audio when the audio source is not sensing a human voice.
... 3513 RTCRtpSynchronizationSource.voiceActivityFlag API, Media, Property, RTCRtpSynchronizationSource, RTP, Voice Activity Detection, Voice Detection, WebRTC, voiceActivityFlag The read-only voiceActivityFlag property of the RTCRtpSynchronizationSource interface indicates whether or not the most recent RTP packet on the source includes voice activity.
...And 11 more matches
Mobile accessibility - Learn web development
Let's look at the main two: TalkBack on Android and VoiceOver on iOS.
... iOS VoiceOver A mobile version of VoiceOver is built into the iOS operating system.
... To turn it on, go to your Settings app and select Accessibility > VoiceOver.
...And 7 more matches
SpeechSynthesis - Web APIs
The SpeechSynthesis interface of the Web Speech API is the controller interface for the speech service; this can be used to retrieve information about the synthesis voices available on the device, start and pause speech, and other commands besides.
... SpeechSynthesis.getVoices() Returns a list of SpeechSynthesisVoice objects representing all the available voices on the current device.
... voiceschanged Fired when the list of SpeechSynthesisVoice objects that would be returned by the SpeechSynthesis.getVoices() method has changed.
...And 5 more matches
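A small sketch of the controller commands, assuming Web Speech API support:
var synth = window.speechSynthesis;
synth.speak(new SpeechSynthesisUtterance('Hello from the Web Speech API'));
if (synth.speaking) {
  synth.pause();  // suspend the utterance mid-speech
  synth.resume(); // then carry on from where it stopped
}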
Web Speech API - Web APIs
The Web Speech API enables you to incorporate voice data into web apps.
... The Web Speech API has two parts: SpeechSynthesis (text-to-speech), and SpeechRecognition (asynchronous speech recognition). Web speech concepts and usage The Web Speech API makes web apps able to handle voice data.
... There are two components to this API: speech recognition is accessed via the SpeechRecognition interface, which provides the ability to recognize voice context from an audio input (normally via the device's default speech recognition service) and respond appropriately.
...And 4 more matches
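A recognition sketch; note that some browsers only expose the interface with a webkit prefix:
var SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
var recognition = new SpeechRecognition();
recognition.onresult = function(event) {
  // The first alternative of the first result holds the transcript.
  console.log('Heard: ' + event.results[0][0].transcript);
};
recognition.start();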
MathML Accessibility in Mozilla
Mac: MathML support is provided by VoiceOver.
...Hence basic support is available in Gecko 41.0 (Firefox 41.0 / Thunderbird 41.0 / SeaMonkey 2.38) and we are still trying to keep in sync with WebKit/VoiceOver.
... Note that VoiceOver is proprietary, so we do not have control over improvements to MathML accessibility in VoiceOver.
...And 3 more matches
Using Objective-C from js-ctypes
It uses the default system voice and waits until the speaking is done.
... #import <AppKit/AppKit.h> int main(void) { NSSpeechSynthesizer* synth = [[NSSpeechSynthesizer alloc] initWithVoice: nil]; [synth startSpeakingString: @"Hello, Firefox!"]; // wait until start speaking.
... id NSSpeechSynthesizer = (id)objc_getClass("NSSpeechSynthesizer"); id tmp = objc_msgSend(NSSpeechSynthesizer, alloc); Selector for a method with arguments In this case, [NSSpeechSynthesizer initWithVoice:] takes one argument; the selector name has a trailing colon.
...And 3 more matches
Advanced techniques: Creating and sequencing audio - Web APIs
There are four different sounds, or voices, which can be played.
... Each voice has four buttons, which represent four beats in one bar of music.
... Each voice also has local controls, which allow you to manipulate the effects or parameters particular to each technique we are using to create those voices.
...And 3 more matches
Digital audio concepts - Web media technologies
This is especially useful for voice-only audio signals.
...In particular, the waveform for music is almost always more complex than that of an audio sample that contains only human voices.
... In addition, the human voice uses a small portion of the range of audio frequencies the human ear can detect.
...And 3 more matches
Index - MDN Web Docs Glossary: Definitions of Web-related terms
224 ISP Glossary, ISP, Internet Service Provider, Web, WebMechanics An ISP (Internet Service Provider) sells internet access, and sometimes email, web hosting, and voice over IP, either by a dial-up connection over a phone line (formerly more common), or through a broadband connection such as a cable modem or DSL service.
...MathML has other applications also, including scientific content and voice synthesis.
... 401 Screen reader Accessibility, Glossary, Screen reader, Voice over, VoiceOver Screen readers are software applications that attempt to convey what is seen on a screen display in a non-visual way, usually as text to speech, but also into braille or sound icons.
...And 2 more matches
Screen reader - MDN Web Docs Glossary: Definitions of Web-related terms
VoiceOver macOS comes with VoiceOver, a built-in screen reader.
... To access VoiceOver, go to System Preferences > Accessibility > VoiceOver.
... You can also toggle VoiceOver on and off with Fn + Command + F5.
...And 2 more matches
Arrays - Learn web development
Maybe we've got a series of product items and their prices stored in an array, and we want to loop through them all and print them out on an invoice, while totaling all the prices together and printing out the total price at the bottom.
...If we had 10 items to add to the invoice it would already be annoying, but what about 100 items, or 1000?
... Let's return to the example we described earlier — printing out product names and prices on an invoice, then totaling the prices and printing them at the bottom.
...And 2 more matches
Handling common accessibility problems - Learn web development
Some are built into the operating system, like VoiceOver (Mac OS X and iOS), ChromeVox (on Chromebooks), and TalkBack (Android).
... VoiceOver VoiceOver (VO) comes free with your Mac/iPhone/iPad, so it's useful for testing on desktop/mobile if you use Apple products.
...In the keyboard shortcuts, "VO" means "the VoiceOver modifier".
...And 2 more matches
Software accessibility: Where are we today?
Some examples of these assistive devices and software include: screen reading software, which speaks text displayed on the screen using hardware or software text-to-speech, and which allows a blind person to use the keyboard to simulate mouse actions; alternate input devices, which allow people with physical disabilities to use alternatives to a keyboard and mouse; voice recognition software, which allows a person to simulate typing on a keyboard or selecting with a mouse by speaking into the computer; screen magnification software, which allows a low-vision computer user to more easily read portions of the screen; comprehension software, which allows a dyslexic or learning-disabled computer user to see and hear text as it is manipulated on the computer screen ...
...Additionally, text-to-speech is used by those who cannot speak, in place of their own voice.
...Similarly, voice recognition software often needs information about the context of a user's interaction, in order to make sense of what the user is speaking.
...Realizing that complete accessibility was not possible without cooperation between applications and accessibility aids such as screen reading software or voice recognition software, Microsoft Active Accessibility defines a Windows-based standard by which applications can communicate context and other pertinent information to accessibility aids.
Key Values - Web APIs
GDK_KEY_dead_voiced_sound (0xFE5E) Qt::Key_Dead_Voiced_Sound (0x0100125E) ゙ GDK_KEY_dead_semivoiced_sound (0xFE5F) Qt::Key_Dead_Semivoiced_Sound (0x0100125F) ゚ GDK_KEY_dead_belowdot (0xFE60) Qt::Key_Dead_Belowdot (0x01001260) ̣̣ GDK_KEY_dead_hook (0xFE61) Qt::Key_Dead_Hook (0x01001261) ̡ GDK_KEY_dead_horn (0xFE62) Qt::Key_D...
... KEYCODE_MANNER_MODE (205) "VoiceDial" The Voice Dial key.
... Initiates voice dialing.
... Qt::Key_VoiceDial (0x01100008) KEYCODE_VOICE_ASSIST (231) [1] Prior to Firefox 37, the Home button generated a key code of "Exit".
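A minimal sketch of observing such key values; "VoiceDial" is only delivered on devices that actually have the key:
document.addEventListener('keydown', function(event) {
  if (event.key === 'VoiceDial') {
    console.log('Voice dial key pressed');
  }
});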
SpeechSynthesisErrorEvent.error - Web APIs
language-unavailable No appropriate voice was available for the language set in SpeechSynthesisUtterance.lang.
... voice-unavailable The voice set in SpeechSynthesisUtterance.voice was not available.
... Examples var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } synth.speak(utterThis); utterThis.onerror = function(event) { console.error('An error has occurred with the speech synthesis: ' + event.error); } inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'error' in that specification.
SpeechSynthesisUtterance.pitch - Web APIs
It can range between 0 (lowest) and 2 (highest), with 1 being the default pitch for the current platform or voice.
... Some speech synthesis engines or voices may constrain the minimum and maximum pitch values further.
... Examples var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } utterThis.pitch = 1.5; synth.speak(utterThis); inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'pitch' in that specification.
SpeechSynthesisUtterance.rate - Web APIs
It can range between 0.1 (lowest) and 10 (highest), with 1 being the default rate for the current platform or voice, which should correspond to a normal speaking rate.
... Some speech synthesis engines or voices may constrain the minimum and maximum rates further.
... Examples var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } utterThis.rate = 1.5; synth.speak(utterThis); inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'rate' in that specification.
SpeechSynthesisUtterance - Web APIs
SpeechSynthesisUtterance.voice Gets and sets the voice that will be used to speak the utterance.
...After defining some necessary variables, we retrieve a list of the voices available using SpeechSynthesis.getVoices() and populate a select menu with them so the user can choose what voice they want.
... Inside the inputForm.onsubmit handler, we stop the form submitting with preventDefault(), use the constructor to create a new utterance instance containing the text from the text <input>, set the utterance's voice to the voice selected in the <select> element, and start the utterance speaking via the SpeechSynthesis.speak() method.
... var synth = window.speechSynthesis; var voices = synth.getVoices(); var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); for(var i = 0; i < voices.length; i++) { var option = document.createElement('option'); option.textContent = voices[i].name + ' (' + voices[i].lang + ')'; option.value = i; voiceSelect.appendChild(option); } inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); utterThis.voice = voices[voiceSelect.value]; synth.speak(utterThis); inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'SpeechSynthesisUtterance' in that specifi...
Visualizations with Web Audio API - Web APIs
Note: you can find working examples of all the code snippets in our Voice-change-O-matic demo.
... Creating a waveform/oscilloscope To create the oscilloscope visualisation (hat tip to Soledad Penadés for the original code in Voice-change-O-matic), we first follow the standard pattern described in the previous section to set up the buffer: analyser.fftSize = 2048; var bufferLength = analyser.frequencyBinCount; var dataArray = new Uint8Array(bufferLength); Next, we clear the canvas of what had been drawn on it before to get ready for the new visualization display: canvasCtx.clearRect(0, 0, WIDTH, HEIGHT); We now define t...
...We have one available in Voice-change-O-matic; let's look at how it's done.
...For working examples showing AnalyserNode.getFloatFrequencyData() and AnalyserNode.getFloatTimeDomainData(), refer to our Voice-change-O-matic-float-data demo (see the source code too) — this is exactly the same as the original Voice-change-O-matic, except that it uses float data, not unsigned byte data.
Window.speechSynthesis - Web APIs
After defining some necessary variables, we retrieve a list of the voices available using SpeechSynthesis.getVoices() and populate a select menu with them so the user can choose what voice they want.
... Inside the inputForm.onsubmit handler, we stop the form submitting with preventDefault(), create a new SpeechSynthesisUtterance instance containing the text from the text <input>, set the utterance's voice to the voice selected in the <select> element, and start the utterance speaking via the SpeechSynthesis.speak() method.
... var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); function populateVoiceList() { voices = synth.getVoices(); for(i = 0; i < voices.length ; i++) { var option = document.createElement('option'); option.textContent = voices[i].name + ' (' + voices[i].lang + ')'; if(voices[i].default) { option.textContent += ' -- default'; } option.setAttribute('data-lang', voices[i].lang); option.setAttribute('data-name', voices[i].name); voiceSelect.appendChild(option); } } populateVoiceList(); if (speechSynthesis.onvoiceschanged !== undefined) { speechSynthesis.onvoiceschanged = populateVoiceList; } inputForm.onsubmit = function...
...(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } synth.speak(utterThis); inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'SpeechSynthesis' in that specification.
Using CSS animations - CSS: Cascading Style Sheets
<p>The Caterpillar and Alice looked at each other for some time in silence: at last the Caterpillar took the hookah out of its mouth, and addressed her in a languid, sleepy voice.</p> Note: reload the page to see the animation, or click the CodePen button to see the animation in the CodePen environment.
...imation-name: slidein; } @keyframes slidein { from { margin-left: 100%; width: 300%; } 75% { font-size: 300%; margin-left: 25%; width: 150%; } to { margin-left: 0%; width: 100%; } } <p>The Caterpillar and Alice looked at each other for some time in silence: at last the Caterpillar took the hookah out of its mouth, and addressed her in a languid, sleepy voice.</p> This tells the browser that 75% of the way through the animation sequence, the header should have its left margin at 25% and the width should be 150%.
...imation-name: slidein; animation-iteration-count: infinite; } Adding it to the existing code: @keyframes slidein { from { margin-left: 100%; width: 300%; } to { margin-left: 0%; width: 100%; } } <p>The Caterpillar and Alice looked at each other for some time in silence: at last the Caterpillar took the hookah out of its mouth, and addressed her in a languid, sleepy voice.</p> Making it move back and forth That made it repeat, but it's very odd having it jump back to the start each time it begins animating.
...mation-iteration-count: infinite; animation-direction: alternate; } And the rest of the code: @keyframes slidein { from { margin-left: 100%; width: 300%; } to { margin-left: 0%; width: 100%; } } <p>The Caterpillar and Alice looked at each other for some time in silence: at last the Caterpillar took the hookah out of its mouth, and addressed her in a languid, sleepy voice.</p> Using animation shorthand The animation shorthand is useful for saving space.
Codecs used by WebRTC - Web media technologies
Other audio codecs Codec name Browser compatibility G.722 Chrome, Firefox, Safari iLBC[1] Chrome, Safari iSAC[2] Chrome, Safari [1] Internet Low Bitrate Codec (iLBC) is an open-source narrow-band codec developed by Global IP Solutions and now Google, designed specifically for streaming voice audio.
...It's used by Google Talk, QQ, and other instant messaging clients and is specifically designed for voice transmissions which are encapsulated within an RTP stream.
...This helps to avoid a jarring effect that can occur when voice activation and similar features cause a stream to stop sending data temporarily—a capability known as discontinuous transmission (DTX).
...For voice-only connections in a constrained environment, using G.711 at an 8 kHz sample rate can provide an acceptable experience for conversation, but typically you'll use G.711 as a fallback option, since there are other options which are more efficient and sound better, such as Opus in its narrowband mode.
HTML: A good basis for accessibility - Learn web development
The following screenshot shows our controls being listed by VoiceOver on Mac.
...large head with lots of sharp teeth." title="The Mozilla red dinosaur"> <img src="dinosaur.png" aria-labelledby="dino-label"> <p id="dino-label">The Mozilla red Tyrannosaurus rex: a two-legged dinosaur standing upright like a human, with small arms, and a large head with lots of sharp teeth.</p> The first image, when viewed by a screen reader, doesn't really offer the user much help — VoiceOver for example reads out "/dinosaur.png, image".
... Skip links are especially useful for people who navigate with the aid of assistive technology such as switch control, voice command, or mouth sticks/head wands, where the act of moving through repetitive links can be a laborious task.
WAI-ARIA basics - Learn web development
For example, VoiceOver gives you the following: on the <header> element — "banner, 2 items" (it contains a heading and the <nav>).
... If you go to VoiceOver's landmarks menu (accessed using VoiceOver key + U and then using the cursor keys to cycle through the menu choices), you'll see that most of the elements are nicely listed so they can be accessed quickly.
... <input type="search" name="q" placeholder="Search query" aria-label="Search through site content"> Now if we use VoiceOver to look at this example, we get some improvements: the search form is called out as a separate item, both when browsing through the page, and in the landmarks menu.
RTCPeerConnection.createOffer() - Web APIs
voiceActivityDetection Optional Some codecs and hardware are able to detect when audio begins and ends by watching for "silence" (or relatively low sound levels) to occur.
...For example, in the case of music or other non-voice transmission, this can cause loss of important low-volume sounds.
...This option defaults to true (voice activity detection enabled).
Web applications and ARIA FAQ - Accessibility
Live region support requires Safari 5 with VoiceOver on iOS 5 or OS X Lion; Opera 9.5+ requires VoiceOver on OS X.
...(TBD) VoiceOver OS X 10.5, iOS 4 OS X 10.7 iOS 5 JAWS 8 10 Window-Eyes 7 No live region support currently ZoomText ?
...They include: Orca for Linux, NVDA for Windows; VoiceOver is built into OS X. When you're testing with a screen reader, keep two key points in mind: casual testing with a screen reader can never replace feedback, testing, and help from real users.
Implementing a Microsoft Active Accessibility (MSAA) Server - Accessibility
Third-party assistive technology, such as screen readers, screen magnifiers and voice input software, wants to track what's happening inside Mozilla.
...Finally, voice dictation software needs to know what's in the current document or UI in order to implement "say what you see" kinds of features.
...Try to use unique names for each item in a dialog so that voice dictation software doesn't have to deal with extra ambiguity. [Important] get_accValue: get the "value" of the IAccessible, for example a number in a slider, a URL for a link, the text a user entered in a field.
Browser Feature Detection - Archive of obsolete content
akNumeral true false false speakPunctuation true false false speechRate true false true stress true false false tableLayout true true true textShadow true false true top true true true unicodeBidi true true true visibility true true true voiceFamily true false true volume true false true widows true false true zIndex true true true Test code // document properties that are used to determine // support levels var _features = { 'domcore1': [ {name: 'doctype', 'supported': false}, {name: 'implementation', 'supported': false}, {name: 'documentElement', 'supported...
...me: 'speakNumeral', 'supported': false}, {name: 'speakPunctuation', 'supported': false}, {name: 'speechRate', 'supported': false}, {name: 'stress', 'supported': false}, {name: 'tableLayout', 'supported': false}, {name: 'textShadow', 'supported': false}, {name: 'top', 'supported': false}, {name: 'unicodeBidi', 'supported': false}, {name: 'visibility', 'supported': false}, {name: 'voiceFamily', 'supported': false}, {name: 'volume', 'supported': false}, {name: 'widows', 'supported': false}, {name: 'zIndex', 'supported': false} ] }; function supports(object, featureSet) { var i; var features = _features[featureSet]; var level = 0; if (!features) return level; for (i = 0; i < features.length; i++) if (typeof(object[features[i].name]) != 'undefined') { fea...
WebRTC - MDN Web Docs Glossary: Definitions of Web-related terms
WebRTC (Web Real-Time Communication) is an API that can be used by video-chat, voice-calling, and P2P-file-sharing web apps.
... RTCPeerConnection An interface to configure video chat or voice calls.
What is accessibility? - Learn web development
Some are built into the operating system, like VoiceOver (macOS, iPadOS, iOS), Narrator (Microsoft Windows), ChromeVox (on Chrome OS), and TalkBack (Android).
...Hearing-impaired people do use ATs (see assistive devices for people with hearing, voice, speech, or language disorders), but there are not really special ATs specific for computer/web use.
HTML text fundamentals - Learn web development
As well as making the document more interesting to read, these are recognised by screen readers and spoken out in a different tone of voice.
...As well as making the document more useful, again these are recognized by screen readers and spoken in a different tone of voice.
Mozilla accessibility architecture
Accessibility APIs are used by third-party software like screen readers, screen magnifiers, and voice dictation software, which need information about document content and UI controls, as well as important events like changes of focus.
...Unfortunately, we still are not fully working with any major screen reader, screen magnifier or voice dictation product on the market.
Information for Assistive Technology Vendors
This makes it possible for the vendors of Windows accessibility software, such as screen readers, voice dictation packages and screen magnifiers, to provide support for Mozilla.
...This makes it possible for the developers of Linux and Unix accessibility software, such as screen readers, voice dictation packages and screen magnifiers, to provide support for Mozilla.
AT APIs Support
But in the meantime it is more up-to-date and contains more detail than the existing analogues for AT-SPI and MSAA. This documentation explains how makers of screen readers, voice dictation packages, onscreen keyboards, magnification software and other assistive technologies can support Gecko-based software.
... Windows platform JAWS Window-Eyes NVDA Linux/Unix platform Orca OS X platform VoiceOver Contacts Please discuss accessibility issues on the Mozilla accessibility groups or on the Mozilla accessibility IRC channel.
BaseAudioContext.createGain() - Web APIs
The below snippet wouldn't work as-is — for a complete working example, check out our Voice-change-O-matic demo (view source). <div> <button class="mute">Mute button</button> </div> var audioCtx = new (window.AudioContext || window.webkitAudioContext)(); var gainNode = audioCtx.createGain(); var mute = document.querySelector('.mute'); var source; if (navigator.mediaDevices.getUserMedia) { navigator.mediaDevices.getUserMedia ( // constraints - only audio needed for this app {...
... mute.onclick = voiceMute; function voiceMute() { if(mute.id == "") { // 0 means mute.
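A reduced mute sketch using only the GainNode, assuming an audio graph is already wired up (source and destination connections are left as comments):
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var gainNode = audioCtx.createGain();
// source.connect(gainNode); gainNode.connect(audioCtx.destination);
gainNode.gain.value = 0; // 0 silences the output (mute)
// ...later, restore the audio: gainNode.gain.value = 1;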
GainNode.gain - Web APIs
The below snippet wouldn't work as-is — for a complete working example, check out our Voice-change-O-matic demo (view source). <div> <button class="mute">Mute button</button> </div> var audioCtx = new (window.AudioContext || window.webkitAudioContext)(); var gainNode = audioCtx.createGain(); var mute = document.querySelector('.mute'); var source; if (navigator.mediaDevices.getUserMedia) { navigator.mediaDevices.getUserMedia ( // constraints - only audio needed for this app {...
... mute.onclick = voiceMute; function voiceMute() { if(mute.id == "") { // 0 means mute.
GainNode - Web APIs
The below snippet wouldn't work as-is — for a complete working example, check out our Voice-change-O-matic demo (view source). <div> <button class="mute">Mute button</button> </div> var audioCtx = new (window.AudioContext || window.webkitAudioContext)(); var gainNode = audioCtx.createGain(); var mute = document.querySelector('.mute'); var source; if (navigator.mediaDevices.getUserMedia) { navigator.mediaDevices.getUserMedia ( // constraints - only audio needed for this app {...
... mute.onclick = voiceMute; function voiceMute() { if(mute.id == "") { // 0 means mute.
RTCDTMFSender.toneBuffer - Web APIs
Using tone buffer strings For example, if you're writing code to control a voicemail system by sending DTMF codes, you might use a string such as "*,1,5555".
... In this example, we would send "*" to request access to the VM system, then, after a pause, send a "1" to start playback of voicemail messages, then after a pause, dial "5555" as a PIN number to open the messages.
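A sketch of queueing that string, assuming sender is an RTCRtpSender for an audio track on a connected RTCPeerConnection:
var dtmf = sender.dtmf; // null if DTMF isn't supported on this sender
if (dtmf) {
  dtmf.insertDTMF('*,1,5555');
  // toneBuffer now reflects the tones still waiting to be sent
}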
RTCRtpSynchronizationSource - Web APIs
voiceActivityFlag Optional A Boolean value indicating whether or not voice activity is included in the last RTP packet played from the source.
... If the peer has indicated that it's not supporting voice activity detection, this field is not provided.
SpeechSynthesisErrorEvent - Web APIs
Examples var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } synth.speak(utterThis); utterThis.onerror = function(event) { console.log('An error has occurred with the speech synthesis: ' + event.error); } inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'SpeechSynthesisErrorEvent' in that specification.
SpeechSynthesisUtterance.SpeechSynthesisUtterance() - Web APIs
var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } synth.speak(utterThis); inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'SpeechSynthesisUtterance()' in that specification.
SpeechSynthesisUtterance.lang - Web APIs
Examples var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } utterThis.lang = 'en-US'; synth.speak(utterThis); inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'lang' in that specification.
SpeechSynthesisUtterance.onboundary - Web APIs
}; Examples var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } synth.speak(utterThis); utterThis.onboundary = function(event) { console.log(event.name + ' boundary reached after ' + event.elapsedTime + ' milliseconds.'); } inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'onboundary' in that specification.
SpeechSynthesisUtterance.onend - Web APIs
}; Examples var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } synth.speak(utterThis); utterThis.onend = function(event) { console.log('Utterance has finished being spoken after ' + event.elapsedTime + ' milliseconds.'); } inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'onend' in that specification.
SpeechSynthesisUtterance.onerror - Web APIs
}; Examples var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } synth.speak(utterThis); utterThis.onerror = function(event) { console.log('An error has occurred with the speech synthesis: ' + event.error); } inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'onerror' in that specification.
SpeechSynthesisUtterance.onmark - Web APIs
}; Examples var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } synth.speak(utterThis); utterThis.onmark = function(event) { console.log('A mark was reached: ' + event.name); } inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'onmark' in that specification.
SpeechSynthesisUtterance.onpause - Web APIs
}; Examples var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } synth.speak(utterThis); utterThis.onpause = function(event) { console.log('Speech paused after ' + event.elapsedTime + ' milliseconds.'); } inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'onpause' in that specification.
SpeechSynthesisUtterance.onresume - Web APIs
}; Examples var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } synth.speak(utterThis); utterThis.onresume = function(event) { console.log('Speech resumed after ' + event.elapsedTime + ' milliseconds.'); } inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'onresume' in that specification.
SpeechSynthesisUtterance.onstart - Web APIs
}; Examples var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } synth.speak(utterThis); utterThis.onstart = function(event) { console.log('We have started uttering this speech: ' + event.utterance.text); } inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'onstart' in that specification.
SpeechSynthesisUtterance.text - Web APIs
Examples var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } console.log(utterThis.text); synth.speak(utterThis); inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'text' in that specification.
SpeechSynthesisUtterance.volume - Web APIs
Examples var synth = window.speechSynthesis; var inputForm = document.querySelector('form'); var inputTxt = document.querySelector('input'); var voiceSelect = document.querySelector('select'); var voices = synth.getVoices(); ...
... inputForm.onsubmit = function(event) { event.preventDefault(); var utterThis = new SpeechSynthesisUtterance(inputTxt.value); var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedOption) { utterThis.voice = voices[i]; } } utterThis.volume = 0.5; synth.speak(utterThis); inputTxt.blur(); } Specifications Specification Status Comment Web Speech API The definition of 'volume' in that specification.
Web Video Text Tracks Format (WebVTT) - Web APIs
Example 19 - Ruby text tag <ruby>WWW<rt>World Wide Web</rt>oui<rt>yes</rt></ruby> Voice tag (<v></v>) Similar to the class tag, also used to style the contained text using CSS.
... Example 20 - Voice tag <v Bob>text</v> Interfaces There are two interfaces or APIs used in WebVTT: VTTCue interface It is used for providing an interface in the Document Object Model API, where different attributes supported by it can be used to prepare and alter the cues in a number of ways.
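A brief sketch of attaching a cue with a voice tag from script, assuming a <video> element on the page and VTTCue support:
var video = document.querySelector('video');
var track = video.addTextTrack('captions', 'English', 'en');
track.mode = 'showing';
// The <v> tag labels the speaker inside the cue text.
track.addCue(new VTTCue(0, 4, '<v Bob>Hello, I am Bob.</v>'));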
Inputs and input sources - Web APIs
Voice commands using speech recognition.
...This input may be a button, trigger, trackpad tap or click, a voice command, a special hand gesture, or possibly some other form of input.
Using the Web Audio API - Web APIs
The Voice-change-O-matic is a fun voice manipulator and sound visualization web app that allows you to choose different effects and visualizations.
...(Run the Voice-change-O-matic live.)
ARIA live regions - Accessibility
Here is a screenshot of VoiceOver on Mac announcing the update (via subtitles) to the live region: Preferring specialized live region roles In the following well-known predefined cases it is better to use a specific provided "live region role": Role Description Compatibility notes log Chat, error, game or other type of log To maximize compatibility, add a redundant aria-live="polite" whe...
...However, adding both aria-live and role="alert" causes double speaking issues in VoiceOver on iOS.
ARIA: contentinfo role - Accessibility
Developers should always prefer using the correct semantic HTML element over using ARIA, making sure to test for known issues in VoiceOver.
... Best practices Prefer HTML When it is an immediate descendant of the <body> element, using the <footer> element will automatically communicate that a section has a role of contentinfo (save for a known issue in VoiceOver).
Operable - Accessibility
The conformance criteria under this guideline ensure that users are able to interact with digital technology using different input methods beyond a keyboard or mouse (including touchscreen, voice, device motion, or alternative input devices).
... Understanding target size 2.5.6 Concurrent input mechanisms (AAA) Added in 2.1. Make sure people can use and switch between different modes of input when interacting with digital content, including touchscreen, keyboard, mouse, voice commands, or alternative input devices.
<frequency> - CSS: Cascading Style Sheets
The <frequency> CSS data type represents a frequency dimension, such as the pitch of a speaking voice.
... Note: this data type was initially introduced in CSS Level 2 for the now-obsolete aural media type, where it was used to define the pitch of the voice.
list-style-type - CSS: Cascading Style Sheets
e -moz-ethiopic-halehame-am Example ethiopic-halehame-ti-er -moz-ethiopic-halehame-ti-er Example ethiopic-halehame-ti-et -moz-ethiopic-halehame-ti-et Example hangul -moz-hangul Example Example Example hangul-consonant -moz-hangul-consonant Example Example Example urdu -moz-urdu Example Accessibility concerns The VoiceOver screen reader has an issue where unordered lists with a list-style-type value of none applied to them will not be announced as a list.
... ul { list-style: none; } ul li::before { content: "\200b"; } VoiceOver and list-style-type: none – Unfettered Thoughts MDN Understanding WCAG, Guideline 1.3 explanations Understanding Success Criterion 1.3.1 | W3C Understanding WCAG 2.0 Formal definition Initial value: disc Applies to: list items Inherited: yes Computed value: as specified Animation type: discrete Formal syntax <counter-style> | <string> | none where <counter-style> = <counter-style-name> | symbols() where <counter-style-name> = <custom-ident> Examples Setting list item markers HTML List 1 <ol clas...
Event reference
events dommenuitemactive dommenuiteminactive window events close popup events popuphidden popuphiding popupshowing popupshown tab events visibilitychange battery events chargingchange chargingtimechange dischargingtimechange levelchange call events alerting busy callschanged cfstatechange connecting dialing disconnected disconnecting error held, holding incoming resuming statechange voicechange sensor events compassneedscalibration devicemotion deviceorientation orientationchange smartcard events icccardlockerror iccinfochange smartcard-insert smartcard-remove stkcommand stksessionend cardstatechange sms and ussd events delivered received sent ussdreceived frame events mozbrowserclose mozbrowsercontextmenu mozbrowsererror mozbrowsericonchange mozbrowserlocationchange mozbr...
... voiceschanged (web speech api): the list of speechsynthesisvoice objects that would be returned by the speechsynthesis.getvoices() method has changed (when the voiceschanged event fires). versionchange (indexeddb): a versionchange transaction completed.
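A minimal sketch of reacting to that event (assuming a browser with the Web Speech API): re-read the voice list once the asynchronously loaded voices become available.

function logVoices() {
  var voices = window.speechSynthesis.getVoices();
  voices.forEach(function (voice) {
    console.log(voice.name + ' (' + voice.lang + ')');
  });
}

logVoices(); // may log nothing if voices are still loading
window.speechSynthesis.onvoiceschanged = logVoices; // runs whenever the list changes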
<kbd>: The Keyboard Input element - HTML: Hypertext Markup Language
the html keyboard input element (<kbd>) represents a span of inline text denoting textual user input from a keyboard, voice input, or any other text entry device.
... recommendation expanded to include any user input, like voice input and individual keystrokes.
<summary>: The Disclosure Summary element - HTML: Hypertext Markup Language
basic example a simple example showing the use of <summary> in a <details> element: <details open> <summary>overview</summary> <ol> <li>cash on hand: $500.00</li> <li>current invoice: $75.30</li> <li>due date: 5/6/19</li> </ol> </details> summaries as headings you can use heading elements in <summary>, like this: <details open> <summary><h4>overview</h4></summary> <ol> <li>cash on hand: $500.00</li> <li>current invoice: $75.30</li> <li>due date: 5/6/19</li> </ol> </details> this currently has some spacing issues that could be addressed using ...
... html in summaries this example adds some semantics to the <summary> element to indicate the label as important: <details open> <summary><strong>overview</strong></summary> <ol> <li>cash on hand: $500.00</li> <li>current invoice: $75.30</li> <li>due date: 5/6/19</li> </ol> </details> specifications: html living standard, the definition of '<summary>' in that specification.
Index - Archive of obsolete content
this is important in that it provides a natural way to tell several voices apart, as each can be positioned to originate at a different location on the sound stage.
List of Mozilla-Based Applications - Archive of obsolete content
edia blog post about projects; wine: implementation of the windows api, uses mozilla spidermonkey and the gecko activex control; worksmart.net: suite of web-based workplace apps, uses prism; wxwebconnect: web browser control library; wyzo: browser; xb browser: anonymous web browser; xbusiness: create and send branded invoices, quotes or estimates; xdf: billing and quotes software; xiphos: bible study software; xmldbeditor: database editor; xpud: linux desktop; xpud: linux with an xul interface, 10 second boot time; xrap: xulrunner application packager; xul daim: image tool; xul explorer: development tool; xulrunner appli...
azimuth - Archive of obsolete content
this is important in that it provides a natural way to tell several voices apart, as each can be positioned to originate at a different location on the sound stage.
CSS - Archive of obsolete content
this is important in that it provides a natural way to tell several voices apart, as each can be positioned to originate at a different location on the sound stage.
Index - Game development
34 unconventional controls (tags: controls, doppler, games, javascript, makey makey, proximity, tv leap motion, voice) i hope you liked the experiments — if you have any others that you think might interest other people, feel free to add details of them here.
ISP - MDN Web Docs Glossary: Definitions of Web-related terms
an isp (internet service provider) sells internet access, and sometimes email, web hosting, and voice over ip, either by a dial-up connection over a phone line (formerly more common), or through a broadband connection such as a cable modem or dsl service.
MathML - MDN Web Docs Glossary: Definitions of Web-related terms
mathml has other applications also including scientific content and voice synthesis.
VoIP - MDN Web Docs Glossary: Definitions of Web-related terms
voip (voice over internet protocol) is a technology used to transmit voice messages over ip (internet protocol) networks.
Accessibility - Learn web development
sites should be accessible to keyboard, mouse, and touch screen users, and any other way users access the web, including screen readers and voice assistants like alexa and google home.
Beginner's guide to media queries - Learn web development
the value none means the user has no pointing device; perhaps they are navigating with the keyboard only or with voice commands.
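The same media feature can also be tested from script; a minimal, illustrative sketch using the standard matchMedia() API:

// detect whether the user has no pointing device, e.g. keyboard-only
// or voice-command navigation, and avoid hover-dependent ui if so
if (window.matchMedia('(pointer: none)').matches) {
  console.log('no pointing device detected; avoid hover-only interactions');
}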
What is CSS? - Learn web development
there are also other people, known as invited experts, who act as independent voices; they are not linked to a member organization.
What is accessibility? - Learn web development
visual impairment again, provide a text transcript that a user can consult without needing to play the video, and an audio-description (an off-screen voice that describes what is happening in the video).
How to build custom form controls - Learn web development
here is the final result of all these changes (you'll get a better feel for this by trying it with an assistive technology such as nvda or voiceover): live example. check out the final source code. if you want to move forward, the code in this example needs some improvement before it becomes generic and reusable.
How to structure a web form - Learn web development
this was tested in voiceover (and nvda behaves similarly).
Graceful asynchronous programming with Promises - Learn web development
since getusermedia() has to ensure that the user has permission to use those devices and ask the user which microphone to use and which camera to use (or whether to be a voice-only call, among other possible options), it can block until not only all of those decisions are made, but also the camera and microphone have been engaged.
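A minimal sketch of that behavior (the handlers here are illustrative): the returned promise stays pending until the user has answered the permission prompt and the devices are engaged.

navigator.mediaDevices.getUserMedia({ audio: true, video: false }) // voice-only call
  .then(function (stream) {
    console.log('microphone ready, audio tracks:', stream.getAudioTracks().length);
  })
  .catch(function (err) {
    console.error('permission denied or no device:', err);
  });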
Introduction to web APIs - Learn web development
the twilio api, which provides a framework for building voice and video call functionality into your app, sending sms/mms from your apps, and more.
Working with Svelte stores - Learn web development
you have several options, like nvda for windows, chromevox for chrome, orca on linux, and voiceover for mac os x and ios, among others.
Accessibility Features in Firefox
in recent articles from both afb's access world and nfb's voice of the nation's blind, reviewers found no significant roadblocks in moving to firefox from internet explorer for screen reader users.
CSUN Firefox Materials
in recent articles from both afb's access world and nfb's voice of the nation's blind, reviewers found no significant roadblocks in moving to firefox from internet explorer for screen reader users.
Embedding API for Accessibility
the audio file could be a clip of a human voice saying "you've got video" or even a simple beep.
Gecko info for Windows accessibility vendors
this faq explains how makers of windows screen readers, voice dictation packages and magnification software can support gecko-based software.
Information for users
assistive technology compatibility this is a wiki page which users can edit to provide up-to-date information on any issues related to compatibility with assistive technologies such as screen readers, screen magnifiers, voice input software and on-screen keyboards.
Accessible Toolkit Checklist
expose your ui - a way for assistive technologies such as screen readers, screen magnifiers and voice dictation software to understand your software.
Accessibility and Mozilla
in recent articles from both afb's access world and nfb's voice of the nation's blind, reviewers found no significant roadblocks in moving to firefox from internet explorer for screen reader users.
Mozilla Content Localized in Your Language
voice: what is the appropriate form of expressing voice in your language?
Phishing: a short definition
a relatively simple, yet effective, phishing scheme is sending an email with a fake invoice of a person’s favorite shopping site.
Accessibility API Implementation Details
at apis support: this documentation explains how makers of screen readers, voice dictation packages, onscreen keyboards, magnification software and other assistive technologies can support gecko-based software.
Index
actions are needed more for ats that assist the mobility impaired, such as on-screen keyboards and voice command software.
IAccessibleAction
actions are needed more for ats that assist the mobility impaired, such as on-screen keyboards and voice command software.
Web Audio Editor - Firefox Developer Tools
two good demos are: the voice-change-o-matic, which can apply various effects to the microphone input and also provides a visualisation of the result; and the violent theremin, which changes the pitch and volume of a sine wave as you move the mouse pointer. visualizing the graph: the web audio editor will now display the graph for the loaded audio context.
AnalyserNode.fftSize - Web APIs
for more complete applied examples/information, check out our voice-change-o-matic demo (see app.js lines 128–205 for relevant code).
AnalyserNode.frequencyBinCount - Web APIs
for more complete applied examples/information, check out our voice-change-o-matic demo (see app.js lines 128–205 for relevant code).
AnalyserNode.getByteFrequencyData() - Web APIs
for more examples/information, check out our voice-change-o-matic demo (see app.js lines 128–205 for relevant code).
AnalyserNode.getByteTimeDomainData() - Web APIs
for more complete applied examples/information, check out our voice-change-o-matic demo (see app.js lines 128–205 for relevant code).
AnalyserNode.getFloatFrequencyData() - Web APIs
for more complete applied examples/information, check out our voice-change-o-matic-float-data demo (see the source code too).
AnalyserNode.getFloatTimeDomainData() - Web APIs
for more complete applied examples/information, check out our voice-change-o-matic-float-data demo (see the source code too).
AnalyserNode.maxDecibels - Web APIs
for more complete applied examples/information, check out our voice-change-o-matic demo (see app.js lines 128–205 for relevant code).
AnalyserNode.minDecibels - Web APIs
for more complete applied examples/information, check out our voice-change-o-matic demo (see app.js lines 128–205 for relevant code).
AnalyserNode.smoothingTimeConstant - Web APIs
for more complete applied examples/information, check out our voice-change-o-matic demo (see app.js lines 128–205 for relevant code).
AnalyserNode - Web APIs
for more complete applied examples/information, check out our voice-change-o-matic demo (see app.js lines 128–205 for relevant code).
AudioDestinationNode.maxChannelCount - Web APIs
ng would set up a simple audio graph, featuring an audiodestinationnode with maxchannelcount of 2: var audioctx = new audiocontext(); var source = audioctx.createmediaelementsource(mymediaelement); var gainnode = audioctx.creategain(); source.connect(gainnode); audioctx.destination.maxchannelcount = 2; gainnode.connect(audioctx.destination); to see a more complete implementation, check out one of our mdn web audio examples, such as voice-change-o-matic or violent theremin.
AudioDestinationNode - Web APIs
their speakers), so you can get it hooked up inside an audio graph using only a few lines of code: var audioctx = new audiocontext(); var source = audioctx.createmediaelementsource(mymediaelement); var gainnode = audioctx.creategain(); source.connect(gainnode); gainnode.connect(audioctx.destination); to see a more complete implementation, check out one of our mdn web audio examples, such as voice-change-o-matic or violent theremin.
BaseAudioContext.createAnalyser() - Web APIs
for more complete applied examples/information, check out our voice-change-o-matic demo (see app.js lines 128–205 for relevant code).
BaseAudioContext.createBiquadFilter() - Web APIs
for a complete working example, check out our voice-change-o-matic demo (look at the source code too).
BaseAudioContext.createConvolver() - Web APIs
for applied examples/information, check out our voice-change-o-matic demo (see app.js for relevant code).
BaseAudioContext.createWaveShaper() - Web APIs
for applied examples/information, check out our voice-change-o-matic demo (see app.js for relevant code).
BaseAudioContext.destination - Web APIs
example note: for a full example implementation, see one of our web audio demos on the mdn github repo, like voice-change-o-matic.
BiquadFilterNode.Q - Web APIs
for a complete working example, check out our voice-change-o-matic demo (look at the source code too).
BiquadFilterNode.detune - Web APIs
for a complete working example, check out our voice-change-o-matic demo (look at the source code too).
BiquadFilterNode.frequency - Web APIs
for a complete working example, check out our voice-change-o-matic demo (look at the source code too).
BiquadFilterNode.gain - Web APIs
for a complete working example, check out our voice-change-o-matic demo (look at the source code too).
BiquadFilterNode.type - Web APIs
for a complete working example, check out our voice-change-o-matic demo (look at the source code too).
BiquadFilterNode - Web APIs
for a complete working example, check out our voice-change-o-matic demo (look at the source code too).
PaymentDetailsBase - Web APIs
these represent the line items on a receipt or invoice.
PaymentDetailsUpdate - Web APIs
these represent the line items on a receipt or invoice.
PaymentRequestUpdateEvent.updateWith() - Web APIs
these represent the line items on a receipt or invoice.
RTCDTMFSender - Web APIs
the primary purpose for webrtc's dtmf support is to allow webrtc-based communication clients to be connected to a public-switched telephone network (pstn) or other legacy telephone service, including extant voice over ip (voip) services.
RTCOfferAnswerOptions - Web APIs
properties voiceactivitydetection optional for configurations of systems and codecs that are able to detect when the user is speaking and toggle muting on and off automatically, this option enables and disables that behavior.
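A minimal sketch (pc is an assumed, pre-existing RTCPeerConnection): disabling voice activity detection while generating an offer, e.g. for music rather than speech.

pc.createOffer({ voiceActivityDetection: false })
  .then(function (offer) {
    return pc.setLocalDescription(offer);
  })
  .catch(function (err) {
    console.error('offer failed:', err);
  });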
RTCRtpEncodingParameters - Web APIs
dtx only used for an rtcrtpsender whose kind is audio, this property indicates whether or not to use discontinuous transmission (a feature by which a phone is turned off or the microphone muted automatically in the absence of voice activity).
RTCRtpReceiver.getSynchronizationSources() - Web APIs
the synchronization source objects add a voiceactivityflag property, which indicates if the last rtp packet received contained voice activity.
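A minimal sketch (receiver is an assumed RTCRtpReceiver for an audio track): poll the synchronization sources and log whether the most recent packet carried voice activity; the flag is undefined when voice activity detection is not in use.

setInterval(function () {
  receiver.getSynchronizationSources().forEach(function (sync) {
    if (sync.voiceActivityFlag !== undefined) {
      console.log('source', sync.source, 'voice activity:', sync.voiceActivityFlag);
    }
  });
}, 1000);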
RTCRtpSendParameters.encodings - Web APIs
dtx only used for an rtcrtpsender whose kind is audio, this property indicates whether or not to use discontinuous transmission (a feature by which a phone is turned off or the microphone muted automatically in the absence of voice activity).
Screen Wake Lock API - Web APIs
there are plenty of use cases for keeping a screen on, including reading an ebook, map navigation, following a recipe, presenting to an audience, scanning a qr/barcode or applications that use voice or gesture control, rather than tactile input (the default way to keep a screen awake).
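A minimal sketch of requesting such a lock in a voice- or gesture-controlled app (assumes a browser that supports the Screen Wake Lock API):

let wakeLock = null;

async function keepScreenAwake() {
  try {
    wakeLock = await navigator.wakeLock.request('screen');
    wakeLock.addEventListener('release', function () {
      console.log('wake lock released');
    });
  } catch (err) {
    console.error('wake lock request failed:', err);
  }
}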
SpeechSynthesis.speak() - Web APIs
inputform.onsubmit = function(event) { event.preventdefault(); var utterthis = new speechsynthesisutterance(inputtxt.value); var selectedoption = voiceselect.selectedoptions[0].getattribute('data-name'); for(i = 0; i < voices.length ; i++) { if(voices[i].name === selectedoption) { utterthis.voice = voices[i]; } } synth.speak(utterthis); inputtxt.blur(); } specifications: web speech api, the definition of 'speak()' in that specification.
WaveShaperNode.curve - Web APIs
for applied examples/information, check out our voice-change-o-matic demo (see app.js for relevant code).
WaveShaperNode.oversample - Web APIs
for applied examples/information, check out our voice-change-o-matic demo (see app.js for relevant code).
WaveShaperNode - Web APIs
for applied examples/information, check out our voice-change-o-matic demo (see app.js for relevant code).
Using DTMF with WebRTC - Web APIs
webrtc currently ignores these payloads; this is because webrtc's dtmf support is primarily intended for use with legacy telephone services that rely on dtmf tones to perform tasks such as: teleconferencing systems, menu systems, voicemail systems, entry of credit card or other payment information, and passcode entry. note: while the dtmf is not sent to the remote peer as audio, browsers may choose to play the corresponding tone to the local user as part of their user experience, since users are typically used to hearing their phone play the tones audibly.
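A minimal sketch of queuing such tones (sender is an assumed audio RTCRtpSender on an established connection):

var dtmf = sender.dtmf; // null if the browser or the track doesn't support dtmf
if (dtmf) {
  // queue a passcode; each tone lasts 100 ms with a 70 ms gap between tones
  dtmf.insertDTMF('1234#', 100, 70);
}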
Window.open() - Web APIs
voice browsers) and several web-aware applications (e.g.
Web APIs
sharedworkerglobalscope, slottable, sourcebuffer, sourcebufferlist, speechgrammar, speechgrammarlist, speechrecognition, speechrecognitionalternative, speechrecognitionerror, speechrecognitionerrorevent, speechrecognitionevent, speechrecognitionresult, speechrecognitionresultlist, speechsynthesis, speechsynthesiserrorevent, speechsynthesisevent, speechsynthesisutterance, speechsynthesisvoice, staticrange, stereopannernode, storage, storageestimate, storageevent, storagemanager, storagequota, stylepropertymap, stylepropertymapreadonly, stylesheet, stylesheetlist, submitevent, subtlecrypto, syncevent, syncmanager; t: taskattributiontiming, text, textdecoder, textencoder, textmetrics, textrange, texttrack, texttrackcue, texttracklist, timeevent, timeranges, touch, touchevent, touchlist, trackdefaul...
ARIA Screen Reader Implementors Guide - Accessibility
ideas for settings and heuristics: allow for a different voice (in text-to-speech) or other varying presentational characteristics to set live changes apart.
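A minimal sketch of that idea (illustrative only; assumes the Web Speech API and at least two installed voices): speak live-region changes in a non-default voice so they stand apart.

function announceLiveChange(text) {
  var utterance = new SpeechSynthesisUtterance(text);
  var voices = window.speechSynthesis.getVoices();
  // pick any non-default voice, if one exists, to mark this as a live change
  var alternate = voices.find(function (v) { return !v.default; });
  if (alternate) {
    utterance.voice = alternate;
  }
  window.speechSynthesis.speak(utterance);
}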
ARIA annotations - Accessibility
aria annotation roles and objects are currently exposed in: firefox from version 75 onwards, on windows and linux (on macos, we are first waiting for apple to define what safari will expose as apple-dialect attributes to voiceover, and will then follow suit); and chrome from version 81 onwards, currently behind the #enable-accessibility-expose-aria-annotations flag (go to chrome://flags to enable this). unfortunately, you won’t be able to use any of these yet, as screen reader support is currently not there.
ARIA - Accessibility
live regions are also supported by nvda with firefox, and voiceover with safari.
Cognitive accessibility - Accessibility
using active voice in the present tense.
Understanding the Web Content Accessibility Guidelines - Accessibility
buttons must be clickable in some way — mouse, keyboard, voice command, etc.
aural - CSS: Cascading Style Sheets
examples: basic example: @media aural { body { voice-family: paul } } specifications: css level 2 (revision 2), the definition of 'aural' in that specification.
<frequency-percentage> - CSS: Cascading Style Sheets
the pitch of a speaking voice, are not currently used in any css properties.
list-style - CSS: Cascading Style Sheets
ul { list-style: none; } ul li::before { content: "\200b"; } see also: voiceover and list-style-type: none – unfettered thoughts; mdn understanding wcag, guideline 1.3 explanations; understanding success criterion 1.3.1 | w3c understanding wcag 2.0. formal definition: initial value: as each of the properties of the shorthand (list-style-type: disc; list-style-position: outside; list-style-image: none); applies to: list items; inherited: yes; computed value: as each of the properties of th...
text-transform - CSS: Cascading Style Sheets
unlike regular (full-width) katakana characters, a letter with dakuten (voiced sound mark) is represented as two code points: the body of the letter and the dakuten.
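A minimal sketch illustrating this (the half-width katakana letter ka plus its dakuten, versus the single full-width code point):

var halfwidth = '\uFF76\uFF9E'; // half-width ka + half-width dakuten
console.log(halfwidth.length); // 2 (two code points)
var fullwidth = halfwidth.normalize('NFKC'); // 'ガ' (u+30ac)
console.log(fullwidth.length); // 1 (one code point)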
Adding captions and subtitles to HTML5 video - Developer guides
there are only a handful of css properties that can be applied to a text cue: color opacity visibility text-decoration text-shadow background shorthand properties outline shorthand properties font shorthand properties, including line-height white-space for example, to change the text colour of the text track cues you can write: ::cue { color:#ccc; } if the webvtt file uses voice spans, which allow cues to be defined as having a particular "voice": 0 00:00:00.000 --> 00:00:12.000 <v test>[test]</v> then this specific 'voice' will be stylable like so: ::cue(v[voice='test']) { color:#fff; background:#0095dd; } note: some of the styling of cues with ::cue currently works on chrome, opera, and safari, but not yet on firefox.
Using HTML sections and outlines - Developer guides
semantic sectioning elements are specifically designed to communicate structural meaning to browsers and other technologies interpreting the document on behalf of users, such as screen readers and voice assistants.
HTML attribute: capture - HTML: Hypertext Markup Language
<p> <label for="soundfile">what does your voice sound like?:</label> <input type="file" id="soundfile" capture="user" accept="audio/*"> </p> <p> <label for="videofile">upload a video:</label> <input type="file" id="videofile" capture="environment" accept="video/*"> </p> <p> <label for="imagefile">upload a photo of yourself:</label> <input type="file" id="imagefile" capture="user" accept="image/*"> </p> note these work better on mo...
<a>: The Anchor element - HTML: Hypertext Markup Language
skip links are especially useful for people who navigate with the aid of assistive technology such as switch control, voice command, or mouth sticks/head wands, where the act of moving through repetitive links can be laborious.
<dl>: The Description List element - HTML: Hypertext Markup Language
some screen readers, such as voiceover on ios, will not announce that <dl> content is a list.
<footer> - HTML: Hypertext Markup Language
</footer> accessibility concerns the voiceover screen reader has an issue where the footer landmark role is not announced in the landmark rotor.
<i>: The Idiomatic Text element - HTML: Hypertext Markup Language
among the use cases for the <i> element are spans of text representing a different quality or mode of text, such as: alternative voice or mood; taxonomic designations (such as the genus and species "homo sapiens"); idiomatic terms from another language (such as "et cetera"; these should include the lang attribute to identify the language); technical terms; transliterations; thoughts (such as "she wondered, what is this writer talking about, anyway?"); ship or vessel names in western writing systems (such as "they sear...
HTML elements reference - HTML: Hypertext Markup Language
<kbd> the html keyboard input element (<kbd>) represents a span of inline text denoting textual user input from a keyboard, voice input, or any other text entry device.
autocapitalize - HTML: Hypertext Markup Language
instead, it affects the behavior of other input mechanisms, such as virtual keyboards on mobile devices and voice input.
HTML documentation index - HTML: Hypertext Markup Language
144 <kbd>: the keyboard input element (tags: displaying input, displaying keys, displaying keystrokes, displaying user input, element, html, html text-level semantics, keyboard input, keystroke, reference, web, keyboard, user input) the html keyboard input element (<kbd>) represents a span of inline text denoting textual user input from a keyboard, voice input, or any other text entry device.
Media container formats (file types) - Web media technologies
through quicktime, mac applications (including web browsers, through the quicktime plugin or direct quicktime integration) were able to read and write audio formats including aac, aiff, mp3, pcm, and qualcomm purevoice; and video formats including avi, dv, pixlet, prores, flac, cinepak, 3gp, h.261 through h.265, mjpeg, mpeg-1 and mpeg-4 part 2, sorenson, and many more.