Search completed in 1.18 seconds.
12 results for "getTracks":
MediaStream.getTracks() - Web APIs
Web > API > MediaStream > getTracks
The getTracks() method of the MediaStream interface returns a sequence that represents all the MediaStreamTrack objects in this stream's track set, regardless of MediaStreamTrack.kind.
... Syntax: var mediaStreamTracks = mediaStream.getTracks(). Parameters: none.
... Example:

    navigator.mediaDevices.getUserMedia({audio: false, video: true})
      .then(mediaStream => {
        document.querySelector('video').srcObject = mediaStream;
        // Stop the stream after 5 seconds
        setTimeout(() => {
          const tracks = mediaStream.getTracks();
          tracks[0].stop();
        }, 5000);
      });

Specifications: Media Capture and Streams (the definition of 'getTracks()' in that specification).
... Browser compatibility (desktop and mobile): getTracks() is experimental. Chrome full support 45, Edge full support 12, Firefox full support, Internet Explorer no support, Opera full support, Safari full support ...
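As a quick illustration of the behaviour this result describes, the sketch below (not taken from the MDN page; the stream comes from a hypothetical getUserMedia() call) logs the kind and label of every track that getTracks() returns, audio and video alike:

    navigator.mediaDevices.getUserMedia({ audio: true, video: true })
      .then(stream => {
        // getTracks() returns the full track set, regardless of kind
        for (const track of stream.getTracks()) {
          console.log(`${track.kind}: ${track.label} (${track.readyState})`);
        }
      })
      .catch(err => console.error(err));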
RTCPeerConnection.addTrack() - Web APIs
Web > API > RTCPeerConnection > addTrack
Here's an example showing a function that uses getUserMedia() to obtain a stream from a user's camera and microphone, then adds each track from the stream to the peer connection, without specifying a stream for each track:

    async openCall(pc) {
      const gumStream = await navigator.mediaDevices.getUserMedia(
        {video: true, audio: true});
      for (const track of gumStream.getTracks()) {
        pc.addTrack(track);
      }
    }

The result is a set of tracks being sent to the remote peer, with no stream associations.
... For example, consider this function that an application might use to begin streaming a device's camera and microphone input over an RTCPeerConnection to a remote peer:

    async openCall(pc) {
      const gumStream = await navigator.mediaDevices.getUserMedia(
        {video: true, audio: true});
      for (const track of gumStream.getTracks()) {
        pc.addTrack(track, gumStream);
      }
    }

The remote peer might then use a track event handler that looks like this:

    pc.ontrack = ({streams: [stream]}) => videoElem.srcObject = stream;

This sets the video element's current stream to the one that contains the track that's been added to the connection.
...

    var mediaConstraints = {
      audio: true, // We want an audio track
      video: true  // ...and we want a video track
    };

    var desc = new RTCSessionDescription(sdp);

    pc.setRemoteDescription(desc).then(function () {
      return navigator.mediaDevices.getUserMedia(mediaConstraints);
    })
    .then(function(stream) {
      previewElement.srcObject = stream;
      stream.getTracks().forEach(track => pc.addTrack(track, stream));
    })

This code takes SDP which has been received from the remote peer and constructs a new RTCSessionDescription to pass into setRemoteDescription().
...This is done by iterating over the list returned by MediaStream.getTracks() and passing each track to addTrack() along with the stream it's a component of.
Signaling and video calling - Web APIs
Web > API > WebRTC API > Signaling and video calling
    ... that would be weird.");
        return;
      }

      targetUsername = clickedUsername;
      createPeerConnection();

      navigator.mediaDevices.getUserMedia(mediaConstraints)
        .then(function(localStream) {
          document.getElementById("local_video").srcObject = localStream;
          localStream.getTracks().forEach(track => myPeerConnection.addTrack(track, localStream));
        })
        .catch(handleGetUserMediaError);
      }
    }

This begins with a basic sanity check: is the user already connected?
    ... localStream = null;
      targetUsername = msg.name;
      createPeerConnection();

      var desc = new RTCSessionDescription(msg.sdp);

      myPeerConnection.setRemoteDescription(desc).then(function () {
        return navigator.mediaDevices.getUserMedia(mediaConstraints);
      })
      .then(function(stream) {
        localStream = stream;
        document.getElementById("local_video").srcObject = localStream;
        localStream.getTracks().forEach(track => myPeerConnection.addTrack(track, localStream));
      })
      .then(function() {
        return myPeerConnection.createAnswer();
      })
      .then(function(answer) {
        return myPeerConnection.setLocalDescription(answer);
      })
      .then(function() {
        var msg = {
          name: myUsername,
          target: targetUsername,
          type: "video-answer",
          sdp: myPeerConnection.localDescription ...
... Our handler for "removetrack" is:

    function handleRemoveTrackEvent(event) {
      var stream = document.getElementById("received_video").srcObject;
      var trackList = stream.getTracks();

      if (trackList.length == 0) {
        closeVideoCall();
      }
    }

This code fetches the incoming video MediaStream from the "received_video" <video> element's srcObject attribute, then calls the stream's getTracks() method to get an array of the stream's tracks.
    ... myPeerConnection.onremovetrack = null;
      myPeerConnection.onremovestream = null;
      myPeerConnection.onicecandidate = null;
      myPeerConnection.oniceconnectionstatechange = null;
      myPeerConnection.onsignalingstatechange = null;
      myPeerConnection.onicegatheringstatechange = null;
      myPeerConnection.onnegotiationneeded = null;

      if (remoteVideo.srcObject) {
        remoteVideo.srcObject.getTracks().forEach(track => track.stop());
      }

      if (localVideo.srcObject) {
        localVideo.srcObject.getTracks().forEach(track => track.stop());
      }

      myPeerConnection.close();
      myPeerConnection = null;
    }

    remoteVideo.removeAttribute("src");
    remoteVideo.removeAttribute("srcObject");
    localVideo.removeAttribute("src");
    localVideo.removeAttribute("srcObject");

    document.getElemen...
MediaStreamTrack.stop() - Web APIs
Web > API > MediaStreamTrack > stop
    function stopStreamedVideo(videoElem) {
      const stream = videoElem.srcObject;
      const tracks = stream.getTracks();

      tracks.forEach(function(track) {
        track.stop();
      });

      videoElem.srcObject = null;
    }

This works by obtaining the video element's stream from its srcObject property.
... Then the stream's track list is obtained by calling its getTracks() method.
Using the Screen Capture API - Web APIs
Web > API > Screen Capture API > Using Screen Capture
It stops the stream by getting its track list using MediaStream.getTracks(), then calling each track's MediaStreamTrack.stop() method.
...

    function stopCapture(evt) {
      let tracks = videoElem.srcObject.getTracks();

      tracks.forEach(track => track.stop());
      videoElem.srcObject = null;
    }

Dumping configuration information

For informational purposes, the startCapture() method shown above calls a method named dumpOptions(), which outputs the current track settings as well as the constraints that were placed upon the stream when it was created.
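The excerpt refers to dumpOptions() but does not include its body; a hypothetical sketch of such a routine, assuming it reports per-track data via MediaStreamTrack.getSettings() and MediaStreamTrack.getConstraints(), might look like this (it is not the article's actual code):

    // Hypothetical reconstruction, not the article's dumpOptions()
    function dumpOptions(stream) {
      for (const track of stream.getTracks()) {
        console.log(`${track.kind} track settings:`, track.getSettings());
        console.log(`${track.kind} track constraints:`, track.getConstraints());
      }
    }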
Using DTMF with WebRTC - Web APIs
Web > API > WebRTC API > Using DTMF
disconnecting."); callerpc.getlocalstreams().foreach(function(stream) { stream.gettracks().foreach(function(track) { track.stop(); }); }); receiverpc.getlocalstreams().foreach(function(stream) { stream.gettracks().foreach(function(track) { track.stop(); }); }); audio.pause(); audio.srcobject = null; receiverpc.close(); callerpc.close(); } } the tonechange event is used both to indicate when an individual tone has play...
...This is done by stopping each stream on both the caller and the receiver: iterating over each RTCPeerConnection's local streams, then over each stream's track list (as returned by its getTracks() method), and calling each track's stop() method.
Graceful asynchronous programming with Promises - Learn web development
Learn > JavaScript > Asynchronous > Promises
The code that the video chat application would use might look something like this:

    function handleCallButton(evt) {
      setStatusMessage("Calling...");
      navigator.mediaDevices.getUserMedia({video: true, audio: true})
        .then(chatStream => {
          selfViewElem.srcObject = chatStream;
          chatStream.getTracks().forEach(track => myPeerConnection.addTrack(track, chatStream));
          setStatusMessage("Connected");
        }).catch(err => {
          setStatusMessage("Failed to connect");
        });
    }

This function starts by using a function called setStatusMessage() to update a status display with the message "Calling...", indicating that a call is being attempted.
Index - Web APIs
Web > API > Index
2434 MediaStream.getTracks() (API, Experimental, Media Streams API, MediaStream, MediaStreamTrack, Method, Reference, getTracks): The getTracks() method of the MediaStream interface returns a sequence that represents all the MediaStreamTrack objects in this stream's track set, regardless of MediaStreamTrack.kind.
MediaStream - Web APIs
Web > API > MediaStream
MediaStream.getTracks() returns a list of all MediaStreamTrack objects stored in the MediaStream object, regardless of the value of the kind attribute.
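Since this result notes that getTracks() ignores the kind attribute, callers that want only one kind of track can filter the returned list by kind. A minimal sketch (assuming a stream obtained from getUserMedia(); not code from the MDN page):

    navigator.mediaDevices.getUserMedia({ audio: true, video: true })
      .then(stream => {
        // Filter the mixed track set down to audio tracks only
        const audioTracks = stream.getTracks().filter(track => track.kind === 'audio');
        console.log(`${audioTracks.length} of ${stream.getTracks().length} tracks are audio`);
      });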
Recording a media element - Web APIs
Web > API > MediaStream Recording API > Recording a media element
Stopping the input stream

The stop() function simply stops the input media:

    function stop(stream) {
      stream.getTracks().forEach(track => track.stop());
    }

This works by calling MediaStream.getTracks(), using forEach() to call MediaStreamTrack.stop() on each track in the stream.
RTCPeerConnection.addStream() - Web APIs
Web > API > RTCPeerConnection > addStream
    navigator.mediaDevices.getUserMedia({video: true, audio: true}, function(stream) {
      var pc = new RTCPeerConnection();
      pc.addStream(stream);
    });

Migrating to addTrack()

Compatibility allowing, you should update your code to instead use the addTrack() method:

    navigator.getUserMedia({video: true, audio: true}, function(stream) {
      var pc = new RTCPeerConnection();
      stream.getTracks().forEach(function(track) {
        pc.addTrack(track, stream);
      });
    });

The newer addTrack() API avoids confusion over whether later changes to the track makeup of a stream affect a peer connection (they do not).
Establishing a connection: The WebRTC perfect negotiation pattern - Web APIs
Web > API > WebRTC API > Perfect negotiation
    async function start() {
      try {
        const stream = await navigator.mediaDevices.getUserMedia(constraints);

        for (const track of stream.getTracks()) {
          pc.addTrack(track, stream);
        }
        selfVideo.srcObject = stream;
      } catch(err) {
        console.error(err);
      }
    }

This isn't appreciably different from older WebRTC connection establishment code.