Search completed in 0.96 seconds.
29 results for "getPose":
XRFrame.getPose() - Web APIs
The XRFrame method getPose() returns the relative position and orientation (the pose) of one XRSpace relative to that of another space.
... Syntax: var xrPose = xrFrame.getPose(space, baseSpace); Parameters: space, an XRSpace specifying the space for which to obtain an XRPose describing the item's position and orientation.
... Specifications: WebXR Device API, the definition of 'XRFrame.getPose()' in that specification.
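For orientation, a minimal sketch (not quoted from the result above) of calling getPose() inside a session's animation frame callback; the xrSession variable, the choice of reference spaces, and the helper name are assumptions for illustration:

    // Assumes `xrSession` is an active XRSession.
    async function logViewerPoseOnce(xrSession) {
      const localSpace = await xrSession.requestReferenceSpace("local");
      const viewerSpace = await xrSession.requestReferenceSpace("viewer");

      xrSession.requestAnimationFrame((time, frame) => {
        // Pose of the viewer's space, expressed relative to the local space.
        const pose = frame.getPose(viewerSpace, localSpace);
        if (pose) {
          const { position, orientation } = pose.transform;
          console.log(position.x, position.y, position.z, orientation.w);
        }
      });
    }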
Inputs and input sources - Web APIs
You can easily obtain the target ray corresponding to the targetRaySpace from within the drawing handler for a given frame using XRFrame's getPose() method.
...Thus, for an input controller primaryInput: let targetRayPose = frame.getPose(primaryInput.targetRaySpace, viewerRefSpace); let targetRayOrigin = targetRayPose.transform.position; let targetRayVector = targetRayPose.transform.orientation; With this, you now have the point from which the targeting ray is emitted (targetRayOrigin) and the orientation from which the direction it's pointing in can be derived (targetRayVector), given in the viewer's reference space (viewerRefSpace).
... Picking up an object: handling squeezestart events xrSession.addEventListener("squeezestart", event => { const targetRaySpace = event.inputSource.targetRaySpace; const hand = event.inputSource.handedness; let targetRayPose = event.frame.getPose(targetRaySpace, viewerRefSpace); if (!targetRayPose) { return; } let targetRayTransform = targetRayPose.transform; let targetObject = findTargetObject(targetRayTransform); if (targetObject) { if (avatar.heldObject[hand]) { dropObject(hand); } pickUpObject(targetObject, hand); } }); The squeezestart event is handled by getting the pose and transform as usual,...
...And 3 more matches
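One clarifying aside on the snippet above: transform.orientation is a quaternion, not a unit direction vector, so the ray's direction has to be derived from it, for example by rotating the -Z axis. A minimal sketch, assuming a targetRayPose obtained as shown:

    // Rotate the -Z unit vector by the pose's orientation quaternion to get
    // the direction the target ray points in (the ray fires along -Z in ray space).
    function rayDirectionFromPose(pose) {
      const { x, y, z, w } = pose.transform.orientation;
      // q * (0, 0, -1) * q^-1, expanded for the fixed input vector (0, 0, -1).
      return {
        x: -(2 * (x * z + w * y)),
        y: -(2 * (y * z - w * x)),
        z: -(1 - 2 * (x * x + y * y)),
      };
    }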
Spaces and reference spaces: Spatial tracking in WebXR - Web APIs
There is only one way to create an XRPose, and that's using the getPose() method on an animation frame as given using an XRFrame object.
... getPose() computes the position and orientation of one XRSpace relative to the origin of another, specified XRSpace, and then creates a pose representing the resulting position and orientation.
... For example, if you wish to draw a hand controller's representation using the controller's gripSpace, you can get the pose needed like this: let controlPose = frame.getPose(inputSource.gripSpace, worldRefSpace); This converts the position and orientation of the input's grip space to use the world's coordinate system, then generates the corresponding XRPose, storing it in controlPose.
...This data is obtained during each frame by calling the XRFrame method getViewerPose() to get the position and facing direction of the viewer (defining what the user should see), and getPose() to get any other poses, such as the positions of the hand controllers and any other parts of the XR system.
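A minimal sketch of that per-frame pattern (not quoted from the result above); the session, reference space, and render helpers named here are assumptions:

    // Assumes `xrSession` and a reference space `worldRefSpace` already exist,
    // and that drawScene()/drawController() are the app's own renderers.
    function renderFrame(time, frame) {
      xrSession.requestAnimationFrame(renderFrame);
      const viewerPose = frame.getViewerPose(worldRefSpace);
      if (!viewerPose) {
        return;
      }
      drawScene(viewerPose); // what the user should see
      for (const source of xrSession.inputSources) {
        if (source.gripSpace) {
          // Pose of each hand controller, expressed in world coordinates.
          const controlPose = frame.getPose(source.gripSpace, worldRefSpace);
          if (controlPose) {
            drawController(controlPose);
          }
        }
      }
    }
    xrSession.requestAnimationFrame(renderFrame);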
XRInputSourceEventInit.frame - Web APIs
The XRInputSourceEventInit dictionary's frame property specifies an XRFrame providing information about the timestamp at which the new input source event took place, as well as access to the XRFrame method getPose(), which can be used to map the coordinates of any XRReferenceSpace to the space in which the event took place.
... Syntax: xrInputSourceEventInit.frame = xrFrame; let xrInputSourceEventInit = { frame: xrFrame }; let xrInputSourceEvent = new XRInputSourceEvent(type, { frame: xrFrame }); Value: an XRFrame indicating the time at which the event took place, and providing a getPose() method which can be used to map reference spaces to the world reference space.
...Instead, the XRFrame specified by the frame property simply serves to provide access to the getPose() method, which you can use to get the relative positions of the objects in the scene at the time the event occurred.
XRPose.transform - Web APIs
The transform read-only attribute of the XRPose interface is an XRRigidTransform object providing the position and orientation of the pose relative to the base XRSpace, as specified when the pose was obtained by calling XRFrame.getPose().
...This is the same pose that's returned by the frame's getPose() method.
... xrSession.addEventListener("select", event => { let source = event.inputSource; let frame = event.frame; let targetRayPose = frame.getPose(source.targetRaySpace, myRefSpace); let targetObject = findTargetUsingRay(targetRayPose.transform.matrix); if (source.targetRayMode == "tracked-pointer") { if (source.handedness == user.handedness) { targetObject.primaryAction(); } else { targetObject.offhandAction(); } } });
XRFrame - Web APIs
In addition to providing a reference to the XRSession for which this frame is to be rendered, the getViewerPose() method is provided to obtain the XRViewerPose describing the viewer's position and orientation in space, and getPose() can be used to create an XRPose describing the position of one XRSpace relative to another.
... Methods: getPose() returns an XRPose object representing the spatial relationship between the two specified XRSpace objects.
XRInputSource.targetRaySpace - Web APIs
To determine the position and orientation of the target ray while rendering a frame, pass it into the XRFrame method getPose(), then use the returned XRPose object's transform to gather the spatial information you need.
... function updateInputSources(session, frame, refSpace) { for (const source of session.inputSources) { let targetRayPose = frame.getPose(source.targetRaySpace, refSpace); if (targetRayPose) { if (source.targetRayMode == "tracked-pointer") { myRenderTargetRayAsBeam(targetRayPose); } } /* ...
XRInputSourceEvent.frame - Web APIs
Instead, the XRFrame specified by the frame property simply serves to provide access to the getPose() method, which you can use to get the relative positions of the objects in the scene at the time the event occurred.
... xrSession.onselectstart = event => { let targetRayPose = event.frame.getPose(event.inputSource.targetRaySpace, myRefSpace); if (targetRayPose) { checkAndHandleHit(targetRayPose.transform); } }; Specifications: WebXR Device API, the definition of 'XRInputSourceEvent.frame' in that specification.
XRInputSourceEvent - Web APIs
Because this is an event frame, not an animation frame, you cannot call the XRFrame method getViewerPose() on it; instead, use getPose().
... xrSession.addEventListener("select", event => { let targetRayPose = event.frame.getPose(event.inputSource.targetRaySpace, myRefSpace); if (targetRayPose) { let hit = myHitTest(targetRayPose.transform); if (hit) { /* handle the hit */ } } }); Specifications: WebXR Device API, the definition of 'XRInputSourceEvent' in that specification.
XRPose - Web APIs
To obtain the XRPose for the XRSpace used as the local coordinate system of an object, call XRFrame.getPose(), specifying that local XRSpace and the space to which you wish to convert: thePose = xrFrame.getPose(localSpace, baseSpace); The pose for a viewer (or camera) is represented by the XRViewerPose subclass of XRPose.
... This is obtained using XRFrame.getViewerPose() instead of getPose(), specifying a reference space which has been adjusted to position and orient the node to provide the desired viewing position and angle: viewerPose = xrFrame.getViewerPose(adjReferenceSpace); Here, adjReferenceSpace is a reference space which has been updated using the base frame of reference for the frame and any adjustments needed to position the viewer based on movement or rotation which is being supplied from a source other than the XR device, such as keyboard or mouse inputs.
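A minimal sketch (not from the result above) of producing such an adjusted reference space with XRReferenceSpace.getOffsetReferenceSpace(); the base space variable and the keyboard/mouse-derived offset values are assumptions for illustration:

    // Assumes `baseRefSpace` came from xrSession.requestReferenceSpace("local") and
    // that walkOffset/yawRadians are accumulated elsewhere from keyboard or mouse input.
    function adjustedReferenceSpace(baseRefSpace, walkOffset, yawRadians) {
      const offsetTransform = new XRRigidTransform(
        { x: walkOffset.x, y: 0, z: walkOffset.z },  // position offset
        { x: 0, y: Math.sin(yawRadians / 2), z: 0, w: Math.cos(yawRadians / 2) }  // rotation about the Y axis
      );
      // Sign conventions depend on how the app tracks its offsets.
      return baseRefSpace.getOffsetReferenceSpace(offsetTransform);
    }

    // Then, each frame: viewerPose = frame.getViewerPose(adjustedReferenceSpace(baseRefSpace, walkOffset, yaw));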
XRSession: select event - Web APIs
xrsession.addeventlistener("select", event => { if (event.inputsource.targetraymode == "tracked-pointer") { let targetraypose = event.frame.getpose(event.inputsource.targetrayspace, myrefspace); if (targetraypose) { myhandleselectwithray(targetraypose.transform); } } }); you can of course also set up a handler for select events by setting the xrsession object's onselect event handler property to a function that handles the event: xrsession.onselect = event => { if (event.inputsource.targetra...
...ymode == "tracked-pointer") { let targetraypose = event.frame.getpose(event.inputsource.targetrayspace, myrefspace); if (targetraypose) { myhandleselectwithray(targetraypose.transform); } } }; specifications specification status comment webxr device apithe definition of 'select event' in that specification.
XRSession: selectend event - Web APIs
After checking to ensure that the received event is a tracked-pointer event (the only kind we handle here), the target ray's pose is obtained using getPose().
... xrSession.addEventListener("selectstart", onSelectionEvent); xrSession.addEventListener("select", onSelectionEvent); xrSession.addEventListener("selectend", onSelectionEvent); function onSelectionEvent(event) { let source = event.inputSource; let targetObj = null; if (source.targetRayMode != "tracked-pointer") { return; } let targetRayPose = event.frame.getPose(source.targetRaySpace, myRefSpace); if (!targetRayPose) { return; } switch (event.type) { case "selectstart": targetObj = myBeginTracking(targetRayPose.transform.matrix); break; case "select": myDropObject(targetObj, targetRayPose.transform.matrix); break; case "selectend": myStopTracking(targetObj, targetRayPose.transform.matrix); break; } } You can of course also...
XRSession: selectstart event - Web APIs
After checking to ensure that the received event is a tracked-pointer event (the only kind we handle here), the target ray's pose is obtained using getPose().
... xrSession.addEventListener("selectstart", onSelectionEvent); xrSession.addEventListener("select", onSelectionEvent); xrSession.addEventListener("selectend", onSelectionEvent); function onSelectionEvent(event) { let source = event.inputSource; let targetObj = null; if (source.targetRayMode != "tracked-pointer") { return; } let targetRayPose = event.frame.getPose(source.targetRaySpace, myRefSpace); if (!targetRayPose) { return; } switch (event.type) { case "selectstart": targetObj = myBeginTracking(targetRayPose.transform.matrix); break; case "select": myDropObject(targetObj, targetRayPose.transform.matrix); break; case "selectend": myStopTracking(targetObj, targetRayPose.transform.matrix); break; } } You can of course also...
XRSession: squeeze event - Web APIs
xrsession.addeventlistener("squeeze", event => { if (event.inputsource.targetraymode == "tracked-pointer") { let targetraypose = event.frame.getpose(event.inputsource.targetrayspace, myrefspace); if (targetraypose) { myhandlesqueezewithray(targetraypose.transform); } } }); you can of course also set up a handler for squeeze events by setting the xrsession object's onsqueeze event handler property to a function that handles the event: xrsession.onsqueeze = event => { if (event.inputsource.targ...
...etraymode == "tracked-pointer") { let targetraypose = event.frame.getpose(event.inputsource.targetrayspace, myrefspace); if (targetraypose) { myhandlesqueezewithray(targetraypose.transform); } } }; specifications specification status comment webxr device apithe definition of 'squeeze event' in that specification.
XRSession: squeezeend event - Web APIs
After checking to ensure that the received event is a tracked-pointer event (the only kind we handle here), the target ray's pose is obtained using getPose().
... xrSession.addEventListener("squeezestart", onSqueezeEvent); xrSession.addEventListener("squeeze", onSqueezeEvent); xrSession.addEventListener("squeezeend", onSqueezeEvent); function onSqueezeEvent(event) { let source = event.inputSource; let targetObj = null; if (source.targetRayMode != "tracked-pointer") { return; } let targetRayPose = event.frame.getPose(source.targetRaySpace, myRefSpace); if (!targetRayPose) { return; } switch (event.type) { case "squeezestart": targetObj = myBeginTracking(targetRayPose.transform.matrix); break; case "squeeze": myDropObject(targetObj, targetRayPose.transform.matrix); break; case "squeezeend": myStopTracking(targetObj, targetRayPose.transform.matrix); break; } } You can of course a...
XRSession: squeezestart event - Web APIs
After checking to ensure that the received event is a tracked-pointer event (the only kind we handle here), the target ray's pose is obtained using getPose().
... xrSession.addEventListener("squeezestart", onSqueezeEvent); xrSession.addEventListener("squeeze", onSqueezeEvent); xrSession.addEventListener("squeezeend", onSqueezeEvent); function onSqueezeEvent(event) { let source = event.inputSource; let targetObj = null; if (source.targetRayMode != "tracked-pointer") { return; } let targetRayPose = event.frame.getPose(source.targetRaySpace, myRefSpace); if (!targetRayPose) { return; } switch (event.type) { case "squeezestart": targetObj = myBeginTracking(targetRayPose.transform.matrix); break; case "squeeze": myDropObject(targetObj, targetRayPose.transform.matrix); break; case "squeezeend": myStopTracking(targetObj, targetRayPose.transform.matrix); break; } } You can of course a...
Index - Web APIs
XRFrame.getPose(): The getPose() method of the XRFrame interface returns an XRPose object representing the relative relationship between any two XRSpace objects.
XRFrame.getViewerPose() - Web APIs
See the getPose() method for a way to calculate a pose that represents the difference between two spaces.
XRHandedness - Web APIs
function updateInputSources(session, frame, refSpace) { for (const source of session.inputSources) { if (source.gripSpace) { let gripPose = frame.getPose(source.gripSpace, refSpace); if (gripPose) { myRenderHandObject(gripPose, source.handedness); } } } } This function, which would be called every animation frame (or possibly just periodically, depending on the degree of smoothness required and any performance constraints), scans the list of input sources looking for any which have a gripSpace which isn't null.
XRInputSource.gripSpace - Web APIs
for (const source of xrSession.inputSources) { if (source.gripSpace) { let gripPose = frame.getPose(source.gripSpace, xrRefSpace); if (gripPose) { myDrawMeshUsingTransform(controllerMesh, gripPose.transform.matrix); } } } For each input source which has a value for gripSpace, this loop obtains the XRPose representing the position and orientation that are described by gripSpace.
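A minimal sketch (not from the result above) of what a helper like myDrawMeshUsingTransform might do with that matrix in a plain WebGL renderer; the gl context, shader program, uniform name, and mesh object are assumptions:

    // Assumes `gl` is a WebGLRenderingContext, `program` is a linked shader program
    // with a mat4 uniform named "uModelMatrix", and `mesh` knows how to draw itself.
    function myDrawMeshUsingTransform(mesh, matrix) {
      const location = gl.getUniformLocation(program, "uModelMatrix");
      // XRRigidTransform.matrix is a 16-element Float32Array in column-major
      // order, which is what uniformMatrix4fv expects.
      gl.uniformMatrix4fv(location, false, matrix);
      mesh.draw(gl);
    }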
XRInputSource.handedness - Web APIs
function updateInputSources(session, frame, refSpace) { for (const source of session.inputSources) { if (source.gripSpace) { let gripPose = frame.getPose(source.gripSpace, refSpace); if (gripPose) { myRenderHandObject(gripPose, source.handedness); } } } } This function, which would be called every animation frame (or possibly just periodically, depending on the degree of smoothness required and any performance constraints), scans the list of input sources looking for any which have a gripSpace which isn't null.
XRInputSource.targetRayMode - Web APIs
function updateInputSources(session, frame, refSpace) { for (const source of session.inputSources) { let targetRayPose = frame.getPose(source.targetRaySpace, refSpace); if (targetRayPose) { if (source.targetRayMode == "tracked-pointer") { myRenderTargetRayAsBeam(targetRayPose); } } /* ...
XRInputSourceEvent() - Web APIs
xrsession.addeventlistener("select", event => { let targetraypose = event.frame.getpose(event.inputsource.targetrayspace, myrefspace); if (targetraypose) { let hit = myhittest(targetraypose.transform); if (hit) { /* handle the hit */ } } }); specifications specification status comment webxr device apithe definition of 'xrinputsourceevent' in that specification.
XRSession.onselect - Web APIs
xrSession.onselect = event => { let source = event.inputSource; if (source.handedness == user.handedness) { if (source.targetRayMode == "tracked-pointer") { let targetRayPose = event.frame.getPose(source.targetRaySpace, myRefSpace); if (targetRayPose) { myHandleSelectWithRay(targetRayPose); } } } }; Specifications: WebXR Device API, the definition of 'XRSession.onselect' in that specification.
XRSession.onsqueeze - Web APIs
xrSession.onsqueeze = event => { if (event.inputSource.handedness == user.handedness) { let targetRayPose = event.frame.getPose(event.inputSource.targetRaySpace, myRefSpace); if (user.heldObject && targetRayPose) { dropObjectUsingRay(user.heldObject, targetRayPose.transform.matrix); } } }; See the examples in the onsqueezestart and onsqueezeend event handlers for the rest of the event handling related to this approach.
XRSession.onsqueezeend - Web APIs
xrSession.onsqueezeend = event => { if (event.inputSource.handedness == user.handedness) { let targetRayPose = event.frame.getPose(event.inputSource.targetRaySpace, myRefSpace); if (user.heldObject) { cancelObjectDrag(user.heldObject); } } }; This code presumes that if the user actually intentionally completed the drag, user.heldObject will be null here.
XRSession.onsqueezestart - Web APIs
xrSession.onsqueezestart = event => { if (event.inputSource.handedness == user.handedness) { let targetRayPose = event.frame.getPose(event.inputSource.targetRaySpace, myRefSpace); if (targetRayPose) { user.heldObject = findObjectUsingRay(targetRayPose.transform); } } }; Specifications: WebXR Device API, the definition of 'XRSession.onsqueezestart' in that specification.
XRTargetRayMode - Web APIs
function updateInputSources(session, frame, refSpace) { for (const source of session.inputSources) { let targetRayPose = frame.getPose(source.targetRaySpace, refSpace); if (targetRayPose) { if (source.targetRayMode == "tracked-pointer") { myRenderTargetRayAsBeam(targetRayPose); } } /* ...
XRView - Web APIs
To use this function, we simply pass the returned reference space into XRFrame.getPose() or getViewerPose(), as appropriate for your needs.
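To round out the picture (not quoted from the result above), a minimal sketch of how the views of an XRViewerPose are typically consumed once the pose has been obtained; the glLayer, gl, and renderSceneFromView names are assumptions:

    // Assumes `frame`, `refSpace`, a WebGL context `gl`, and an XRWebGLLayer
    // `glLayer` are all available inside the session's animation frame callback.
    const viewerPose = frame.getViewerPose(refSpace);
    if (viewerPose) {
      for (const view of viewerPose.views) {
        // Each XRView carries its own projection matrix and viewport.
        const viewport = glLayer.getViewport(view);
        gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);
        renderSceneFromView(view.projectionMatrix, view.transform.inverse.matrix);
      }
    }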