A new OpenXR body tracking extension was released yesterday, bringing the total up to 3 (4 if you include Vive trackers). There's probably going to be a standardized body tracking skeleton at some point, so LÖVR should start thinking about it.
Facebook's extension includes hand bones, but (famously) doesn't include legs. Pico's extension has a cool picture and defines 24 joints for the whole body. HTC's also has a cool picture and is very close to Pico's. But both of these stop at the wrist and don't include hand tracking. Vive trackers only define a few joints (elbow, shoulder, chest, waist, knee, foot, wrist, ankle) with no hierarchy, and there might only be a couple of them active at once, depending on the user's full-body tracking setup.
Currently the Lua API for skeletons is kind of fragmented. There is `lovr.headset.getSkeleton`, which returns joints for a particular device, and you can pass in `left` or `right`. It's effectively just used for hand tracking.
Separately, there are some extra devices for full body tracking: `elbow/*`, `shoulder/*`, `chest`, `waist`, `knee/*`, and `foot/*`. These mostly exist because they match the roles used by Vive trackers. Ultraleap hand tracking also returns elbow positions, and we map those to the `elbow/*` devices as well.
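To make the fragmentation concrete, here's roughly what reading pose data looks like today (the per-joint table layout is simplified here and may differ by LÖVR version; check the `getSkeleton` docs for the exact fields):

```lua
-- A sketch of today's split API.
function lovr.update(dt)
  -- Hands go through getSkeleton, which returns a table of joints (or nil)
  local joints = lovr.headset.getSkeleton('hand/left')
  if joints then
    for i, joint in ipairs(joints) do
      local x, y, z = joint[1], joint[2], joint[3] -- joint position (simplified)
    end
  end

  -- Everything else goes through per-device poses instead
  for _, device in ipairs({ 'elbow/left', 'shoulder/right', 'waist', 'knee/left' }) do
    if lovr.headset.isTracked(device) then
      local x, y, z, angle, ax, ay, az = lovr.headset.getPose(device)
    end
  end
end
```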
So LÖVR is kind of in a weird place where it seems like `lovr.headset.getSkeleton` should return a full-body skeleton, but there are also all these "devices" that contain bits of the skeleton pose as well. We should probably just pick one going forward: either remove devices from `getSkeleton` and have `.getSkeleton()` return as much full-body skeletal info as possible in one giant table (including hand tracking), or lean into semantic-path device poses for the entire body, possibly even converting the hand tracking joints into Devices.
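For concreteness, a rough sketch of the two directions (none of this API exists yet; the joint names and table shape are invented):

```lua
-- Option 1 (hypothetical): one giant table with body and hand joints together.
local skeleton = lovr.headset.getSkeleton() -- no device argument
local hips = skeleton['hips']               -- invented joint name
local tip = skeleton['hand/left/ring/tip']  -- hand joints live here too

-- Option 2 (hypothetical): every joint is a Device with a semantic path,
-- reusing the existing per-device pose API.
local x, y, z, angle, ax, ay, az = lovr.headset.getPose('hand/left/ring/tip')
```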
Semantic paths cause some problems for the API design, since they're also used for all the button/axis input, velocity, controller models, and haptics. Are we ever going to have button inputs on our knees? `lovr.headset.isDown('knee/left', 'bend')`??? Ask for a 3D model of someone's metacarpal bone? On the other hand, I could see joint velocity and joint haptics potentially becoming a thing, and the Device methods do provide a nice way to wrap all that up. There is maybe an opportunity to separate "pose devices" (`hand/point`, `hand/grip`, `hand/palm`, `hand/ring/tip`, `elbow/left`, etc.) from "input devices" (left/right controllers, hand tracking gestures, stylus, headset), but I need to think about it more.
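As a hedged sketch of how that split might read in Lua (the restriction itself is hypothetical; the functions shown already exist):

```lua
-- Hypothetical split: pose devices answer pose-style queries...
local x, y, z, angle, ax, ay, az = lovr.headset.getPose('elbow/left')
local vx, vy, vz = lovr.headset.getVelocity('elbow/left')

-- ...while input-style queries only make sense on input devices:
local triggered = lovr.headset.isDown('hand/right', 'trigger')

-- ...and this would become an error (or always false) by design:
-- lovr.headset.isDown('knee/left', 'bend')
```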
I feel like devices, skeletons, and inputs are three separate concepts.
A device represents headsets, controllers, styluses, trackers, and such. Maybe there are also virtual devices for things like Quest hand tracking. Technically, non-spatial devices like gamepads, steering wheels, mice, and keyboards are also "devices," but they may be less relevant for a discussion on body tracking.
A skeleton is pose information. Ideally you could have a single authoritative concept of a skeleton that has enough bones to represent every joint in the human body.
Inputs are actions and axes, like buttons and analog sticks. Tracked hands may offer virtual inputs like pinching and gripping, as well as gestures like pointing or a thumbs up (which may itself just be represented as a button-style input).
In my head, devices offer a range of inputs. They can be connected and disconnected. They can also offer skeletal pose information. It may be only a subset of a skeleton, but it should map to the bones of the major authoritative skeleton. The headset offers the pose of the head bone. Controllers offer inputs and can offer simulated hand poses. Tracked hands offer high-fidelity hand poses and can simulate inputs.
Maybe it would be nice to query pose information off devices independently, and have a single unified body skeleton that fuses pose information from all available devices. Maybe the unified skeleton could even offer simulated poses for untracked limbs, like IK elbows?
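A minimal sketch of what that could look like, assuming an invented `getBodySkeleton` that fuses whatever devices are available and flags inferred joints:

```lua
-- Hypothetical unified skeleton fusing all available pose sources.
local body = lovr.headset.getBodySkeleton() -- invented function

-- Head from the HMD, hands from hand tracking or controllers,
-- elbows possibly inferred via IK when no tracker covers them.
local elbow = body['elbow/left']
if elbow and elbow.estimated then
  -- pose was simulated (e.g. IK) rather than directly tracked
end
```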