US20210089162A1 - Calibration of inertial measurement units in alignment with a skeleton model to control a computer system based on determination of orientation of an inertial measurement unit from an image of a portion of a user - Google Patents


Info

Publication number
US20210089162A1
Authority
US
United States
Prior art keywords
user
orientation
orientations
sensor device
hand
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/576,661
Other versions
US10976863B1
Inventor
Viktor Vladimirovich Erivantcev
Alexey Ivanovich Kartashov
Daniil Olegovich Goncharov
Ratmir Rasilevich Gubaidullin
Alexey Andreevich Gusev
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Finchxr Ltd
Original Assignee
Finch Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Finch Technologies Ltd
Priority to US16/576,661
Assigned to FINCH TECHNOLOGIES LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ERIVANTCEV, Viktor Vladimirovich, GONCHAROV, DANIIL OLEGOVICH, GUBAIDULLIN, RATMIR RASILEVICH, GUSEV, ALEXEY ANDREEVICH, KARTASHOV, ALEXEY IVANOVICH
Publication of US20210089162A1
Application granted
Publication of US10976863B1
Assigned to FINCHXR LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FINCH TECHNOLOGIES LTD.
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416Control or interface arrangements specially adapted for digitisers
    • G06F3/0418Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014Hand-worn input/output arrangements, e.g. data gloves
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Definitions

  • At least a portion of the present disclosure relates to computer input devices in general and more particularly but not limited to input devices for virtual reality and/or augmented/mixed reality applications implemented using computing devices, such as mobile phones, smart watches, similar mobile devices, and/or other devices.
  • U.S. Pat. App. Pub. No. 2014/0028547 discloses a user control device having a combined inertial sensor to detect the movements of the device for pointing and selecting within a real or virtual three-dimensional space.
  • U.S. Pat. App. Pub. No. 2015/0277559 discloses a finger-ring-mounted touchscreen having a wireless transceiver that wirelessly transmits commands generated from events on the touchscreen.
  • U.S. Pat. App. Pub. No. 2015/0358543 discloses a motion capture device that has a plurality of inertial measurement units to measure the motion parameters of fingers and a palm of a user.
  • U.S. Pat. App. Pub. No. 2007/0050597 discloses a game controller having an acceleration sensor and a gyro sensor.
  • U.S. Pat. No. D772,986 discloses the ornamental design for a wireless game controller.
  • Chinese Pat. App. Pub. No. 103226398 discloses data gloves that use micro-inertial sensor network technologies, where each micro-inertial sensor is an attitude and heading reference system, having a tri-axial micro-electromechanical system (MEMS) micro-gyroscope, a tri-axial micro-acceleration sensor and a tri-axial geomagnetic sensor which are packaged in a circuit board.
  • U.S. Pat. App. Pub. No. 2014/0313022 and U.S. Pat. App. Pub. No. 2012/0025945 disclose other data gloves.
  • U.S. Pat. App. Pub. No. 2016/0085310 discloses techniques to track hand or body pose from image data in which a best candidate pose from a pool of candidate poses is selected as the current tracked pose.
  • U.S. Pat. App. Pub. No. 2017/0344829 discloses an action detection scheme using a recurrent neural network (RNN) where joint locations are applied to the recurrent neural network (RNN) to determine an action label representing the action of an entity depicted in a frame of a video.
  • U.S. Pat. App. Pub. No. 2017/0186226 discloses a calibration engine that uses a machine learning system to extract a region of interest to compute values of shape parameters of a 3D mesh model.
  • U.S. Pat. App. Pub. No. 2017/0186226 discloses a system where an observed position is determined from an image and a predicted position is determined using an inertial measurement unit. The predicted position is adjusted by an offset until a difference between the observed position and the predicted position is less than a threshold value.
  • FIG. 1 illustrates a system to track user movements according to one embodiment.
  • FIG. 2 illustrates a system to control computer operations according to one embodiment.
  • FIG. 3 illustrates a skeleton model that can be controlled by tracking user movements according to one embodiment.
  • FIGS. 4-6 illustrate processing of images showing a portion of a user to determine orientations of predefined features of the portion of the user.
  • FIG. 7 shows a method for calibrating orientation measurements generated by the inertial measurement unit relative to a skeleton model of the user based on the orientation of the sensor device.
  • when the LED lights of a sensor module are captured by a camera (e.g., in the head mounted display), the locations of the LED lights can be processed via an artificial neural network (ANN) to provide an orientation measurement for the sensor module.
  • the orientation measurement for the sensor module, determined based on the optical indicators, can be used to calibrate orientation measurements generated by an inertial measurement unit in the sensor module.
  • in some instances, the LED lights of the sensor module may not be in a position visible to the camera and thus cannot be captured as optical indicators in the images generated by the camera. In other instances, the sensor module may not have LED lights configured on it.
  • the present application discloses techniques that can be used to determine the orientation of the sensor module based on images captured by the camera, without relying upon LED optical indicators. For example, when the sensor module is being held or worn on a portion of the user in a predetermined manner, an image of the portion of the user can be used in a first ANN to determine the orientations of predefined features of the user, which can then be used in a second ANN to predict the orientation of the sensor module based on the orientations of the predefined features of the user.
  • the sensor module can be in the form of a ring worn on a predetermined finger of a hand of the user; and the first ANN can be used to determine the orientations of features of the user, such as the orientations of the wrist, palm, forearm, and/or the distal, middle and proximal phalanges of the thumb and/or index finger of the user.
  • a sensor device can be configured as a ring attached to the middle phalange of the index finger; and the sensor device has a touch pad.
  • the orientation of the sensor device can be predicted based on the orientations of the bones of the thumb and/or the index finger.
  • an image of the hand can be provided as an input to an ANN to determine the orientations of certain features on the hand of the user, which orientations can be used in a further ANN to determine the orientation of the ring/sensor device.
  • the features identified/used for the determination of the orientation of the ring/sensor device can include bones and/or joints, such as the wrist, palm, and phalanges of the thumb and index finger.
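  • A minimal sketch of this two-stage approach is shown below (an illustrative Python/PyTorch example, not the disclosed implementation; the network sizes, number of features, and module names are assumptions):

```python
import torch
import torch.nn as nn

class FeatureOrientationNet(nn.Module):
    """Plays the role of the first ANN: hand image -> orientations of predefined features."""
    def __init__(self, num_features=8):
        super().__init__()
        self.num_features = num_features
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # one 3D direction vector per predefined feature (wrist, palm, phalanges, ...)
        self.head = nn.Linear(32, num_features * 3)

    def forward(self, image):                       # image: (batch, 1, H, W), black/white
        return self.head(self.backbone(image)).view(-1, self.num_features, 3)

class DeviceOrientationNet(nn.Module):
    """Plays the role of the second ANN: feature orientations -> sensor device orientation."""
    def __init__(self, num_features=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_features * 3, 64), nn.ReLU(),
            nn.Linear(64, 4),                       # orientation expressed as a quaternion
        )

    def forward(self, feature_orientations):        # (batch, num_features, 3)
        q = self.mlp(feature_orientations.flatten(1))
        return q / q.norm(dim=1, keepdim=True)      # normalize to a unit quaternion

image = torch.rand(1, 1, 64, 64)                    # stand-in for the captured hand image
features = FeatureOrientationNet()(image)
device_orientation = DeviceOrientationNet()(features)
```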
  • uncalibrated measurements of an inertial measurement unit can be considered as orientations of the inertial sensor measured relative to an unknown reference coordinate system.
  • a calibration process identifies the unknown reference coordinate system and its relationship with respect to a known coordinate system. After the calibration the measurements of the IMU are relative to the known coordinate system.
  • the calibrated measurements can be an orientation relative to a predetermined orientation in the space, relative to a particular orientation of the sensor device at a specific time instance, relative to the orientation of the arm or hand of a user at a time instance, or relative to a reference orientation/pose of a skeleton model of the user.
  • calibration can include the determination of calibration parameters for the measurements of the inertial measurement unit such that the calibrated measurements of the inertial measurement unit are relative to a known orientation, such as the orientation of the sensor device in which the inertial measurement unit is installed, the orientation of the arm or hand of a user to which the sensor device is attached, or the orientation of a skeleton model of the user in a reference pose.
  • a stereo camera integrated in a head mount display (HMD) can be used to capture images of sensor modules on the user.
  • Computer vision techniques and/or artificial neural network techniques can process the captured images to identify one or more orientations that can be used to calibrate the measurements of the inertial measurement units in the sensor modules.
  • the kinematics of a user can be modeled using a skeleton model having a set of rigid parts/portions connected by joints.
  • the head, the torso, the left and right upper arms, the left and right forearms, the palms, phalange bones of fingers, metacarpal bones of thumbs, upper legs, lower legs, and feet can be considered as rigid parts that are connected via various joints, such as the neck, shoulders, elbows, wrist, and finger joints.
  • the movements of the parts in the skeleton model of a user can be controlled by the movements of the corresponding portions of the user tracked using sensor modules.
  • the sensor modules can determine the orientations of the portions of the user, such as the hands, arms, and head of the user.
  • the measured orientations of the corresponding parts of the user determine the orientations of the parts of the skeleton model, such as hands and arms.
  • the relative positions and/or orientations of the rigid parts collectively represent the pose of the user and/or the skeleton model.
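  • As a rough illustration of such a model (a minimal Python sketch with assumed part names, joints, and a quaternion orientation per rigid part; it is not the disclosed skeleton model):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RigidPart:
    name: str
    parent_joint: Optional[str] = None               # joint connecting the part to its parent
    orientation: tuple = (1.0, 0.0, 0.0, 0.0)        # unit quaternion (w, x, y, z)

@dataclass
class SkeletonModel:
    parts: dict = field(default_factory=dict)

    def add_part(self, name, parent_joint=None):
        self.parts[name] = RigidPart(name, parent_joint)

    def set_orientation(self, name, quaternion):
        self.parts[name].orientation = quaternion    # driven by measured or predicted values

# one kinematic chain of the upper body: torso -> upper arm -> forearm -> hand
skeleton = SkeletonModel()
skeleton.add_part("torso")
skeleton.add_part("right_upper_arm", parent_joint="right_shoulder")
skeleton.add_part("right_forearm", parent_joint="right_elbow")
skeleton.add_part("right_hand", parent_joint="right_wrist")
skeleton.set_orientation("right_hand", (0.924, 0.0, 0.383, 0.0))   # e.g., from a hand module
```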
  • the skeleton model of the user can be used to control the presentation of an avatar of the user, to identify the gesture inputs of the user, and/or to make a virtual reality or augmented reality presentation of the user.
  • FIG. 1 illustrates a system to track user movements according to one embodiment.
  • FIG. 1 illustrates various parts of a user, such as the torso ( 101 ) of the user, the head ( 107 ) of the user, the upper arms ( 103 and 105 ) of the user, the forearms ( 112 and 114 ) of the user, and the hands ( 106 and 108 ) of the user.
  • the hands ( 106 and 108 ) of the user are considered rigid parts movable around the wrists of the user.
  • the palms and finger bones of the user can be further tracked for their movements relative to finger joints (e.g., to determine the hand gestures of the user made using relative positions among fingers of a hand and the palm of the hand).
  • the user wears several sensor modules/devices ( 111 , 113 , 115 , 117 and 119 ) that track the orientations of parts of the user that are considered, or recognized as, rigid in an application.
  • rigid parts of the user are movable relative to the torso ( 101 ) of the user and relative to each other.
  • the rigid parts include the head ( 107 ), the upper arms ( 103 and 105 ), the forearms ( 112 and 114 ), and the hands ( 106 and 108 ).
  • the joints such as neck, shoulder, elbow, and/or wrist, connect the rigid parts of the user to form one or more kinematic chains.
  • the kinematic chains can be modeled in a computing device ( 141 ) to control the application.
  • a tracking device can be attached to each individual rigid part in the kinematic chain to measure its orientation.
  • the position and/or orientation of a rigid part in a reference system can be tracked using one of many systems known in the field.
  • Some of the systems may use one or more cameras to take images of a rigid part marked using optical markers and analyze the images to compute the position and/or orientation of the part.
  • Some of the systems may track the rigid part based on signals transmitted from, or received at, a tracking device attached to the rigid part, such as radio frequency signals, infrared signals, ultrasound signals. The signals may correspond to signals received in the tracking device, and/or signals emitted from the tracking device.
  • Some of the systems may use inertial measurement units (IMUs) to track the position and/or orientation of the tracking device.
  • the sensor devices ( 111 , 113 , 115 , 117 and 119 ) are used to track some of the rigid parts (e.g., 107 , 103 , 105 , 106 , 108 ) in the one or more kinematic chains, but sensor devices are omitted from other rigid parts ( 101 , 112 , 114 ) in the one or more kinematic chains to reduce the number of sensor devices used and/or to improve user experience for wearing the reduced number of sensor devices.
  • the computing device ( 141 ) can have a prediction model ( 116 ) trained to generate predicted measurements of parts ( 101 , 112 , 114 , 107 , 103 , 105 , 106 , and/or 108 ) of the user based on the measurements of the sensor devices ( 111 , 113 , 115 , 117 and 119 ).
  • the prediction model ( 116 ) can be implemented using an artificial neural network (ANN) in the computing device ( 141 ) to predict the measurements of the orientations of the rigid parts ( 101 , 112 , 114 ) that have omitted sensor devices, based on the measurements of the orientations of the rigid parts ( 107 , 103 , 105 , 106 , 108 ) that have the attached sensor devices ( 111 , 113 , 115 , 117 and 119 ).
  • the artificial neural network can be trained to predict the measurements of the orientations of the rigid parts ( 107 , 103 , 105 , 106 , 108 ) that would be measured by another system (e.g., an optical tracking system), based on the measurement of the attached sensor devices ( 111 , 113 , 115 , 117 and 119 ) that measure orientations using a different technique (e.g., IMUs).
  • the sensor devices ( 111 , 113 , 115 , 117 , 119 ) communicate their movement measurements to the computing device ( 141 ), which computes or predicts the orientation of the rigid parts ( 107 , 103 , 105 , 106 , 108 , 101 , 112 , 114 ) by applying the measurements obtained from the attached sensor devices ( 111 , 113 , 115 , 117 and 119 ) as inputs to an artificial neural network trained in a way as further discussed below.
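  • An illustrative sketch of such a prediction model is shown below (the architecture, part names, and quaternion representation are assumptions; the disclosure does not specify this particular network):

```python
import torch
import torch.nn as nn

TRACKED = ["head", "left_upper_arm", "right_upper_arm", "left_hand", "right_hand"]
PREDICTED = ["torso", "left_forearm", "right_forearm"]

prediction_model = nn.Sequential(
    nn.Linear(len(TRACKED) * 4, 128), nn.ReLU(),     # 4 = one quaternion per tracked part
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, len(PREDICTED) * 4),               # one quaternion per predicted part
)

measured = torch.rand(1, len(TRACKED) * 4)             # stand-in for calibrated IMU readings
predicted = prediction_model(measured).view(1, len(PREDICTED), 4)
predicted = predicted / predicted.norm(dim=-1, keepdim=True)   # normalize to unit quaternions
```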
  • each of the sensor devices communicates its measurements directly to the computing device ( 141 ) in a way independent from the operations of other sensor devices.
  • one of the sensor devices may function as a base unit that receives measurements from one or more other sensor devices and transmits the bundled and/or combined measurements to the computing device ( 141 ).
  • the artificial neural network is implemented in the base unit and used to generate the predicted measurements that are communicated to the computing device ( 141 ).
  • wireless connections made via a personal area wireless network (e.g., Bluetooth connections) or a local area wireless network (e.g., Wi-Fi connections) can be used to facilitate the communication from the sensor devices ( 111 , 113 , 115 , 117 and 119 ) to the computing device ( 141 ); alternatively, wired connections can be used to facilitate the communication among some of the sensor devices ( 111 , 113 , 115 , 117 and 119 ) and/or with the computing device ( 141 ).
  • a hand module ( 117 or 119 ) attached to or held in a corresponding hand ( 106 or 108 ) of the user may receive the motion measurements of a corresponding arm module ( 115 or 113 ) and transmit the motion measurements of the corresponding hand ( 106 or 108 ) and the corresponding upper arm ( 105 or 103 ) to the computing device ( 141 ).
  • the hand ( 106 ), the forearm ( 114 ), and the upper arm ( 105 ) can be considered a kinematic chain, for which an artificial neural network can be trained to predict the orientation measurements generated by an optical track system, based on the sensor inputs from the sensor devices ( 117 and 115 ) that are attached to the hand ( 106 ) and the upper arm ( 105 ), without a corresponding device on the forearm ( 114 ).
  • the hand module may combine its measurements with the measurements of the corresponding arm module ( 115 ) to compute the orientation of the forearm connected between the hand ( 106 ) and the upper arm ( 105 ), in a way as disclosed in U.S. patent application Ser. No. 15/787,555, filed Oct. 18, 2017 and entitled “Tracking Arm Movements to Generate Inputs for Computer Systems”, the entire disclosure of which is hereby incorporated herein by reference.
  • the hand modules ( 117 and 119 ) and the arm modules ( 115 and 113 ) can each be implemented via a base unit (or a game controller) and an arm/shoulder module discussed in U.S. patent application Ser. No. 15/492,915, filed Apr. 20, 2017 and entitled “Devices for Controlling Computers based on Motions and Positions of Hands”, the entire disclosure of which application is hereby incorporated herein by reference.
  • the head module ( 111 ) is configured as a base unit that receives the motion measurements from the hand modules ( 117 and 119 ) and the arm modules ( 115 and 113 ) and bundles the measurement data for transmission to the computing device ( 141 ).
  • the computing device ( 141 ) is implemented as part of the head module ( 111 ).
  • the head module ( 111 ) may further determine the orientation of the torso ( 101 ) from the orientation of the arm modules ( 115 and 113 ) and/or the orientation of the head module ( 111 ), using an artificial neural network trained for a corresponding kinematic chain, which includes the upper arms ( 103 and 105 ), the torso ( 101 ), and/or the head ( 107 ).
  • the hand modules are optional in the system illustrated in FIG. 1 .
  • the head module ( 111 ) is not used in the tracking of the orientation of the torso ( 101 ) of the user.
  • the measurements of the sensor devices are calibrated for alignment with a common reference system, such as the coordinate system ( 100 ).
  • the coordinate system ( 100 ) can correspond to the orientation of the arms and body of the user in a standardized pose illustrated in FIG. 1 .
  • the arms of the user point in directions that are parallel to the Y axis; the front facing direction of the user is parallel to the X axis; and the direction from the legs and the torso ( 101 ) to the head ( 107 ) is parallel to the Z axis.
  • the hands, arms ( 105 , 103 ), the head ( 107 ) and the torso ( 101 ) of the user may move relative to each other and relative to the coordinate system ( 100 ).
  • the measurements of the sensor devices ( 111 , 113 , 115 , 117 and 119 ) provide orientations of the hands ( 106 and 108 ), the upper arms ( 105 , 103 ), and the head ( 107 ) of the user relative to the coordinate system ( 100 ).
  • the computing device ( 141 ) computes, estimates, or predicts the current orientation of the torso ( 101 ) and/or the forearms ( 112 and 114 ) from the current orientations of the upper arms ( 105 , 103 ), the current orientation of the head ( 107 ) of the user, and/or the current orientations of the hands ( 106 and 108 ) of the user and their orientation history using the prediction model ( 116 ).
  • the computing device ( 141 ) may further compute the orientations of the forearms from the orientations of the hands ( 106 and 108 ) and upper arms ( 105 and 103 ), e.g., using a technique disclosed in U.S. patent application Ser. No. 15/787,555, filed Oct. 18, 2017 and entitled “Tracking Arm Movements to Generate Inputs for Computer Systems”, the entire disclosure of which is hereby incorporated herein by reference.
  • FIG. 2 illustrates a system to control computer operations according to one embodiment.
  • the system of FIG. 2 can be implemented via attaching the arm modules ( 115 and 113 ) to the upper arms ( 105 and 103 ) respectively, the head module ( 111 ) to the head ( 107 ) and/or hand modules ( 117 and 119 ), in a way illustrated in FIG. 1 .
  • the head module ( 111 ) and the arm module ( 113 ) have micro-electromechanical system (MEMS) inertial measurement units (IMUs) ( 121 and 131 ) that measure motion parameters and determine orientations of the head ( 107 ) and the upper arm ( 103 ).
  • the hand modules ( 117 and 119 ) can also have IMUs.
  • the hand modules ( 117 and 119 ) measure the orientation of the hands ( 106 and 108 ) and the movements of fingers are not separately tracked.
  • the hand modules ( 117 and 119 ) have separate IMUs for the measurement of the orientations of the palms of the hands ( 106 and 108 ), as well as the orientations of at least some phalange bones of at least some fingers on the hands ( 106 and 108 ). Examples of hand modules can be found in U.S. patent application Ser. No. 15/792,255, filed Oct. 24, 2017 and entitled “Tracking Finger Movements to Generate Inputs for Computer Systems,” the entire disclosure of which is hereby incorporated herein by reference.
  • Each of the IMUs has a collection of sensor components that enable the determination of the movement, position and/or orientation of the respective IMU along a number of axes.
  • the components are: a MEMS accelerometer that measures the projection of acceleration (the difference between the true acceleration of an object and the gravitational acceleration); a MEMS gyroscope that measures angular velocities; and a magnetometer that measures the magnitude and direction of a magnetic field at a certain point in space.
  • the IMUs use a combination of sensors in three and two axes (e.g., without a magnetometer).
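  • For illustration only, the snippet below shows a generic way accelerometer and gyroscope readings can be fused into an orientation estimate (a simple complementary filter); it is not the fusion method used by the disclosed modules:

```python
import numpy as np

def fuse_tilt(prev_angle, gyro_rate, accel, dt, alpha=0.98):
    """Estimate a tilt angle (radians) about one axis.

    prev_angle: previous estimate; gyro_rate: angular velocity (rad/s);
    accel: (ax, az) components of the measured acceleration; dt: time step (s).
    """
    gyro_angle = prev_angle + gyro_rate * dt          # integration alone drifts over time
    accel_angle = np.arctan2(accel[0], accel[1])      # noisy but drift-free gravity reference
    return alpha * gyro_angle + (1 - alpha) * accel_angle

angle = 0.0
for _ in range(100):                                  # e.g., 100 samples at 100 Hz
    angle = fuse_tilt(angle, gyro_rate=0.01, accel=(0.05, 9.8), dt=0.01)
```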
  • the computing device ( 141 ) can have a prediction model ( 116 ) and a motion processor ( 145 ).
  • the measurements of the IMUs (e.g., 131 , 121 ) from the head module ( 111 ), arm modules (e.g., 113 and 115 ), and/or hand modules (e.g., 117 and 119 ) are used in the prediction model ( 116 ) to generate predicted measurements of at least some of the parts that do not have attached sensor modules, such as the torso ( 101 ) and forearms ( 112 and 114 ).
  • the predicted measurements and/or the measurements of the IMUs (e.g., 131 , 121 ) are used in the motion processor ( 145 ).
  • the motion processor ( 145 ) has a skeleton model ( 143 ) of the user (e.g., as illustrated in FIG. 3 ).
  • the motion processor ( 145 ) controls the movements of the parts of the skeleton model ( 143 ) according to the movements/orientations of the corresponding parts of the user.
  • the orientations of the hands ( 106 and 108 ), the forearms ( 112 and 114 ), the upper arms ( 103 and 105 ), the torso ( 101 ), and the head ( 107 ), as measured by the IMUs of the hand modules ( 117 and 119 ), the arm modules ( 113 and 115 ), and the head module ( 111 ), and/or as predicted by the prediction model ( 116 ) based on the IMU measurements, are used to set the orientations of the corresponding parts of the skeleton model ( 143 ).
  • the movements/orientation of the torso ( 101 ) can be predicted using the prediction model ( 116 ) using the sensor measurements from sensor modules on a kinematic chain that includes the torso ( 101 ).
  • the prediction model ( 116 ) can be trained with the motion pattern of a kinematic chain that includes the head ( 107 ), the torso ( 101 ), and the upper arms ( 103 and 105 ) and can be used to predict the orientation of the torso ( 101 ) based on the motion history of the head ( 107 ), the torso ( 101 ), and the upper arms ( 103 and 105 ) and the current orientations of the head ( 107 ), and the upper arms ( 103 and 105 ).
  • the movements/orientation of the forearm ( 112 or 114 ) can be predicted using the prediction model ( 116 ) using the sensor measurements from sensor modules on a kinematic chain that includes the forearm ( 112 or 114 ).
  • the prediction model ( 116 ) can be trained with the motion pattern of a kinematic chain that includes the hand ( 106 ), the forearm ( 114 ), and the upper arm ( 105 ) and can be used to predict the orientation of the forearm ( 114 ) based on the motion history of the hand ( 106 ), the forearm ( 114 ), the upper arm ( 105 ) and the current orientations of the hand ( 106 ), and the upper arm ( 105 ).
  • the skeleton model ( 143 ) is controlled by the motion processor ( 145 ) to generate inputs for an application ( 147 ) running in the computing device ( 141 ).
  • the skeleton model ( 143 ) can be used to control the movement of an avatar/model of the arms ( 112 , 114 , 105 and 103 ), the hands ( 106 and 108 ), the head ( 107 ), and the torso ( 101 ) of the user of the computing device ( 141 ) in a video game, a virtual reality, a mixed reality, or augmented reality, etc.
  • the arm module ( 113 ) has a microcontroller ( 139 ) to process the sensor signals from the IMU ( 131 ) of the arm module ( 113 ) and a communication module ( 133 ) to transmit the motion/orientation parameters of the arm module ( 113 ) to the computing device ( 141 ).
  • the head module ( 111 ) has a microcontroller ( 129 ) to process the sensor signals from the IMU ( 121 ) of the head module ( 111 ) and a communication module ( 123 ) to transmit the motion/orientation parameters of the head module ( 111 ) to the computing device ( 141 ).
  • the arm module ( 113 ) and the head module ( 111 ) have LED indicators ( 137 and 127 ) respectively to indicate the operating status of the modules ( 113 and 111 ).
  • the arm module ( 113 ) has a haptic actuator ( 138 ) to provide haptic feedback to the user.
  • the head module ( 111 ) has a display device ( 127 ) and/or buttons and other input devices ( 125 ), such as a touch sensor, a microphone, a camera ( 126 ), etc.
  • a stereo camera ( 126 ) is used to capture stereo images of the sensor devices ( 113 , 115 , 117 , 119 ) to calibrate their measurements relative to a common coordinate system, such as the coordinate system ( 100 ) defined in connection with a reference pose illustrated in FIG. 1 .
  • the LED indicators (e.g., 137 ) of a sensor module (e.g., 113 ) can be turned on during the time of capturing the stereo images such that the orientation and/or identity of the sensor module (e.g., 113 ) can be determined from the locations and/or patterns of the LED indicators.
  • the orientation of the sensor module can be predicted based on an image of a portion of the user wearing the sensor device in a predefined manner.
  • an ANN can be used to determine the orientations of the wrist, palm, distal, middle and proximal phalanges of thumb and index finger from the image of the hand and forearm of the user; and the orientations can be further used in another ANN to determine the orientation of the sensor device.
  • the head module ( 111 ) is replaced with a module that is similar to the arm module ( 113 ) and that is attached to the head ( 107 ) via a strap or is secured to a head mount display device.
  • the hand module ( 119 ) can be implemented with a module that is similar to the arm module ( 113 ) and attached to the hand via holding or via a strap.
  • the hand module ( 119 ) has buttons and other input devices, such as a touch sensor, a joystick, etc.
  • the handheld modules disclosed in U.S. patent application Ser. No. 15/792,255, filed Oct. 24, 2017 and entitled “Tracking Finger Movements to Generate Inputs for Computer Systems”, U.S. patent application Ser. No. 15/787,555, filed Oct. 18, 2017 and entitled “Tracking Arm Movements to Generate Inputs for Computer Systems”, and/or U.S. patent application Ser. No. 15/492,915, filed Apr. 20, 2017 and entitled “Devices for Controlling Computers based on Motions and Positions of Hands” can be used to implement the hand modules ( 117 and 119 ), the entire disclosures of which applications are hereby incorporated herein by reference.
  • for a hand module (e.g., 117 or 119 ), the motion pattern of a kinematic chain of the hand captured in the prediction model ( 116 ) can be used to predict the orientations of phalange bones that do not wear sensor devices.
  • FIG. 2 shows a hand module ( 119 ) and an arm module ( 113 ) as examples.
  • an application for the tracking of the orientation of the torso ( 101 ) typically uses two arm modules ( 113 and 115 ) as illustrated in FIG. 1 .
  • the head module ( 111 ) can be used optionally to further improve the tracking of the orientation of the torso ( 101 ).
  • Hand modules ( 117 and 119 ) can be further used to provide additional inputs and/or for the prediction/calculation of the orientations of the forearms ( 112 and 114 ) of the user.
  • an IMU (e.g., 131 or 121 ) in a module (e.g., 113 or 111 ) generates acceleration data from accelerometers, angular velocity data from gyrometers/gyroscopes, and orientation data from magnetometers.
  • the microcontrollers ( 139 and 129 ) perform preprocessing tasks, such as filtering the sensor data (e.g., blocking sensors that are not used in a specific application), applying calibration data (e.g., to correct the average accumulated error computed by the computing device ( 141 )), transforming motion/position/orientation data in three axes into a quaternion, and packaging the preprocessed results into data packets (e.g., using a data compression technique) for transmitting to the host computing device ( 141 ) with a reduced bandwidth requirement and/or communication time.
  • Each of the microcontrollers ( 129 , 139 ) may include a memory storing instructions controlling the operations of the respective microcontroller ( 129 or 139 ) to perform primary processing of the sensor data from the IMU ( 121 , 131 ) and control the operations of the communication module ( 123 , 133 ), and/or other components, such as the LED indicators ( 137 ), the haptic actuator ( 138 ), buttons and other input devices ( 125 ), the display device ( 127 ), etc.
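  • The snippet below sketches two of the preprocessing steps mentioned above, converting per-axis orientation data into a quaternion and packaging the result into a compact data packet; the packet layout and field names are assumptions for illustration:

```python
import math
import struct

def euler_to_quaternion(roll, pitch, yaw):
    """Convert roll/pitch/yaw (radians) into a (w, x, y, z) quaternion."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (cr * cp * cy + sr * sp * sy,
            sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy)

def pack_packet(module_id, quaternion):
    """Pack a module id, a flags byte, and a quaternion into an 18-byte packet."""
    return struct.pack("<BB4f", module_id, 0, *quaternion)

packet = pack_packet(module_id=0x13, quaternion=euler_to_quaternion(0.1, -0.2, 1.5))
```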
  • the computing device ( 141 ) may include one or more microprocessors and a memory storing instructions to implement the motion processor ( 145 ).
  • the motion processor ( 145 ) may also be implemented via hardware, such as Application-Specific Integrated Circuit (ASIC) or Field-Programmable Gate Array (FPGA).
  • one of the modules ( 111 , 113 , 115 , 117 , and/or 119 ) is configured as a primary input device; and the other module is configured as a secondary input device that is connected to the computing device ( 141 ) via the primary input device.
  • a secondary input device may use the microprocessor of its connected primary input device to perform some of the preprocessing tasks.
  • a module that communicates directly to the computing device ( 141 ) is considered a primary input device, even when the module does not have a secondary input device that is connected to the computing device via the primary input device.
  • the computing device ( 141 ) specifies the types of input data requested, and the conditions and/or frequency of the input data; and the modules ( 111 , 113 , 115 , 117 , and/or 119 ) report the requested input data under the conditions and/or according to the frequency specified by the computing device ( 141 ).
  • Different reporting frequencies can be specified for different types of input data (e.g., accelerometer measurements, gyroscope/gyrometer measurements, magnetometer measurements, position, orientation, velocity).
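  • A hypothetical example of such per-type reporting settings (the field names and values are illustrative assumptions, not a disclosed protocol):

```python
# assumed configuration sent by the computing device to a module
reporting_config = {
    "accelerometer": {"rate_hz": 200, "report_when": "always"},
    "gyroscope":     {"rate_hz": 200, "report_when": "always"},
    "magnetometer":  {"rate_hz": 20,  "report_when": "always"},
    "orientation":   {"rate_hz": 60,  "report_when": "changed"},
    "touch":         {"rate_hz": 0,   "report_when": "event"},   # report only on touch events
}
```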
  • the computing device ( 141 ) may be a data processing system, such as a mobile phone, a desktop computer, a laptop computer, a head mount virtual reality display, a personal media player, a tablet computer, etc.
  • FIG. 3 illustrates a skeleton model that can be controlled by tracking user movements according to one embodiment.
  • the skeleton model of FIG. 3 can be used in the motion processor ( 145 ) of FIG. 2 .
  • the skeleton model illustrated in FIG. 3 includes a torso ( 232 ) and left and right upper arms ( 203 and 205 ) that can move relative to the torso ( 232 ) via the shoulder joints ( 234 and 241 ).
  • the skeleton model may further include the forearms ( 215 and 233 ), hands ( 206 and 208 ), neck, head ( 207 ), legs and feet.
  • a hand ( 206 ) includes a palm connected to phalange bones (e.g., 245 ) of fingers, and metacarpal bones of thumbs via joints (e.g., 244 ).
  • the positions/orientations of the rigid parts of the skeleton model illustrated in FIG. 3 are controlled by the measured orientations of the corresponding parts of the user illustrated in FIG. 1 .
  • the orientation of the head ( 207 ) of the skeleton model is configured according to the orientation of the head ( 107 ) of the user as measured using the head module ( 111 );
  • the orientation of the upper arm ( 205 ) of the skeleton model is configured according to the orientation of the upper arm ( 105 ) of the user as measured using the arm module ( 115 );
  • the orientation of the hand ( 206 ) of the skeleton model is configured according to the orientation of the hand ( 106 ) of the user as measured using the hand module ( 117 ); etc.
  • the tracking system as illustrated in FIG. 2 measures the orientations of the modules ( 111 , 113 , . . . , 119 ) using IMUs (e.g., 121 , 131 , . . . ).
  • the inertial-based sensors offer good user experiences, have fewer restrictions on the use of the sensors, and can be implemented in a computationally efficient way. However, the inertial-based sensors may be less accurate than certain tracking methods in some situations, and can have drift errors and/or accumulated errors through time integration. Drift errors and/or accumulated errors can be considered as the change of the reference orientation used for the measurement from a known reference orientation to an unknown reference orientation. An update calibration can remove the drift errors and/or accumulated errors.
  • An optical tracking system can use one or more cameras (e.g., 126 ) to track the positions and/or orientations of optical markers (e.g., LED indicators ( 137 )) that are in the fields of view of the cameras.
  • the images captured by the cameras can be used to compute the positions and/or orientations of optical markers and thus the orientations of parts that are marked using the optical markers.
  • the optical tracking system may not be as user friendly as the inertial-based tracking system and can be more expensive to deploy. Further, when an optical marker is out of the fields of view of the cameras, the positions and/or orientations of the optical marker cannot be determined by the optical tracking system.
  • An artificial neural network of the prediction model ( 116 ) can be trained to predict the measurements produced by the optical tracking system based on the measurements produced by the inertial-based tracking system.
  • the drift errors and/or accumulated errors in inertial-based measurements can be reduced and/or suppressed, which reduces the need for re-calibration of the inertial-based tracking system.
  • Further details on the use of the prediction model ( 116 ) can be found in U.S. patent application Ser. No. 15/973,137, filed May 7, 2018 and entitled “Tracking User Movements to Control a Skeleton Model in a Computer System,” the entire disclosure of which application is hereby incorporated herein by reference.
  • orientations determined using images captured by the camera ( 126 ) can be used to calibrate the measurements of the sensor devices ( 111 , 113 , 115 , 117 , 119 ) relative to a common coordinate system, such as the coordinate system ( 100 ) defined using a standardized reference pose illustrated in FIG. 1 , as further discussed below.
  • FIGS. 4-6 illustrate processing of images showing a portion of a user to determine orientations of predefined features of the portion of the user.
  • FIG. 4 illustrates an image ( 400 ) that can be captured using a camera ( 126 ) configured on a head mounted display ( 127 ).
  • a sensor device ( 401 ) having an inertial measurement unit, similar to IMU ( 131 ) in an arm module ( 113 ) can be configured to have a form factor of a ring adapted to be worn on the middle phalange ( 403 ) of the index finger.
  • the sensor device ( 401 ) is configured with a touch pad that can be readily touched by the thumb ( 405 ) to generate a touch input.
  • the image ( 400 ) can be processed as input for an ANN to predict orientations of predefined features ( 601 - 603 ) of the portion of the user.
  • the image ( 400 ) of FIG. 4 captured by the camera ( 126 ) is converted into an image similar to the image ( 500 ) of FIG. 5 in a black/white format for processing to recognize the orientations of predefined features.
  • the predefined features recognized from the image can include the distal phalange ( 605 ) of the thumb, the middle phalange ( 607 ) of the thumb, the distal phalange ( 603 ) of the index finger, the middle phalange ( 615 ) of the index finger, and the metacarpal ( 611 ).
  • the system converts the original image ( 400 ) from a higher resolution into a lower resolution image in a black/white format ( 500 ) to facilitate the recognition of the orientations ( 503 ) of the features (e.g., the forearm ( 613 ), wrist ( 607 ), palm ( 611 ), distal phalanges ( 617 and 605 ), middle phalanges ( 615 and 607 ), proximal phalange ( 609 ), and metacarpal ( 611 ) as illustrated in FIG. 6 ).
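  • A minimal sketch of that conversion (nearest-neighbor downscaling and thresholding to a black/white format; the target size and threshold are assumptions):

```python
import numpy as np

def preprocess(image_rgb, size=(64, 64), threshold=0.5):
    """image_rgb: float array (H, W, 3) in [0, 1] -> binary array of shape `size`."""
    gray = image_rgb.mean(axis=2)                        # collapse the color channels
    h, w = gray.shape
    rows = np.linspace(0, h - 1, size[0]).astype(int)    # nearest-neighbor downscale
    cols = np.linspace(0, w - 1, size[1]).astype(int)
    small = gray[np.ix_(rows, cols)]
    return (small > threshold).astype(np.float32)        # black/white format for the ANN

binary_image = preprocess(np.random.rand(480, 640, 3))   # stand-in for a captured frame
```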
  • orientations of the forearm ( 613 ), wrist ( 607 ), palm ( 611 ), distal phalanges ( 617 and 605 ), middle phalanges ( 615 and 607 ), proximal phalange ( 609 ), and metacarpal ( 611 ), as illustrated in FIG. 6 and determined from the image of the hand and the upper arm illustrated in FIG. 4 , can be provided as input to an ANN ( 601 ) to predict the orientation ( 603 ) of the sensor device ( 401 ). Capturing the upper arm portion in the image ( 400 ) in FIG. 4 is optional.
  • orientations of the forearm ( 613 ), wrist ( 607 ), palm ( 611 ), distal phalanges ( 617 and 605 ), middle phalanges ( 615 and 607 ), proximal phalange ( 609 ), and metacarpal ( 611 ) can be recognized/determined without capturing the upper arm in the image ( 400 ) in FIG. 4 . However, capturing the upper arm in the image ( 400 ) in FIG. 4 can improve the accuracy of the determined orientations.
  • FIG. 7 shows a method for calibrating orientation measurements generated by the inertial measurement unit relative to a skeleton model of the user based on the orientation of the sensor device.
  • the method of FIG. 7 can be used in a system of FIG. 2 and/or FIG. 1 to control a skeleton model of FIG. 3 , after the orientation measurement of the sensor device ( 401 ) of FIG. 4 is determined using images captured and processed as illustrated in FIGS. 4-6 .
  • the method includes: determining ( 701 ) that the thumb on a hand is on the touch pad of the sensor device ( 401 ) worn on a finger of the hand; in response to the determination that the thumb on the hand is on the touch pad of the sensor device ( 401 ) worn on a finger of the hand, capturing ( 703 ) an image ( 400 ) using the camera ( 126 ) configured on a head mounted display ( 127 ); receiving ( 705 ) the image ( 400 ) showing a portion of the user, including the hand to which the sensor device ( 401 ) is attached and, optionally, an upper arm connected to the hand; determining ( 707 ) orientations of predefined features of the portion of the user based on the image ( 400 , 500 , 600 ) (e.g., vectors aligned with bones in the hand of the user); determining ( 709 ), using the artificial neural network (ANN) ( 601 ), the orientation ( 603 ) of the sensor device ( 401 ) based on the orientations of the predefined features; and calibrating the orientation measurements generated by the inertial measurement unit in the sensor device ( 401 ) relative to a skeleton model of the user based on the orientation ( 603 ) of the sensor device ( 401 ).
  • the method of FIG. 7 can be used to determine the orientation of the sensor module ( 401 ) and thus calibrate the orientation measurements generated by the inertial measurement unit in the sensor module ( 401 ).
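  • A hedged sketch of the overall flow of the method of FIG. 7, using hypothetical helper objects (camera, sensor_device, feature_net, device_net, skeleton) to show only the ordering of the steps, not the disclosed implementation:

```python
def calibrate_on_touch(camera, sensor_device, feature_net, device_net, skeleton):
    """Run the image-based calibration when the thumb touches the ring's touch pad."""
    if not sensor_device.touch_pad_pressed():             # hypothetical: thumb on touch pad?
        return
    image = camera.capture()                               # camera on the head mounted display
    feature_orientations = feature_net(image)              # first ANN: image -> feature vectors
    device_orientation = device_net(feature_orientations)  # second ANN: vectors -> orientation
    # align subsequent IMU output with the skeleton model using the predicted orientation
    sensor_device.imu.calibrate(reference=device_orientation, frame=skeleton.frame)
```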
  • the sensor device ( 401 ) is configured to be attached to the middle phalange ( 615 ) of the index finger; and the sensor device ( 401 ) can have a touch pad.
  • the camera of the system can capture an image showing a portion of the user, including the hand and optionally the upper arm of the user, where the thumb ( 605 ) of the user is placed on the touch pad of the sensor device ( 401 ).
  • the image can be captured using a camera in a head mounted display worn on the head of the user such that the orientation measured via the image is relative to a skeleton model of the user.
  • Orientations of predefined features of the portion of the user can be calculated based on the image using an ANN ( 501 ).
  • the ANN ( 501 ) can be a convolutional neural network (CNN) trained using a training dataset.
  • the training dataset can be obtained by capturing multiple images of a user having the sensor device ( 401 ) on the middle phalange ( 403 ) of the index finger and having the thumb ( 405 ) touching the touch pad of the sensor device ( 401 ).
  • the images can be viewed by human operators to identify the vectors (e.g., 617 , 615 , 609 , 611 , 605 , 607 , 613 ).
  • the vectors can be identified relative to a reference system of the skeleton model ( 200 ) of the user.
  • a supervised machine learning technique can be used to train the CNN to predict the vectors from the images with reduced/minimized differences between the predicted vectors and the vectors identified by human operators.
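  • A minimal supervised-training sketch for such a network (ordinary MSE loss and the Adam optimizer are assumptions; images and operator-labeled vectors are assumed to be available as tensors):

```python
import torch
import torch.nn as nn

def train_feature_net(model, images, labeled_vectors, epochs=10, lr=1e-3):
    """images: (N, 1, H, W); labeled_vectors: (N, num_features, 3) from human operators."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                 # minimize the difference to the labeled vectors
    for _ in range(epochs):
        optimizer.zero_grad()
        predicted = model(images)          # predicted feature vectors for the whole dataset
        loss = loss_fn(predicted, labeled_vectors)
        loss.backward()
        optimizer.step()
    return model
```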
  • the ANN ( 601 ) can be trained to determine the orientation ( 603 ) of the sensor device ( 401 ) based on the orientations ( 503 ) of the predefined features.
  • a training dataset can be collected within a predetermined time period following the calibration of the orientation measurements of the inertial measurement unit in the sensor module ( 401 ).
  • the calibration can be performed using an alternative method.
  • the touch pad on the sensor device ( 401 ) used to generate the training dataset can be painted with optical marks to allow the determination of its orientation from images captured by the camera configured on the head mounted display; and the calibration can be performed with the touch pad of the sensor device ( 401 ) visible to the camera (and the thumb moved away from the touch pad).
  • the orientation measurements generated within the predetermined time period following the calibration can be considered as accurate; and the images of the hand can be captured to label the feature vectors (e.g., as illustrated in FIG. 6 ) to generate the orientations of the features ( 503 ) for the corresponding orientations measured by the inertial measurement unit in the sensor module ( 401 ).
  • a supervised machine learning technique can be used to train the ANN ( 601 ) to predict the orientations measured by the inertial measurement unit in the sensor module ( 401 ) from the feature vectors labeled by human operators.
  • the ANN ( 601 ) can be used to predict the orientation ( 603 ) of the sensor device ( 401 ) based on the orientations ( 503 ) of the predefined features, such as vectors aligned with bones, structures and/or characteristic points in a portion of the user, such as the wrist, palm, and the distal, middle and proximal phalanges of the thumb and index finger.
  • a correction rotation can be applied to the orientation measurement generated by the inertial measurement unit such that the corrected orientation measurement agrees with the orientation ( 603 ) of the sensor device ( 401 ) predicted by the ANN ( 601 ).
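  • A minimal numeric sketch of such a correction (quaternions in (w, x, y, z) order; the specific values are illustrative assumptions):

```python
import numpy as np

def q_mul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def q_conj(q):
    """Conjugate; equal to the inverse for a unit quaternion."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

q_raw = np.array([0.7071, 0.0, 0.7071, 0.0])     # IMU reading relative to an unknown frame
q_predicted = np.array([1.0, 0.0, 0.0, 0.0])     # orientation predicted from the image

# correction chosen so that correction * q_raw == q_predicted
correction = q_mul(q_predicted, q_conj(q_raw))

def calibrated(q_measurement):
    """Apply the correction rotation to subsequent IMU measurements."""
    return q_mul(correction, q_measurement)
```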
  • the inertial measurement unit of the sensor device ( 401 ) is calibrated based on the results of the ANN ( 501 ) and the ANN ( 601 ).
  • the IMU measurements can be calibrated without requiring the user to perform an exact, predefined pose (e.g., a pose as illustrated in FIG. 1 ).
  • different modules can be calibrated separately while they are in the field of view of the stereo camera ( 126 ).
  • the calibration can be performed in real time on an on-going basis.
  • the computing device ( 141 ) may instruct the camera ( 126 ) to take stereo images from time to time; and when a sensor module is found within a stereo image, the computing device ( 141 ) can perform a calibration calculation based on the stereo image.
  • the present disclosure includes methods and apparatuses which perform these methods, including data processing systems which perform these methods, and computer readable media containing instructions which when executed on data processing systems cause the systems to perform these methods.
  • the computing device ( 141 ), the arm modules ( 113 , 115 ) and/or the head module ( 111 ) can be implemented using one or more data processing systems.
  • a typical data processing system may include an inter-connect (e.g., bus and system core logic), which interconnects a microprocessor(s) and memory.
  • the microprocessor is typically coupled to cache memory.
  • the inter-connect interconnects the microprocessor(s) and the memory together and also interconnects them to input/output (I/O) device(s) via I/O controller(s).
  • I/O devices may include a display device and/or peripheral devices, such as mice, keyboards, modems, network interfaces, printers, scanners, video cameras and other devices known in the art.
  • when the data processing system is a server system, some of the I/O devices, such as printers, scanners, mice, and/or keyboards, are optional.
  • the inter-connect can include one or more buses connected to one another through various bridges, controllers and/or adapters.
  • the I/O controllers include a USB (Universal Serial Bus) adapter for controlling USB peripherals, and/or an IEEE-1394 bus adapter for controlling IEEE-1394 peripherals.
  • the memory may include one or more of: ROM (Read Only Memory), volatile RAM (Random Access Memory), and non-volatile memory, such as hard drive, flash memory, etc.
  • Volatile RAM is typically implemented as dynamic RAM (DRAM) which requires power continually in order to refresh or maintain the data in the memory.
  • Non-volatile memory is typically a magnetic hard drive, a magnetic optical drive, an optical drive (e.g., a DVD RAM), or other type of memory system which maintains data even after power is removed from the system.
  • the non-volatile memory may also be a random access memory.
  • the non-volatile memory can be a local device coupled directly to the rest of the components in the data processing system.
  • a non-volatile memory that is remote from the system such as a network storage device coupled to the data processing system through a network interface such as a modem or Ethernet interface, can also be used.
  • the functions and operations as described here can be implemented using special purpose circuitry, with or without software instructions, such as using Application-Specific Integrated Circuit (ASIC) or Field-Programmable Gate Array (FPGA).
  • Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.
  • While one embodiment can be implemented in fully functioning computers and computer systems, various embodiments are capable of being distributed as a computing product in a variety of forms and are capable of being applied regardless of the particular type of machine or computer-readable media used to actually effect the distribution.
  • At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.
  • processor such as a microprocessor
  • a memory such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.
  • Routines executed to implement the embodiments may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as “computer programs.”
  • the computer programs typically include one or more instructions set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects.
  • a machine readable medium can be used to store software and data which when executed by a data processing system causes the system to perform various methods.
  • the executable software and data may be stored in various places including for example ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices.
  • the data and instructions can be obtained from centralized servers or peer to peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer to peer networks at different times and in different communication sessions or in a same communication session.
  • the data and instructions can be obtained in entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine readable medium in entirety at a particular instance of time.
  • Examples of computer-readable media include but are not limited to non-transitory, recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD ROM), Digital Versatile Disks (DVDs), etc.), among others.
  • the computer-readable media may store the instructions.
  • the instructions may also be embodied in digital and analog communication links for electrical, optical, acoustical or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc.
  • propagated signals such as carrier waves, infrared signals, digital signals, etc. are not tangible machine readable medium and are not configured to store instructions.
  • a machine readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.).
  • a machine e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.
  • hardwired circuitry may be used in combination with software instructions to implement the techniques.
  • the techniques are neither limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by the data processing system.

Abstract

A method to calibrate orientation measurements of an inertial measurement unit of a sensor device based on an image of a portion of a user to which the sensor device is attached. For example, the sensor device can be configured to be attached to the middle phalange of the index finger and configured with a touch pad. In response to a determination that the thumb of the user is placed on the touch pad of the sensor device, the camera of the system can capture an image showing the hand of the user. A convolutional neural network is configured to determine, from the image, orientations of predefined features of the hand of the user. A further artificial neural network is configured to determine the orientation of the sensor device based on the orientations of the predefined features to calibrate the orientation measurements of the inertial measurement unit.

Description

    RELATED APPLICATIONS
  • The present application relates to U.S. patent application Ser. No. 16/044,984, filed Jul. 25, 2018 and entitled “Calibration of Measurement Units in Alignment with a Skeleton Model to Control a Computer System,” U.S. patent application Ser. No. 15/973,137, filed May 7, 2018 and entitled “Tracking User Movements to Control a Skeleton Model in a Computer System,” U.S. patent application Ser. No. 15/868,745, filed Jan. 11, 2018 and entitled “Correction of Accumulated Errors in Inertial Measurement Units Attached to a User,” U.S. patent application Ser. No. 15/864,860, filed Jan. 8, 2018 and entitled “Tracking Torso Leaning to Generate Inputs for Computer Systems,” U.S. patent application Ser. No. 15/847,669, filed Dec. 19, 2017 and entitled “Calibration of Inertial Measurement Units Attached to Arms of a User and to a Head Mounted Device,” U.S. patent application Ser. No. 15/817,646, filed Nov. 20, 2017 and entitled “Calibration of Inertial Measurement Units Attached to Arms of a User to Generate Inputs for Computer Systems,” U.S. patent application Ser. No. 15/813,813, filed Nov. 15, 2017 and entitled “Tracking Torso Orientation to Generate Inputs for Computer Systems,” U.S. patent application Ser. No. 15/792,255, filed Oct. 24, 2017 and entitled “Tracking Finger Movements to Generate Inputs for Computer Systems,” U.S. patent application Ser. No. 15/787,555, filed Oct. 18, 2017 and entitled “Tracking Arm Movements to Generate Inputs for Computer Systems,” and U.S. patent application Ser. No. 15/492,915, filed Apr. 20, 2017 and entitled “Devices for Controlling Computers based on Motions and Positions of Hands,” the entire disclosures of which applications are hereby incorporated herein by reference.
  • FIELD OF THE TECHNOLOGY
  • At least a portion of the present disclosure relates to computer input devices in general and more particularly but not limited to input devices for virtual reality and/or augmented/mixed reality applications implemented using computing devices, such as mobile phones, smart watches, similar mobile devices, and/or other devices.
  • BACKGROUND
  • U.S. Pat. App. Pub. No. 2014/0028547 discloses a user control device having a combined inertial sensor to detect the movements of the device for pointing and selecting within a real or virtual three-dimensional space.
  • U.S. Pat. App. Pub. No. 2015/0277559 discloses a finger-ring-mounted touchscreen having a wireless transceiver that wirelessly transmits commands generated from events on the touchscreen.
  • U.S. Pat. App. Pub. No. 2015/0358543 discloses a motion capture device that has a plurality of inertial measurement units to measure the motion parameters of fingers and a palm of a user.
  • U.S. Pat. App. Pub. No. 2007/0050597 discloses a game controller having an acceleration sensor and a gyro sensor. U.S. Pat. No. D772,986 discloses the ornamental design for a wireless game controller.
  • Chinese Pat. App. Pub. No. 103226398 discloses data gloves that use micro-inertial sensor network technologies, where each micro-inertial sensor is an attitude and heading reference system, having a tri-axial micro-electromechanical system (MEMS) micro-gyroscope, a tri-axial micro-acceleration sensor and a tri-axial geomagnetic sensor which are packaged in a circuit board. U.S. Pat. App. Pub. No. 2014/0313022 and U.S. Pat. App. Pub. No. 2012/0025945 disclose other data gloves.
  • U.S. Pat. App. Pub. No. 2016/0085310 discloses techniques to track hand or body pose from image data in which a best candidate pose from a pool of candidate poses is selected as the current tracked pose.
  • U.S. Pat. App. Pub. No. 2017/0344829 discloses an action detection scheme using a recurrent neural network (RNN) where joint locations are applied to the recurrent neural network (RNN) to determine an action label representing the action of an entity depicted in a frame of a video.
  • U.S. Pat. App. Pub. No. 2017/0186226 discloses a calibration engine that uses a machine learning system to extract a region of interest to compute values of shape parameters of a 3D mesh model.
  • U.S. Pat. App. Pub. No. 2017/0186226 discloses a system where an observed position is determined from an image and a predicted position is determined using an inertial measurement unit. The predicted position is adjusted by an offset until a difference between the observed position and the predicted position is less than a threshold value.
  • The disclosures of the above discussed patent documents are hereby incorporated herein by reference.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.
  • FIG. 1 illustrates a system to track user movements according to one embodiment.
  • FIG. 2 illustrates a system to control computer operations according to one embodiment.
  • FIG. 3 illustrates a skeleton model that can be controlled by tracking user movements according to one embodiment.
  • FIGS. 4-6 illustrate processing of images showing a portion of a user to determine orientations of predefined features of the portion of the user.
  • FIG. 7 shows a method for calibrating orientation measurements generated by the inertial measurement unit relative to a skeleton model of the user based on the orientation of the sensor device.
  • DETAILED DESCRIPTION
  • The following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well known or conventional details are not described to avoid obscuring the description. References to “one embodiment” or “an embodiment” in the present disclosure are not necessarily references to the same embodiment; and such references mean at least one.
  • U.S. patent application Ser. No. 16/044,984, filed Jul. 25, 2018 and entitled “Calibration of Measurement Units in Alignment with a Skeleton Model to Control a Computer System,” the entire disclosure of which is hereby incorporated herein by reference, discloses sensor modules having LED lights that can be used to provide optical indicators in the determination of the orientations of the sensor modules. A camera (e.g., in the head mounted display) can be used to capture images of the optical indicators to determine the orientations of the sensor modules. After identifying the locations of the LED lights of a sensor module in an image, the locations of the LED lights can be processed via an artificial neural network (ANN) to provide an orientation measurement for the sensor module. The orientation measurement for the sensor module, determined based on the optical indicators, can be used to calibrate orientation measurements generated by an inertial measurement unit in the sensor module.
  • In some instances, the LED lights of the sensor module may not be in a position visible to the camera and thus cannot be captured as optical indicators in the images generated by the camera. In other instances, the sensor module may not have LED lights configured on it. The present application discloses techniques that can be used to determine the orientation of the sensor module based on images captured by the camera, without relying upon LED optical indicators. For example, when the sensor module is being held or worn on a portion of the user in a predetermined manner, an image of the portion of the user can be used in a first ANN to determine the orientations of predefined features of the user and then used in a second ANN to predict the orientation of the sensor module based on the orientations of the predefined features of the user. For example, the sensor module can be in the form of a ring worn on a predetermined finger of a hand of the user; and the first ANN can be used to determine the orientations of features of the user, such as the orientations of the wrist, palm, forearm, and/or distal, middle and proximal phalanges of the thumb and/or index finger of the user.
  • For example, a sensor device can be configured as a ring attached to the middle phalange of the index finger; and the sensor device has a touch pad. When the thumb of the user is placed on the touch pad of the sensor device, the orientation of the sensor device can be predicted based on the orientations of the bones of the thumb and/or the index finger. Thus, in response to the configuration of the thumb being on the touch pad of the sensor device worn on the middle phalange of the index finger, an image of the hand can be provided as an input to an ANN to determine the orientations of certain features on the hand of the user, which orientations can be used in a further ANN to determine the orientation of the ring/sensor device. For example, the features identified/used for the determination of the orientation of the ring/sensor device can include bones and/or joints, such as the wrist, palm, and phalanges of the thumb and index finger.
  • Once the orientation of the sensor device is determined, calibration can be performed in a way similar to those disclosed in U.S. patent application Ser. No. 16/044,984, filed Jul. 25, 2018 and entitled “Calibration of Measurement Units in Alignment with a Skeleton Model to Control a Computer System,” the entire disclosure of which is hereby incorporated herein by reference.
  • In general, uncalibrated measurements of an inertial measurement unit (IMU) can be considered as orientations of the inertial sensor measured relative to an unknown reference coordinate system. A calibration process identifies the unknown reference coordinate system and its relationship with respect to a known coordinate system. After the calibration, the measurements of the IMU are relative to the known coordinate system. For example, the calibrated measurements can be an orientation relative to a predetermined orientation in the space, relative to a particular orientation of the sensor device at a specific time instance, relative to the orientation of the arm or hand of a user at a time instance, or relative to a reference orientation/pose of a skeleton model of the user.
  • In some embodiments, calibration parameters are determined for the measurements of the inertial measurement unit such that the calibrated measurements of the inertial measurement unit are relative to a known orientation, such as the orientation of the sensor device in which the inertial measurement unit is installed, the orientation of the arm or hand of a user to which the sensor device is attached, or the orientation of a skeleton model of the user in a reference pose. For example, a stereo camera integrated in a head mounted display (HMD) can be used to capture images of sensor modules on the user. In some embodiments, computer vision techniques and/or artificial neural network techniques can process the captured images to identify one or more orientations that can be used to calibrate the measurements of the inertial measurement units in the sensor modules.
  • In general, the kinematics of a user can be modeled using a skeleton model having a set of rigid parts/portions connected by joints. For example, the head, the torso, the left and right upper arms, the left and right forearms, the palms, phalange bones of fingers, metacarpal bones of thumbs, upper legs, lower legs, and feet can be considered as rigid parts that are connected via various joints, such as the neck, shoulders, elbows, wrist, and finger joints.
  • The movements of the parts in the skeleton model of a user can be controlled by the movements of the corresponding portions of the user tracked using sensor modules. The sensor modules can determine the orientations of the portions of the user, such as the hands, arms, and head of the user. The measured orientations of the corresponding parts of the user determine the orientations of the parts of the skeleton model, such as hands and arms. The relative positions and/or orientations of the rigid parts collectively represent the pose of the user and/or the skeleton model. The skeleton model of the user can be used to control the presentation of an avatar of the user, to identify the gesture inputs of the user, and/or to make a virtual reality or augmented reality presentation of the user.
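  • As an illustration of the data structure implied by the preceding paragraphs, the following sketch (illustrative Python; the class names, the particular chain of parts, and the use of the SciPy Rotation type are assumptions, not part of the disclosure) models rigid parts connected by joints as a tree whose nodes carry the orientations set from the measured or predicted orientations of the corresponding parts of the user.

```python
from dataclasses import dataclass, field
from typing import List
from scipy.spatial.transform import Rotation

@dataclass
class RigidPart:
    """A rigid part of the skeleton model (e.g., upper arm, forearm, hand)."""
    name: str
    orientation: Rotation = field(default_factory=Rotation.identity)  # relative to the reference pose
    children: List["RigidPart"] = field(default_factory=list)

def make_arm_chain() -> RigidPart:
    # Kinematic chain: upper arm -> forearm -> hand, connected via elbow and wrist joints.
    hand = RigidPart("hand")
    forearm = RigidPart("forearm", children=[hand])
    return RigidPart("upper_arm", children=[forearm])

def set_orientation(part: RigidPart, name: str, q: Rotation) -> None:
    """Drive one part of the skeleton model with a measured or predicted orientation."""
    if part.name == name:
        part.orientation = q
    for child in part.children:
        set_orientation(child, name, q)

# Usage: set the hand orientation from a sensor measurement.
arm = make_arm_chain()
set_orientation(arm, "hand", Rotation.from_euler("z", 30, degrees=True))
```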
  • FIG. 1 illustrates a system to track user movements according to one embodiment.
  • FIG. 1 illustrates various parts of a user, such as the torso (101) of the user, the head (107) of the user, the upper arms (103 and 105) of the user, the forearms (112 and 114) of the user, and the hands (106 and 108) of the user.
  • In an application illustrated in FIG. 1, the hands (106 and 108) of the user are considered rigid parts movable around the wrists of the user. In other applications, the palms and finger bones of the user can be further tracked for their movements relative to finger joints (e.g., to determine the hand gestures of the user made using relative positions among fingers of a hand and the palm of the hand).
  • In FIG. 1, the user wears several sensor modules/devices (111, 113, 115, 117 and 119) that track the orientations of parts of the user that are considered, or recognized as, rigid in an application.
  • In an application illustrated in FIG. 1, rigid parts of the user are movable relative to the torso (101) of the user and relative to each other. Examples of the rigid parts include the head (107), the upper arms (103 and 105), the forearms (112 and 114), and the hands (106 and 108). The joints, such as neck, shoulder, elbow, and/or wrist, connect the rigid parts of the user to form one or more kinematic chains. The kinematic chains can be modeled in a computing device (141) to control the application.
  • To track the relative positions/orientations of rigid parts in a kinematic chain that connects the rigid parts via one or more joints, a tracking device can be attached to each individual rigid part in the kinematic chain to measure its orientation.
  • In general, the position and/or orientation of a rigid part in a reference system (100) can be tracked using one of many systems known in the field. Some of the systems may use one or more cameras to take images of a rigid part marked using optical markers and analyze the images to compute the position and/or orientation of the part. Some of the systems may track the rigid part based on signals transmitted from, or received at, a tracking device attached to the rigid part, such as radio frequency signals, infrared signals, ultrasound signals. The signals may correspond to signals received in the tracking device, and/or signals emitted from the tracking device. Some of the systems may use inertial measurement units (IMUs) to track the position and/or orientation of the tracking device.
  • In FIG. 1, the sensor devices (111, 113, 115, 117 and 119) are used to track some of the rigid parts (e.g., 107, 103, 105, 106, 108) in the one or more kinematic chains, but sensor devices are omitted from other rigid parts (101, 112, 114) in the one or more kinematic chains to reduce the number of sensor devices used and/or to improve user experience for wearing the reduced number of sensor devices.
  • The computing device (141) can have a prediction model (116) trained to generate predicted measurements of parts (101, 112, 114, 107, 103, 105, 106, and/or 108) of the user based on the measurements of the sensor devices (111, 113, 115, 117 and 119).
  • For example, the prediction model (116) can be implemented using an artificial neural network (ANN) in the computing device (141) to predict the measurements of the orientations of the rigid parts (101, 112, 114) that have omitted sensor devices, based on the measurements of the orientations of the rigid parts (107, 103, 105, 106, 108) that have the attached sensor devices (111, 113, 115, 117 and 119).
  • Further, the artificial neural network can be trained to predict the measurements of the orientations of the rigid parts (107, 103, 105, 106, 108) that would be measured by another system (e.g., an optical tracking system), based on the measurement of the attached sensor devices (111, 113, 115, 117 and 119) that measure orientations using a different technique (e.g., IMUs).
  • The sensor devices (111, 113, 115, 117, 119) communicate their movement measurements to the computing device (141), which computes or predicts the orientation of the rigid parts (107, 103, 105, 106, 108, 101, 112, 114) by applying the measurements obtained from the attached sensor devices (111, 113, 115, 117 and 119) as inputs to an artificial neural network trained in a way as further discussed below.
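  • For illustration only, a minimal sketch of such a trained network is given below (Python with PyTorch; the layer sizes, the five-input/three-output layout, and the quaternion encoding are assumptions rather than the disclosed design): the quaternions reported by the worn sensor devices are concatenated into an input vector, and the network outputs quaternions for the parts without sensor devices.

```python
import torch
import torch.nn as nn

class OrientationPredictor(nn.Module):
    """Maps orientations of tracked parts to orientations of untracked parts.

    Input:  n_tracked sensor devices x 4 quaternion components.
    Output: n_untracked parts (e.g., torso, left/right forearm) x 4 components.
    """
    def __init__(self, n_tracked: int = 5, n_untracked: int = 3, hidden: int = 64):
        super().__init__()
        self.n_untracked = n_untracked
        self.net = nn.Sequential(
            nn.Linear(n_tracked * 4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_untracked * 4),
        )

    def forward(self, tracked_quats: torch.Tensor) -> torch.Tensor:
        out = self.net(tracked_quats).view(-1, self.n_untracked, 4)
        return out / out.norm(dim=-1, keepdim=True)   # re-normalize to unit quaternions

# Usage: one batch of measurements from the five worn sensor devices.
predictor = OrientationPredictor()
tracked = torch.randn(1, 20)       # placeholder for 5 unit quaternions
predicted = predictor(tracked)     # predicted orientations for torso and forearms
```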
  • In some implementations, each of the sensor devices (111, 113, 115, 117 and 119) communicates its measurements directly to the computing device (141) in a way independent from the operations of other sensor devices.
  • Alternatively, one of the sensor devices (111, 113, 115, 117 and 119) may function as a base unit that receives measurements from one or more other sensor devices and transmits the bundled and/or combined measurements to the computing device (141). In some instances, the artificial neural network is implemented in the base unit and used to generate the predicted measurements that are communicated to the computing device (141).
  • Preferably, wireless connections made via a personal area wireless network (e.g., Bluetooth connections), or a local area wireless network (e.g., Wi-Fi connections) are used to facilitate the communication from the sensor devices (111, 113, 115, 117 and 119) to the computing device (141).
  • Alternatively, wired connections can be used to facilitate the communication among some of the sensor devices (111, 113, 115, 117 and 119) and/or with the computing device (141).
  • For example, a hand module (117 or 119) attached to or held in a corresponding hand (106 or 108) of the user may receive the motion measurements of a corresponding arm module (115 or 113) and transmit the motion measurements of the corresponding hand (106 or 108) and the corresponding upper arm (105 or 103) to the computing device (141).
  • The hand (106), the forearm (114), and the upper arm (105) can be considered a kinematic chain, for which an artificial neural network can be trained to predict the orientation measurements generated by an optical tracking system, based on the sensor inputs from the sensor devices (117 and 115) that are attached to the hand (106) and the upper arm (105), without a corresponding device on the forearm (114).
  • Optionally or in combination, the hand module (e.g., 117) may combine its measurements with the measurements of the corresponding arm module (115) to compute the orientation of the forearm connected between the hand (106) and the upper arm (105), in a way as disclosed in U.S. patent application Ser. No. 15/787,555, filed Oct. 18, 2017 and entitled “Tracking Arm Movements to Generate Inputs for Computer Systems”, the entire disclosure of which is hereby incorporated herein by reference.
  • For example, the hand modules (117 and 119) and the arm modules (115 and 113) can be each respectively implemented via a base unit (or a game controller) and an arm/shoulder module discussed in U.S. patent application Ser. No. 15/492,915, filed Apr. 20, 2017 and entitled “Devices for Controlling Computers based on Motions and Positions of Hands”, the entire disclosure of which application is hereby incorporated herein by reference.
  • In some implementations, the head module (111) is configured as a base unit that receives the motion measurements from the hand modules (117 and 119) and the arm modules (115 and 113) and bundles the measurement data for transmission to the computing device (141). In some instances, the computing device (141) is implemented as part of the head module (111). The head module (111) may further determine the orientation of the torso (101) from the orientation of the arm modules (115 and 113) and/or the orientation of the head module (111), using an artificial neural network trained for a corresponding kinematic chain, which includes the upper arms (103 and 105), the torso (101), and/or the head (107).
  • For the determination of the orientation of the torso (101), the hand modules (117 and 119) are optional in the system illustrated in FIG. 1.
  • Further, in some instances the head module (111) is not used in the tracking of the orientation of the torso (101) of the user.
  • Typically, the measurements of the sensor devices (111, 113, 115, 117 and 119) are calibrated for alignment with a common reference system, such as the coordinate system (100).
  • For example, the coordinate system (100) can correspond to the orientation of the arms and body of the user in a standardized pose illustrated in FIG. 1. When in the pose of FIG. 1, the arms of the user point in directions that are parallel to the Y axis; the front facing direction of the user is parallel to the X axis; and the direction from the legs and the torso (101) to the head (107) is parallel to the Z axis.
  • After the calibration, the hands, arms (105, 103), the head (107) and the torso (101) of the user may move relative to each other and relative to the coordinate system (100). The measurements of the sensor devices (111, 113, 115, 117 and 119) provide orientations of the hands (106 and 108), the upper arms (105, 103), and the head (107) of the user relative to the coordinate system (100). The computing device (141) computes, estimates, or predicts the current orientation of the torso (101) and/or the forearms (112 and 114) from the current orientations of the upper arms (105, 103), the current orientation of the head (107) of the user, and/or the current orientations of the hands (106 and 108) of the user and their orientation history using the prediction model (116).
  • Some techniques of using an artificial neural network to predict the movements of certain parts in a skeleton model that are not separately tracked using dedicated sensor devices can be found in U.S. patent application Ser. No. 15/996,389, filed Jun. 1, 2018 and entitled “Motion Predictions of Overlapping Kinematic Chains of a Skeleton Model used to Control a Computer System,” and U.S. patent application Ser. No. 15/973,137, filed May 7, 2018 and entitled “Tracking User Movements to Control a Skeleton Model in a Computer System,” the entire disclosures of which applications are hereby incorporated herein by reference.
  • Optionally or in combination, the computing device (141) may further compute the orientations of the forearms from the orientations of the hands (106 and 108) and upper arms (105 and 103), e.g., using a technique disclosed in U.S. patent application Ser. No. 15/787,555, filed Oct. 18, 2017 and entitled “Tracking Arm Movements to Generate Inputs for Computer Systems”, the entire disclosure of which is hereby incorporated herein by reference.
  • FIG. 2 illustrates a system to control computer operations according to one embodiment. For example, the system of FIG. 2 can be implemented via attaching the arm modules (115 and 113) to the upper arms (105 and 103) respectively, the head module (111) to the head (107) and/or hand modules (117 and 119), in a way illustrated in FIG. 1.
  • In FIG. 2, the head module (111) and the arm module (113) have micro-electromechanical system (MEMS) inertial measurement units (IMUs) (121 and 131) that measure motion parameters and determine orientations of the head (107) and the upper arm (103).
  • Similarly, the hand modules (117 and 119) can also have IMUs. In some applications, the hand modules (117 and 119) measure the orientation of the hands (106 and 108) and the movements of fingers are not separately tracked. In other applications, the hand modules (117 and 119) have separate IMUs for the measurement of the orientations of the palms of the hands (106 and 108), as well as the orientations of at least some phalange bones of at least some fingers on the hands (106 and 108). Examples of hand modules can be found in U.S. patent application Ser. No. 15/792,255, filed Oct. 24, 2017 and entitled “Tracking Finger Movements to Generate Inputs for Computer Systems,” the entire disclosure of which is hereby incorporated herein by reference.
  • Each of the IMUs (131 and 121) has a collection of sensor components that enable the determination of the movement, position and/or orientation of the respective IMU along a number of axes. Examples of the components are: a MEMS accelerometer that measures the projection of acceleration (the difference between the true acceleration of an object and the gravitational acceleration); a MEMS gyroscope that measures angular velocities; and a magnetometer that measures the magnitude and direction of a magnetic field at a certain point in space. In some embodiments, the IMUs use a combination of sensors in three and two axes (e.g., without a magnetometer).
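  • The sketch below (illustrative Python with SciPy; the sample layout and sampling interval are assumptions) shows how gyroscope readings from a single IMU can be integrated into an orientation estimate; the small per-step errors of this integration are the source of the drift and accumulated errors that the calibration described later corrects.

```python
from dataclasses import dataclass
import numpy as np
from scipy.spatial.transform import Rotation

@dataclass
class ImuSample:
    accel: np.ndarray   # m/s^2, 3 axes (true acceleration minus the gravitational component)
    gyro: np.ndarray    # rad/s, 3 axes (angular velocity)
    mag: np.ndarray     # magnetic field vector, 3 axes (omitted in some embodiments)

def integrate_gyro(samples, dt: float = 0.01, q0: Rotation = None) -> Rotation:
    """Dead-reckon orientation by composing small rotations from angular velocity.

    Each step multiplies in Rotation.from_rotvec(gyro * dt); small per-step errors
    accumulate over time, which is why periodic re-calibration is needed.
    """
    q = q0 if q0 is not None else Rotation.identity()
    for s in samples:
        q = q * Rotation.from_rotvec(s.gyro * dt)
    return q
```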
  • The computing device (141) can have a prediction model (116) and a motion processor (145). The measurements of the IMUs (e.g., 131, 121) from the head module (111), arm modules (e.g., 113 and 115), and/or hand modules (e.g., 117 and 119) are used in the prediction model (116) to generate predicted measurements of at least some of the parts that do not have attached sensor modules, such as the torso (101) and forearms (112 and 114). The predicted measurements and/or the measurements of the IMUs (e.g., 131, 121) are used in the motion processor (145).
  • The motion processor (145) has a skeleton model (143) of the user (e.g., as illustrated in FIG. 3). The motion processor (145) controls the movements of the parts of the skeleton model (143) according to the movements/orientations of the corresponding parts of the user. For example, the orientations of the hands (106 and 108), the forearms (112 and 114), the upper arms (103 and 105), the torso (101), and the head (107), as measured by the IMUs of the hand modules (117 and 119), the arm modules (113 and 115), and the head module (111), and/or as predicted by the prediction model (116) based on the IMU measurements, are used to set the orientations of the corresponding parts of the skeleton model (143).
  • Since the torso (101) does not have a separately attached sensor module, the movements/orientation of the torso (101) can be predicted using the prediction model (116) using the sensor measurements from sensor modules on a kinematic chain that includes the torso (101). For example, the prediction model (116) can be trained with the motion pattern of a kinematic chain that includes the head (107), the torso (101), and the upper arms (103 and 105) and can be used to predict the orientation of the torso (101) based on the motion history of the head (107), the torso (101), and the upper arms (103 and 105) and the current orientations of the head (107), and the upper arms (103 and 105).
  • Similarly, since a forearm (112 or 114) does not have a separately attached sensor module, the movements/orientation of the forearm (112 or 114) can be predicted using the prediction model (116) using the sensor measurements from sensor modules on a kinematic chain that includes the forearm (112 or 114). For example, the prediction model (116) can be trained with the motion pattern of a kinematic chain that includes the hand (106), the forearm (114), and the upper arm (105) and can be used to predict the orientation of the forearm (114) based on the motion history of the hand (106), the forearm (114), the upper arm (105) and the current orientations of the hand (106), and the upper arm (105).
  • The skeleton model (143) is controlled by the motion processor (145) to generate inputs for an application (147) running in the computing device (141). For example, the skeleton model (143) can be used to control the movement of an avatar/model of the arms (112, 114, 105 and 103), the hands (106 and 108), the head (107), and the torso (101) of the user of the computing device (141) in a video game, a virtual reality, a mixed reality, or augmented reality, etc.
  • Preferably, the arm module (113) has a microcontroller (139) to process the sensor signals from the IMU (131) of the arm module (113) and a communication module (133) to transmit the motion/orientation parameters of the arm module (113) to the computing device (141). Similarly, the head module (111) has a microcontroller (129) to process the sensor signals from the IMU (121) of the head module (111) and a communication module (123) to transmit the motion/orientation parameters of the head module (111) to the computing device (141).
  • Optionally, the arm module (113) and the head module (111) have LED indicators (137 and 127) respectively to indicate the operating status of the modules (113 and 111).
  • Optionally, the arm module (113) has a haptic actuator (138) to provide haptic feedback to the user.
  • Optionally, the head module (111) has a display device (127) and/or buttons and other input devices (125), such as a touch sensor, a microphone, a camera (126), etc.
  • In some instances, a stereo camera (126) is used to capture stereo images of the sensor devices (113, 115, 117, 119) to calibrate their measurements relative to a common coordinate system, such as the coordinate system (100) defined in connection with a reference pose illustrated in FIG. 1. Further, the LED indicators (e.g., 137) of a sensor module (e.g., 113) can be turned on during the time of capturing the stereo images such that the orientation and/or identity of the sensor module (e.g., 113) can be determined from the locations and/or patterns of the LED indicators.
  • When the LED lights are not captured in the images, or when the sensor device does not have LED lights, the orientation of the sensor module can be predicted based on an image of a portion of the user wearing the sensor device in a predefined manner. For example, an ANN can be used to determine the orientations of the wrist, palm, and distal, middle and proximal phalanges of the thumb and index finger from the image of the hand and forearm of the user; and the orientations can be further used in another ANN to determine the orientation of the sensor device.
  • In some implementations, the head module (111) is replaced with a module that is similar to the arm module (113) and that is attached to the head (107) via a strap or is secured to a head mount display device.
  • In some applications, the hand module (119) can be implemented with a module that is similar to the arm module (113) and attached to the hand via holding or via a strap. Optionally, the hand module (119) has buttons and other input devices, such as a touch sensor, a joystick, etc.
  • For example, the handheld modules disclosed in U.S. patent application Ser. No. 15/792,255, filed Oct. 24, 2017 and entitled “Tracking Finger Movements to Generate Inputs for Computer Systems”, U.S. patent application Ser. No. 15/787,555, filed Oct. 18, 2017 and entitled “Tracking Arm Movements to Generate Inputs for Computer Systems”, and/or U.S. patent application Ser. No. 15/492,915, filed Apr. 20, 2017 and entitled “Devices for Controlling Computers based on Motions and Positions of Hands” can be used to implement the hand modules (117 and 119), the entire disclosures of which applications are hereby incorporated herein by reference.
  • When a hand module (e.g., 117 or 119) tracks the orientations of the palm and a selected set of phalange bones, the motion pattern of a kinematic chain of the hand captured in the prediction model (116) can be used to predict the orientations of other phalange bones that do not wear sensor devices.
  • FIG. 2 shows a hand module (119) and an arm module (113) as examples. In general, an application for the tracking of the orientation of the torso (101) typically uses two arm modules (113 and 115) as illustrated in FIG. 1. The head module (111) can be used optionally to further improve the tracking of the orientation of the torso (101). Hand modules (117 and 119) can be further used to provide additional inputs and/or for the prediction/calculation of the orientations of the forearms (112 and 114) of the user.
  • Typically, an IMU (e.g., 131 or 121) in a module (e.g., 113 or 111) generates acceleration data from accelerometers, angular velocity data from gyrometers/gyroscopes, and/or orientation data from magnetometers. The microcontrollers (139 and 129) perform preprocessing tasks, such as filtering the sensor data (e.g., blocking sensors that are not used in a specific application), applying calibration data (e.g., to correct the average accumulated error computed by the computing device (141)), transforming motion/position/orientation data in three axes into a quaternion, and packaging the preprocessed results into data packets (e.g., using a data compression technique) for transmitting to the host computing device (141) with a reduced bandwidth requirement and/or communication time.
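  • A minimal sketch of this preprocessing path is shown below (illustrative Python; the packet layout and function names are assumptions, not the actual firmware): a calibration rotation is applied to the orientation quaternion and the result is packed into a compact packet for transmission to the host computing device (141).

```python
import struct
from scipy.spatial.transform import Rotation

def preprocess_and_pack(raw_orientation: Rotation, calibration: Rotation, module_id: int) -> bytes:
    """Apply the calibration rotation and pack the quaternion into a small packet.

    Assumed packet layout: 1 byte module id + 4 float32 quaternion components (x, y, z, w).
    """
    calibrated = calibration * raw_orientation
    x, y, z, w = calibrated.as_quat()
    return struct.pack("<Bffff", module_id, x, y, z, w)

def unpack(packet: bytes):
    """Host-side decoding of the packet back into a module id and a Rotation."""
    module_id, x, y, z, w = struct.unpack("<Bffff", packet)
    return module_id, Rotation.from_quat([x, y, z, w])
```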
  • Each of the microcontrollers (129, 139) may include a memory storing instructions controlling the operations of the respective microcontroller (129 or 139) to perform primary processing of the sensor data from the IMU (121, 131) and control the operations of the communication module (123, 133), and/or other components, such as the LED indicators (137), the haptic actuator (138), buttons and other input devices (125), the display device (127), etc.
  • The computing device (141) may include one or more microprocessors and a memory storing instructions to implement the motion processor (145). The motion processor (145) may also be implemented via hardware, such as Application-Specific Integrated Circuit (ASIC) or Field-Programmable Gate Array (FPGA).
  • In some instances, one of the modules (111, 113, 115, 117, and/or 119) is configured as a primary input device; and the other module is configured as a secondary input device that is connected to the computing device (141) via the primary input device. A secondary input device may use the microprocessor of its connected primary input device to perform some of the preprocessing tasks. A module that communicates directly to the computing device (141) is considered a primary input device, even when the module does not have a secondary input device that is connected to the computing device via the primary input device.
  • In some instances, the computing device (141) specifies the types of input data requested, and the conditions and/or frequency of the input data; and the modules (111, 113, 115, 117, and/or 119) report the requested input data under the conditions and/or according to the frequency specified by the computing device (141). Different reporting frequencies can be specified for different types of input data (e.g., accelerometer measurements, gyroscope/gyrometer measurements, magnetometer measurements, position, orientation, velocity).
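  • Such a request could take a form like the following sketch (illustrative Python; the field names and values are hypothetical and do not describe a defined protocol), with each type of input data carrying its own reporting frequency and condition:

```python
# Hypothetical request sent by the computing device (141) to a sensor module.
input_request = {
    "orientation":   {"frequency_hz": 90, "condition": "always"},
    "accelerometer": {"frequency_hz": 30, "condition": "always"},
    "magnetometer":  {"frequency_hz": 5,  "condition": "on_change"},
    "touch_events":  {"frequency_hz": 0,  "condition": "on_event"},  # report only when they occur
}
```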
  • In general, the computing device (141) may be a data processing system, such as a mobile phone, a desktop computer, a laptop computer, a head mount virtual reality display, a personal media player, a tablet computer, etc.
  • FIG. 3 illustrates a skeleton model that can be controlled by tracking user movements according to one embodiment. For example, the skeleton model of FIG. 3 can be used in the motion processor (145) of FIG. 2.
  • The skeleton model illustrated in FIG. 3 includes a torso (232) and left and right upper arms (203 and 205) that can move relative to the torso (232) via the shoulder joints (234 and 241). The skeleton model may further include the forearms (215 and 233), hands (206 and 208), neck, head (207), legs and feet. In some instances, a hand (206) includes a palm connected to phalange bones (e.g., 245) of fingers, and metacarpal bones of thumbs via joints (e.g., 244).
  • The positions/orientations of the rigid parts of the skeleton model illustrated in FIG. 3 are controlled by the measured orientations of the corresponding parts of the user illustrated in FIG. 1. For example, the orientation of the head (207) of the skeleton model is configured according to the orientation of the head (107) of the user as measured using the head module (111); the orientation of the upper arm (205) of the skeleton model is configured according to the orientation of the upper arm (105) of the user as measured using the arm module (115); and the orientation of the hand (206) of the skeleton model is configured according to the orientation of the hand (106) of the user as measured using the hand module (117); etc.
  • For example, the tracking system as illustrated in FIG. 2 measures the orientations of the modules (111, 113, . . . , 119) using IMUs (e.g., 121 and 131). The inertial-based sensors offer good user experiences, have fewer restrictions on the use of the sensors, and can be implemented in a computationally efficient way. However, the inertial-based sensors may be less accurate than certain tracking methods in some situations, and can have drift errors and/or accumulated errors through time integration. Drift errors and/or accumulated errors can be considered as the change of the reference orientation used for the measurement from a known reference orientation to an unknown reference orientation. An updated calibration can remove the drift errors and/or accumulated errors.
  • An optical tracking system can use one or more cameras (e.g., 126) to track the positions and/or orientations of optical markers (e.g., LED indicators (137)) that are in the fields of view of the cameras. When the optical markers are within the fields of view of the cameras, the images captured by the cameras can be used to compute the positions and/or orientations of the optical markers and thus the orientations of the parts that are marked using the optical markers. However, the optical tracking system may not be as user friendly as the inertial-based tracking system and can be more expensive to deploy. Further, when an optical marker is out of the fields of view of the cameras, the position and/or orientation of the optical marker cannot be determined by the optical tracking system.
  • An artificial neural network of the prediction model (116) can be trained to predict the measurements produced by the optical tracking system based on the measurements produced by the inertial-based tracking system. Thus, the drift errors and/or accumulated errors in inertial-based measurements can be reduced and/or suppressed, which reduces the need for re-calibration of the inertial-based tracking system. Further details on the use of the prediction model (116) can be found in U.S. patent application Ser. No. 15/973,137, filed May 7, 2018 and entitled “Tracking User Movements to Control a Skeleton Model in a Computer System,” the entire disclosure of which application is hereby incorporated herein by reference.
  • Further, the orientations determined using images captured by the camera (126) can be used to calibrate the measurements of the sensor devices (111, 113, 115, 117, 119) relative to a common coordinate system, such as the coordinate system (100) defined using a standardized reference pose illustrated in FIG. 1, as further discussed below.
  • FIGS. 4-6 illustrate processing of images showing a portion of a user to determine orientations of predefined features of the portion of the user.
  • FIG. 4 illustrates an image (400) that can be captured using a camera (126) configured on a head mounted display (127). As illustrated in FIG. 4, a sensor device (401) having an inertial measurement unit, similar to the IMU (131) in an arm module (113), can be configured to have the form factor of a ring adapted to be worn on the middle phalange (403) of the index finger. The sensor device (401) is configured with a touch pad that can be readily touched by the thumb (405) to generate a touch input.
  • The image (400) can be processed as input for an ANN to predict orientations of predefined features (601-603) of the portion of the user.
  • In some embodiments, the image (400) of FIG. 4 captured by the camera (126) is converted into an image similar to the image (500) of FIG. 5 in a black/white format for processing to recognize the orientations of the predefined features. For example, the image (400) of FIG. 4 captured by the camera (126) can be processed by an ANN (501) to determine the orientations (503) of features, such as the forearm (613), wrist (607), palm (611), distal phalange (605) of the thumb, middle phalange (607) of the thumb, distal phalange (603) of the index finger, middle phalange (615) of the index finger, proximal phalange (609) of the index finger, and metacarpal (611) of the index finger connecting to the palm, as illustrated in FIG. 6. Optionally, the system converts the original image (400) from a higher resolution into a lower resolution image in a black/white format (500) to facilitate recognizing the orientations (503) of the features (e.g., forearm (613), wrist (607), palm (611), distal phalanges (617 and 605), middle phalanges (615 and 607), proximal phalange (609) and metacarpal (611), as illustrated in FIG. 6).
  • The orientations of the forearm (613), wrist (607), palm (611), distal phalanges (617 and 605), middle phalanges (615 and 607), proximal phalange (609), and metacarpal (611), as illustrated in FIG. 6 and determined from the image of the hand and the upper arm illustrated in FIG. 4, can be provided as input to an ANN (601) to predict the orientation (603) of the sensor device (401). Capturing the upper arm portion in the image (400) of FIG. 4 is optional. The orientations of the forearm (613), wrist (607), palm (611), distal phalanges (617 and 605), middle phalanges (615 and 607), proximal phalange (609), and metacarpal (611) can be recognized/determined without capturing the upper arm in the image (400) of FIG. 4. However, capturing the upper arm in the image (400) of FIG. 4 can improve the accuracy of the recognized/determined orientations.
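  • The two-stage processing described above can be sketched as follows (Python with PyTorch; the network architectures, the count of nine feature vectors, and the image resolution are illustrative assumptions): a convolutional network maps the image of the hand to orientation vectors of the predefined features, and a second network maps those vectors to the orientation of the sensor device.

```python
import torch
import torch.nn as nn

class FeatureOrientationCNN(nn.Module):
    """Stage 1: black/white image of the hand -> orientation vectors of predefined features."""
    def __init__(self, n_features: int = 9):
        super().__init__()
        self.n_features = n_features
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_features * 3)   # one 3D direction vector per feature

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        x = self.conv(image).flatten(1)
        return self.head(x).view(-1, self.n_features, 3)

class SensorOrientationANN(nn.Module):
    """Stage 2: feature orientation vectors -> quaternion of the sensor device."""
    def __init__(self, n_features: int = 9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features * 3, 64), nn.ReLU(),
            nn.Linear(64, 4),
        )

    def forward(self, feature_vectors: torch.Tensor) -> torch.Tensor:
        q = self.net(feature_vectors.flatten(1))
        return q / q.norm(dim=-1, keepdim=True)      # unit quaternion

# Usage: a low-resolution black/white image of the hand and forearm.
image = torch.rand(1, 1, 128, 128)
features = FeatureOrientationCNN()(image)       # orientations (503) of predefined features
sensor_quat = SensorOrientationANN()(features)   # orientation (603) of the sensor device (401)
```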
  • FIG. 7 shows a method for calibrating orientation measurements generated by the inertial measurement unit relative to a skeleton model of the user based on the orientation of the sensor device. For example, the method of FIG. 7 can be used in a system of FIG. 2 and/or FIG. 1 to control a skeleton model of FIG. 3, after the orientation of the sensor device (401) of FIG. 4 is determined using images captured and processed as illustrated in FIGS. 4-6.
  • In FIG. 7, the method includes: determining (701) that the thumb on a hand is on the touch pad of the sensor device (401) worn on a finger of the hand; in response to the determination that the thumb on the hand is on the touch pad of the sensor device (401) worn on the finger of the hand, capturing (703) an image (400) using the camera (126) configured on a head mounted display (127); receiving (705) the image (400) showing a portion of the user, including the hand to which the sensor device (401) is attached and, optionally, an upper arm connected to the hand; determining (707) orientations of predefined features of the portion of the user based on the image (400, 500, 600) (e.g., vectors aligned with bones in the hand of the user); determining (709), using the artificial neural network (ANN) (601), the orientation (603) of the sensor device (401) based on the orientations (503) of the predefined features; and calibrating (711) orientation measurements generated by an inertial measurement unit in the sensor device (401) relative to a skeleton model of the user based on the orientation (603) of the sensor device (401) determined using the artificial neural network (601).
  • For example, when no LED lights are available in images as optical markers for the determination of the orientation of a sensor device (401), the method of FIG. 7 can be used to determine the orientation of the sensor module (401) and thus calibrate the orientation measurements generated by the inertial measurement unit in the sensor module (401). In some embodiments, the sensor device (401) is configured to be attached to the middle phalange (615) of the index finger; and the sensor device (401) can have a touch pad. When the thumb on the hand is determined to be on the touch pad of the sensor device (401), the camera of the system can capture an image showing a portion of the user, including the hand and optionally the upper arm of the user, where the thumb (605) of the user is placed on the touch pad of the sensor device (401). For example, the image can be captured using a camera in a head mounted display worn on the head of the user such that the orientation measured via the image is relative to a skeleton model of the user. Orientations of predefined features of the portion of the user can be calculated based on the image using an ANN (501). For example, the ANN (501) can be a convolutional neural network (CNN) trained using a training dataset. The training dataset can be obtained by capturing multiple images of a user having the sensor device (401) on the middle phalange (403) of the index finger and having the thumb (405) touching the touch pad of the sensor device (401). The images can be viewed by human operators to identify the vectors (e.g., 617, 615, 609, 611, 605, 607, 613). The vectors can be identified relative to a reference system of the skeleton model (200) of the user. A supervised machine learning technique can be used to train the CNN to predict the vectors from the images with reduced/minimized differences between the predicted vectors and the vectors identified by human operators.
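  • A minimal supervised-training sketch for the first network is given below (illustrative Python with PyTorch; the stand-in model, the tensor shapes, and the optimizer settings are assumptions): the network is fit to the human-labeled feature vectors by minimizing the difference between its predictions and the labels.

```python
import torch
import torch.nn as nn

# Stand-in for the image -> feature-orientation network (e.g., a CNN as sketched above).
n_features = 9
model = nn.Sequential(nn.Flatten(), nn.Linear(128 * 128, n_features * 3))

# Hypothetical training data: captured hand images and human-labeled feature vectors,
# expressed relative to the reference system of the skeleton model.
images = torch.rand(256, 1, 128, 128)
labeled_vectors = torch.rand(256, n_features, 3)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    optimizer.zero_grad()
    predicted = model(images).view(-1, n_features, 3)
    loss = loss_fn(predicted, labeled_vectors)   # difference between predicted and labeled vectors
    loss.backward()
    optimizer.step()
```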
  • The ANN (601) can be trained to determine the orientation (603) of the sensor device (401) based on the orientations (503) of the predefined features. For example, a training dataset can be collected within a predetermined time period following the calibration of the orientation measurements of the inertial measurement unit in the sensor module (401). For the generation of the training dataset, the calibration can be performed using an alternative method. For example, the touch pad on the sensor device (401) used to generate the training dataset can be painted with optical marks to allow the determination of its orientation from images captured by the camera configured on the head mounted display; and the calibration can be performed with the touch pad of the sensor device (401) visible to the camera (and the thumb moved away from the touch pad). Once the orientation measurements of the inertial measurement unit in the sensor module (401) are calibrated, the orientation measurements generated within the predetermined time period following the calibration can be considered accurate; and the images of the hand can be captured to label the feature vectors (e.g., as illustrated in FIG. 6) to generate the orientations of the features (503) for the corresponding orientations measured by the inertial measurement unit in the sensor module (401). A supervised machine learning technique can be used to train the ANN (601) to predict the orientations measured by the inertial measurement unit in the sensor module (401) from the feature vectors labeled by human operators.
  • After the training, the ANN (601) can be used to predict the orientation (603) of the sensor device (401) based on the orientations (503) of the predefined features, such as vectors aligned with bones, structures and/or characteristic points in a portion of the user, such as the wrist, palm, and distal, middle and proximal phalanges of the thumb and index finger. When the predicted orientation (603) of the sensor device (401) is different from the orientation measurement generated by the inertial measurement unit, a correction rotation can be applied to the orientation measurement generated by the inertial measurement unit such that the corrected orientation measurement is in agreement with the orientation (603) of the sensor device (401) predicted by the ANN (601). Thus, periodically, when an image similar to that illustrated in FIG. 4 is available, the inertial measurement unit of the sensor device (401) is calibrated based on the results of the ANN (501) and the ANN (601).
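  • The correction step can be sketched as follows (illustrative Python with SciPy; the function names and example values are assumptions): the correction rotation is the rotation that takes the current IMU measurement to the ANN-predicted orientation, and the same correction is then applied to subsequent raw IMU measurements until the next calibration opportunity.

```python
from scipy.spatial.transform import Rotation

def compute_correction(q_imu: Rotation, q_predicted: Rotation) -> Rotation:
    """Rotation that maps the IMU measurement onto the ANN-predicted orientation (603)."""
    return q_predicted * q_imu.inv()

def apply_correction(q_imu_raw: Rotation, correction: Rotation) -> Rotation:
    """Calibrated measurement, now in agreement with the camera/ANN-derived orientation."""
    return correction * q_imu_raw

# Usage: when an image of the hand is available and the ANN (601) predicts q_predicted,
# the correction is recomputed; between images the same correction keeps being applied.
q_imu = Rotation.from_euler("xyz", [10, 0, 0], degrees=True)        # drifting IMU measurement
q_predicted = Rotation.from_euler("xyz", [12, 1, 0], degrees=True)  # ANN-predicted orientation
correction = compute_correction(q_imu, q_predicted)
q_calibrated = apply_correction(q_imu, correction)                  # equals q_predicted here
```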
  • Using the above discussed techniques, the IMU measurements can be calibrated without requiring the user to perform an exact, predefined pose (e.g., a pose as illustrated in FIG. 1). Further, different modules can be calibrated separately while they are in the field of view of the stereo camera (126). The calibration can be performed in real time on an on-going basis. For example, the computing device (141) may instruct the camera (126) to take stereo images from time to time; and when a sensor module is found within a stereo image, the computing device (141) can perform a calibration calculation based on the stereo image.
  • The present disclosure includes methods and apparatuses which perform these methods, including data processing systems which perform these methods, and computer readable media containing instructions which when executed on data processing systems cause the systems to perform these methods.
  • For example, the computing device (141), the arm modules (113, 115) and/or the head module (111) can be implemented using one or more data processing systems.
  • A typical data processing system may include an inter-connect (e.g., bus and system core logic), which interconnects a microprocessor(s) and memory. The microprocessor is typically coupled to cache memory.
  • The inter-connect interconnects the microprocessor(s) and the memory together and also interconnects them to input/output (I/O) device(s) via I/O controller(s). I/O devices may include a display device and/or peripheral devices, such as mice, keyboards, modems, network interfaces, printers, scanners, video cameras and other devices known in the art. In one embodiment, when the data processing system is a server system, some of the I/O devices, such as printers, scanners, mice, and/or keyboards, are optional.
  • The inter-connect can include one or more buses connected to one another through various bridges, controllers and/or adapters. In one embodiment the I/O controllers include a USB (Universal Serial Bus) adapter for controlling USB peripherals, and/or an IEEE-1394 bus adapter for controlling IEEE-1394 peripherals.
  • The memory may include one or more of: ROM (Read Only Memory), volatile RAM (Random Access Memory), and non-volatile memory, such as hard drive, flash memory, etc.
  • Volatile RAM is typically implemented as dynamic RAM (DRAM) which requires power continually in order to refresh or maintain the data in the memory. Non-volatile memory is typically a magnetic hard drive, a magnetic optical drive, an optical drive (e.g., a DVD RAM), or other type of memory system which maintains data even after power is removed from the system. The non-volatile memory may also be a random access memory.
  • The non-volatile memory can be a local device coupled directly to the rest of the components in the data processing system. A non-volatile memory that is remote from the system, such as a network storage device coupled to the data processing system through a network interface such as a modem or Ethernet interface, can also be used.
  • In the present disclosure, some functions and operations are described as being performed by or caused by software code to simplify description. However, such expressions are also used to specify that the functions result from execution of the code/instructions by a processor, such as a microprocessor.
  • Alternatively, or in combination, the functions and operations as described here can be implemented using special purpose circuitry, with or without software instructions, such as using Application-Specific Integrated Circuit (ASIC) or Field-Programmable Gate Array (FPGA). Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.
  • While one embodiment can be implemented in fully functioning computers and computer systems, various embodiments are capable of being distributed as a computing product in a variety of forms and are capable of being applied regardless of the particular type of machine or computer-readable media used to actually effect the distribution.
  • At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.
  • Routines executed to implement the embodiments may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as “computer programs.” The computer programs typically include one or more instructions, set at various times in various memory and storage devices in a computer, that, when read and executed by one or more processors in the computer, cause the computer to perform the operations necessary to execute elements involving the various aspects.
  • A machine readable medium can be used to store software and data which, when executed by a data processing system, cause the system to perform various methods. The executable software and data may be stored in various places including, for example, ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices. Further, the data and instructions can be obtained from centralized servers or peer-to-peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer-to-peer networks at different times and in different communication sessions or in a same communication session. The data and instructions can be obtained in their entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine readable medium in their entirety at a particular instance of time.
  • Examples of computer-readable media include but are not limited to non-transitory, recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD ROM), Digital Versatile Disks (DVDs), etc.), among others. The computer-readable media may store the instructions.
  • The instructions may also be embodied in digital and analog communication links for electrical, optical, acoustical or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc. However, such propagated signals are not tangible machine readable media and are not configured to store instructions.
  • In general, a machine readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.).
  • In various embodiments, hardwired circuitry may be used in combination with software instructions to implement the techniques. Thus, the techniques are neither limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by the data processing system.
  • In the foregoing specification, the disclosure has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.

Claims (20)

1. A method, comprising:
receiving an image showing a portion of a user, wherein a sensor device is attached on the portion of the user, the sensor device including an inertial measurement unit configured to measure an orientation;
determining orientations of predefined features of the portion of the user based on the image;
determining, using a first artificial neural network (ANN), the orientation of the sensor device based on the orientations of the predefined features; and
calibrating orientation measurements generated by the inertial measurement unit relative to a skeleton model of the user based on the orientation of the sensor device determined using the first artificial neural network.
2. The method of claim 1, wherein the orientations of the predefined features are determined from the image using a second artificial neural network (ANN).
3. The method of claim 2, wherein the second ANN is a convolutional neural network.
4. The method of claim 3, wherein the portion of the user includes a hand of the user; and the predefined features include vectors aligned with bones in the hand of the user.
5. The method of claim 4, further comprising:
capturing the image using a camera in a head mounted display, in response to a determination that a thumb on the hand is on a touch pad on the sensor device worn on a finger on the hand.
6. (canceled)
7. The method of claim 1, wherein the orientations of the predefined features are calculated relative to a reference system of the skeleton model of the user.
8. A system, comprising:
one or more processors; and
a non-transitory computer-readable medium including one or more sequences of instructions that, when executed by the one or more processors, cause:
receiving an image showing a portion of a user, wherein a sensor device is attached on the portion of the user, the sensor device including an inertial measurement unit configured to measure an orientation;
determining orientations of predefined features of the portion of the user based on the image;
determining, using a first artificial neural network (ANN), the orientation of the sensor device based on the orientations of the predefined features; and
calibrating orientation measurements generated by the inertial measurement unit relative to a skeleton model of the user based on the orientation of the sensor device determined using the first artificial neural network.
9. The system of claim 8, wherein the orientations of the predefined features are determined from the image using a second artificial neural network (ANN).
10. The system of claim 9, wherein the second ANN is a convolutional neural network.
11. The system of claim 10, wherein the portion of the user includes a hand of the user; and the predefined features include vectors aligned with bones in the hand of the user.
12. The system of claim 11, further comprising:
capturing the image using a camera in a head mounted display, in response to a determination that a thumb on the hand is on a touch pad on the sensor device worn on a finger on the hand.
13. (canceled)
14. The system of claim 8, wherein the orientations of the predefined features are calculated relative to a reference system of the skeleton model of the user.
15. A non-transitory computer storage medium storing instructions which, when executed by a computing device, cause the computing device to perform a method, the method comprising:
receiving an image showing a portion of a user, wherein a sensor device is attached on the portion of the user, the sensor device including an inertial measurement unit configured to measure an orientation;
determining orientations of predefined features of the portion of the user based on the image;
determining, using a first artificial neural network (ANN), the orientation of the sensor device based on the orientations of the predefined features; and
calibrating orientation measurements generated by the inertial measurement unit relative to a skeleton model of the user based on the orientation of the sensor device determined using the first artificial neural network.
16. The non-transitory computer storage medium of claim 15, wherein the orientations of the predefined features are determined from the image using a second artificial neural network (ANN).
17. The non-transitory computer storage medium of claim 16, wherein the second ANN is a convolutional neural network.
18. The non-transitory computer storage medium of claim 17, wherein the portion of the user includes a hand of the user; and the predefined features include vectors aligned with bones in the hand of the user.
19. The non-transitory computer storage medium of claim 18, further comprising:
capturing the image using a camera in a head mounted display, in response to a determination that a thumb on the hand is on a touch pad on the sensor device worn on a finger on the hand.
20. (canceled)
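The following editorial sketch (not part of the claims or the specification) illustrates one plausible shape for the two networks recited in claims 1-4, using PyTorch: a convolutional "second ANN" maps the captured image to unit vectors aligned with bones in the hand, and a small "first ANN" maps those feature orientations to the sensor device orientation as a quaternion. The layer sizes, NUM_BONES, and all names are assumptions made for illustration.

```python
# Illustrative sketch only; architecture and sizes are assumptions, not the
# patented implementation.
import torch
import torch.nn as nn

NUM_BONES = 15  # assumed number of predefined bone-aligned feature vectors

class FeatureOrientationCNN(nn.Module):
    """Second ANN (claims 2-3): image -> orientations of predefined hand features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, NUM_BONES * 3),       # one 3D direction per bone
        )
    def forward(self, image):                   # image: (N, 3, H, W)
        vecs = self.net(image).view(-1, NUM_BONES, 3)
        return nn.functional.normalize(vecs, dim=-1)

class DeviceOrientationMLP(nn.Module):
    """First ANN (claim 1): feature orientations -> sensor device orientation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_BONES * 3, 64), nn.ReLU(),
            nn.Linear(64, 4),                   # quaternion (x, y, z, w)
        )
    def forward(self, bone_vectors):
        q = self.net(bone_vectors.flatten(1))
        return nn.functional.normalize(q, dim=-1)

# Usage: image -> bone vectors -> device orientation, which would then feed the
# calibration step sketched earlier in the specification.
image = torch.zeros(1, 3, 224, 224)
bones = FeatureOrientationCNN()(image)
q_device = DeviceOrientationMLP()(bones)
```

Per claim 5, the image capture itself could be triggered when the thumb is detected on the touch pad of the finger-worn sensor device; that trigger logic is outside the scope of this sketch.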
US16/576,661 2019-09-19 2019-09-19 Calibration of inertial measurement units in alignment with a skeleton model to control a computer system based on determination of orientation of an inertial measurement unit from an image of a portion of a user Active US10976863B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/576,661 US10976863B1 (en) 2019-09-19 2019-09-19 Calibration of inertial measurement units in alignment with a skeleton model to control a computer system based on determination of orientation of an inertial measurement unit from an image of a portion of a user

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US16/576,661 US10976863B1 (en) 2019-09-19 2019-09-19 Calibration of inertial measurement units in alignment with a skeleton model to control a computer system based on determination of orientation of an inertial measurement unit from an image of a portion of a user

Publications (2)

Publication Number Publication Date
US20210089162A1 US20210089162A1 (en) 2021-03-25
US10976863B1 US10976863B1 (en) 2021-04-13

Family

ID=74881858

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/576,661 Active US10976863B1 (en) 2019-09-19 2019-09-19 Calibration of inertial measurement units in alignment with a skeleton model to control a computer system based on determination of orientation of an inertial measurement unit from an image of a portion of a user

Country Status (1)

Country Link
US (1) US10976863B1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113900516A (en) * 2021-09-27 2022-01-07 阿里巴巴达摩院(杭州)科技有限公司 Data processing method and device, electronic equipment and storage medium
US11474593B2 (en) 2018-05-07 2022-10-18 Finch Technologies Ltd. Tracking user movements to control a skeleton model in a computer system
WO2022255642A1 (en) * 2021-06-04 2022-12-08 주식회사 피앤씨솔루션 Weight-reduced hand joint prediction method and device for implementation of real-time hand motion interface of augmented reality glass device
US11531392B2 (en) * 2019-12-02 2022-12-20 Finchxr Ltd. Tracking upper arm movements using sensor modules attached to the hand and forearm
EP4239455A1 (en) * 2022-03-03 2023-09-06 HTC Corporation Motion computing system and method for mixed reality
US11836302B2 (en) 2022-03-03 2023-12-05 Htc Corporation Motion computing system and method for virtual reality
US11893166B1 (en) * 2022-11-08 2024-02-06 Snap Inc. User avatar movement control using an augmented reality eyewear device
EP4339740A1 (en) * 2022-09-15 2024-03-20 HTC Corporation Controller, control method, and wearable tracking system

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2040903C (en) 1991-04-22 2003-10-07 John G. Sutherland Neural networks
US6982697B2 (en) 2002-02-07 2006-01-03 Microsoft Corporation System and process for selecting objects in a ubiquitous computing environment
JP2010534316A (en) 2007-07-10 2010-11-04 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ System and method for capturing movement of an object
GB0720412D0 (en) 2007-10-18 2007-11-28 Melexis Nv Combined mems accelerometer and gyroscope
KR101483713B1 (en) 2008-06-30 2015-01-16 삼성전자 주식회사 Apparatus and Method for capturing a motion of human
US9019349B2 (en) 2009-07-31 2015-04-28 Naturalpoint, Inc. Automated collective camera calibration for motion capture
US8279418B2 (en) 2010-03-17 2012-10-02 Microsoft Corporation Raster scanning for depth detection
US10321873B2 (en) 2013-09-17 2019-06-18 Medibotics Llc Smart clothing for ambulatory human motion capture
US10716510B2 (en) 2013-09-17 2020-07-21 Medibotics Smart clothing with converging/diverging bend or stretch sensors for measuring body motion or configuration
EP2893479B1 (en) 2012-09-05 2018-10-24 Sizer Technologies Ltd System and method for deriving accurate body size measures from a sequence of 2d images
US20150177842A1 (en) * 2013-12-23 2015-06-25 Yuliya Rudenko 3D Gesture Based User Authorization and Device Control Methods
US9524580B2 (en) 2014-01-06 2016-12-20 Oculus Vr, Llc Calibration of virtual reality systems
US10019059B2 (en) * 2014-08-22 2018-07-10 Sony Interactive Entertainment Inc. Glove interface object
US9552070B2 (en) 2014-09-23 2017-01-24 Microsoft Technology Licensing, Llc Tracking hand/body pose
US10606341B2 (en) * 2015-02-22 2020-03-31 Technion Research & Development Foundation Limited Gesture recognition using multi-sensory data
US10388053B1 (en) 2015-03-27 2019-08-20 Electronic Arts Inc. System for seamless animation transition
US9911219B2 (en) 2015-05-13 2018-03-06 Intel Corporation Detection, tracking, and pose estimation of an articulated body
US10318008B2 (en) * 2015-12-15 2019-06-11 Purdue Research Foundation Method and system for hand pose detection
US10037624B2 (en) 2015-12-29 2018-07-31 Microsoft Technology Licensing, Llc Calibrating object shape
US10019629B2 (en) 2016-05-31 2018-07-10 Microsoft Technology Licensing, Llc Skeleton-based action detection using recurrent neural network
CN106127120B (en) 2016-06-16 2018-03-13 北京市商汤科技开发有限公司 Posture estimation method and device, computer system
US11337652B2 (en) 2016-07-25 2022-05-24 Facebook Technologies, Llc System and method for measuring the movements of articulated rigid bodies
US20200073483A1 (en) * 2018-08-31 2020-03-05 Ctrl-Labs Corporation Camera-guided interpretation of neuromuscular signals
US10178495B2 (en) 2016-10-14 2019-01-08 OneMarket Network LLC Systems and methods to determine a location of a mobile device
WO2018090308A1 (en) * 2016-11-18 2018-05-24 Intel Corporation Enhanced localization method and apparatus
US10719125B2 (en) 2017-05-09 2020-07-21 Microsoft Technology Licensing, Llc Object and environment tracking via shared sensor
US10614591B2 (en) * 2017-05-31 2020-04-07 Google Llc Hand tracking based on articulated distance field
US11474593B2 (en) 2018-05-07 2022-10-18 Finch Technologies Ltd. Tracking user movements to control a skeleton model in a computer system
US10416755B1 (en) 2018-06-01 2019-09-17 Finch Technologies Ltd. Motion predictions of overlapping kinematic chains of a skeleton model used to control a computer system
US11009941B2 (en) 2018-07-25 2021-05-18 Finch Technologies Ltd. Calibration of measurement units in alignment with a skeleton model to control a computer system

Also Published As

Publication number Publication date
US10976863B1 (en) 2021-04-13

Similar Documents

Publication Publication Date Title
US11009941B2 (en) Calibration of measurement units in alignment with a skeleton model to control a computer system
US10860091B2 (en) Motion predictions of overlapping kinematic chains of a skeleton model used to control a computer system
US11474593B2 (en) Tracking user movements to control a skeleton model in a computer system
US10976863B1 (en) Calibration of inertial measurement units in alignment with a skeleton model to control a computer system based on determination of orientation of an inertial measurement unit from an image of a portion of a user
US10534431B2 (en) Tracking finger movements to generate inputs for computer systems
US10775946B2 (en) Universal handheld controller of a computer system
US10521011B2 (en) Calibration of inertial measurement units attached to arms of a user and to a head mounted device
US10540006B2 (en) Tracking torso orientation to generate inputs for computer systems
US11175729B2 (en) Orientation determination based on both images and inertial measurement units
US11054923B2 (en) Automatic switching between different modes of tracking user motions to control computer applications
US11079860B2 (en) Kinematic chain motion predictions using results from multiple approaches combined via an artificial neural network
US11237632B2 (en) Ring device having an antenna, a touch pad, and/or a charging pad to control a computing device based on user motions
US11009964B2 (en) Length calibration for computer models of users to generate inputs for computer systems
WO2020009715A2 (en) Tracking user movements to control a skeleton model in a computer system
US20210068674A1 (en) Track user movements and biological responses in generating inputs for computer systems
US20230011082A1 (en) Combine Orientation Tracking Techniques of Different Data Rates to Generate Inputs to a Computing System
US20210318759A1 (en) Input device to control a computing device with a touch pad having a curved surface configured to sense touch input
US11531392B2 (en) Tracking upper arm movements using sensor modules attached to the hand and forearm
US20210072820A1 (en) Sticky device to track arm movements in generating inputs for computer systems
US10809797B1 (en) Calibration of multiple sensor modules related to an orientation of a user of the sensor modules
US20230103932A1 (en) Motion Sensor Modules with Dynamic Protocol Support for Communications with a Computing Device
US20230214027A1 (en) Reduction of Time Lag Between Positions and Orientations Being Measured and Display Corresponding to the Measurements

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

AS Assignment

Owner name: FINCH TECHNOLOGIES LTD., VIRGIN ISLANDS, BRITISH

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ERIVANTCEV, VIKTOR VLADIMIROVICH;KARTASHOV, ALEXEY IVANOVICH;GONCHAROV, DANIIL OLEGOVICH;AND OTHERS;REEL/FRAME:051353/0504

Effective date: 20190918

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: FINCHXR LTD., CYPRUS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FINCH TECHNOLOGIES LTD.;REEL/FRAME:060422/0732

Effective date: 20220630