GB2466714A - Hybrid visual and physical object tracking for a virtual reality (VR) system
- Publication number
- GB2466714A (application GB0922650A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- sensor
- information
- feature
- visual
- sensor information
- Prior art date
- Legal status: Granted (status as listed by Google Patents; an assumption, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G06T7/2033—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
- H04N5/145—Movement estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Psychiatry (AREA)
- General Health & Medical Sciences (AREA)
- Processing Or Creating Images (AREA)
- Image Analysis (AREA)
Abstract
A method comprises: combining 210 visual information 202 from an image of at least an object feature, and information 209 from a sensor attached to the object; and determining the position of a representation of the feature in a virtual representation of the object. As also independently claimed, the captured images include content representing plural markers attached to a motion capture body suit 102, with the physical position sensor data provided by plural inertial sensors attached to an actor's body. The method and system may determine regions of future images likely to include a feature, based on sensor data, allowing motion tracking. The object with the physical (e.g. inertial) sensor may itself be a camera. The virtual representation may be a virtual online game character, and the visual information may be an object silhouette. Sensors may provide time series data. One or more poses (e.g. position and orientation) of the object can be represented.
Description
Visual and Physical Motion Sensing for Three-Dimensional Motion Capture
TECHNICAL FIELD
This document relates to combining physical and visual information for three-dimensional motion capture.
BACKGROUND
Motion capture techniques allow motion data to be generated based on tracking and recording the movement of real objects. For example, a realistic sequence of motion, e.g., by a human actor, can be captured and used to drive the motion of an animated object. To capture such motion with conventional systems, an actor wears a body suit (e.g., black coloured) to which retro-reflective objects (e.g., balls) are attached at locations that correspond to the actor's joints, e.g., shoulder, elbow, wrist. As the actor performs, a sequence of movements is digitally recorded by a number of cameras and processed to identify each retro-reflective ball as a point, upon which triangulation can be used to determine the three-dimensional position of the point. A virtual skeleton representing the actor can be fitted to such point movement collected over time to represent the motion of the actor.
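As a minimal sketch of the triangulation step mentioned above (not taken from the patent), the following recovers a marker's 3D position from two calibrated camera views using the direct linear transform; the projection matrices and image coordinates are illustrative assumptions only.

```python
# Two-view triangulation sketch: given 3x4 projection matrices of two calibrated
# cameras and the 2D observation of one marker in each view, solve for the 3D
# point by the direct linear transform (DLT).
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Least-squares 3D point from two projection matrices and two 2D observations."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # de-homogenise

# Example: two cameras offset along the x-axis, marker seen near the image centre.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate_point(P1, P2, np.array([0.1, 0.0]), np.array([0.0, 0.0])))  # ~[1, 0, 10]
```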
SUMMARY
The systems and techniques described here relate to combining visual information provided from one or more images with sensor information. From the combined information, the position of object features represented in the image (or images) may be determined.
In one aspect, a system includes a visual data collector for collecting visual information from a first image of at least a first feature of an object. The system also includes a physical data collector for collecting sensor information provided by at least one sensor attached to the object. The system also includes a computer system that includes a motion data combiner for combining the visual information and the sensor information. The motion data combiner is configured to determine the position of a representation of the first feature in a virtual representation of the object from the combined visual information and sensor information. Various types of virtual representations may be provided from the combined information; for example, one or more poses (e.g., position and orientation) of the object may be represented.
Implementations may include any or all of the following features. For example, the motion data combiner may be configured to determine the position of the first feature in a second image based upon the sensor information provided by the sensor. The motion data combiner may also be configured to determine a region of the second image likely to include a representation of the first feature based upon the sensor information provided by the sensor. The motion data combiner may be configured to track the movement of the first feature based upon the position of the first feature in the first image and the position of the first feature in the second image.
The motion data combiner may be configured to determine the initial position of a second feature from the sensor information provided by the sensor. The motion data combiner may use the visual information to define a pose of the object produced by the sensor information from the sensor. The motion data combiner may use the combined visual information and sensor information to produce a pose representative of the object. The virtual representation of the object may represent a pose of the object during collection of the visual information and the sensor information. The object may be capable of covering a portion of a body, may be a camera, or may take another form. The motion data combiner may be configured to synchronize the visual information and the sensor information in order to combine them. The sensor may be an inertial sensor. The sensor information may include a time series of data provided by the sensor.
In another aspect, a computer-implemented method includes combining visual information from a first image of at least a first feature of an object and sensor information provided by at least one sensor attached to the object. The method also includes determining the position of a representation of the first feature in a virtual representation of the object from the combined visual information and sensor information.
In another aspect, a computer program product tangibly embodied in an information carrier and comprising instructions that when executed by a processor perform a method that includes combining visual information from a first image of at least a first feature of an object and sensor information provided by at least one sensor attached to the object. The method also includes determining the position of a representation of the first feature in a virtual representation of the object from the combined visual information and sensor information.
In still another aspect, a system includes a visual data collector for collecting visual information from a plurality of images. Each image includes content that represents a plurality of markers attached to a body suit. The system also includes a physical data collector for collecting sensor information provided by a plurality of inertial sensors also attached to the body suit. The system also includes a computer system that includes a memory for storing the collected visual information and sensor information. The computer system also includes a processor for executing instructions that perform a method that includes combining the visual information and the sensor information. The method also includes determining, for each image, the respective positions of representations of the plurality of markers in a virtual representation of a pose provided by the body suit from the combined visual information and sensor information.
Details of one or more implementations are set forth in the accompanying drawings and the description below. Other features, aspects and advantages will be apparent from the description and drawings, and from the claims.
DESCRIPTION OF DRAWINGS
FIG. 1 illustrates an individual wearing a motion capture body suit.
FIG. 2 illustrates a motion capture system.
FIGS. 3 and 4 illustrate exemplary motion data combiners.
FIG. 5 illustrates exemplary movement of a motion capture body suit.
FIG. 6 illustrates exemplary operations of a motion data combiner.
Like reference symbols in the various drawings indicate like elements.
DETAILED DESCRIPTION
Referring to FIG. 1, an individual 100 (e.g., an actor) is illustrated wearing a body suit 102 that includes a series of bands (e.g., rigid or semi-rigid support structures) with markers that may be detected by one or more cameras included in a motion capture system. The bands, in turn, are attached to an underlying object, the actor 100 or the body suit 102 of the actor. A band that is a rigid structure is one in which the movement of the markers on the same support structure is negligible relative to the markers' positions from each other. Additionally, a band that is a semi-rigid structure is one in which the movement of the markers on the same support structure is permitted, but the movement is substantially limited within a predetermined range.
The amount of the movement between the markers may be based on several factors, such as the type of material used in the support structure and the amount of force applied to the support structure. For example, a flexible body suit, depending on materials used and methods of construction, may qualify as a "rigid" or "semi-rigid" support structure provided that it demonstrates the appropriate level of rigidity.
Additionally, bands may be attached separately from the suit, such as a headband, or incorporated into gloves, shoes, etc., and these may also qualify as rigid or semi-rigid support structures. Other types of visual markers may be implemented independently or in combination with the bands. For example, retro-reflective spheres and other types of visual markers may be incorporated into the body suit 102.
In some arrangements, one or more other techniques may be implemented to collect visual information, which may be used independently or in concert with the visual marking technique (e.g., bands) illustrated in the figure. Features and existing markings of an object may be used to provide visual information. For example, features of the body suit 102 (e.g., colours, patterns, etc.) or another type of clothing may be used to visually indicate movement of the actor 100. Visual information may also be provided by physical features of the actor 100 (or another type of object). For example, the movement of natural features such as appendages (e.g., arms, legs, etc.), digits (e.g., fingers) and other portions of the actor (e.g., hair, etc.) may be used to provide visual information. As such, visual information may be collected with or without using a body suit. Similarly, bands may not be implemented, for example, if sufficient visual information is provided by the body suit 102 (e.g., from one or more patterns and colours incorporated into the body suit). In another arrangement, visual markers may be attached directly to the actor 100 to provide visual information. For example, markers of one or more sizes may be attached to various portions of the body of the actor (e.g., face, hands, arms, legs, feet, torso, etc.). Such visual information may be used independently or together with visual information provided from markers attached to the actor's clothing, such as the body suit 102.
Other types of visual information may also be captured and used. For example, the visual boundary between an object and a background (e.g., a silhouette) may be captured and provide visual information that represents the pose and motion of an object. As such, one or more silhouettes of the actor 100 or other type of occluding contours may be used for providing visual information.
In some systems, one or more cameras capture two-dimensional (2D) images of the actor 100 or other types of objects in motion. As such, ambiguity may be introduced by motion that is weakly observed or even undetectable in the 2D images, for example, when the actor moves directly toward or away from the camera. Such ambiguity can be reduced by incorporating multiple cameras, e.g., twenty cameras, into the motion capture system. However, situations may occur in which the markers of the body suit 102 or other features are undetectable. For example, if the markers or features are visually obscured (e.g., the actor 100 is immersed in smoke or another type of substance) then it may be difficult to detect, let alone track, the individual markers.
Due to such detection gaps, a virtual representation of the actor (produced from the captured data) may include artefacts in the form of physically impossible motions in an attempt to produce appropriate three-dimensional (3D) points. For example, a leg can suddenly appear to rotate many degrees from one frame to the next frame. While dramatic changes can be easy to spot, subtle mistakes may be difficult to catch.
To reduce such ambiguities, inertial sensors are attached to the actor 100, the body suit 102 or other articles of the actor's clothing. In general, the inertial sensors capture motion information that can be combined with the motion information provided by the visual information from the markers, features, or a combination of the two. For illustration, the motion information captured from the markers is provided to a visual motion sensing system 104 and the motion information from the inertial sensors is provided to a physical motion sensing system 106. By using the two sets of motion information in aggregate, estimates of the 3D location and orientation of the skeleton are improved while spatial ambiguities are reduced. As such, the collected visual and sensor information may be used in combination to increase accuracy in determining poses (e.g., position, orientation) of an object (e.g., the actor). Additionally, other applications such as tracking and marker (or feature) identification may be improved.
Along with using the inertial information to improve the accuracy of the visual information, the visual information can also be used to complement the inertial information, such as by providing a relatively global position and orientation. For example, if the actor's performance included climbing a ladder, the inertial sensors (of the motion sensing system 106) might be unable to accurately recover the actor's vertical translation (up the ladder) and simply result in poses that represent the actor climbing in place. By including the global position and orientation from the visual information, the appropriate vertical translation can be incorporated to provide realistic poses of the actor. As such, the visual information can assist in calibrating out drift that appears in the inertial information. In another example, while the actor holds a canonical T-pose (during inertial information collection), drift in the inertial sensors may appear to show the actor's arms drooping and cause misrepresentations in processed skeleton poses (e.g., arms may appear slightly raised). By calibrating the inertial information against the visual information, one or more offsets may be identified to correct the skeleton poses. Similarly, the visual information may be used for calibrating the information provided from one or more of the sensors.
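As a rough illustration of the drift-calibration idea described above, the sketch below blends occasional absolute fixes from a visual system into an inertially-derived track by maintaining an offset estimate. The update rule, gain and one-dimensional simplification are assumptions for illustration, not the patented procedure.

```python
# Correct low-frequency drift in an inertially-integrated track using sparse
# absolute fixes from the visual system.
def correct_drift(inertial_positions, visual_fixes, gain=1.0):
    """inertial_positions: per-frame values; visual_fixes: sparse dict frame -> absolute fix."""
    offset = 0.0
    corrected = []
    for frame, p in enumerate(inertial_positions):
        if frame in visual_fixes:
            # Pull the offset estimate toward the observed disagreement (gain=1.0 snaps to it).
            offset += gain * ((p - visual_fixes[frame]) - offset)
        corrected.append(p - offset)
    return corrected

# An inertial track that drifts by 0.01 per frame while the subject actually stays still,
# with an absolute visual fix of 0.0 every 10 frames.
drifting = [0.01 * f for f in range(100)]
fixes = {f: 0.0 for f in range(0, 100, 10)}
print(correct_drift(drifting, fixes)[-1])   # ~0.09 instead of 0.99: drift stays bounded
```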
Various types of sensors may be used to attain position information; for example, inertial sensors (e.g., accelerometers, gyroscopes, etc.) or other types of sensors (e.g., magnetometers, etc.) may be incorporated with the body suit 102, individually or in combination. In some arrangements, accelerometers may measure acceleration, the direction of gravity, or another quantity. Other types of sensors may measure similar or different quantities; for example, gyroscopes may be used to measure rates of rotation, and magnetometers may be used to measure magnetic fields. One or more thermometers may be used to measure temperatures (e.g., to compensate for drift in one or more sensors). Various sensor packaging techniques may also be implemented; for example, sensors may be incorporated into relatively small physical packages to conserve space, extended through elongated flexible materials (e.g., embedded in flexible tape), or packaged using other methodologies. In this particular arrangement, one type of inertial sensor (e.g., illustrated with circular representations such as circle 108) is distributed throughout the body suit 102 to provide motion information from the torso and extremities of the actor 100. For example, three inertial sensors are positioned to capture the motion of each leg and arm of the actor 100. Further, inertial sensors are positioned to capture motion from the actor's torso. Additionally, sensors may be placed for capturing motion information from other locations; for example, one or more inertial sensors may be placed on the head and neck of the actor, the digits (e.g., fingers and toes), and the like.
For this arrangement, the markers (used to provide motion information to the visual motion sensing system 104) may be incorporated into the series of bands (e.g., band 110) wrapped around portions of the actor 100. In some arrangements, the marker-to-marker spacing on a support structure such as a band may be application specific (e.g., based on the environment), and thus the spacing does not need to be known a priori. Similar spacing and other types of constraints may also be used for collecting visual information from features (e.g., naturally occurring or synthetic) associated with an object. The system can use one or more cameras to track different markers on the support structures (or object features). From the captured visual information, these markers (or features) may be used to estimate the motion (e.g., position and orientation in 3D space through time) of the bands (or the features). The knowledge that each support structure is rigid (or semi-rigid) may also be used in the estimation process and may facilitate reconstruction of the motion.
In some arrangements, the markers can be made of high-contrast materials, and may also optionally be lit with light emitting diodes (LEDs) or luminescent materials that are visible in the dark. These lighting qualities can enable cameras to capture the marks on the object in low lighting or substantially dark conditions. For example, an actor being filmed may walk from a well lit area to a shadowed area.
The markers may be captured despite the actor's movement into the shadowed area because the marks glow or emit light.
Various types of materials and fabrics may be used to produce the body suit 102; for example, layers of flexible and durable material may be used to produce a suit that can be used over a large range of temperatures and weather conditions (e.g., bright sun, rain, snow, etc.) and that is capable of having inertial sensors attached to it.
In the present example, markers and inertial sensors are attached to the body suit 102 to provide visual information as well as sensor information. However, other types of objects may have markers and inertial sensors (or other similar devices) attached to them for providing visual and sensor information. For example, various types of body coverings may be used in concert with markers and inertial sensors. Various types of costumes, fabrics and coverings (e.g., peelable synthetic body paint) may be implemented. In some arrangements, either or both the markers and the sensors may be positioned directly upon a subject's skin (e.g., an actor), fur (e.g., an animal) or other body portion.
Referring to FIG. 2, an exemplary environment 200 (e.g., movie studio, sound stage, etc.) is illustrated for collecting visual and physical motion information from the actor 100 wearing the body suit 102. While the environment 200 is depicted as an interior location, in some arrangements the environment may be external (e.g., located outside) or a combination of the two types of locations (e.g., for performances that include the actor moving from an interior location to an exterior location).
To collect visual information produced by the actor 100, a visual data collector 202 and a computer system 204 are included in the visual motion sensing system 104 (shown in FIG. 1). In this particular arrangement, the visual data collector 202 includes a camera 206 that is capable of capturing multiple frames of visual information. In general, each frame can be considered a visual sample of a pose of the actor 100 as represented by the markers included in the body suit 102. From the information captured by the camera 206, the pose can be represented as a set of numerical values that describe how the object (e.g., the actor 100) has moved at a particular instant in time. For example, the pose could include numerical values that represent the angle of the actor's right elbow, the angle of his left shoulder, and the actor's global position and global orientation (relative to the environment 200).
Typically, the numerical values that represent the pose are relative to a predefined canonical pose. For example, a pose in which the actor 100 is standing with both arms pointing horizontally outward (referred to as a "T" pose) may be defined as a canonical pose. While one camera 206 is illustrated as being included in the visual motion sensing system 104, two or more cameras (which are typically synchronized for image capture) may also be included for capturing visual information from different perspectives. To use the visual data captured by the camera 206, the computer system 204 executes operations that include receiving, processing and outputting data. In some arrangements, the computer system 204 may also execute operations to control one or more operations of the camera(s) (e.g., control data capture rate, adjust camera orientation, etc.). Operations of the computer system 204 may also include computing the location of the camera and its visual properties, such as the camera's field of view, lens distortion, and orientation, while a sequence of images is being recorded. For example, operations of the computer system 204 can derive the position of the camera 206 given enough markers and information associated with the markers, such as the number, identification, and position of the markers captured by the camera 206.
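Purely as an illustration of the pose description above, a per-frame pose might be held as a set of named joint values plus a global position and orientation, all expressed as offsets from the canonical "T" pose. The field names and types below are assumptions rather than the patent's data model.

```python
# Illustrative container for one frame's pose: joint angles as offsets from the
# canonical T-pose, plus a global root position and orientation.
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Pose:
    frame: int
    joint_angles: Dict[str, float] = field(default_factory=dict)   # radians, offset from T-pose
    root_position: Tuple[float, float, float] = (0.0, 0.0, 0.0)    # global translation
    root_orientation: Tuple[float, float, float] = (0.0, 0.0, 0.0) # e.g. Euler angles

t_pose = Pose(frame=0)                        # canonical pose: all offsets zero
frame_12 = Pose(frame=12,
                joint_angles={"right_elbow": 0.42, "left_shoulder": -0.10},
                root_position=(1.3, 0.0, 2.7))
```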
From the markers and patterned bands of the body suit 102 or features of the actor 100, one or more cameras (e.g., camera 206) record the motion of the actor. To determine the pose of the actor 100, the computer system 204 executes operations to identify each marker that is visible in the captured 2D data and assign a correspondence to the particular marker of the body suit 102 (referred to as tracking).
Next, one or more techniques may be used by the computer system 204 to identify the pose of the actor 100 (referred to as solving), for example, by finding a pose that minimizes the re-projection error of an estimated 3D position of the markers onto the captured image from the camera 206.
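A minimal sketch of this "solving" step, under toy assumptions (a planar two-joint arm observed by an orthographic camera), is shown below: joint angles are found by least-squares minimization of the difference between predicted and observed marker positions. The bone lengths and the use of scipy.optimize.least_squares are illustrative choices, not details from the patent.

```python
# Toy pose solver: recover two joint angles from 2D marker observations by
# minimising the prediction error of a simple forward model.
import numpy as np
from scipy.optimize import least_squares

BONE_LENGTHS = (0.30, 0.25)   # assumed skeleton parameters (metres): upper arm, lower arm

def predict_markers(angles):
    """2D positions of the elbow and wrist markers for given shoulder/elbow angles."""
    a1, a2 = angles
    elbow = np.array([BONE_LENGTHS[0] * np.cos(a1), BONE_LENGTHS[0] * np.sin(a1)])
    wrist = elbow + BONE_LENGTHS[1] * np.array([np.cos(a1 + a2), np.sin(a1 + a2)])
    return np.concatenate([elbow, wrist])

def solve_pose(observed_markers, initial_angles):
    residual = lambda angles: predict_markers(angles) - observed_markers
    return least_squares(residual, initial_angles).x

observed = predict_markers(np.array([0.6, -0.3]))                  # synthetic "image" observations
print(solve_pose(observed, initial_angles=np.array([0.0, 0.0])))   # recovers approximately [0.6, -0.3]
```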
Along with collecting the images from the actor 100, information is also collected from the inertial sensors included in the body suit 102. As represented in FIG. 1, data from the sensors is collected and processed by the physical motion sensing system 106. For some arrangements, each sensor may collect and provide a sequence of data over a period of time. Such time series of data may be collected, for example, over predefined periods of time, initiated by one or more triggering events (e.g., an actor initiating a performance) or controlled by one or more other data collection techniques. As illustrated in FIG. 2, the system 106 includes a physical data collector 208 that is capable of receiving motion information from the inertial sensors. In this particular arrangement, a wireless signal 209 is illustrated as providing the information to the physical data collector 208; however, various data transferring techniques and methodologies may be implemented by the physical motion sensing system 106. For example, multiple wireless signals may be implemented (e.g., each sensor may provide collected motion data in a wireless signal). One or more telemetry, modulation or encryption techniques may also be exploited for data transfer. Hardwire connections may also be implemented independently or in concert with various wireless connections (e.g., Bluetooth, Wi-Fi, Infrared, etc.).
Similar to the visual motion sensing system 104, a computing device is used to process the motion data from the inertial sensors of the physical motion sensing system 106. In this particular arrangement, the computer system 204 is also used to process the inertial sensor data, however, such processing may be provided by a separate computer system dedicated to the physical motion sensing system 106, or the processing may be distributed across multiple computers.
By processing the data from the inertial sensors, pose estimates of the actor may be calculated from the data that correspond in time to the images captured by the camera 206. By producing these two synchronized data sets, the inertial data can be combined with the image data to produce a virtual representation of the actor 100 for various applications such as producing an animation for a movie, video game, etc. Such a representation (referred to as a skeleton), can be used to describe the motion of the actor 100 by using a hierarchy of transformations that approximate the true kinematics of the captured object (e.g., the actor 100). Each transformation corresponds to a joint in the actor's body, such as the shoulder, elbow, knee, etc. These transformations are typically located at the centre of rotation of each of the anatomical joints. Some skeletal joints may be considered an approximation to anatomical kinematics, for example, joints of the spine may be approximated using two or three rotations and translations. A translation component, which is referred to as a bone, is defined as being located between two or more joints.
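The skeleton idea above can be illustrated with a small forward-kinematics sketch: each joint stores a rotation relative to its parent and a fixed bone length, and walking the hierarchy yields world-space joint positions. The 2D simplification and the class layout are assumptions made for brevity.

```python
# Illustrative skeleton hierarchy: joints hold a per-frame angle relative to the
# parent joint and a constant bone length to that parent.
import math

class Joint:
    def __init__(self, name, bone_length=0.0, parent=None):
        self.name = name
        self.bone_length = bone_length   # distance from the parent joint (a skeleton parameter)
        self.parent = parent
        self.angle = 0.0                 # rotation relative to the parent (changes per frame)

    def world(self):
        """Accumulated angle and 2D world position of this joint."""
        if self.parent is None:
            return self.angle, (0.0, 0.0)
        parent_angle, (px, py) = self.parent.world()
        total = parent_angle + self.angle
        return total, (px + self.bone_length * math.cos(total),
                       py + self.bone_length * math.sin(total))

root = Joint("shoulder")
elbow = Joint("elbow", bone_length=0.30, parent=root)
wrist = Joint("wrist", bone_length=0.25, parent=elbow)
root.angle, elbow.angle = math.radians(30), math.radians(45)
print(wrist.world()[1])   # world-space wrist position for this pose
```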
Along with the joints and bones, numerical parameters (referred to as skeleton parameters) are used to describe the skeleton of the actor 100 (or other subject). In general, skeleton parameters include intrinsic parameters of the skeleton that are substantially constant as the subject moves. For example, skeleton parameters can represent quantities such as the length of the actor's left or right upper arm, lower arm, upper leg, lower leg, etc. To combine and use the visual data provided by the camera 206 and the inertial data provided by the sensors included in the body suit 102, the computer system 204 may execute one or more processes. For example, a motion data combiner 210 may be executed for combining the data sets and using the data sets to form a skeleton that virtually represents the movement of the actor 100. To assist with execution, a storage device 212 (e.g., hard drive, CD-ROM, etc.) is in communication with the computer system 204 and may be used for storing and retrieving data. Other types of data stores may also be accessed by the computer system 204; for example, data may be accessed from one or more computer networks or from the Internet to assist the processing of the data sets by the motion data combiner 210.
Along with components (e.g., interface cards, etc.) for receiving user input and data from various sources (e.g., the camera 206, the physical data collector 208, the Internet, other computer systems, etc.), the computer system 204 also includes memory (not shown) and one or more processors (also not shown) to execute processing operations. The storage device 212 is capable of storing data (e.g., visual data, inertial motion data, etc.) and providing the stored data to assist production of one or more virtual representations.
Various operations may be executed to combine the inertial data and the visual data. For example, the data sets may be synchronized using one or more methodologies and techniques, such as manual synchronization in which a user identifies the time-shift between the inertial data and the visual data. Time-coding techniques may also be used, such as a common time signal being associated with each of the data sets. One or more actions from the object being captured (e.g., the actor 100) may also be used as a timing trigger. For example, the actor 100 may perform a predefined action (e.g., waving both arms and jumping up and down) that is clearly identifiable in each data set and used to accurately align the data sets.
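One plausible (but assumed, not patent-specified) way to estimate that time shift automatically is to cross-correlate a per-frame motion-energy signal derived from each stream, since a distinctive trigger action shows up in both:

```python
# Estimate the frame offset between the visual and inertial streams by
# cross-correlating a motion-energy signal from each.
import numpy as np

def estimate_lag(visual_energy, inertial_energy):
    a = visual_energy - np.mean(visual_energy)
    b = inertial_energy - np.mean(inertial_energy)
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)   # negative lag: the inertial event occurs later

rng = np.random.default_rng(0)
spike = np.zeros(200); spike[60] = 1.0                           # the "jump" as seen by the cameras
visual = spike + 0.05 * rng.standard_normal(200)
inertial = np.roll(spike, 7) + 0.05 * rng.standard_normal(200)   # same jump, 7 frames later
print(estimate_lag(visual, inertial))                            # ~ -7: inertial stream lags by 7 frames
```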
Referring to FIGS. 3 and 4, two exemplary arrangements of the motion data combiner 210 (shown in FIG. 2) are illustrated for combining inertial sensor data and visual data to estimate the 3D location and orientation of a virtual representation of the captured object (e.g., actor 100). Referring to FIG. 3, a motion data combiner 300 operates in a sequential manner to combine physical data 302 (e.g., provided by the inertial sensors of the body suit 102) and the visual data 304 (e.g., provided by the camera 206). In some arrangements, data from multiple sensors (e.g., accelerometers, gyroscopes, magnetometers, thermometers, etc.) may be combined to determine the 3D location and orientation of the virtual representation. In general, the serial architecture of the motion data combiner 300 operates in stages on a frame-by-frame basis. A physical pose solver 306 receives the physical data 302 and produces a pose estimate for each frame. Once produced, the estimated pose is provided to a visual pose solver 308, which also receives the visual data from the visual motion sensing system (e.g., captured by the camera 206). The visual pose solver 308 uses the pose estimate (from the physical pose solver 306) to provide one or more locations at which to detect markers and initiate tracking of movement. The pose provided by the physical pose solver 306 may also be used by the visual pose solver 308 to produce a pose representation that includes aspects and features of the captured object that may not be visible to the visual data collector (e.g., the camera 206). The absence of such information in the visual data 304 may be the result of occlusion, object movement not detectable in the two-dimensional visual data, or similar phenomena. For example, movements of the actor 100 directly toward or away from the camera 206 (e.g., a slight arm movement toward the camera) may not be detected within the visual data 304. However, the acceleration of one or more of the inertial sensors of the body suit 102 (caused by a slight arm movement) may be detected and represented in the physical data 302.
By using the pose produced from the physical data 302, along with the visual data 304, the visual pose solver 308 can provide a refined pose estimate. In one arrangement, the visual pose solver 308 solves for joint angles that substantially minimize the re-projected error of the markers in the visual data 304. Additionally, an error term that reflects the distance in joint space from the pose provided by the physical data 302 may also be minimized. Further, the pose provided from the physical data 302 can be used by the visual pose solver 308 to restrict joint movement (e.g., a joint angle) over time.
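A hedged sketch of this kind of refinement objective follows: the solver minimizes marker re-projection error plus a weighted joint-space distance to the inertially-estimated pose, starting from that pose. The toy forward model, the weighting and the solver choice are assumptions for illustration only.

```python
# Refine an inertially-estimated pose against image evidence: image-space
# residuals plus a joint-space prior pulling toward the physical estimate.
import numpy as np
from scipy.optimize import least_squares

def refine_pose(observed_markers, physical_pose, predict_markers, joint_weight=0.1):
    def residuals(angles):
        reprojection = predict_markers(angles) - observed_markers       # image-space error
        stay_close = np.sqrt(joint_weight) * (angles - physical_pose)   # joint-space prior
        return np.concatenate([reprojection, stay_close])
    return least_squares(residuals, x0=physical_pose).x                 # seeded by the physical solver

# Toy forward model: two joint angles map to the 2D positions of two markers.
predict = lambda a: np.array([np.cos(a[0]), np.sin(a[0]),
                              np.cos(a[0] + a[1]), np.sin(a[0] + a[1])])
observed = predict(np.array([0.50, -0.20]))       # what the camera "saw"
physical = np.array([0.55, -0.15])                # slightly drifted inertial estimate
print(refine_pose(observed, physical, predict))   # pulled back toward [0.50, -0.20]
```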
As illustrated with pathway 310, in some arrangements, the refined pose provided by the visual pose solver 308 can be fed back to the physical pose solver 306 for use with data in subsequent frames. For example, the fed-back refined pose may be used by the physical pose solver 306 to increase the accuracy of initial estimates.
Such feedback data could also be used to update one or more models produced by the physical motion sensing system 106. Such updates may implement various techniques; for example, updates could be provided in a direct manner by overriding previous model states with new estimates, by combining previous model states with new estimates (e.g., by using predetermined or continuously updated confidence values), or by one or more other similar techniques. In some arrangements, differences (referred to as deltas) between the pose originally produced by the physical pose solver 306 and the refined pose may be identified and stored for later retrieval and application to subsequent pose estimates produced by the physical pose solver 306.
While the physical pose solver 306 provides the initial pose in the motion data combiner 300, in some arrangements the visual pose solver 308 may be used to provide an initial pose or multiple initial poses.
Referring to FIG. 4, another exemplary motion data combiner 400 is illustrated that includes an integrated architecture. In particular, the functionality of the physical pose solver 306 and the visual pose solver 308 (shown in FIG. 3) is combined into a pose solver 402. As such, physical data 404 (e.g., provided by the inertial sensors of the body suit 102) and visual data 406 (e.g., provided by the exemplary camera 206) are both provided to the pose solver 402. In this integrated approach, one or more operations (e.g., optimization operations) are executed by the pose solver 402 to identify the pose that appropriately represents the movement information provided by both the inertial sensors (e.g., provided by the physical data 404) and the camera 206 (e.g., provided by the visual data 406). Once identified, a pose estimate (for a particular frame) is provided by the pose solver 402, and physical data and visual data associated with a subsequent frame are received for processing and pose estimation.
While distributed and combined architectures are provided in FIGS. 3 and 4, some implemented architectures may provide additional functionality. For example, pose estimates may be used to improve error detection and the tracking of markers on objects being captured, cameras, etc. In some arrangements, a motion data combiner may operate on data from multiple frames to solve for a pose. For example, physical and visual data collected over a predefined period of time (e.g., a predefined number of frames) may be used for determining one or more poses, or for refining a particular pose or multiple poses. Equivalent numbers of frames of physical and visual data may or may not be used for determining a pose. For example, visual data from a significant number of frames may be used with a relatively smaller amount of physical data, or vice versa. By using multiple frames, information from previous frames can be used to solve for a current pose. Motion information (e.g., velocity, acceleration, etc.) associated with a series of movements of a portion of an actor (e.g., a joint) can be used to solve for a current pose by implementing various signal processing techniques (e.g., applying a Kalman filter). In some arrangements, physical and visual data collected over a period of time (e.g., a series of frames) may be used to simultaneously solve for a series of poses; however, such calculations may be processor and memory intensive.
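As an illustration of the multi-frame idea, the sketch below runs a constant-velocity Kalman filter over a single noisy joint coordinate, so that earlier frames inform the estimate for the current frame. The noise parameters and one-dimensional state are assumed values, not figures from the patent.

```python
# Constant-velocity Kalman filter over one joint coordinate measured per frame.
import numpy as np

def kalman_track(measurements, dt=1.0, process_var=1e-3, meas_var=1e-2):
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity state transition
    H = np.array([[1.0, 0.0]])                   # only the position is measured
    Q = process_var * np.eye(2)
    R = np.array([[meas_var]])
    x, P = np.zeros(2), np.eye(2)
    smoothed = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q            # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)      # update with this frame's measurement
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(x[0])
    return smoothed

noisy = [0.1 * t + 0.05 * np.sin(9.0 * t) for t in np.linspace(0, 5, 50)]
print(kalman_track(noisy)[-1])                   # follows the 0.1*t trend with reduced jitter
```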
Referring to FIG. 5, a portion of the actor's 100 arm is presented in two positions to illustrate movement over a period of time. In particular, over time, the arm moves from one position 500 to a second position 502, which is closer to the camera 206. However, based upon the camera's position and its ability to capture 2D images, the slight forward movement of the arm may not register in the captured visual data. For example, the relative position and orientation of markers 504 and 506 do not significantly change (from the camera's perspective) as the arm moves from position 500 to position 502. As such, the visual motion sensing system 104 (e.g., that includes camera 206 and computer system 204) may not detect the arm movement across the frames captured during the movement. However, inertial sensors 508 and 510, which are also respectively located on the actor's arm, can sense the acceleration due to the arm movement. Once sensed, data representative of the acceleration is provided by the inertial sensors 508, 510 to the physical data collector 208 and correspondingly provided to the computer system 204. From this physical data, the motion data combiner 210 can produce a 3D pose of the actor (including the arm) that accounts for the sensed acceleration.
By combining the data provided by these measurements, along with producing poses with increased accuracy, other applications may be improved. For example, the visual motion sensing system 104 may also be capable of tracking markers from frame to frame. By using the data provided by the inertial sensors, tracking may be improved along with system robustness. For example, the inertial sensors (e.g., sensors 508 and 510) may sense relatively slight physical movements (e.g., in a particular extremity) while tracking operations on the visual data register a substantial movement (perhaps in the course of just one frame). By using the inertial data to serve as a check and to validate the visual data, erroneous tracking data may be detected and rejected.
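A simple (assumed) form of that consistency check is sketched below: a visually tracked per-frame displacement is rejected when it is far larger than what the nearby inertial sensor reports, with the scale factor and tolerance left as tuning parameters.

```python
# Reject implausible visual track updates by comparing them against the
# displacement implied by the nearby inertial sensor.
def visual_update_is_plausible(visual_displacement, inertial_displacement,
                               tolerance=0.05, scale=2.0):
    """Both displacements are per-frame magnitudes in metres."""
    return visual_displacement <= scale * inertial_displacement + tolerance

print(visual_update_is_plausible(0.02, 0.015))   # True: small, consistent motion
print(visual_update_is_plausible(0.60, 0.010))   # False: likely a mis-tracked marker
```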
In another application, the data from the inertial sensors may be used to determine the approximate location of markers in subsequent frames (e.g., to assist tracking). For example, based upon an initial marker position being identified (e.g., the location of marker 506 at arm position 500) and the motion sensed (e.g., due to the movement from arm position 500 to position 502) by the inertial sensors (e.g., sensors 508 and 510), the approximate location of the marker can be determined by the motion data combiner 210. As such, a region (associated with a subsequent pose or subsequent poses) may be identified for constraining a search for the marker 506 in subsequent frames. As highlighted with box 512, by constraining searching to the region, tracking may be efficiently improved for identifying marker location through a sequence of captured frames.
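The search-region idea can be sketched as follows, under assumed pinhole-camera and window-size parameters: the marker's last 3D position is advanced by the inertially-estimated displacement, projected into the image, and a small window around the projection bounds the search.

```python
# Predict a marker's image location from inertially-sensed motion and return a
# bounding window (in pixels) within which the next frame is searched.
import numpy as np

def predicted_search_window(last_position_3d, inertial_displacement_3d,
                            focal_length_px=1000.0, half_size_px=40):
    predicted_3d = np.asarray(last_position_3d) + np.asarray(inertial_displacement_3d)
    u = focal_length_px * predicted_3d[0] / predicted_3d[2]      # simple pinhole projection
    v = focal_length_px * predicted_3d[1] / predicted_3d[2]
    return (u - half_size_px, u + half_size_px, v - half_size_px, v + half_size_px)

# Marker last seen at (0.2, 0.1, 3.0) m; sensors report it moved ~2 cm toward the camera.
print(predicted_search_window((0.2, 0.1, 3.0), (0.0, 0.0, -0.02)))
```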
The information provided by the inertial sensors (in combination with the visual information) may also be used to improve other aspects of tracking markers. For example, inertial information may be used to determine the appearance of the markers relative to the perspective of the camera 206. Based upon the inertial information, the motion data combiner 210 can estimate the 2D shape of a marker captured by the camera 206. For example, a marker with a circular shape may appear elliptical as the marker moves away from the camera's field of view. Additionally, the information allows the motion data combiner 210 to predict which of the markers (e.g., being tracked) should appear in one or more captured images and which markers should be absent from the images (e.g., due to occlusion, grazing angle, etc.).
The motion information provided by the inertial sensors may also be used to initially identify markers for establishing a virtual representation of an object such as a skeleton. For systems that only use visual information for tracking markers, a virtual representation may need to be posed for a particular frame or a set of frames in order to establish a correspondence between points that form the virtual representation and the markers represented in the captured images. By using the data provided by the inertial sensors, the amount of virtual representation manipulations may be reduced. In general, the inertial sensor data can be used to adjust the virtual representation (e.g., the root position, orientation, etc.) to align with the image data from one frame. Alternatively, the image data could be adjusted by the inertial sensor data to align points of a virtual representation (being held fixed) to the adjusted image data. Once the image data and points of the virtual representation are aligned, the motion data combiner 210 could locate and label the markers by searching the 2D region associated with the projected locations of the markers. In some arrangements, further adjustments may be executed on the virtual representation. For example, if parameters associated with the virtual representation (e.g., skeleton parameters) or the marker positions (e.g., located on the body suit 102) are un-calibrated, the virtual representation may be adjusted and optimized to determine the marker locations (relative to the virtual representation). To further optimize the alignment, additional cameras may also be included in the visual motion sensing system 104 such that only one frame of data may be needed for the alignment.
In the scenarios described with respect to FIGS. 1 and 2, the inertial data (provided by the inertial sensors) and the visual data (provided by the markers) are collected from the performing actor 100. However, other objects (e.g., animals, vehicles, etc.) may also be fitted with markers and inertial sensors (e.g., via a body suit, fabric, etc.) for similar applications such as developing a virtual representation.
The inertial sensors may also be positioned on other objects, such as one or more of the cameras (e.g., the camera 206) being used by the visual motion sensing system 104. For example, by positioning inertial sensors on the camera 206, and placing tracking markers on a fixed surface of the environment (e.g., a wall 214, shown in FIG. 2) within the camera's field of view, the position of the markers can be estimated from frame to frame. By measuring the fixed markers and the inertial data, movement of the camera 206 can be tracked, thereby assisting with the calibration of the camera location. For arrangements in which sensors are not positioned on the camera 206, the markers of the body suit 102 can assist in calibrating the position of the camera. In particular, the relative transformation between the subject (e.g., the actor in the body suit 102) and the camera may be determined; however, the global position may be subject to the vagaries of the physical motion sensing system (e.g., the inertial sensors). Markers may also be positioned in other locations to assist with camera calibration; for example, markers may be positioned in one or more stationary locations (e.g., fixed surfaces, rigid objects, etc.) and captured in images to determine the position of the camera or another object.
In still another arrangement, the motion data combiner 210 may estimate both the position of an object (e.g., wearing a body suit with inertial sensors) and that of an inertially-tracked camera (or cameras). Such estimates may be mutually reinforcing; however, if the position of the camera or the object is needed with respect to another coordinate system (e.g., for another environment or object), a relationship provided by tracking fiducials captured by the camera, or provided by a separate system, may be needed.
Referring to FIG. 6, a flowchart 600 represents some of the operations of the motion data combiner 210. As mentioned, the operations may be executed by a single computer system (e.g., computer system 204) or multiple computing devices. Along with being executed at a single site (e.g., at one computer system), operation execution may be distributed among two or more sites.
Operations include receiving 602 visual information; for example, information may be received from the visual motion sensing system 104 (shown in FIG. 1) or a similar system such as the motion capture system described in United States Patent Application Publication US 2006/0228101, titled "Three-Dimensional Motion Capture", filed 16 March 2006 and published 12 October 2006, which is incorporated by reference herein in its entirety. The received visual information can be provided from markers attached to a body suit that is being worn by a performer such as an actor. Operations of the motion data combiner 210 also include receiving 604 sensor information (e.g., inertial information), for example, from one or multiple inertial sensors attached to the body suit. In one particular arrangement, the inertial sensors are accelerometers that measure the local acceleration. Processing (e.g., integrating) the acceleration data allows position information local to the sensor to be determined.
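A bare-bones sketch of that integration step is given below; it ignores gravity compensation, sensor bias and orientation, all of which a real system would have to handle, so it is only meant to show the double integration from acceleration to local displacement.

```python
# Double-integrate an accelerometer channel (Euler integration) to obtain
# velocity and local displacement over time.
def integrate_acceleration(accel_samples, dt):
    velocity, position = 0.0, 0.0
    velocities, positions = [], []
    for a in accel_samples:
        velocity += a * dt          # acceleration -> velocity
        position += velocity * dt   # velocity -> displacement
        velocities.append(velocity)
        positions.append(position)
    return velocities, positions

# 1 m/s^2 for half a second, then braking at -1 m/s^2 for half a second (dt = 10 ms).
accel = [1.0] * 50 + [-1.0] * 50
_, pos = integrate_acceleration(accel, dt=0.01)
print(pos[-1])   # roughly 0.25 m of net displacement
```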
Once the visual information and the sensor information have been received, operations include combining 606 the visual and sensor information. In some arrangements, combining includes operations such as synchronizing the two information sets. For example, manual user operations, time codes and other methodologies and techniques may be implemented for synchronizing the information.
Once combined, operations include determining 608 the position of one or more of the markers from both information sources. For example, a global position of a marker (e.g., in an environment) can be provided by the visual information (e.g., from a motion capture system) and a relatively local position of the marker (e.g., based on localized movements) can be provided by the position information from the inertial sensor. In some arrangements, additional sources may provide position information; for example, a GPS receiver may be used to provide global position information. As mentioned, the combined position information may be used for locating the position of one or more points on a virtual representation of the captured object. For arrangements in which the markers are placed on a camera or other types of objects, similar position data may be attained for similar applications (e.g., position calibration).
From the received visual and inertial information, operations may also include determining 610 one or more regions for marker searching. For example, position information from one or more inertial sensors can be used to define boundaries or constraints for search regions within the images and frames included in the visual information to locate markers. Such regions may be defined for frames collected during the time period that corresponds to the collection period of the position information from the inertial sensors. Such regions may also be defined for marker searching and identification in one or more subsequent frames. Operations may also include determining 612 the initial position of a marker (or markers) from the position information provided by the inertial sensor. For example, the position of individual points included in a virtual representation (e.g., a skeleton) of the actor (or other object) may be adjusted based upon position information provided by one or more inertial sensors. Once adjusted, the virtual representation can be used to adjust the visual information for locating and labelling markers. Other operations may also include producing 614 a virtual representation (e.g., a skeleton pose) from the combined visual and inertial information.
To perform the operations described in flow chart 600, the motion data combiner 210 may perform any of the computer-implemented methods described previously, according to one implementation. For example, a computer system such as computer system 204 (shown in FIG. 2) may execute operations of the motion data combiner 210. The computer system may include a processor (not shown), a memory (not shown), a storage device (e.g., storage device 212), and an input/output device (not shown). Each of the components may be interconnected using a system bus or other similar structure. The processor is capable of processing instructions for execution within the computer system. In one implementation, the processor is a single-threaded processor. In another implementation, the processor is a multi-threaded processor. The processor is capable of processing instructions stored in the memory or on the storage device to display graphical information for a user interface on the input/output device.
The memory stores information within the computer system. In one implementation, the memory is a computer-readable medium. In one implementation, the memory is a volatile memory unit. In another implementation, the memory is a non-volatile memory unit.
The storage device is capable of providing mass storage for the computer system. In one implementation, the storage device is a computer-readable medium.
In various different implementations, the storage device may be a floppy disk device, a hard disk device, an optical disk device, or a tape device.
The input/output device provides input/output operations for the computer system. In one implementation, the input/output device includes a keyboard and/or pointing device. In another implementation, the input/output device includes a display unit for displaying graphical user interfaces.
The features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The apparatus can be implemented in a computer program product tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output. The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data.
Generally, a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network.
Examples of communication networks include, for example, a LAN, a WAN, and the computers and networks forming the Internet.
The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a network, such as the described one. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the following claims.
When used in this specification and claims, the terms "comprises" and "comprising" and variations thereof mean that the specified features, steps or integers are included. The terms are not to be interpreted to exclude the presence of other features, steps or components.
The features disclosed in the foregoing description, or the following claims, or the accompanying drawings, expressed in their specific forms or in terms of a means for performing the disclosed function, or a method or process for attaining the disclosed result, as appropriate, may, separately, or in any combination of such features, be utilised for realising the invention in diverse forms thereof.
Claims (59)
- CLAIMS: 1. A method comprising: combining visual information from a first image of at least a first feature of an object and sensor information provided by at least one sensor attached to the object; and determining the position of a representation of the first feature in a virtual representation of the object from the combined visual information and sensor information.
- 2. The method of claim 1, further comprising: determining the position of the first feature in a second image based upon the sensor information provided by the sensor.
- 3. The method of claim 2, wherein determining the position of the first feature in the second image includes determining a region of the second image likely to include a representation of the first feature based upon the sensor information provided by the sensor.
- 4. The method of claim 3, wherein the position of the first feature in the first image and the position of the first feature in the second image are used to track the movement of the first feature.
- 5. The method of any one of claims 1-4, further comprising: determining the initial position of at least a second feature from the sensor information provided by the sensor.
- 6. The method of any one of claims 1-5, wherein the sensor information from the sensor represents orientation information local to the sensor.
- 7. The method of any one of claims 1-5, wherein the visual information represents global position information.
- 8. The method of any one of claims 1-7, wherein the virtual representation of the object represents a pose of the object during collection of the visual information and the sensor information.
- 9. The method of any preceding claim, wherein the object is capable of covering a portion of a body.
- 10. The method of any preceding claim, wherein the object is a camera.
- 11. The method of any preceding claim, wherein combining the visual information and the sensor information includes synchronizing the visual information and the sensor information.
- 12. The method of any preceding claim, wherein the virtual representation of the object is an online character of a video game.
- 13. The method of any preceding claim, wherein the sensor is an inertial sensor.
- 14. The method of any preceding claim, wherein the sensor information includes a time series of data provided by the sensor.
- 15. The method of any preceding claim, wherein the visual information represents a silhouette of the object.
- 16. The method of any preceding claim, wherein the first feature is a naturally occurring feature.
- 17. The method of any preceding claim, wherein the first feature is a synthetic feature.
- 18. The method of claim 17, wherein the first feature is a marker attached to the object.
- 19. The method of any preceding claim, wherein at least one of the visual information and the sensor information is used to calibrate the virtual representation of the object.
- 20. A computer program comprising computer program code adapted to perform the steps of any one of claims 1-19 when run on a computer.
- 21. The computer program of claim 20, embodied on a computer readable medium.
- 22. A system, comprising: a visual data collector for collecting visual information from a first image of at least a first feature of an object; a physical data collector for collecting sensor information provided by at least one sensor attached to the object; and a computer system including a motion data combiner for combining the visual information and the sensor information, wherein the motion data combiner is configured to determine the position of a representation of the first feature in a virtual representation of the object from the combined visual information and sensor information.
- 23. The system of claim 22, wherein the motion data combiner is further configured to determine the position of the first feature in a second image based upon the sensor information provided by the sensor.
- 24. The system of claim 23, wherein the motion data combiner is configured to determine a region of the second image likely to include a representation of the first feature based upon the sensor information provided by the sensor.
- 25. The system of any one of claims 22-24, wherein the motion data combiner is configured to track the movement of the first feature based upon the position of the first feature in the first image and the position of the first feature in the second image.
- 26. The system of any one of claims 22-25, wherein the motion data combiner is configured to determine the initial position of a second feature from the sensor information provided by the sensor.
- 27. The system of any one of claims 22-26, wherein the motion data combiner uses the visual information to define a pose of the object produced by the sensor information from the sensor.
- 28. The system of any one of claims 22-27, wherein the motion data combiner uses the combined visual information and sensor information to produce a pose representative of the object.
- 29. The system of any one of claims 22-28, wherein the virtual representation of the object represents a pose of the object during collection of the visual information and the sensor information.
- 30. The system of any one of claims 22-29, wherein the object is capable of covering a portion of a body.
- 31. The system of any one of claims 22-30, wherein the object is a camera.
- 32. The system of any one of claims 22-31, wherein the motion data combiner is configured to synchronize the visual information and the sensor information to combine the visual information and the sensor information.
- 33. The system of any one of claims 22-32, wherein the sensor is an inertial sensor.
- 34. The system of any one of claims 22-33, wherein the sensor information includes a time series of data provided by the sensor.
- 35. A method comprising: combining visual information from a first image of at least a first feature of an object and sensor information provided by at least one sensor attached to the object; and determining the position of a representation of the first feature in a virtual representation of the object from the combined visual information and sensor information.
- 36. The method of claim 35 further comprising determining the position of the first feature in a second image based upon the sensor information provided by the sensor.
- 37. The method of claim 36, wherein determining the position of the first feature in the second image includes determining a region of the second image likely to include a representation of the first feature based upon the sensor information provided by the sensor.
- 38. The method of claim 37, wherein the position of the first feature in the first image and the position of the first feature in the second image are used to track the movement of the first feature.
- 39. The method of any one of claims 35-38, wherein the method further comprises: determining the initial position of a second feature from the sensor information provided by the sensor.
- 40. The method of any one of claims 35-39, wherein the sensor information from the sensor represents orientation information local to the sensor.
- 41. The method of any one of claims 35-40, wherein the visual information represents global position information.
- 42. The method of any one of claims 35-41, wherein the virtual representation of the object represents a pose of the object during collection of the visual information and the sensor information.
- 43. The method of any one of claims 35-42, wherein the object is capable of covering a portion of a body.
- 44. The method of any one of claims 35-43, wherein the object is a camera.
- 45. The method of any one of claims 35-44, wherein combining the visual information and the sensor information includes synchronizing the visual information and the sensor information.
- 46. The method of any one of claims 35-45, wherein the virtual representation of the object is an online character of a video game.
- 47. The method of any one of claims 35-46, wherein the sensor information includes a time series of data provided by the sensor.
- 48. A computer program comprising computer program code adapted to perform the steps of any one of claims 35-47 when run on a computer.
- 49. The computer program of claim 48, embodied on a computer readable medium.
- 50. A system comprising: a visual data collector for collecting visual information from a plurality of images, wherein each image includes content that represents a plurality of markers attached to a body suit; a physical data collector for collecting sensor information provided by a plurality of inertial sensors also attached to the body suit; and a computer system including a memory for storing the collected visual information and sensor information, and a processor for executing instructions that perform a method comprising: combining the visual information and the sensor information, and determining, for each image, the respective positions of representations of the plurality of markers in a virtual representation of a pose provided by the body suit from the combined visual information and sensor information.
- 51. The system of claim 50, wherein the method further comprises: tracking the movement of the plurality of markers across the plurality of images.
- 52. The system of any one of claims 50-51, wherein determining the respective positions of the representations of the plurality of markers includes determining a region in at least one of the plurality of images that is likely to include a representation of the at least one of the plurality of markers based upon the sensor information provided by the plurality of inertial sensors.
- 53. The system of any one of claims 50-52, wherein determining the respective positions of the representations of the plurality of markers includes determining initial positions for each of the plurality of markers based upon the collected sensor information.
- 54. The system of any one of claims 50-53, wherein the visual data collector further comprises: one or more cameras for collecting the plurality of images, wherein one or more inertial sensors are attached to each of the one or more cameras for collecting calibration information.
- 55. The system of any one of claims 50-54, wherein the visual data collector further comprises: one or more cameras for collecting the plurality of images, wherein the sensor information is used to calibrate the position of at least one of the cameras.
- 56. The system of any one of claims 50-55, wherein the method executed by the processor further comprises: producing a series of poses of the virtual representation of the body suit based upon the combined visual information and sensor information.
- 57. A method as hereinbefore described, with reference to the accompanying drawings.
- 58. A system as hereinbefore described, with reference to the accompanying drawings.
- 59. Any novel feature or combination of features described herein.
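By way of non-limiting illustration of claims 1-4 (and the corresponding system claims 22-25), the sketch below shows one possible way to use sensor information to bound the image region searched for a feature and to blend a visual detection with a sensor-derived prediction. All function names, the pinhole projection, the small-angle prediction and the inverse-variance weighting are assumptions chosen for illustration; they are not the claimed implementation.

```python
import numpy as np

def project(point_3d, focal_length=1000.0, center=(640, 360)):
    """Project a 3-D point (camera coordinates, metres) onto the image plane (pixels)."""
    x, y, z = point_3d
    return np.array([focal_length * x / z + center[0],
                     focal_length * y / z + center[1]])

def predict_search_region(last_position_3d, angular_velocity, dt, lever_arm, margin_px=40):
    """Use inertial data to predict where a feature should appear in the next image,
    returning a pixel bounding box (x_min, y_min, x_max, y_max) to search."""
    # Small-angle rotation applied to the sensor-to-marker offset (lever arm).
    rotation_vector = angular_velocity * dt
    displaced = last_position_3d + np.cross(rotation_vector, lever_arm)
    u, v = project(displaced)
    return (u - margin_px, v - margin_px, u + margin_px, v + margin_px)

def fuse_estimates(visual_xy, sensor_xy, visual_var=4.0, sensor_var=25.0):
    """Combine the visual detection with the sensor-derived prediction by
    inverse-variance weighting (both given in pixel coordinates)."""
    w_visual = 1.0 / visual_var
    w_sensor = 1.0 / sensor_var
    return (w_visual * np.asarray(visual_xy) + w_sensor * np.asarray(sensor_xy)) / (w_visual + w_sensor)

# Example: a marker 2 m in front of the camera, with the sensor reporting a slow rotation.
last_marker = np.array([0.1, -0.2, 2.0])   # metres, camera frame
gyro = np.array([0.0, 0.5, 0.0])           # rad/s
region = predict_search_region(last_marker, gyro, dt=1 / 60, lever_arm=np.array([0.05, 0.0, 0.0]))
fused = fuse_estimates(visual_xy=(712.0, 255.0), sensor_xy=(716.0, 252.0))
print(region, fused)
```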
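Claims 11, 14, 32, 34, 45 and 47 refer to combining the visual information with a time series of sensor data, which requires the two streams to be synchronized. The minimal sketch below assumes both streams carry timestamps against a common clock and simply resamples the higher-rate inertial stream onto the camera frame times; the function name and the sample rates are illustrative assumptions only.

```python
import numpy as np

def resample_sensor_to_frames(frame_times, sensor_times, sensor_values):
    """Linearly interpolate a high-rate sensor time series (e.g. several hundred Hz
    inertial samples) onto the lower-rate camera frame timestamps, so each image
    has a matching sensor reading for the combination step."""
    sensor_values = np.asarray(sensor_values)
    # Interpolate each sensor channel independently.
    return np.column_stack([
        np.interp(frame_times, sensor_times, sensor_values[:, channel])
        for channel in range(sensor_values.shape[1])
    ])

# Example: 60 Hz frames, 240 Hz three-axis orientation samples over one second.
frame_times = np.arange(0.0, 1.0, 1 / 60)
sensor_times = np.arange(0.0, 1.0, 1 / 240)
sensor_values = np.random.default_rng(0).normal(size=(sensor_times.size, 3))
aligned = resample_sensor_to_frames(frame_times, sensor_times, sensor_values)
print(aligned.shape)   # (60, 3): one interpolated sensor sample per frame
```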
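Claims 5, 26, 39 and 53 describe determining initial feature or marker positions from the sensor information alone. Purely as an illustrative assumption, the sketch below treats the sensors as reporting per-segment orientations of a simple two-segment limb chain and derives marker positions by planar forward kinematics; the segment lengths and the two-segment chain are hypothetical and not part of the claimed subject matter.

```python
import numpy as np

def forward_kinematics(segment_lengths, segment_orientations_deg):
    """Chain 2-D segments end to end: each segment's sensor-reported orientation
    (relative to the previous segment) and known length give the position of the
    marker at the segment's far end, before any image has been processed."""
    positions = [np.zeros(2)]
    cumulative_angle = 0.0
    for length, angle_deg in zip(segment_lengths, segment_orientations_deg):
        cumulative_angle += np.radians(angle_deg)
        step = length * np.array([np.cos(cumulative_angle), np.sin(cumulative_angle)])
        positions.append(positions[-1] + step)
    return positions

# Example: upper arm (0.30 m) and forearm (0.25 m) with sensor-reported joint angles.
print(forward_kinematics([0.30, 0.25], [45.0, -30.0]))
```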
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14197008P | 2008-12-31 | 2008-12-31 | |
US12/506,798 US9142024B2 (en) | 2008-12-31 | 2009-07-21 | Visual and physical motion sensing for three-dimensional motion capture |
Publications (3)
Publication Number | Publication Date |
---|---|
GB0922650D0 GB0922650D0 (en) | 2010-02-10 |
GB2466714A true GB2466714A (en) | 2010-07-07 |
GB2466714B GB2466714B (en) | 2015-02-11 |
Family
ID=41716981
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0922650.7A Active GB2466714B (en) | 2008-12-31 | 2009-12-30 | Visual and physical motion sensing for three-dimentional motion capture |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2466714B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9648271B2 (en) | 2011-12-13 | 2017-05-09 | Solidanim | System for filming a video movie |
US9711006B2 (en) | 2013-03-28 | 2017-07-18 | Aristocrat Technologies Australia Pty Limited | Electronic method of gaming, a game controller and a gaming system |
GB2559809A (en) * | 2017-02-21 | 2018-08-22 | Sony Interactive Entertainment Europe Ltd | Motion tracking apparatus and system |
WO2020035650A1 (en) * | 2018-08-17 | 2020-02-20 | Hiltermann Sean | Augmented reality doll |
WO2021045698A1 (en) * | 2019-09-03 | 2021-03-11 | Ggtek Gelismis Guvenlik Teknolojileri A.S. | A speed detection system |
EP3061092B1 (en) * | 2013-10-24 | 2022-06-22 | Visible Ink Television Ltd. | Motion tracking system |
WO2022207485A1 (en) * | 2021-03-30 | 2022-10-06 | SWORD Health S.A. | Digital assessment of position of motion trackers on a person |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5831260A (en) * | 1996-09-10 | 1998-11-03 | Ascension Technology Corporation | Hybrid motion tracker |
EP1946243A2 (en) * | 2005-10-04 | 2008-07-23 | Intersense, Inc. | Tracking objects with markers |
US20090209343A1 (en) * | 2008-02-15 | 2009-08-20 | Eric Foxlin | Motion-tracking game controller |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060055706A1 (en) * | 2004-09-15 | 2006-03-16 | Perlman Stephen G | Apparatus and method for capturing the motion of a performer |
WO2009007917A2 (en) * | 2007-07-10 | 2009-01-15 | Koninklijke Philips Electronics N.V. | Object motion capturing system and method |
- 2009-12-30: GB application GB0922650.7A filed; published as patent GB2466714B (legal status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5831260A (en) * | 1996-09-10 | 1998-11-03 | Ascension Technology Corporation | Hybrid motion tracker |
EP1946243A2 (en) * | 2005-10-04 | 2008-07-23 | Intersense, Inc. | Tracking objects with markers |
US20090209343A1 (en) * | 2008-02-15 | 2009-08-20 | Eric Foxlin | Motion-tracking game controller |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9756277B2 (en) | 2011-12-13 | 2017-09-05 | Solidanim | System for filming a video movie |
US9648271B2 (en) | 2011-12-13 | 2017-05-09 | Solidanim | System for filming a video movie |
US9711006B2 (en) | 2013-03-28 | 2017-07-18 | Aristocrat Technologies Australia Pty Limited | Electronic method of gaming, a game controller and a gaming system |
EP3061092B1 (en) * | 2013-10-24 | 2022-06-22 | Visible Ink Television Ltd. | Motion tracking system |
GB2559809A (en) * | 2017-02-21 | 2018-08-22 | Sony Interactive Entertainment Europe Ltd | Motion tracking apparatus and system |
EP3363509A1 (en) * | 2017-02-21 | 2018-08-22 | Sony Interactive Entertainment Europe Limited | Motion tracking apparatus and system |
US10545572B2 (en) | 2017-02-21 | 2020-01-28 | Sony Interactive Entertainment Europe Limited | Motion tracking apparatus and system |
GB2559809B (en) * | 2017-02-21 | 2020-07-08 | Sony Interactive Entertainment Europe Ltd | Motion tracking apparatus and system |
WO2020035650A1 (en) * | 2018-08-17 | 2020-02-20 | Hiltermann Sean | Augmented reality doll |
US11360549B2 (en) | 2018-08-17 | 2022-06-14 | Sean HILTERMANN | Augmented reality doll |
WO2021045698A1 (en) * | 2019-09-03 | 2021-03-11 | Ggtek Gelismis Guvenlik Teknolojileri A.S. | A speed detection system |
WO2022207485A1 (en) * | 2021-03-30 | 2022-10-06 | SWORD Health S.A. | Digital assessment of position of motion trackers on a person |
US11908145B2 (en) | 2021-03-30 | 2024-02-20 | Sword Health, S.A. | Digital assessment of position of motion trackers on a person |
Also Published As
Publication number | Publication date |
---|---|
GB0922650D0 (en) | 2010-02-10 |
GB2466714B (en) | 2015-02-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9401025B2 (en) | Visual and physical motion sensing for three-dimensional motion capture | |
CN107833271B (en) | Skeleton redirection method and device based on Kinect | |
US10679360B2 (en) | Mixed motion capture system and method | |
US20100194879A1 (en) | Object motion capturing system and method | |
GB2466714A (en) | Hybrid visual and physical object tracking for virtual (VR) system | |
US8019137B2 (en) | Three-dimensional motion capture | |
US8786680B2 (en) | Motion capture from body mounted cameras | |
CN102323854B (en) | Human motion capture device | |
CN109284006B (en) | Human motion capturing device and method | |
CN105849502A (en) | Motion capture system | |
TW201234278A (en) | Mobile camera localization using depth maps | |
CN110609621B (en) | Gesture calibration method and human motion capture system based on microsensor | |
JP6516109B2 (en) | Operation display system and program | |
US20180216959A1 (en) | A Combined Motion Capture System | |
Ahmed et al. | A visual-inertial approach to human gait estimation | |
Ahmadi et al. | Human gait monitoring using body-worn inertial sensors and kinematic modelling | |
CN112114660A (en) | Method for realizing large-scale movement of virtual world character by utilizing motion of human foot in small space range | |
JP6205387B2 (en) | Method and apparatus for acquiring position information of virtual marker, and operation measurement method | |
Nonnarit et al. | Hand tracking interface for virtual reality interaction based on marg sensors | |
Araujo et al. | Life cycle of a slam system: Implementation, evaluation and port to the project tango device | |
Zhou et al. | Kinematic model aided inertial motion tracking of human upper limb | |
US20210003717A1 (en) | Motion capture system and method | |
Bostanci et al. | Tracking methods for augmented reality | |
CN114332912B (en) | Human motion capturing and joint stress analysis method based on IMU | |
WO2023236353A1 (en) | Method for determining whole body posture of human, determination apparatus thereof and intelligent shoes thereof |