WO2004094943A1 - Motion capturing method, motion capturing device, and motion capturing marker - Google Patents

Motion capturing method, motion capturing device, and motion capturing marker

Info

Publication number
WO2004094943A1
WO2004094943A1 (PCT/JP2003/016080)
Authority
WO
WIPO (PCT)
Prior art keywords
image
components
position information
articulated
articulated body
Prior art date
Application number
PCT/JP2003/016080
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroshi Arisawa
Kazunori Sakaki
Original Assignee
Hiroshi Arisawa
Kazunori Sakaki
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hiroshi Arisawa, Kazunori Sakaki filed Critical Hiroshi Arisawa
Priority to JP2004571106A priority Critical patent/JPWO2004094943A1/en
Priority to AU2003289108A priority patent/AU2003289108A1/en
Publication of WO2004094943A1 publication Critical patent/WO2004094943A1/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches

Definitions

  • The present invention relates to motion capture, which takes the motion of an object in the real world into a computer.
  • For example, mechanical, magnetic, and optical types of motion capture are known.
  • In a mechanical motion capture system, angle detectors or pressure sensors are attached to the performer's body, and the bending angles of the joints are detected to capture the performer's motion.
  • In a magnetic motion capture system, magnetic sensors are attached to each part of the performer's body, the performer moves within an artificially generated magnetic field, and the density and angle of the magnetic lines of force are measured.
  • The movement of the performer is detected by deriving the absolute position of each magnetic sensor.
  • In an optical motion capture system, markers are attached to the places on the performer's body whose movement is to be measured, and the movement of each part is measured from the positions of the markers by imaging them with cameras.
  • A motion capture approach that does not burden the subject has also been proposed.
  • This motion capture approach takes in the motion of the human body without contact by using images from multi-viewpoint cameras and establishing correspondence with a virtual three-dimensional human body model.
  • Documents 1 and 2, for example, describe methods for matching such multi-view images with models.
  • In Document 1, the posture is estimated by superimposing a three-dimensional model on a silhouette image obtained by extracting only the subject from each image and evaluating the overlap.
  • Document 2 describes a method of computing the difference between the current image and the next image and using this difference to determine the posture.
  • The non-contact motion capture described above, which does not use markers, must extract the elements for determining motion from the images alone, and has therefore not reached a practical level in terms of motion extraction accuracy and analysis processing time.
  • The present invention is intended to solve the above-mentioned conventional problems: to reduce the burden of attaching markers in motion capture, to shorten the analysis processing time, and to obtain high extraction accuracy.
  • The present invention reduces the load of marker attachment by reducing the number of markers attached, and reduces the number of candidate postures by using the positional information obtained from the markers together with constraints that relate the components to one another.
  • This shortens the analysis processing time and yields high extraction accuracy.
  • The present invention includes a motion capture method, several aspects of a motion capture device, and a marker suitable for this motion capture.
  • The posture of an articulated body can be defined by the positions of the components constituting the articulated body and the angles between them.
  • The motion of the articulated body can be regarded as the temporal change of this posture. Therefore, the information that motion capture aims to acquire is the position information and angle information of the components that make up the articulated body.
  • The motion capture of the present invention obtains the position information and angle information of the components of an articulated body such as a human body using a small number of markers.
  • Each component of the articulated body is subject to constraints on the distances (lengths) between components and on the angular relationships they can take with one another.
  • For example, the upper arm and the forearm of the human body are connected by the elbow joint, and the lengths of the upper arm and forearm, the distance between them, and the angles the forearm can take relative to the upper arm are limited by the structure of the human body; positions or angles beyond these limits cannot be taken. These distances and angles therefore become constraints on the postures the components can take.
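To make the role of these constraints concrete, here is a minimal sketch, assuming illustrative segment names and numeric limits that are not taken from the patent, of how one joint's distance and angle constraints could be recorded and checked:

```python
from dataclasses import dataclass

# Illustrative joint-constraint record; names and limits are assumptions,
# not values from the patent.
@dataclass
class JointConstraint:
    parent: str            # e.g. "upper_arm"
    child: str             # e.g. "forearm"
    parent_length: float   # fixed segment length (distance constraint), meters
    child_length: float
    min_angle: float       # permitted relative angle range, degrees
    max_angle: float

elbow = JointConstraint("upper_arm", "forearm",
                        parent_length=0.30, child_length=0.27,
                        min_angle=0.0, max_angle=150.0)

def satisfies(c: JointConstraint, relative_angle: float) -> bool:
    # A candidate posture is kept only if the child's angle relative to
    # the parent lies inside the permitted range.
    return c.min_angle <= relative_angle <= c.max_angle
```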
  • Normally, to obtain the position information and angle information of the components of an articulated body such as a human body using markers, a large number of markers must be attached to each component; with only a few markers, sufficient position and angle information cannot be acquired.
  • The present invention instead uses the constraints of each component of the articulated body in order to obtain the position information and angle information of the components with a small number of markers.
  • The motion capture of the present invention extracts, from the position information of certain components obtained from a small number of markers, candidate postures of the other components that satisfy these constraints, and acquires the angle information by superimposing the posture candidates on images of the articulated body.
  • In this way, the number of postures to be evaluated is reduced by narrowing a large set of postures down to those satisfying the constraints determined by the position information. Reducing the number of postures to be evaluated shortens the analysis time and increases the accuracy of posture extraction.
  • In a first aspect of the motion capture method of the present invention, position information of specific sites on the components constituting the articulated body is obtained.
  • Angle information between the components that satisfies the constraints determined by the position information is then obtained based on the position information and on constraints relating the plurality of components to one another.
  • By acquiring position information from the specific sites and acquiring the remaining angle information using the constraints, the posture of the articulated body can be obtained without acquiring position information for all of its components. The posture and motion of the articulated body are determined from the position information and angle information.
  • In a second aspect of the motion capture method of the present invention, distance and angle constraints relating the plurality of components to one another are determined in advance, individually identifiable markers are provided at specific sites on components that are connected with at least one component between them, and images of the articulated body including these markers are acquired.
  • The position information of each specific site is obtained from the position of its marker in the acquired images.
  • The angle information of the components is obtained based on the image of the articulated body, the determined position information, and the constraints relating the plurality of components to one another, by finding, among the inter-component angles satisfying the constraints determined by the position information, the angle information that matches the image of the articulated body. From the position information and the angle information, the posture and motion of the articulated body are determined.
  • Each marker can be individually identified, and its attachment position can be determined by identifying it. Moreover, the components to which markers are attached are arranged with other components between them. As a result, the markers attached to an articulated body such as a human body can be thinned out, and the number of markers required for posture acquisition can be kept small.
  • In a third aspect of the motion capture method of the present invention, constraints relating the plurality of components to one another and a plurality of models representing the postures of the components of the articulated body are determined in advance, and images of the articulated body, including individually identifiable markers provided at specific sites on components connected with at least one component between them, are acquired. The position information of each specific site is obtained from the position of its marker in the acquired images.
  • For the component angle information, the models conforming to the constraints determined by the position information are extracted from the plurality of models, and the model that most closely approximates the image of the articulated body is selected from among them.
  • The angle information of the components of the articulated body is obtained from the angles between the components of the selected model. The posture and motion of the articulated body are determined from this position information and angle information.
  • Conventional markerless motion capture evaluates the superposition with the acquired images for all models without using such constraints, and the number of postures that hierarchically connected components can take is enormous, since it grows as a power of the degrees of freedom of the components. Setting a more detailed model by increasing the number of components constituting the articulated body increases the number of possible postures further.
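In concrete terms: if a model has n degrees of freedom and each degree of freedom is evaluated at P steps, the number of candidate postures is (using the figures given later in the description for the model of FIG. 13):

```latex
M = P^{n}, \qquad \text{e.g. } n = 29,\; P = 100 \;\Rightarrow\; M = 100^{29} = 10^{58}
```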
  • In the motion capture of the present invention, by contrast, the number of models to be evaluated as candidate postures can be reduced, so the analysis time can be shortened, bringing real-time motion analysis within reach.
  • In a fourth aspect of the motion capture of the present invention, images of a human body and of markers attached to specific sites of the human body are acquired, and the markers are individually identified and extracted from these images to detect the specific sites of the human body.
  • Their position information is obtained, the angle information of the components of the human body is obtained by image matching between the acquired images and the models extracted, on the basis of the position information of the specific sites, from models prepared in advance, and the posture of the human body is determined from the obtained position information and angle information.
  • Models are extracted by selecting at least one model that meets the human posture constraints determined by the positions of the specific sites.
  • Image matching between the image and the models is performed by selecting, from among the extracted models, the model that most closely approximates the human body image.
  • To select the approximating model, for example, the distances between the image and vertices placed on the components of the model are computed for each posture, and the model giving the posture with the shortest distance is selected.
  • The angle information can then be obtained from the component angles of the selected model.
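A minimal sketch of this shortest-distance selection, assuming the candidate postures are given as arrays of model vertex coordinates (the data layout is an assumption, not the patent's):

```python
import numpy as np

def select_model(candidate_vertex_sets, image_points):
    """Return the index of the candidate posture whose vertices lie
    closest, in total, to the points observed in the image."""
    best_idx, best_cost = -1, float("inf")
    for idx, verts in enumerate(candidate_vertex_sets):
        # verts: (V, 3) model vertices; image_points: (P, 3) observed points
        d = np.linalg.norm(verts[:, None, :] - image_points[None, :, :], axis=2)
        cost = d.min(axis=1).sum()  # each vertex matched to nearest image point
        if cost < best_cost:
            best_idx, best_cost = idx, cost
    return best_idx
```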
  • The markers emit light so that they can be identified.
  • A first aspect of the motion capture device of the present invention is a device for determining the posture and motion of an articulated body in which a plurality of components are connected to each other by joints. It comprises position detection means for obtaining position information of specific sites on the components constituting the articulated body, and means for obtaining, based on the position information and on constraints relating the plurality of components to one another, the angle information between components that satisfies the constraints determined by the position information. The posture of the articulated body is determined from the detected position information and angle information.
  • A second aspect of the motion capture device of the present invention is a device for determining the posture and motion of an articulated body in which a plurality of components are connected to each other by joints. It comprises
  • storage means for pre-storing constraints relating the plurality of components to one another and a plurality of models representing the postures of the components of the articulated body;
  • image acquisition means for obtaining images of the articulated body including individually identifiable markers provided at specific sites on components connected with at least one component between them; position information detection means for obtaining the position information of the specific sites from the positions of the markers in the images;
  • model selection means for selecting, from the plurality of models, the models that meet the constraints of the articulated body determined by the positions of the specific sites; and matching means for extracting, from the selected models, the model closest to the image of the articulated body and obtaining the angle information of the components of the articulated body from the angles between the components of the extracted model. The posture and motion of the articulated body are determined from the position information and angle information.
  • A marker suitable for the motion capture of the present invention is a marker for identifying a specific site of an articulated body in motion capture that determines the posture and motion of an articulated body in which a plurality of components are connected to each other.
  • A light shield can be provided between the first light emitting diode and the second light emitting diode of the marker.
  • This light shield prevents the two emission colors from mixing on the image. When two colors mix, they appear as a different color, which makes marker identification difficult and causes false recognition.
  • The light shield prevents the mixing of the two colors on the image and makes marker identification easier.
  • FIG. 1 is a view for explaining the outline of the motion capture of the present invention.
  • FIG. 2 is a flow chart for explaining the procedure of posture determination by the motion capture of the present invention.
  • FIG. 3 is a schematic view of components for explaining the procedure of posture determination by the motion capture.
  • FIG. 4 is a diagram for explaining the components of an articulated body.
  • FIG. 5 is a flow chart for explaining an example of the calculation procedure for position information.
  • FIG. 6 is a color model of the double hexagonal pyramid of the HSI color system.
  • FIG. 7 is a view for explaining the extraction of specific sites and an example of the calculation procedure for position information.
  • FIG. 8 is a view for explaining the motion capture device of the present invention.
  • FIG. 9 is a view for explaining the posture of a human body as handled by the motion capture of the present invention.
  • FIG. 10 is a hierarchy diagram for explaining the marker mounting positions used when the posture and motion of the human body are determined by the motion capture of the present invention.
  • FIG. 11 is a diagram for explaining a marker suitable for the motion capture of the present invention.
  • FIG. 12 is a diagram for explaining a marker suitable for the motion capture of the present invention.
  • FIG. 13 is a view showing an example of a human body model.
  • FIG. 1 is a view for explaining the outline of the motion capture of the present invention.
  • First, multi-view video is acquired (A in FIG. 1).
  • Multi-view images are acquired by arranging multiple cameras; the images taken by the cameras are synchronized, for example, in frame units (A in the figure).
  • The acquired multi-view video is processed almost in real time to obtain the position information and angle information of the articulated body that is the object, from which its posture and motion are determined.
  • The position information is obtained by extracting a specific site of the articulated body from each image of the multi-view video and computing its position (B in the figure).
  • A specific site is any site defined on a component of the articulated body.
  • Since the position information obtained by the motion capture of the present invention relates only to specific sites defined on components of the articulated body, the posture of the other components cannot be determined from this position information alone.
  • Therefore, the present invention uses the angle information of each component, in addition to the position information, as an element for determining the posture of the object.
  • Each image of the multi-view video is superimposed on predetermined models (D in the figure), the model matching the image is extracted, and the angle information is obtained from the angles of the components of that model (C in the figure).
  • Extracting the model that most closely matches the image by superimposing the image on the models is called matching.
  • The present invention uses the constraints that determine the relationships between the components of the articulated body when extracting the angle information.
  • In an articulated body, there are relationships between a specific site and the other components that follow from the characteristics of the articulated body; once the position of the specific site is determined, the other components can only take postures within the ranges defined by these relationships, which act as constraints, and cannot take postures that deviate from them.
  • The motion capture of the present invention reduces the time required for matching by narrowing down the models to be matched using the position information of the specific sites and the above-mentioned constraints.
  • A multi-viewpoint image of the articulated body is acquired by a plurality of cameras arranged around the object.
  • The multi-viewpoint images can be acquired as synchronized frame images.
  • A three-dimensional position and angles are obtained from the plurality of images acquired as these multi-viewpoint images.
  • FIG. 3 (a) shows, in simplified form, one image acquired from the multi-view video.
  • The articulated body shown in the figure is a configuration example in which three components are connected by joints so that their mutual angular relationships can change.
  • FIG. 4 is a view for explaining the components of this articulated body.
  • The articulated body 10 is composed of components 11, 12, and 13; component 11 and component 12 are rotatably connected at joint 14, and component 12 and component 13 are rotatably connected at joint 15.
  • One end of component 11 is designated as specific site 16, and one end of component 13 is designated as specific site 17.
  • These specific sites are provided on components 11 and 13, which sandwich component 12; as long as the components are connected, one or more components may be placed between them.
  • The lengths of the components 11, 12, and 13 and the distances between them are determined by the structure of the articulated body, for example that of a human body.
  • The angular relationships among components 11, 12, and 13 are likewise limited to certain ranges.
  • The range of the angle θ10 that component 11 can take at specific site 16 is -θ11 to θ12, and the range of the angle θ40 that component 13 can take at specific site 17 is -θ41 to θ42.
  • The range of the angle θ20 that component 12 can take with respect to component 11 at joint 14 is -θ21 to θ22, and the range of the angle θ30 that component 13 can take with respect to component 12 at joint 15 is -θ31 to θ32.
  • Clockwise angles are shown as negative.
  • A marker M1 is provided at specific site 16 and a marker M2 at specific site 17.
  • Thus, once the position of a site is determined, each component constituting the articulated body is limited in the postures it can take, and these lengths and angles can be regarded as constraints on the possible postures.
  • Although the relationships between the components shown in FIG. 4 are expressed in two dimensions, these relationships and constraints are set in three dimensions from the images obtained from the multi-viewpoint video (step S1).
  • Next, position information is obtained from the images acquired in step S1.
  • A specific site is extracted from each image (step S2), and the position information of the extracted specific site is computed.
  • FIG. 3 (b) shows, in simplified form, the state in which the specific sites have been extracted from the image of the articulated body of FIG. 3 (a).
  • The position information (x1, y1, z1) is obtained by extracting the marker M1 provided at specific site 16, and the position information (x2, y2, z2) is obtained by extracting the marker M2 provided at specific site 17.
  • An RGB signal represents signal strength as gradation values of, for example, 0 to 255; since these gradation values mix elements such as brightness and color tone, the markers provided at the specific sites cannot be identified and extracted directly by color.
  • The motion capture of the present invention therefore color-codes the markers and identifies and extracts the marker provided at each specific site by its color. The markers in the image thus need to be distinguishable by color.
  • The HSI color system has three attributes: hue (H), saturation (S), and lightness (I: intensity).
  • FIG. 6 is a color model of the double hexagonal pyramid of the HSI color system.
  • Hue (H) can be represented by angle values, with red at 0° and yellow, green, cyan, blue, and magenta following in order.
  • This number of hues is an example; another number of hues may be used.
  • Lightness (I) can also be expressed numerically.
  • In order to identify hue (H) and lightness (I) according to the colors emitted by the predetermined markers, threshold values are determined in advance, and the hue and lightness signals obtained by conversion are compared against these thresholds.
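As a rough sketch of this conversion and thresholding, using Python's standard HLS conversion as a stand-in for the HSI system (the threshold values are illustrative assumptions):

```python
import colorsys

def is_marker_pixel(r, g, b, hue_center=0.0, hue_tol=15.0, min_lightness=0.8):
    """Test one RGB pixel (0-255 per channel) against a marker's preset
    hue/lightness thresholds. hue_center follows FIG. 6: red = 0 degrees."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    hue_deg = h * 360.0
    # angular distance on the hue circle
    dh = min(abs(hue_deg - hue_center), 360.0 - abs(hue_deg - hue_center))
    return l >= min_lightness and dh <= hue_tol
```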
  • FIG. 7 (a) schematically shows an example of extracting high-intensity parts from the image (step S12).
  • The hue signal (H) is compared with the preset hue thresholds to extract, from the high-intensity parts of the image, the regions where the specific sites (markers) exist.
  • FIG. 7 (b) schematically shows an example in which the specific sites have been extracted from the high-intensity parts of the image (step S13).
  • The colored part of a specific site such as a marker has an extent and is therefore detected as a region spanning multiple pixels on the image. To determine the position of the specific site, its position is calculated from the region detected in step S13.
  • Any calculation method can be used, such as taking the luminance peak position, the centroid of the region, or a centroid weighted by luminance, as sketched below.
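For example, the luminance-weighted centroid mentioned above can be computed as follows (a sketch; the array layout is assumed):

```python
import numpy as np

def weighted_centroid(pixel_xy: np.ndarray, luminance: np.ndarray) -> np.ndarray:
    """Sub-pixel marker position from a detected region.
    pixel_xy: (N, 2) pixel coordinates; luminance: (N,) weights."""
    w = luminance / luminance.sum()
    return (pixel_xy * w[:, None]).sum(axis=0)
```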
  • A light emitter such as a light emitting diode does not necessarily emit uniformly, and even within the same marker there may be a color difference between the emitting portion and its periphery. Because of this color variation, a region that is actually connected may appear divided. To compensate, the region on the image may be dilated or eroded.
  • The position information of the specific sites is obtained from cameras at different positions in the multi-viewpoint setup, for each camera and each frame.
  • FIG. 7 (c) schematically shows an example of the positions of specific sites extracted in the image (step S14).
  • The position information obtained so far is in the camera coordinate system.
  • The camera coordinate system and the real space coordinate system can be related by a combination of a rotation matrix and a translation matrix.
  • The nonlinearity introduced by the optical system between positions in the camera coordinate system and positions projected into the real space coordinate system may be corrected using parameters such as the focal length and a lens distortion coefficient.
  • Tsai's method is known as a way of converting marker positions in an image from the camera coordinate system to the real coordinate system while taking such distortion into account.
  • Calculating the three-dimensional position of a marker in the real coordinate system corresponds, for example, to finding the intersection of the straight lines extending from each camera image in FIG. 7 (d).
  • Errors in the camera parameters can shift the angles of these lines, so they may not intersect at a single point.
  • In that case, the midpoint of the shortest segment between the lines may be taken as their point of intersection.
  • The midpoint is determined for each combination of the n lines, and the average of these midpoints is taken as the three-dimensional position of the marker (step S16). Position information is thereby obtained for each specific site (step S3).
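A minimal sketch of this midpoint construction, assuming each camera contributes a ray given by its center and a unit direction toward the marker (non-parallel rays assumed):

```python
import numpy as np
from itertools import combinations

def ray_midpoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1 + s*d1 and p2 + t*d2."""
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # nonzero for non-parallel rays
    s = (b * e - c * d) / denom    # closest-point parameter on ray 1
    t = (a * e - b * d) / denom    # closest-point parameter on ray 2
    return ((p1 + s * d1) + (p2 + t * d2)) / 2.0

def triangulate(origins, directions):
    """Average the pairwise midpoints over all n rays (cf. step S16)."""
    mids = [ray_midpoint(origins[i], directions[i], origins[j], directions[j])
            for i, j in combinations(range(len(origins)), 2)]
    return np.mean(mids, axis=0)
```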
  • In steps S4 to S8, the angle information of the components is determined.
  • First, the models representing possible postures are selected.
  • The models are prepared in advance for various postures of the articulated body that is the object, and the lengths, distance relationships, and angular relationships of each component of the articulated body, together with their tolerances, are also determined.
  • The constraints may, for example, define the range of mutually permissible angles of each component with respect to the position of a specific site.
  • The tolerance ranges can be expressed numerically or as functions representing the mutual relationships.
  • FIG. 3 (c) schematically shows the model selection. For example, if there are models c-1, c-2, and c-3 for a component, a model satisfying the constraints is chosen from among them. Here, a case is shown where models are selected with the possible angular range of each component as the constraint.
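A hedged sketch of this selection step (step S4); the model interface `component_angles` is a hypothetical helper, not an API from the patent:

```python
def select_candidates(models, constraints, site_positions):
    """Keep only the stored posture models whose component angles fall
    inside the ranges the constraints permit once the marker positions
    fix the specific sites."""
    candidates = []
    for m in models:
        angles = m.component_angles(site_positions)  # hypothetical helper
        ok = all(c.min_angle <= angles[c.child] <= c.max_angle
                 for c in constraints)
        if ok:
            candidates.append(m)
    return candidates
```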
  • Next, the models selected in step S4 are superimposed on the image to match the image against each model.
  • For the matching, for example, a three-dimensional model is placed in a virtual space, and the distribution of lightness and saturation over the model is obtained as a histogram while the posture of the model is varied.
  • The corresponding histogram of the lightness and saturation distribution is obtained from the image, and the closeness of the histograms is evaluated (step S5).
  • For the evaluation, an evaluation function is defined and minimized using, for example, a nonlinear least squares method or a genetic algorithm.
  • For example, an evaluation value can be obtained by assuming certain parameter values (angles or positions), varying the parameters, repeating the evaluation based on the results, and finding the minimum of the evaluation value.
  • The evaluation function represents the degree of deviation determined by comparing the model with the silhouette image and with the result of the optical flow (step S6). The model closest to the image is found by matching the image against all of the selected models.
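The patent names nonlinear least squares and genetic algorithms; as a neutral illustration of the repeat-and-keep-the-minimum idea, here is a simple coordinate-descent sketch (the routine itself is an assumption, not the patent's method):

```python
def minimize_deviation(evaluate, params, step=1.0, iters=100):
    """Vary the parameters (angles or positions), re-evaluate the
    image/model deviation, and keep the running minimum."""
    params = list(params)
    best = evaluate(params)
    for _ in range(iters):
        for i in range(len(params)):
            for delta in (-step, step):
                trial = params.copy()
                trial[i] += delta
                value = evaluate(trial)
                if value < best:
                    best, params = value, trial
        step *= 0.9  # shrink the search as the estimate settles
    return params, best
```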
  • FIG. 3 (d) schematically shows this matching process (step S7).
  • FIG. 3 (e) shows how the angle information (θ1 to θ4) is obtained from the matched model (step S8).
  • The posture of the articulated body is determined from the position information obtained in step S3 and the angle information obtained in step S8 (step S9). By performing this posture determination for each acquired frame, the motion of the articulated body can be determined in almost real time.
  • The motion capture device 1 comprises image acquisition means 2, which acquires multi-viewpoint video of an articulated body such as a human body and obtains images, and computing means 3, which obtains the position information and angle information of the components from these images.
  • The posture and motion of the articulated body are determined from the position information and angle information.
  • The image acquisition means 2 obtains images of the articulated body including the individually identifiable markers provided at specific sites on components connected with at least one component between them.
  • A plurality of cameras are arranged to acquire the multi-view images, and the image frames are synchronized.
  • The computing means 3 comprises detection means 3a for detecting the specific sites (markers) in the images;
  • position information detection means 3b for obtaining the position information of the specific sites (markers); storage means 3c for pre-storing the constraints relating the plurality of components to one another and a plurality of models representing the postures of the components of the articulated body; model selection means 3d for selecting, from the plurality of models, the models that fit the constraints of the articulated body determined by the positions of the specific sites; and
  • matching means 3e for extracting, from the selected models, the model that most closely approximates the image of the articulated body and obtaining the angle information of the components of the articulated body from the angles between the components of the extracted model.
  • Each means provided in the computing means 3 describes a function performed by the computing means; dedicated hardware for each function is not necessarily provided, and the functions can be implemented in software.
  • The human body can be regarded as an articulated body in which a plurality of components are connected by joints, and these components can be expressed hierarchically.
  • The components can be set arbitrarily.
  • FIGS. 9 and 10 show one such model; models can also be set by combining other components.
  • In this model, the waist (lumbar region) is the top layer.
  • For the upper body, the chest, then the head and upper arms (left and right upper arms), then the forearms (left and right forearms), and then the hands (left and right hands) are connected in order toward the lower layers.
  • For the lower body, the thighs (left and right thighs), the shins (left and right shins), and the feet (left and right feet) are connected in order toward the lower layers.
  • This model is prepared for the human body to be measured, and each layer represented by the model corresponds to a component of the articulated body described above. For each component, constraints on the lengths, the distances between components, and the angles are set. In this motion capture, multi-view video of the human body is acquired,
  • and the posture and motion of the human body are obtained by computing the position information of the specific sites and the angle information of each component from the captured images according to the process described above.
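A minimal sketch of the hierarchy of FIGS. 9 and 10 as a parent-to-children table; rotating a component affects everything returned by `descendants` (component names follow the list above):

```python
BODY_HIERARCHY = {
    "waist": ["chest", "left_thigh", "right_thigh"],
    "chest": ["head", "left_upper_arm", "right_upper_arm"],
    "left_upper_arm": ["left_forearm"],   "left_forearm": ["left_hand"],
    "right_upper_arm": ["right_forearm"], "right_forearm": ["right_hand"],
    "left_thigh": ["left_shin"],   "left_shin": ["left_foot"],
    "right_thigh": ["right_shin"], "right_shin": ["right_foot"],
}

def descendants(part: str) -> list[str]:
    """All components whose pose changes when `part` rotates."""
    out = []
    for child in BODY_HIERARCHY.get(part, []):
        out.append(child)
        out.extend(descendants(child))
    return out
```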
  • Markers are attached to specific sites of the human body in order to obtain the position information of those sites.
  • The markers are not attached to all components of the human body; a small number of markers are attached to selected components of the model.
  • The number of markers attached is kept small enough that the subject is not burdened, and the attachment positions are chosen at sites where the markers remain visible in the images even when the human body takes various postures.
  • In addition, an appropriate hierarchical spacing is provided between the components to which markers are attached, in consideration of the computational complexity of the model-image matching process. Matching between models and images benefits from the reduction in candidate models provided by the constraints determined by the position information of the specific sites, but the number of possible models grows as the number of components between specific sites increases. Therefore, in determining the posture and motion of the human body, markers are placed on specific sites at intervals that reduce the total amount of computation.
  • For example, a marker is attached to the waist at the top of the hierarchy, markers are attached to the joints between the upper arms and forearms in the upper body, and markers are attached to the joints between the thighs and shins in the lower body.
  • Markers are attached to both sides of the waist, and to the left and right joints between upper arm and forearm and between thigh and shin.
  • The marker 20 comprises a plurality of light emitting diodes (LEDs), including one first light emitting diode 21 and a plurality of second light emitting diodes 22.
  • The first light emitting diode 21 is used to obtain the position information of the specific site,
  • and the second light emitting diodes 22 are used to identify each marker and distinguish its specific site from the other specific sites.
  • The first light emitting diode 21 and the second light emitting diodes 22 have different emission colors, and marker identification is performed by the combination of emission colors.
  • The combinations of emission colors are selected so that the hue angles are symmetrically distributed in the HSI color system and are easily separated by the threshold values.
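For instance, spreading identification hues evenly around the hue circle keeps the per-marker thresholds well separated (a sketch; the even-spacing rule is an assumption consistent with the symmetric-hue idea above):

```python
def marker_hues(num_markers: int) -> list[float]:
    """Evenly spaced hue angles (degrees, red = 0) for marker identification."""
    return [i * 360.0 / num_markers for i in range(num_markers)]

# e.g. marker_hues(4) -> [0.0, 90.0, 180.0, 270.0]
```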
  • The first light emitting diode 21 is disposed on top of a base 23, and the plurality of second light emitting diodes 22 are disposed in a ring around its lower periphery.
  • A power supply 25 for the light emitting diodes is provided on the base 23.
  • The power supply 25 can be, for example, a battery.
  • Reference numeral 26 in the figure is a switch for controlling the connection between the power supply 25 and the light emitting diodes.
  • For example, a removable insulator can be used as the switch; removing the insulator causes the light emitting diodes to emit light.
  • The second light emitting diodes 22 are arranged around the first light emitting diode 21 so that their emission directions are at equal angular intervals.
  • A light emitting diode usually has directivity. Since the human body takes various postures, it is desirable that the marker's emission be omnidirectional so that the marker is well captured by the cameras. The emission directions of the plurality of light emitting diodes are therefore arranged at equal angular intervals.
  • For example, five second light emitting diodes 22 are arranged at 72° intervals.
  • The number of second light emitting diodes 22 can be set arbitrarily, in consideration of conditions such as the relationship between the physical size of the light emitting diodes and the size of the marker, and the detection conditions in the acquired images.
  • The marker 20 includes a light-transmitting cover 24 that internally houses the light emitting diodes 21 and 22, the power supply 25, and the like.
  • The light of the light emitting diodes 21 and 22 is emitted to the outside through the cover 24.
  • The inner or outer surface of the cover 24 may be a scattering surface, or the material of the cover 24 may itself be a light scatterer. By making the cover 24 scattering, the light emitted from the light emitting diodes is diffused by the cover 24, and its visibility to the cameras can be improved.
  • A light shield 28 is provided between the first light emitting diode 21 and the second light emitting diodes 22 so that its installation position can be changed.
  • The light shield 28 ensures that the emission of the first light emitting diode 21 and the emission of the second light emitting diodes 22 do not mix on the image and produce a color different from the emission colors of the diodes. If a different color appears in the marker portion of the image, it may be difficult to identify the marker, or the marker position may be detected incorrectly.
  • The light shield 28 is positioned between the light emitted from the first light emitting diode 21 and the light emitted from the second light emitting diodes 22, thereby separating the images of the first light emitting diode 21 and the second light emitting diodes 22 projected onto the image and preventing the two lights from mixing.
  • The light shield 28 is an annular body with an opening at its center.
  • An annular recessed portion 27 may be provided around the outer peripheral surface of the cover 24, and the light shield 28 may be fitted into the recessed portion 27.
  • The recessed portion 27 may be formed in multiple stages in the vertical direction of the cover 24 so that the mounting position on the cover 24 can be changed.
  • FIGS. 12 (a) and (c) show a state in which the light shield 28 is attached to the lower stage of the recessed portion 27,
  • and FIGS. 12 (b) and (d) show the light shield 28 attached to the upper stage of the recessed portion 27.
  • By changing the mounting position of the light shield 28, the area of the ring of the second light emitting diodes 22 visible in the image can be increased or decreased.
  • The marker image can thus be adjusted to suit the imaging environment, such as the distance between the human body and the cameras, the background colors, and the illumination.
  • Since the number of markers to be attached can be reduced, the burden on the subject is reduced, and position information and angle information can be acquired with little burden.
  • The processing time can be shortened, approaching real-time processing of posture and motion. Alternatively, for the same processing time, the measurement accuracy can be improved.
  • The burden of attaching markers can thus be reduced, the analysis processing time shortened, and high extraction accuracy obtained.
  • The present invention can be used for motion analysis of people and objects and for the formation of virtual spaces, and can be applied in fields such as industry, medicine, and sports.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

In order to acquire the positional information and the angular information of the components of an articulated body, e.g. a human body, using a small number of markers, the constraints on each component of the articulated body are utilized. Candidate postures of the other components satisfying the constraints are extracted from the positional information of certain components obtained from the small number of markers, and the angular information is then acquired by superposing the candidate postures on images of the articulated body.

Description

Description

Motion capture method, motion capture device, and marker for motion capture

Technical Field

The present invention relates to motion capture, which takes the motion of objects in the real world into a computer.

Background Art
In various fields such as industry, medicine, and sports, attempts have been made to take objects in the real world onto a computer and perform various kinds of processing there. For example, information on the movement of people or objects or on the shapes of objects is acquired and used for movement analysis, the formation of virtual spaces, and the like.

However, because the people and objects one actually wants to evaluate work in a variety of environments, those environments are not necessarily suitable for acquiring such information. In addition, to capture events taking place in the real world directly on a computer, the capture must not take up the time of the person or object concerned or of its surroundings, and must not interfere with the work.

Conventionally, a technique called motion capture is known as a method of taking such real-world objects into a computer. Motion capture simulates the motion of a moving body such as a person.

For example, mechanical, magnetic, and optical motion capture systems are known. In a mechanical system, angle detectors or pressure sensors are attached to the performer's body, and the performer's motion is detected by sensing the bending angles of the joints. In a magnetic system, magnetic sensors are attached to each part of the performer's body, the performer moves within an artificially generated magnetic field, and the density and angle of the magnetic lines of force detected by the sensors are used to derive the absolute position of each sensor and thereby detect the performer's motion.

In an optical motion capture system, markers are attached to the places on the performer's body whose movement is to be measured, and the movement of each part is measured from the positions of the markers by imaging them with cameras.

In every one of these methods, detectors, sensors, or markers must be attached to the subject, which burdens the subject. Even optical motion capture, which achieves high accuracy, requires attaching dozens of markers to capture the motion of the whole human body, which limits its applications.

In response, motion capture that places no burden on the subject has also been proposed. It captures the motion of the human body without contact by using images from multi-viewpoint cameras and establishing correspondence with a virtual three-dimensional human body model. Documents 1 and 2, for example, describe methods of matching such multi-view video with models. In Document 1, the posture is estimated by superimposing a three-dimensional model on silhouette images obtained by extracting only the subject from each image and evaluating the overlap. Document 2 describes a method of computing the difference between the current image and the next image and using this difference to determine the posture.
Document 1: Transactions of the Institute of Electronics, Information and Communication Engineers, D-II, Vol. J82-D-II, No. 10, pp. 1739-1749, October 1999, "Attitude determination of a person by motion and formation models".

Document 2: Transactions of the Institute of Electronics, Information and Communication Engineers, D-II, Vol. J80-D-II, No. 6, pp. 1581-1589, June 1997.

Disclosure of the Invention
The non-contact motion capture described above, which does not use markers, must extract the elements for determining motion from the images alone, and has therefore not reached a practical level in terms of motion extraction accuracy and analysis processing time.

This is because the number of postures the human body can take is enormous, so the amount of data handled in image processing is large and it is difficult to extract the correct posture at high speed. For example, when the human body is represented by a three-dimensional human body model as shown in FIG. 13, it is composed of components with rotational freedom about two or three axes, and the posture of the model is expressed by rotating these components. The components form a hierarchy, and the rotation of each component is reflected in the components below it. Therefore, the number of cases M that the components can take is M = P^n, where n is the number of degrees of freedom and P is the number of values evaluated per degree of freedom.

Even for the simple human body model shown in FIG. 13, the total number of degrees of freedom is 29, so evaluating 100 rotation steps per degree of freedom gives an evaluation count of M = 100^29.

Thus, obtaining the posture of an articulated body, in which a plurality of components such as those of the human body are connected by joints, with markerless non-contact motion capture requires an enormous amount of computation because of the large number of possible postures, and it is difficult to do accurately and at high speed. It is even more difficult to process a moving articulated body in real time.

Therefore, among the optical motion capture systems proposed so far, those that use markers have the problem that attaching many markers places a large burden on the subject, while markerless motion capture, though free of that burden, has problems with analysis processing time and extraction accuracy because of the large number of possible postures.

Accordingly, to determine the posture of an articulated body such as a human body in real time, a motion capture system is needed that places little burden on the subject, has a short analysis processing time, and achieves high extraction accuracy.
The present invention is therefore intended to solve the conventional problems described above: to reduce the burden of attaching markers in motion capture, to shorten the analysis processing time, and to obtain high extraction accuracy.

The present invention reduces the burden of marker attachment by reducing the number of markers attached, and reduces the number of candidate postures by using the position information obtained from the markers together with constraints relating the components to one another, thereby shortening the analysis processing time and obtaining high extraction accuracy.

The present invention includes a motion capture method, several aspects of a motion capture device, and a marker suitable for this motion capture.

In determining the posture and motion of an articulated body in which a plurality of components are connected by joints, the posture of the articulated body can be defined by the positions of its components and the angles between them, and its motion can be regarded as the temporal change of this posture. The information that motion capture aims to acquire is therefore the position information and angle information of the components making up the articulated body.

The motion capture of the present invention obtains the position information and angle information of the components of an articulated body such as a human body using a small number of markers.

Here, each component of the articulated body is subject to constraints on the distances (lengths) between components and the angular relationships they can take with one another. For example, the upper arm and forearm of the human body are connected by the elbow joint, and the lengths of the upper arm and forearm, the distance between them, and the angles the forearm can take relative to the upper arm are limited by the structure of the human body; positions or angles beyond these limits cannot be taken. These distances and angles therefore constitute constraints on the postures the components can take. Normally, obtaining the position and angle information of the components of an articulated body with markers requires attaching many markers to each component; with only a few markers, sufficient position and angle information cannot be acquired, and the posture of the articulated body cannot be determined. To obtain this information with a small number of markers, the present invention exploits the constraints of each component of the articulated body. The motion capture of the present invention extracts, from the position information of certain components obtained from a small number of markers, candidate postures of the other components that satisfy the constraints, and acquires the angle information by superimposing these posture candidates on images of the articulated body. By narrowing the large set of postures down to those satisfying the constraints determined by the position information, the number of postures to be evaluated is reduced, which shortens the analysis time and increases the accuracy of posture extraction.
そこで、 本発明のモーショ ンキヤプチャ方法の第 1 の態様は、 複数の 構成要素が互いに関節で接続されてなる多関節体の姿勢及び動作の決定 において、 多関節体を構成する構成要素の特定部位の位置情報を求め、 位置情報と複数の構成要素を互いに関係付ける制約条件とに基づいて位 置情報で定まる制約条件を満たす各構成要素間の角度情報を求め、 位置 情報及び角度情報とにより多関節体の姿勢及び動作を決定する。  Therefore, according to the first aspect of the motion capture method of the present invention, in the determination of the posture and motion of an articulated body in which a plurality of components are jointed with each other, a specific portion of the component constituting the articulated body is determined. Position information is determined, and angle information between each component that satisfies a constraint condition determined by the position information is determined based on the position information and a constraint that relates the plurality of components to each other. Determine body posture and movement.
位置情報を構成要素の特定部位から取得し、 他の角度情報は制約条件 を利用して取得することにより、 多関節体が備える全ての構成要素につ いて位置情報を取得することなく多関節体の姿勢を取得することができ る。  The position information is acquired from the specific part of the component, and the other angle information is acquired using the constraint condition, whereby the articulated body is obtained without acquiring the position information for all the components provided in the articulated body. You can get your attitude.
本発明のモ一ショ ンキヤプチャ方法の第 2 の態様は、 複数の構成要素 'が互いに関節で接続されてなる多関節体の姿勢及び動作の決定において, 複数の構成要素を互いに関係付ける距離及び角度の制約条件を予め定め ておき、 少なく とも一つの構成要素を挟んで連接される構成要素の特定 部位に個々に識別可能なマーカを設け、 このマーカを含む多関節体の画 像を求める。 特定部位の位置情報は、 求めた画像中のマーカの位置から 求める。 また、 構成要素の角度情報は、 画像中の多関節体の像と求めた 位置情報と複数の構成要素を互いに関係付ける制約条件とに基づいて行 レ 位置情報で定まる制約条件を満たす各構成要素間の角度情報の中か ら多関節体の像に一致する角度情報を求めことで取得する。 この位置情 報及び角度情報とによ り多関節体の姿勢及び動作を決定する。 The second aspect of the motion capture method of the present invention comprises a plurality of components In the determination of the posture and motion of an articulated body in which the 's are connected to each other, distance and angle constraints that relate a plurality of components to each other are determined in advance, and at least one component is connected A marker which can be individually identified is provided at a specific site of the component to be obtained, and an image of an articulated body including this marker is obtained. The position information of the specific part is obtained from the position of the marker in the obtained image. In addition, the component angle information is a component that satisfies the constraints defined by the line position information based on the image of the articulated body in the image, the position information determined, and the constraints that relate the plurality of components to each other. The angle information corresponding to the image of the articulated body is obtained by finding the angle information among them. From the position information and the angle information, the posture and motion of the articulated body are determined.
各マーカはそれぞれ個別識別が可能であり、 マーカを識別することに よ りマーカの取り付け位置を識別することができる。 また、 マ一力が取 り付けられる構成要素は、 間に他の構成要素を挟んで配置する。 これに より、 人体等の多関節体に取り付けるマ一力を間引いて配置することが でき、 姿勢取得に要するマーカの個数を少数とすることができる。  Each marker can be individually identified, and the marker attachment position can be identified by identifying the marker. Also, the component to which the force is attached should be placed with the other component in between. As a result, the force to be attached to an articulated body such as a human body can be disposed by thinning out, and the number of markers required for posture acquisition can be reduced.
In a third aspect of the motion capture method of the present invention, in determining the posture and motion of an articulated body in which a plurality of components are connected to one another by joints, constraints relating the plurality of components to one another and a plurality of models representing postures of the components of the articulated body are defined in advance, and an image of the articulated body including individually identifiable markers provided at specific parts of components linked across at least one intervening component is acquired. The position information of the specific parts is obtained from the positions of the markers in the acquired image. As for the angle information of the components, the models conforming to the constraints fixed by the position information are extracted from the plurality of models; from these, the model that best approximates the image of the articulated body is extracted; and the angle information of the components of the articulated body is obtained from the angles between the components of the extracted model. The posture and motion of the articulated body are determined from this position information and angle information.
Conventional markerless motion capture evaluates the superposition of the acquired image with every model, without using such constraints; the number of postures that hierarchically linked components can assume grows multiplicatively with the number of components and their degrees of freedom and thus becomes enormous. If a more detailed model is set up by increasing the number of components constituting the articulated body, the number of possible postures grows further.
In the motion capture of the present invention, by contrast, the number of models, that is, of postures to be evaluated, can be reduced; the analysis time can therefore be shortened, bringing the motion analysis closer to real time.
In a fourth aspect of the motion capture of the present invention, an image of a human body and of markers attached to specific parts of the human body is acquired; the markers are individually identified and extracted from this image to obtain the position information of the specific parts of the human body; the angle information of the components of the human body is obtained by registering the acquired image with a model extracted, on the basis of the position information of the specific parts, from models prepared in advance; and the posture of the human body is determined from the obtained position information and angle information.
The model extraction is performed by selecting at least one model that conforms to the constraints on human posture fixed by the positions of the specific parts. The registration of the image with the models is performed by selecting, from the extracted models, the model that best approximates the human body image in the acquired image. The approximating model may be selected, for example, by computing, for vertices placed on the components, the distance between the image and the model for each posture, and selecting the model giving the posture with the shortest distance. The angle information can then be obtained from the angles of the components in the selected model.
In the motion capture of the present invention, each marker is identified by a combination of at least two different colors emitted by the marker, and the marker position is obtained from the position of one of the colors in the combination. Each marker can thereby be identified individually, and the position information can be acquired at the same time.
The specific parts can be the waist, the elbows, and the knees of the human body. Even when markers are provided on both the left and right sides of each of these parts, only six markers are needed, and this small number of markers reduces the burden on the subject.

A first aspect of the motion capture device of the present invention is a motion capture device that determines the posture and motion of an articulated body in which a plurality of components are connected to one another by joints, comprising position detection means for obtaining position information of specific parts of the components constituting the articulated body, and angle detection means for obtaining, on the basis of the position information and of constraints relating the plurality of components to one another, angle information between the components that satisfies the constraints fixed by the position information; the posture and motion of the articulated body are determined from the detected position information and angle information.
A second aspect of the motion capture device of the present invention is a motion capture device that determines the posture and motion of an articulated body in which a plurality of components are connected to one another by joints, comprising storage means for storing in advance constraints relating the plurality of components to one another and a plurality of models representing postures of the components of the articulated body; image acquisition means for acquiring an image of the articulated body including individually identifiable markers provided at specific parts of components linked across at least one intervening component; position information detection means for obtaining the position information of the specific parts from the positions of the markers in the image; model selection means for selecting, from the plurality of models, the models conforming to the constraints on the articulated body fixed by the positions of the specific parts; and matching means for extracting, from the selected models, the model that best approximates the image of the articulated body and obtaining the angle information of the components of the articulated body from the angles between the components of the extracted model; the posture and motion of the articulated body are determined from the position information and the angle information.
A marker suitable for the motion capture of the present invention is a marker for identifying a specific part of an articulated body in motion capture that determines the posture and motion of an articulated body in which a plurality of components are connected to one another by joints; it comprises one first light-emitting diode arranged at the center and a plurality of second light-emitting diodes arranged around the first light-emitting diode at equal angular intervals, the first and second light-emitting diodes differing in emission color.
When the light emitted by the first and second light-emitting diodes is imaged from above, an annular image is obtained consisting of the emission color of the first light-emitting diode at the center and the emission color of the second light-emitting diodes around its periphery; by the combination of these two colors each marker can be identified individually, and the part of the articulated body at which the marker is provided can be determined.
Using light emitters as markers prevents the color information from changing with shadows or changes in illumination, and using a combination of two colors prevents misrecognition, and the resulting loss of extraction accuracy, caused by backgrounds or objects other than the target that carry the same color information.
Arranging the plurality of second light-emitting diodes around the first light-emitting diode at equal angular intervals reduces dropouts from the image regardless of the posture of the articulated body.
A light shield may also be provided between the first light-emitting diode and the second light-emitting diodes. This light shield prevents the two colors from blending in the image. If the two colors blend, they are recognized as a different color, which makes marker identification difficult and causes misrecognition. The light shield prevents the two colors from mixing in the image and makes marker identification easy. Brief description of the drawings
Fig. 1 is a diagram for explaining the outline of the motion capture of the present invention; Fig. 2 is a flowchart for explaining the procedure of posture determination by the motion capture of the present invention; Fig. 3 is a schematic diagram of components for explaining the procedure of posture determination by the motion capture of the present invention; Fig. 4 is a diagram for explaining the components of an articulated body; Fig. 5 is a flowchart for explaining an example of the procedure for extracting specific parts and computing their position information; Fig. 6 is the double-hexagonal-pyramid color model of the HSI color system; Fig. 7 is an explanatory diagram for explaining an example of the procedure for extracting specific parts and computing their position information; Fig. 8 is a diagram for explaining the motion capture device of the present invention; Fig. 9 is a hierarchy diagram for explaining the marker attachment positions when the posture and motion of a human body are obtained by the motion capture of the present invention; Fig. 10 is a model diagram for explaining the marker attachment positions when the posture and motion of a human body are obtained by the motion capture of the present invention; Fig. 11 is a diagram for explaining a marker suitable for the motion capture of the present invention; Fig. 12 is a diagram for explaining a marker suitable for the motion capture of the present invention; and Fig. 13 is a diagram showing an example of a human body model. BEST MODE FOR CARRYING OUT THE INVENTION
Embodiments of the present invention are described below with reference to the drawings.
Fig. 1 is a diagram for explaining the outline of the motion capture of the present invention. In Fig. 1, multi-viewpoint video is acquired first. The multi-viewpoint video is acquired by arranging a plurality of cameras. The images captured by the cameras are acquired in synchronization, for example frame by frame (A in the figure). The acquired multi-viewpoint video is processed in approximately real time to obtain the position information and angle information of the target articulated body and thereby its posture and motion.
The position information is obtained by extracting, from each image of the multi-viewpoint video, a specific part of the target articulated body and computing its position (B in the figure). A specific part is an arbitrary part defined on a component of the articulated body.
Since the position information obtained by the motion capture of the present invention concerns only the specific parts defined on components of the articulated body, this position information alone cannot determine the postures of the other components.
The present invention therefore uses, as elements determining the posture of the target, the angle information of each component in addition to the position information.
The motion capture of the present invention superimposes each image of the multi-viewpoint video on models defined in advance (D in the figure), extracts the model that matches the image, and obtains the angle information from the angles of the components of that model (C in the figure). Extracting the model best fitting the image by superimposing the image on the models in this way is called matching.
In extracting this angle information, the present invention uses the constraints that define the relationships among the components constituting the articulated body. In an articulated body there are relationships, determined by the characteristics of the articulated body, between the specific parts and the components; once the positions of the specific parts are fixed, the other components can only assume postures within the range defined by these relationships as constraints, and cannot assume postures outside it. The motion capture of the present invention shortens the time required for matching by using the position information of the specific parts and these constraints to narrow down the models subjected to matching.
The procedure of posture determination by the motion capture of the present invention is described below with reference to the flowchart shown in Fig. 2 and the schematic diagram of the components shown in Fig. 3.
Multi-viewpoint images of the articulated body are acquired by a plurality of cameras arranged around the target articulated body. The multi-viewpoint images can be acquired as mutually synchronized frame-by-frame images. In the processing that follows, the three-dimensional positions and angles are obtained from the plurality of images acquired as multi-viewpoint video, but Fig. 3 illustrates them in two dimensions for simplicity.
Fig. 3(a) shows, in simplified form, one image acquired as multi-viewpoint video. The articulated body shown in the figure is an example in which three components are linked by joints so that their mutual angular relationships can change.
Fig. 4 is a diagram for explaining the components of this articulated body. Here, the articulated body 10 consists of components 11, 12, and 13; component 11 and component 12 are rotatably linked at joint 14, and component 12 and component 13 are rotatably linked at joint 15. In this articulated body 10, one end of component 11 is taken as specific part 16, and one end of component 13 as specific part 17. These specific parts are provided on components 11 and 13, which sandwich component 12 between them; as long as the components are linked, one or more components may be interposed between them. The length of each of the components 11, 12, and 13, and the distances between the components, are fixed by the structure of the articulated body, for example a human body.
Because these lengths and distances are fixed, once the spatial positions of, for example, the specific parts 16 and 17 are determined, the mutual angular relationships of components 11, 12, and 13 are restricted to certain ranges. For example, the range of the angle θ10 that component 11 can assume at specific part 16 is -θ11 to θ12, and the range of the angle θ40 that component 13 can assume at specific part 17 is -θ41 to θ42. Likewise, the range of the angle θ20 that component 12 can assume relative to component 11 at joint 14 is -θ21 to θ22, and the range of the angle θ30 that component 13 can assume relative to component 12 at joint 15 is -θ31 to θ32. Here, clockwise angles are expressed as negative. In Fig. 4, in order to extract the specific parts from the image, marker M1 is provided at specific part 16 and marker M2 at specific part 17.
Thus, once the position of some part is fixed, the postures that the components of the articulated body can assume are restricted, and these lengths and angles can be regarded as constraints on the possible postures. Although the relationships among the components shown in Fig. 4 are expressed in two dimensions, these relationships and constraints are set up in three dimensions using the images obtained from the multi-viewpoint video (step S1).
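Purely as an illustration of how such constraints might be represented and screened in software (the specification itself prescribes no implementation), the following Python sketch encodes the segment lengths and angle ranges in the notation of Fig. 4; all numeric values and function names here are hypothetical.

```python
import numpy as np

# Assumed segment lengths of components 11, 12, 13 (metres) and assumed
# admissible joint-angle ranges (radians), following the notation of Fig. 4:
# theta10 may range over -theta11 .. theta12, and so on.
L11, L12, L13 = 0.30, 0.28, 0.25
THETA_RANGES = {
    "theta10": (-0.8, 1.2),
    "theta20": (-2.4, 0.1),
    "theta30": (-0.1, 2.4),
    "theta40": (-1.2, 0.8),
}

def positions_reachable(p16, p17):
    """Coarse screen: the distance between specific parts 16 and 17 can
    never exceed the summed lengths of the chain linking them, so any
    measured pair of positions violating this is rejected outright."""
    d = np.linalg.norm(np.asarray(p17, float) - np.asarray(p16, float))
    return d <= L11 + L12 + L13

def angles_admissible(angles):
    """Reject any candidate posture whose joint angles leave the ranges."""
    return all(lo <= angles[name] <= hi
               for name, (lo, hi) in THETA_RANGES.items())
```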
First, the position information is obtained from the images acquired in step S1. To obtain the position information, the specific parts are extracted from the image (step S2) and the position information of the extracted specific parts is computed. Fig. 3(b) shows, in simplified form, the specific parts extracted from the image of the articulated body of Fig. 3(a). The position information (x1, y1, z1) is obtained by extracting the marker M1 provided at specific part 16, and the position information (x2, y2, z2) by extracting the marker M2 provided at specific part 17.
An example of the procedure for extracting the specific parts and computing their position information is described here with reference to the flowchart of Fig. 5 and the explanatory diagram of Fig. 7.
An image obtained by imaging means such as a CCD camera is usually obtained as RGB signals. These RGB signals express the signal intensity as gradation values, for example from 0 to 255, but because these gradation values mix elements such as brightness and tint, the markers provided at the specific parts cannot be identified and extracted by color from them directly.
The motion capture of the present invention identifies and extracts the markers provided at the specific parts by assigning a different color to each marker, and thereby identifies the specific parts. The markers in the image must therefore be distinguishable by color.
The RGB signals of the image data are therefore converted into HSI signals. The HSI color system has three attributes: hue (H), saturation (S), and intensity (I).
The following formulas are known as a method of converting the RGB color system into the HSI color system. With

V = max(R, G, B), X = min(R, G, B)
S = (V - X) / V

the hue H is given by

if R = V: H = (π/3) (b - g)
if G = V: H = (π/3) (2 + r - b)
if B = V: H = (π/3) (4 + g - r)

where

r = (V - R) / (V - X)
g = (V - G) / (V - X)
b = (V - B) / (V - X)
Fig. 6 is the double-hexagonal-pyramid color model of the HSI color system. For example, the hue (H) can be expressed as an angle, with red at 0° and yellow, green, cyan, blue, and magenta following in that order. This number of hues is only an example, and a different number of hues may be used. The intensity (V) can likewise be expressed numerically.
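As an illustrative sketch of the conversion formulas above (not part of the specification), the following Python function converts one 8-bit RGB pixel to hue, saturation, and intensity, mapping the hue to degrees with red at 0° as in the Fig. 6 color model; the handling of achromatic pixels is an assumption.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one 8-bit RGB pixel to (H, S, V) using the formulas above.
    H is returned in degrees with red at 0 deg (cf. the Fig. 6 model),
    S in [0, 1], V in [0, 255]. Achromatic pixels get H = 0 by convention."""
    v = max(r, g, b)
    x = min(r, g, b)
    if v == 0:            # black: saturation undefined, treated as 0
        return 0.0, 0.0, 0.0
    s = (v - x) / v
    if v == x:            # grey: hue undefined, treated as 0
        return 0.0, s, float(v)
    rr = (v - r) / (v - x)
    gg = (v - g) / (v - x)
    bb = (v - b) / (v - x)
    if r == v:
        h = (math.pi / 3) * (bb - gg)
    elif g == v:
        h = (math.pi / 3) * (2 + rr - bb)
    else:
        h = (math.pi / 3) * (4 + gg - rr)
    return math.degrees(h) % 360, s, float(v)

# e.g. rgb_to_hsi(255, 0, 0) -> (0.0, 1.0, 255.0): pure red at hue 0 deg.
```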
Accordingly, thresholds for distinguishing hue (H) and intensity (V) are set in advance according to the colors emitted by the given markers, and by screening the hue and intensity signals obtained by the conversion against these thresholds, the individual color-coded markers can be identified. Saturation is not used here for marker identification (step S11).
The intensity signal (V) obtained by the conversion is first screened by comparison with a preset threshold to extract the high-luminance parts of the image. Fig. 7(a) schematically shows an example of the high-luminance parts extracted from an image (step S12).
Next, on the basis of the color information of the markers at the specific parts, the hue signal (H) is screened by comparison with preset hue thresholds, and the regions in which the specific parts (markers) are present are extracted from the high-luminance parts of the image. Fig. 7(b) schematically shows an example of the specific parts extracted from the high-luminance parts of an image (step S13).
In extraction by hue, the colored area of a specific part such as a marker has a finite extent and is therefore detected in the image as a region spanning a plurality of pixels. To fix the position of the specific part, its position is computed from the region detected in step S13. For this computation, any method can be used, such as computing the position of the luminance peak, the centroid of the region, or a centroid weighted by luminance.
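A minimal sketch of steps S12 to S14, assuming the hue and intensity planes are available as 2-D arrays and that per-marker thresholds have been chosen in advance (the threshold parameters and function name are hypothetical):

```python
import numpy as np

def marker_position(hue, value, hue_range, v_min):
    """Threshold the intensity plane (step S12), then the hue plane
    (step S13), and return the luminance-weighted centroid of the
    surviving pixels as the marker position in image coordinates
    (step S14). `hue` and `value` are 2-D arrays of the same shape;
    `hue_range` is an assumed (low, high) hue window for this marker."""
    lo, hi = hue_range
    mask = (value >= v_min) & (hue >= lo) & (hue <= hi)
    if not mask.any():
        return None                      # marker not visible in this view
    ys, xs = np.nonzero(mask)
    w = value[ys, xs].astype(float)      # weight each pixel by brightness
    return (np.average(xs, weights=w), np.average(ys, weights=w))
```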
The emission of a light emitter such as a light-emitting diode is not necessarily uniform, and even within the region of a single marker the color may differ between the emitting portion and its surroundings. This color variation can split a region that should be connected. To compensate, processing that dilates or erodes the regions in the image may be applied. Position information for these specific parts is obtained multiple times, from cameras at different positions in the multi-viewpoint setup, for each camera and each frame. Fig. 7(c) schematically shows an example of the positions of the specific parts extracted in an image (step S14). Since the position information obtained in this way is in camera coordinates, the position coordinates on the image must be converted into position coordinates in real space. The conversion between the camera coordinate system and the real-space coordinate system can be performed by a combination of a rotation matrix and a translation matrix. The nonlinearity introduced by the optical system between a position in the camera coordinate system and its projection in the real-space coordinate system can be corrected at this point using parameters such as the focal length and the lens distortion coefficients (step S15).
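In outline, the coordinate conversion of step S15 might look as follows, assuming the rotation matrix R and translation vector t of each camera are known from prior calibration; the distortion correction is only noted in a comment.

```python
import numpy as np

def camera_to_world(p_cam, R, t):
    """Map a point from camera coordinates to real-space coordinates with
    the camera's rotation matrix R (3x3) and translation vector t (3,),
    both assumed to come from calibration. Lens distortion would be
    corrected before this step using the focal length and the
    distortion coefficients."""
    return R @ np.asarray(p_cam, float) + np.asarray(t, float)
```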
Next, three-dimensional position data are computed from the positions of the specific parts obtained by each camera. Tsai's method is known as a method of converting markers in an image from the camera coordinate system into the real coordinate system with weights taken into account.
Computing the three-dimensional position of a marker in the real coordinate system corresponds, for example, to finding the intersection of the plural straight lines extending from the camera images in Fig. 7(d). In practice, errors in the camera parameters shift the angles of the lines, so they may not intersect at a single point. In such cases, the intersection may be taken as the midpoint of the segment along which the distance between the lines is smallest. The midpoints are computed for the combinations of the n lines, and the average of these midpoints is taken as the three-dimensional position of the marker (step S16). The position information of the specific parts is thereby obtained (step S3). Next, in steps S4 to S8, the angle information of the components is obtained.
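A sketch of the midpoint construction of step S16 described above, assuming each viewing ray is given as a pair of NumPy arrays (origin, unit direction); the tolerance for near-parallel rays is an arbitrary choice.

```python
import numpy as np
from itertools import combinations

def midpoint_of_rays(o1, d1, o2, d2):
    """Closest-approach midpoint of two lines o + s*d (d unit vectors):
    solve for the parameters minimising the inter-line distance, then
    return the midpoint of the connecting segment."""
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:               # near-parallel rays: no reliable fix
        return None
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

def triangulate(rays):
    """Average the pairwise midpoints over the combinations of n rays."""
    pts = [midpoint_of_rays(o1, d1, o2, d2)
           for (o1, d1), (o2, d2) in combinations(rays, 2)]
    pts = [p for p in pts if p is not None]
    return np.mean(pts, axis=0) if pts else None
```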
The possible models are selected on the basis of the constraints fixed by the position information acquired in step S3. The models are prepared in advance for the various postures of the target articulated body, and the lengths of the components of the articulated body and their mutual distance and angle relationships, with their magnitudes and allowable ranges, are set as constraints. These constraints can, for example, define the allowable range of the mutual angles of the components for a given position of a specific part. The allowable range can be expressed numerically or as a function expressing the mutual relationship.
If no screening by constraints is performed, selecting the model that matches the image requires evaluating the superposition of every prepared model on the image while varying the positional relationship between the model and the image. Applying the constraints, by contrast, narrows the number of models to be superimposed on the image and reduces the computational load of the matching process.
Fig. 3(c) schematically shows the model selection. For example, given c-1, c-2, and c-3 as models of the components, the models satisfying the constraints are selected from among them. Here, the case is shown in which the ranges of angles the components can assume are used as the constraints for selecting models.
If, among the models c-1, c-2, and c-3, models c-1 and c-3 satisfy the constraints and model c-2 does not, models c-1 and c-3 are selected (step S4).
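Step S4 might be sketched as follows, assuming each candidate model records its joint angles in a dictionary and the allowable ranges have been fixed by the measured positions; the data shapes and names are illustrative only.

```python
def select_candidate_models(models, allowed):
    """Keep only the models whose joint angles all fall inside the ranges
    fixed by the measured marker positions. `models` is a list of dicts
    with an 'angles' mapping; `allowed` maps each joint name to its
    (lo, hi) range; both shapes are assumed here for illustration."""
    def fits(m):
        return all(lo <= m["angles"][j] <= hi
                   for j, (lo, hi) in allowed.items())
    return [m for m in models if fits(m)]

# e.g. models c-1 and c-3 survive while c-2 is rejected:
models = [
    {"name": "c-1", "angles": {"theta20": -0.4}},
    {"name": "c-2", "angles": {"theta20": -2.9}},
    {"name": "c-3", "angles": {"theta20":  0.1}},
]
print([m["name"] for m in select_candidate_models(
    models, {"theta20": (-2.4, 0.1)})])   # -> ['c-1', 'c-3']
```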
The models selected in step S4 are superimposed on the image to match the image against the models. The matching between the image and a model can be performed, for example, by placing the three-dimensional model in a virtual space, computing the distributions of brightness and saturation on the model as histograms while varying the posture of the model, computing the corresponding histograms of brightness and saturation for the image, and evaluating how closely the histograms approximate each other (step S5). This matching of the image is performed against all the selected models. Besides the method of evaluating every possible posture exhaustively, the result can be obtained by minimizing an evaluation function using a nonlinear least-squares method, a genetic algorithm, or the like. For example, an evaluation value is computed under assumed parameter values (θ and positions), the parameters are varied on the basis of the result, and the evaluation is repeated to find the minimum evaluation value. This evaluation function expresses the degree of discrepancy determined by comparing the model with the results from silhouette images or optical flow (step S6). In the matching of the image against all the selected models, the model closest to the image is found. Fig. 3(d) schematically shows this matching process (step S7).
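An illustrative reduction of steps S5 to S7 to code, assuming the pixel values of the image region and of each rendered model pose are available as flat arrays; a normalized brightness histogram and an L1 distance stand in for the evaluation function, and the exhaustive loop stands in for the least-squares or genetic search mentioned above.

```python
import numpy as np

def hist_distance(img_pixels, model_pixels, bins=16):
    """Compare the brightness distributions of the camera image and of a
    rendered model pose as normalised histograms; 0 means identical."""
    h1, _ = np.histogram(img_pixels, bins=bins, range=(0, 255), density=True)
    h2, _ = np.histogram(model_pixels, bins=bins, range=(0, 255), density=True)
    return np.abs(h1 - h2).sum()

def best_model(img_pixels, rendered):
    """Among the candidates kept in step S4, return the one whose rendering
    is closest to the image. `rendered` maps a model name to its rendered
    pixel values; this interface is assumed, and in practice a nonlinear
    least-squares or genetic search would replace the exhaustive loop."""
    return min(rendered, key=lambda k: hist_distance(img_pixels, rendered[k]))
```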
From the model obtained by the optimal matching found in step S7, the angle information of the components is obtained. Fig. 3(e) shows the angle information (θ1 to θ4) being obtained from the resulting model (step S8).
The posture of the articulated body is determined from the position information obtained in step S3 and the angle information obtained in step S8. By performing this posture determination for each acquired frame, the motion of the articulated body can be obtained in approximately real time.
Next, the motion capture device of the present invention is described with reference to Fig. 8.
The motion capture device 1 comprises image acquisition means 2 for capturing multi-viewpoint video of an articulated body such as a human body and acquiring images, and computation means 3 for obtaining the position information and angle information of the components from these images; it determines the posture and motion of the articulated body from the position information and the angle information.
The image acquisition means 2 acquires images of the articulated body including the individually identifiable markers provided at specific parts of components linked across at least one intervening component. A plurality of cameras are arranged to acquire the multi-viewpoint images, and the image frames are synchronized.
The computation means 3 comprises detection means 3a for detecting the specific parts (markers) in the images; position information detection means 3b for obtaining the position information of the specific parts (markers); storage means 3c for storing in advance the constraints relating the plurality of components to one another and the plurality of models representing the postures of the components of the articulated body; model selection means 3d for selecting, from the plurality of models, the models conforming to the constraints on the articulated body fixed by the positions of the specific parts; and matching means 3e for extracting, from the selected models, the model that best approximates the image of the articulated body and obtaining the angle information of the components of the articulated body from the angles between the components of the extracted model.
The means listed for the computation means 3 describe the functions the computation means performs; they need not be implemented as dedicated hardware for each function and can be implemented in software.
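One possible software layout of the computation means 3 is sketched below; the class and method names are invented here, and only the correspondence to the means 3a to 3e is taken from the text.

```python
class MotionCaptureEngine:
    """Hypothetical software layout of computation means 3: each method
    corresponds to one functional means (3a to 3e); none of these names
    come from the specification itself."""

    def __init__(self, constraints, models):
        self.constraints = constraints    # storage means 3c
        self.models = models              # storage means 3c

    def detect_markers(self, frames):     # detection means 3a
        ...

    def locate(self, detections):         # position information means 3b
        ...

    def select_models(self, positions):   # model selection means 3d
        ...

    def match(self, frames, candidates):  # matching means 3e
        ...

    def pose(self, frames):
        """Full pipeline for one synchronized set of frames."""
        d = self.detect_markers(frames)
        p = self.locate(d)
        return self.match(frames, self.select_models(p)), p
```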
Next, the marker attachment positions used when obtaining the posture and motion of a human body with the motion capture of the present invention are described with reference to Figs. 9 and 10. The human body can be regarded as an articulated body in which a plurality of components are linked by joints, and these components can be represented as linked in a hierarchy. The components can be defined arbitrarily; Figs. 9 and 10 show one model, and other combinations of components may be used.
In the model shown in Fig. 9, the waist forms the top layer of the hierarchy. For the upper body, the chest, then the upper arms (left and right) and the head, then the forearms (left and right), then the hands (left and right) are linked in successively lower layers; for the lower body, the thighs (left and right), the shins (left and right), and the feet (left and right) are linked in successively lower layers.
This model is prepared for the human body to be measured, and each layer represented in the model corresponds to a component of the articulated body described above. For each component, constraints are set on its length and on the distances and angles between components. In motion capture, multi-viewpoint video of the human body is acquired, and the posture and motion of the human body are obtained by computing the position information of the specific parts and the angle information of the components from the resulting images according to the process described above.
Markers are attached to specific parts of the human body in order to obtain the position information of those parts. Markers are not provided on every component of the human body; a small number of markers are attached to parts selected from among the components of the model. The number of markers attached is kept low enough not to burden the subject, and the attachment positions are chosen so that the markers are always captured in the images even when the human body assumes various postures.
In choosing the marker attachment positions, an appropriate hierarchical spacing is provided between the components carrying markers, taking into account the computational complexity of the matching between models and images. In the matching between models and images, the constraints fixed by the position information of the specific parts reduce the number of possible models, but the more components lie between the specific parts, the more models become possible. Markers are therefore attached to specific parts at spacings chosen so that the total amount of computation for determining the posture and motion of the human body is small.
In the model shown in Figs. 9 and 10, markers are attached to the waist, which is high in the hierarchy; in the upper body, to the joints between the upper arms and the forearms; and in the lower body, to the joints between the thighs and the shins. Markers are attached on both sides at the waist, and on both the left and right sides at the joints of the upper arms and forearms and at the joints of the thighs and shins.
A total of six markers are thus attached to the human body, from which the position information of the waist, of both elbows, and of both knees is obtained. As for the angle information, the angle information of, for example, the chest and the upper arms is obtained by matching the models against the images after using the constraints on the chest and upper arms, fixed by the position information of the waist and elbows, to extract the postures to be evaluated from among those the chest and upper arms can assume, thereby reducing the number of postures processed.

Next, a marker suitable for the motion capture of the present invention is described with reference to Figs. 11 and 12. Fig. 11(a) is a sectional view of the marker, and Fig. 11(b) is a view looking downward from the position indicated by E-E in Fig. 11(a).
The marker 20 comprises a plurality of light-emitting diodes (LEDs): one first light-emitting diode 21 and a plurality of second light-emitting diodes 22. The first light-emitting diode 21 is used to acquire the position information of the specific part; the second light-emitting diodes 22 are used to identify each marker and distinguish its specific part from the other specific parts. For this identification, the first light-emitting diode 21 and the second light-emitting diodes 22 are given different emission colors, and the marker is identified by the combination of these emission colors. The color combinations are chosen so that the hue angles are mutually symmetric in the HSI color system and are easy to separate by thresholds: for example, a red center diode with blue surrounding diodes, a red center diode with green surrounding diodes, or a yellow center diode with blue surrounding diodes.
Using a combination of different colors makes it easy to distinguish the markers even when parts of the same color as a marker are present in the background or elsewhere in the image.
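An illustrative sketch of identifying a marker from its two-color combination, assuming hue is measured in degrees as above; the color table and tolerance are hypothetical examples, not values from the specification.

```python
# Hypothetical lookup: (centre hue, ring hue) in degrees -> marker identity.
MARKER_TABLE = {
    (0, 240): "waist-left",      # red centre, blue ring
    (0, 120): "waist-right",     # red centre, green ring
    (60, 240): "elbow-left",     # yellow centre, blue ring
}

def hue_close(h, target, tol=20.0):
    """Compare hues on the 360-degree circle with an assumed tolerance."""
    return min(abs(h - target), 360 - abs(h - target)) <= tol

def identify_marker(centre_hue, ring_hue):
    """Identify a marker from the hue measured at the centre spot and the
    hue of the surrounding ring; the two-colour combination keeps a
    single-coloured background object from being mistaken for a marker."""
    for (hc, hr), name in MARKER_TABLE.items():
        if hue_close(centre_hue, hc) and hue_close(ring_hue, hr):
            return name
    return None
```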
In the marker 20, the first light-emitting diode 21 is arranged at the top of a base 23, and the plurality of second light-emitting diodes 22 are arranged in a ring around it below. A power supply 25 for the light-emitting diodes is provided on the base; a button battery, for example, can be used as the power supply 25. Reference numeral 26 in the figure denotes a switch controlling the connection between the power supply 25 and the light-emitting diodes; it can, for example, be a removable insulator, and removing the insulator causes the light-emitting diodes to emit light.
As shown in Fig. 11(b), the second light-emitting diodes 22 are arranged around the first light-emitting diode 21 at equal angular intervals, with their emission directions likewise at equal angular intervals. Light-emitting diodes are generally directional, and since the human body assumes various postures, it is desirable that the emission of the marker be nondirectional so that the marker is captured well by the cameras. The emission directions of the plurality of light-emitting diodes are therefore staggered at equal angular intervals. In Fig. 11(b), five second light-emitting diodes 22 are arranged so that their directions differ by, for example, 72°. The number of second light-emitting diodes 22 is arbitrary, but is set in consideration of conditions such as the relationship between the physical size of the diodes and a marker size that burdens the subject little, and the detection conditions in the acquired images.
The marker 20 has a translucent cover 24 that houses the light-emitting diodes 21 and 22, the power supply 25, and so on. The light of the light-emitting diodes 21 and 22 is emitted to the outside through the cover 24. The inner or outer surface of the cover 24 may be made a scattering surface, or the material of the cover 24 may itself scatter light. Giving the cover 24 scattering properties causes the emission of the light-emitting diodes to be scattered by the cover 24, improving how the marker is captured by the cameras.
A light shield 28 is provided between the first light-emitting diode 21 and the second light-emitting diodes 22, with its installation position changeable. The light shield 28 prevents the emission of the first light-emitting diode 21 and the emission of the second light-emitting diodes 22 from blending in the image and producing a color different from the emission colors of the diodes. If a different color appears at the marker in the image, identification of the marker may become difficult or the marker position may be detected erroneously. Positioned between the light emitted by the first light-emitting diode 21 and that emitted by the second light-emitting diodes 22, the light shield 28 separates the images of the first light-emitting diode 21 and the second light-emitting diodes 22 projected onto the picture and prevents the two kinds of light from mixing.
The light shield 28 is an annular body with an opening at its center and can be attached by passing the protruding portion of the cover 24 of the marker through the opening. Making the light shield 28 of an elastic material facilitates its attachment to the cover 24. Alternatively, an annular recess 27 can be provided around the outer circumferential surface of the cover 24, and the light shield can be attached by fitting it into this recess 27. The recess 27 may be formed in multiple steps along the vertical direction of the cover 24, allowing the attachment position on the cover 24 to be changed.
Figs. 12(a) and (c) show the light shield 28 attached at the lower step of the recess 27, and Figs. 12(b) and (d) show it attached at the upper step of the recess 27. Changing the attachment position of the light shield 28 increases or decreases the area of the annular portion of the second light-emitting diodes 22 appearing in the image. By adjusting the emitting area of the second light-emitting diodes 22 in this way, the marker image can be adjusted to the imaging environment, such as the distance between the human body and the cameras, the state of the background colors, and the illumination, so that the markers are identified well.
According to the aspects of the present invention, the number of markers attached can be kept small, the burden on the subject can be lightened, and the position information and angle states can be acquired with little burden.
According to the aspects of the present invention, the number of models evaluated in the matching between images and models can be reduced, so the processing time can be shortened and the processing of posture and motion can approach real time. For the same processing time, the measurement accuracy can be raised.
As described above, according to the present invention, the burden imposed by wearing markers in motion capture can be reduced, the analysis processing time can be shortened, and high extraction accuracy can be obtained. Industrial Applicability
The present invention can be used for the analysis of moving bodies such as people and objects and for the construction of virtual spaces, and can be applied in fields such as industry, medicine, and sports.

Claims

1. A motion capture method for determining the posture and motion of an articulated body in which a plurality of components are connected to one another by joints, comprising:
obtaining position information of specific parts of the components constituting the articulated body;
obtaining, on the basis of the position information and of constraints relating the plurality of components to one another, angle information between the components that satisfies the constraints fixed by the position information; and
determining the posture and motion of the articulated body from the position information and the angle information.
2. A motion capture method for determining the posture and motion of an articulated body in which a plurality of components are connected to one another by joints, comprising:
defining in advance distance and angle constraints relating the plurality of components to one another;
providing individually identifiable markers at specific parts of components linked across at least one intervening component;
acquiring an image of the articulated body including the markers;
obtaining position information of the specific parts from the positions of the markers in the image;
obtaining, on the basis of the image of the articulated body, the position information, and the constraints relating the plurality of components to one another, the angle information that matches the image of the articulated body from among the sets of inter-component angle information satisfying the constraints fixed by the position information; and
determining the posture and motion of the articulated body from the position information and the angle information.
3. A motion capture method for determining the posture and motion of an articulated body in which a plurality of components are connected to one another by joints, the method comprising: predetermining constraint conditions that relate the plurality of components to one another, and a plurality of models representing postures of the components of the articulated body; acquiring an image of the articulated body that includes individually identifiable markers provided at specific parts of components connected across at least one intervening component; obtaining position information of the specific parts from the positions of the markers in the image; extracting, from the plurality of models, models that satisfy the constraint conditions determined by the position information, and extracting, from the extracted models, the model that most closely approximates the image of the articulated body; obtaining angle information of the components of the articulated body from the angles between the components of the extracted model; and determining the posture and motion of the articulated body from the position information and the angle information.
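The two-stage selection recited in claim 3 can be sketched as follows; the model records, the distance tolerance, and the silhouette-overlap score are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np

def select_pose(models, marker_positions, silhouette, tol=0.05):
    """models: dicts with 'markers' (part -> 3-D position), 'silhouette'
    (binary image), and 'joint_angles'; all structures are assumed here.
    marker_positions: part -> measured 3-D position from the markers."""
    # Stage 1: keep models whose stored part positions satisfy the
    # constraint conditions determined by the measured marker positions.
    candidates = [
        m for m in models
        if all(np.linalg.norm(np.asarray(m["markers"][part]) - np.asarray(pos)) < tol
               for part, pos in marker_positions.items())
    ]
    if not candidates:
        raise ValueError("no model satisfies the position-derived constraints")
    # Stage 2: among the survivors, keep the model whose rendered
    # silhouette best overlaps the observed articulated-body image.
    best = max(candidates,
               key=lambda m: np.logical_and(m["silhouette"], silhouette).sum())
    return best["joint_angles"]  # angle information of the chosen model
```

Filtering by the position-derived constraints first shrinks the search so that the image matching of the second stage runs over only a few candidate postures.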
4. A motion capture method comprising: acquiring an image of a human body and of markers attached to specific parts of the human body; individually identifying and extracting the markers from the image to obtain position information of the specific parts of the human body; obtaining angle information of the components of the human body by registering the image with a model extracted, on the basis of the position information of the specific parts, from models prepared in advance; and determining the posture of the human body from the position information and the angle information.
5. The motion capture method according to claim 4, wherein the extraction of the model is performed by selecting at least one model that satisfies human-body posture constraint conditions determined by the positions of the specific parts, and the registration of the image with the model is performed by selecting, from the extracted models, the model that most closely approximates the human-body image in the image.
6. The motion capture method according to any one of claims 2 to 5, wherein each marker is identified by a combination of at least two different colors emitted by the marker, and the marker position is obtained from the position of any one color of the color combination.
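As a hypothetical illustration of claim 6 (the color table and body-part names are invented for the example), a marker can be identified by the unordered pair of colors it emits, with its position read from one color of the pair:

```python
# Hypothetical lookup: unordered color pair -> body part it marks.
MARKER_TABLE = {
    frozenset({"red", "green"}): "right_elbow",
    frozenset({"red", "blue"}): "left_elbow",
    frozenset({"green", "blue"}): "waist",
}

def identify_marker(blobs):
    """blobs: [(color, (x, y)), ...] detections grouped as one marker."""
    part = MARKER_TABLE.get(frozenset(color for color, _ in blobs))
    if part is None:
        return None
    # Per claim 6, the marker position is taken from any one color of
    # the combination; here the first detected color is used.
    return part, blobs[0][1]

print(identify_marker([("red", (120, 88)), ("green", (128, 90))]))
```

Because the pair, not a single color, carries the identity, a small number of colors suffices to distinguish many markers while any one of the emitting elements still provides the position.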
7. The motion capture method according to any one of claims 1 to 6, wherein the specific parts are the waist, elbows, and knees of a human body.
8. A motion capture device for determining the posture and motion of an articulated body in which a plurality of components are connected to one another by joints, the device comprising: position detection means for obtaining position information of specific parts of the components constituting the articulated body; and angle detection means for obtaining, based on the position information and constraint conditions that relate the plurality of components to one another, angle information between the components that satisfies the constraint conditions determined by the position information, wherein the posture and motion of the articulated body are determined from the detected position information and angle information.
9. A motion capture device for determining the posture and motion of an articulated body in which a plurality of components are connected to one another by joints, the device comprising: storage means for storing in advance constraint conditions that relate the plurality of components to one another, and a plurality of models representing postures of the components of the articulated body; image acquisition means for acquiring an image of the articulated body that includes individually identifiable markers provided at specific parts of components connected across at least one intervening component; position information detection means for obtaining position information of the specific parts from the positions of the markers in the image; model selection means for selecting, from the plurality of models, models that satisfy the constraint conditions of the articulated body determined by the positions of the specific parts; and matching means for extracting, from the selected models, the model that most closely approximates the image of the articulated body and obtaining angle information of the components of the articulated body from the angles between the components of the extracted model, wherein the posture and motion of the articulated body are determined from the position information and the angle information.
10. A motion capture marker for identifying a specific part of an articulated body in motion capture that determines the posture and motion of an articulated body in which a plurality of components are connected to one another by joints, the marker comprising one first light-emitting diode arranged at its center and a plurality of second light-emitting diodes arranged around the first light-emitting diode at equal angular intervals, the first and second light-emitting diodes differing from each other in emission color.
11. The motion capture marker according to claim 10, wherein a light shield is provided between the first light-emitting diode and the second light-emitting diodes.
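A hypothetical image-space check of the claim-10 marker geometry (blob detection and the angular tolerance are assumed, not specified by the patent): one first-color blob at the center surrounded by second-color blobs can be validated against the equal angular spacing the claim recites:

```python
import math

def is_marker(centre, ring, angle_tol_deg=15.0):
    """centre: (x, y) of the first-color blob; ring: [(x, y), ...] of
    second-color blobs. Returns True if the ring blobs sit around the
    centre at roughly equal angular intervals."""
    if len(ring) < 3:
        return False
    angles = sorted(
        math.degrees(math.atan2(y - centre[1], x - centre[0])) % 360.0
        for x, y in ring
    )
    expected = 360.0 / len(ring)  # equal angular spacing around the centre
    gaps = [(angles[(i + 1) % len(angles)] - angles[i]) % 360.0
            for i in range(len(angles))]
    return all(abs(g - expected) <= angle_tol_deg for g in gaps)

# Four ring LEDs at 90-degree spacing around (100, 100): accepted.
print(is_marker((100, 100), [(110, 100), (100, 110), (90, 100), (100, 90)]))
```

Such a geometric test discriminates genuine markers from stray colored highlights, since an accidental blob pair rarely reproduces the centered, equally spaced ring.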
PCT/JP2003/016080 2003-04-22 2003-12-16 Motion capturing method, motion capturing device, and motion capturing marker WO2004094943A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2004571106A JPWO2004094943A1 (en) 2003-04-22 2003-12-16 Motion capture method, motion capture device, and motion capture marker
AU2003289108A AU2003289108A1 (en) 2003-04-22 2003-12-16 Motion capturing method, motion capturing device, and motion capturing marker

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2003-116631 2003-04-22
JP2003116631 2003-04-22

Publications (1)

Publication Number Publication Date
WO2004094943A1

Family

ID=33307995

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2003/016080 WO2004094943A1 (en) 2003-04-22 2003-12-16 Motion capturing method, motion capturing device, and motion capturing marker

Country Status (3)

Country Link
JP (1) JPWO2004094943A1 (en)
AU (1) AU2003289108A1 (en)
WO (1) WO2004094943A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6629055B2 (en) * 2015-11-30 2020-01-15 株式会社ソニー・インタラクティブエンタテインメント Information processing apparatus and information processing method


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000258123A (en) * 1999-03-12 2000-09-22 Sony Corp Method and device for processing image, and presentation medium
JP2003035515A (en) * 2001-07-23 2003-02-07 Nippon Telegr & Teleph Corp <Ntt> Method, device and marker for detecting three- dimensional positions
JP2003109015A (en) * 2001-10-01 2003-04-11 Masanobu Yamamoto System for measuring body action

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8872899B2 (en) 2004-07-30 2014-10-28 Extreme Reality Ltd. Method circuit and system for human to machine interfacing by hand gestures
US9177220B2 (en) 2004-07-30 2015-11-03 Extreme Reality Ltd. System and method for 3D space-dimension based image processing
US8928654B2 (en) 2004-07-30 2015-01-06 Extreme Reality Ltd. Methods, systems, devices and associated processing logic for generating stereoscopic images and video
US8432390B2 (en) 2004-07-30 2013-04-30 Extreme Reality Ltd Apparatus system and method for human-machine interface
JP2008537815A (en) * 2005-03-17 2008-09-25 本田技研工業株式会社 Pose estimation based on critical point analysis
JP4686595B2 (en) * 2005-03-17 2011-05-25 本田技研工業株式会社 Pose estimation based on critical point analysis
JP2007071660A (en) * 2005-09-06 2007-03-22 Toshiba Corp Working position measuring method in remote inspection, and instrument therefor
US8085296B2 (en) 2005-09-06 2011-12-27 Kabushiki Kaisha Toshiba Method and apparatus for measuring an operating position in a remote inspection
US8878896B2 (en) 2005-10-31 2014-11-04 Extreme Reality Ltd. Apparatus method and system for imaging
US9131220B2 (en) 2005-10-31 2015-09-08 Extreme Reality Ltd. Apparatus method and system for imaging
US8462199B2 (en) 2005-10-31 2013-06-11 Extreme Reality Ltd. Apparatus method and system for imaging
US9046962B2 (en) 2005-10-31 2015-06-02 Extreme Reality Ltd. Methods, systems, apparatuses, circuits and associated computer executable code for detecting motion, position and/or orientation of objects within a defined spatial region
KR101379074B1 (en) * 2007-04-15 2014-03-28 익스트림 리얼리티 엘티디. An apparatus system and method for human-machine-interface
JP2010524113A (en) * 2007-04-15 2010-07-15 エクストリーム リアリティー エルティーディー. Man-machine interface device system and method
JP2010025855A (en) * 2008-07-23 2010-02-04 Sakata Denki Track displacement measuring device
US8548258B2 (en) 2008-10-24 2013-10-01 Extreme Reality Ltd. Method system and associated modules and software components for providing image sensor based human machine interfacing
JP2011007578A (en) * 2009-06-24 2011-01-13 Fuji Xerox Co Ltd Position measuring system, computation device for position measurement and program
US8928749B2 (en) 2009-06-24 2015-01-06 Fuji Xerox Co., Ltd. Position measuring system, processing device for position measurement, processing method for position measurement, and computer readable medium
US8878779B2 (en) 2009-09-21 2014-11-04 Extreme Reality Ltd. Methods circuits device systems and associated computer executable code for facilitating interfacing with a computing platform display screen
US9218126B2 (en) 2009-09-21 2015-12-22 Extreme Reality Ltd. Methods circuits apparatus and systems for human machine interfacing with an electronic appliance
JP2014013256A (en) * 2013-09-13 2014-01-23 Sakata Denki Track displacement measurement apparatus
JP2017101961A (en) * 2015-11-30 2017-06-08 株式会社ソニー・インタラクティブエンタテインメント Light-emitting device adjusting unit and drive current adjustment method
JP2019141262A (en) * 2018-02-19 2019-08-29 国立大学法人 筑波大学 Method for analyzing motion of martial art
JP2020160568A (en) * 2019-03-25 2020-10-01 日本電信電話株式会社 Image synchronization device, image synchronization method, and program
WO2020195815A1 (en) * 2019-03-25 2020-10-01 日本電信電話株式会社 Image synchronization device, image synchronization method, and program
JP7067513B2 (en) 2019-03-25 2022-05-16 日本電信電話株式会社 Video synchronization device, video synchronization method, program
JP2023521952A (en) * 2020-07-27 2023-05-26 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド 3D Human Body Posture Estimation Method and Apparatus, Computer Device, and Computer Program
JP7503643B2 (en) 2020-07-27 2024-06-20 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド 3D human body posture estimation method and apparatus, computer device, and computer program

Also Published As

Publication number Publication date
AU2003289108A1 (en) 2004-11-19
JPWO2004094943A1 (en) 2006-07-13

Similar Documents

Publication Publication Date Title
WO2004094943A1 (en) Motion capturing method, motion capturing device, and motion capturing marker
EP2870428B1 (en) System and method for 3d measurement of the surface geometry of an object
US20160134860A1 (en) Multiple template improved 3d modeling of imaged objects using camera position and pose to obtain accuracy
EP3069100B1 (en) 3d mapping device
US10782780B2 (en) Remote perception of depth and shape of objects and surfaces
JP6255125B2 (en) Image processing apparatus, image processing system, and image processing method
CN106546230B (en) Positioning point arrangement method and device, and method and equipment for measuring three-dimensional coordinates of positioning points
JP7194015B2 (en) Sensor system and distance measurement method
WO2015054426A1 (en) Single-camera motion capture system
CN106981091A (en) Human body three-dimensional modeling data processing method and processing device
JP2010256253A (en) Image capturing device for three-dimensional measurement and method therefor
JP2016170610A (en) Three-dimensional model processing device and camera calibration system
JP2010256252A (en) Image capturing device for three-dimensional measurement and method therefor
CN104680570A (en) Action capturing system and method based on video
KR20180094253A (en) Apparatus and Method for Estimating Pose of User
JP2005140547A (en) 3-dimensional measuring method, 3-dimensional measuring device and computer program
JP2004086929A5 (en)
WO2019156990A1 (en) Remote perception of depth and shape of objects and surfaces
JP4590780B2 (en) Camera calibration three-dimensional chart, camera calibration parameter acquisition method, camera calibration information processing apparatus, and program
JP2016170031A (en) Three-dimensional model processing device and camera calibration system
CN109410272A (en) A kind of identification of transformer nut and positioning device and method
JPH10151591A (en) Discriminating device and method, position detecting device and method, and robot device and color sampling device
JP2003023562A (en) Image photographic system and camera system
JP3860287B2 (en) Motion extraction processing method, motion extraction processing device, and program storage medium
JP3052926B2 (en) 3D coordinate measuring device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2004571106

Country of ref document: JP

122 Ep: pct application non-entry in european phase