WO2004094943A1 - Marqueur pour capturer des images - Google Patents

Marqueur pour capturer des images

Info

Publication number
WO2004094943A1
WO2004094943A1 PCT/JP2003/016080 JP0316080W
Authority
WO
WIPO (PCT)
Prior art keywords
image
components
position information
articulated
articulated body
Prior art date
Application number
PCT/JP2003/016080
Other languages
English (en)
Japanese (ja)
Inventor
Hiroshi Arisawa
Kazunori Sakaki
Original Assignee
Hiroshi Arisawa
Kazunori Sakaki
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hiroshi Arisawa, Kazunori Sakaki filed Critical Hiroshi Arisawa
Priority to JP2004571106A priority Critical patent/JPWO2004094943A1/ja
Priority to AU2003289108A priority patent/AU2003289108A1/en
Publication of WO2004094943A1 publication Critical patent/WO2004094943A1/fr

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches

Definitions

  • The present invention relates to motion capture that captures the motion of an object in the real world on a computer.
  • For example, mechanical, magnetic and optical types of motion capture are known.
  • In the mechanical type, an angle detector or pressure sensor is attached to the performer's body, and the bending angles of the joints are detected to capture the performer's motion.
  • In the magnetic type, a magnetic sensor is attached to each part of the performer's body, the performer moves within an artificially generated magnetic field, and the movement of the performer is detected by deriving the absolute position of each magnetic sensor from the density and angle of the magnetic lines of force.
  • In the optical type, markers are attached to the places on the performer's body where movement is to be measured, and the movement of each part is measured from the positions of the markers by imaging the markers with cameras.
  • There is also known a non-contact motion capture that does not burden the subject.
  • This motion capture captures the motion of the human body in a non-contact manner by correlating images from multi-viewpoint cameras with a virtual three-dimensional human body model.
  • Documents 1 and 2 are known as methods for matching such multi-view images with models.
  • In Document 1, the posture is estimated by overlaying a three-dimensional model on silhouette images obtained by extracting only the subject from each image and evaluating the overlap.
  • Document 2 is a method of determining the difference between the current image and the next image and using this difference to determine the posture.
  • the above-mentioned non-contact type motion capture that does not use a marker needs to acquire an element for determining the motion from the image, and therefore has not reached a practical level in terms of motion extraction accuracy and analysis processing time.
  • the present invention is intended to solve the above-mentioned conventional problems and to reduce the burden of attaching a marker in motion capture, shorten the analysis processing time, and obtain high extraction accuracy.
  • The present invention reduces the load of marker attachment by reducing the number of markers attached, and narrows down the postures that can be taken by using the position information obtained from the markers together with constraints that relate the components to each other.
  • The number of posture candidates is thereby reduced, which shortens the analysis processing time and yields high extraction accuracy.
  • the present invention includes a motion capture method, each aspect of a motion capture device, and a marker suitable for the motion capture.
  • The posture of an articulated body can be determined by the positions of the components constituting the articulated body and the angles between them.
  • The movement of the articulated body can be regarded as a temporal change of this posture. Therefore, the information that motion capture needs to acquire is the position information and angle information of the components that make up the articulated body.
  • the motion capture of the present invention obtains positional information and angular information of components of an articulated body such as a human body using a small number of markers.
  • each component of the articulated body has constraints on the distance (length) of the components and the angular relationship that can be taken with each other.
  • For example, the upper arm and the forearm constituting the human body are connected by the elbow joint; the lengths of the upper arm and forearm, the distance between them, and the angles that the forearm can take with respect to the upper arm are limited by the structure of the human body, and positions or angles beyond these limits cannot be taken. These distances and angles therefore become constraints on the possible postures of the components.
  • To obtain position information and angle information of the components of an articulated body such as a human body using markers alone, it is necessary to attach a large number of markers to each component.
  • The present invention uses the constraints of each component of the articulated body in order to obtain position information and angle information of the components of an articulated body such as a human body using only a small number of markers.
  • The motion capture of the present invention extracts, from the position information of certain components obtained from a small number of markers, candidate postures of the other components that satisfy these constraints, and acquires angle information by superimposing the posture candidates on images of the articulated body.
  • the number of postures to be evaluated is reduced by narrowing down the postures satisfying the constraint condition determined by the position information from among a large number of postures. In this way, by reducing the number of postures to be evaluated in posture determination, the analysis time is shortened and the accuracy of posture extraction is increased.
  • In a first aspect of the motion capture method of the present invention, position information of specific portions of the components constituting the articulated body is determined, and angle information between the components that satisfies the constraints determined by that position information is obtained based on the position information and constraints that relate the plurality of components to each other.
  • In this way, the position information is acquired from the specific portions of the components and the remaining angle information is acquired using the constraints, so that the posture of the articulated body can be obtained without acquiring position information for all of the components of the articulated body.
  • In the second aspect of the motion capture method of the present invention, for an articulated body in which a plurality of components are connected, distance and angle constraints that relate the components to each other are determined in advance, individually identifiable markers are provided at specific sites of components that are connected with at least one component interposed between them, and an image of the articulated body including these markers is obtained.
  • the position information of the specific part is obtained from the position of the marker in the obtained image.
  • The component angle information is obtained by finding, based on the image of the articulated body, the determined position information, and the constraints that relate the plurality of components to each other, the angle information that satisfies the constraints defined by the position information and corresponds to the image of the articulated body. From the position information and the angle information, the posture and motion of the articulated body are determined.
  • Each marker can be individually identified, and the marker attachment position can be identified by identifying the marker. The components to which markers are attached are placed with other components in between; as a result, the markers attached to an articulated body such as a human body can be thinned out, and the number of markers required for posture acquisition can be reduced.
  • In a further aspect of the motion capture method of the present invention, in determining the posture and motion of an articulated body in which a plurality of components are articulated to each other, a constraint that relates the plurality of components to each other and a plurality of models representing postures of the components of the articulated body are predetermined, and an image is obtained of the articulated body including individually distinguishable markers provided at specific portions of components connected with at least one component interposed between them. The position information of the specific parts is obtained from the positions of the markers in the obtained image.
  • Component angle information is obtained by extracting, from the plurality of models, those that conform to the constraints determined by the position information, extracting from these the model that most closely approximates the image of the articulated body in the image, and taking the angle information from the angles between the components of the extracted model. The posture and motion of the articulated body are determined from this position information and angle information.
  • Conventional motion capture without markers evaluates the superposition with the acquired image for all models, without using these constraints; the number of postures that hierarchically connected components can take is enormous, because it grows with the number of components and as a power of their degrees of freedom. If a more detailed model is set by increasing the number of components constituting the articulated body, the number of possible postures increases further.
  • In the present invention, by contrast, the number of models to be evaluated as posture candidates can be reduced, so the analysis time can be shortened and the analysis approaches real-time motion analysis.
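  • As a purely hypothetical illustration of this combinatorial growth (the component count, degrees of freedom, and angular resolution below are assumed for the example and do not come from the specification), the following sketch compares the unconstrained search space with one pruned by constraints:

```python
# Hypothetical illustration: size of the posture search space.
# Assumed numbers (not from the specification): 10 components,
# 3 rotational degrees of freedom each, angles discretized into 12 steps.
components = 10
dof_per_component = 3
steps_per_dof = 12

unconstrained = steps_per_dof ** (dof_per_component * components)
print(f"unconstrained postures: {unconstrained:.3e}")   # about 2.4e32

# If marker-derived position information plus joint-angle constraints
# restricted each degree of freedom to, say, 3 admissible steps, the
# number of postures that must actually be evaluated collapses:
constrained = 3 ** (dof_per_component * components)
print(f"constrained postures:   {constrained:.3e}")     # about 2.1e14
```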
  • In the motion capture of the present invention applied to a human body, an image of the human body with markers attached to specific parts is acquired, the individual markers are identified and extracted from this image, and the position information of the specific parts of the human body is determined.
  • The angle information of the components of the human body is then determined by image matching between the acquired image and the models extracted, based on the position information of the specific parts, from among the prepared models, and the posture of the human body is determined from the obtained position information and angle information.
  • Models are extracted by selecting at least one model that meets the human posture constraints determined by the position of a specific part.
  • Image alignment between the image and the model is performed by selecting from among the extracted models the model that most closely approximates the human body image in the image.
  • To select an approximate model, for example, with respect to the vertices provided on the components, the distance between the image and the model is obtained for each posture, and the model giving the posture with the shortest distance is selected.
  • Angle information can be obtained from the angle of the component in the selected model.
  • the marker emits light to identify the marker.
  • A first aspect of the motion capture device of the present invention is a motion capture device for determining the posture and motion of an articulated body in which a plurality of components are connected to each other by joints, comprising position detection means for obtaining position information of specific parts of the components constituting the articulated body, and means for obtaining, based on the position information and constraints that relate the plurality of components to each other, angle information between the components that satisfies the constraints defined by the position information; the posture of the articulated body is determined from the detected position information and angle information.
  • A second aspect of the motion capture device of the present invention is a motion capture device for determining the posture and motion of an articulated body in which a plurality of components are articulated to one another, comprising: storage means for pre-storing constraints that relate the plurality of components to one another and a plurality of models representing the postures of the components of the articulated body; image acquisition means for obtaining an image of the articulated body including individually distinguishable markers provided at specific parts of components connected across at least one component; position information detection means for obtaining position information of the specific parts from the positions of the markers in the image; model selection means for selecting, from the plurality of models, the models that meet the constraints of the articulated body determined by the positions of the specific parts; and matching means for extracting, from the selected models, the model closest to the image of the articulated body in the image and obtaining angle information of the components of the articulated body from the angles between the components of the extracted model. The posture and motion of the articulated body are determined from the position information and the angle information.
  • A marker suitable for the motion capture of the present invention is a marker for identifying a specific part of an articulated body in motion capture that determines the posture and motion of an articulated body in which a plurality of components are connected to each other. The marker comprises a first light emitting diode and second light emitting diodes having different emission colors.
  • a light shield can be provided between the first light emitting diode and the second light emitting diode.
  • This light shield can prevent the mixing of the two colors on the image. When two colors are mixed, they are identified as different colors, which makes it difficult to identify markers and causes false recognition.
  • On the image, the light shield prevents the mixing of the two colors and makes marker identification easier.
  • FIG. 1 is a view for explaining the outline of motion capture according to the present invention;
  • FIG. 2 is a flow chart for explaining the procedure of posture determination by the motion capture according to the present invention;
  • FIG. 3 is a schematic view of components for explaining the procedure of posture determination by motion capture;
  • FIG. 4 is a diagram for explaining the components of an articulated body;
  • FIG. 5 is a flow chart for explaining an example of the calculation procedure of position information;
  • FIG. 6 is a color model of the double hexagonal pyramid of the HSI color system;
  • FIG. 7 is a view for explaining the extraction of a specific part and an example of the calculation procedure of position information;
  • FIG. 8 is a view for explaining a motion capture device of the present invention;
  • FIG. 9 is a diagram for explaining the posture and motion of a human body determined by the motion capture of the present invention;
  • FIG. 10 is a hierarchy diagram for explaining the marker mounting positions when the posture and motion of the human body are determined by the motion capture of the present invention;
  • FIG. 11 is a diagram for explaining a marker suitable for the motion capture of the present invention;
  • FIG. 12 is a diagram for explaining a marker suitable for the motion capture of the present invention;
  • FIG. 13 is a view showing an example of a human body model.
  • FIG. 1 is a view for explaining the outline of the motion capture of the present invention.
  • First, multi-view video is acquired as shown in Fig. 1.
  • Multi-view images are acquired by arranging multiple cameras, and the images taken by the cameras are synchronized, for example, in frame units (A in the figure).
  • the acquired multi-view video is processed almost in real time, and the position information and angle information of the articulated object as the object are obtained, and the posture and motion are obtained.
  • the position information is obtained by extracting a specific part of the multi-joint body that is the object from each image of the multi-view video, and from the position (B in the figure).
  • the specific site is any site defined on the component in the articulated body.
  • Since the position information obtained by the motion capture of the present invention relates to specific parts defined on components of the articulated body, the posture of the other components cannot be determined by this position information alone.
  • the present invention uses angle information of each component in addition to position information as an element for determining the posture of the object.
  • Each image of the multi-view video is superimposed on predetermined models (D in the figure), the model matching the image is extracted, and angle information is obtained from the angles of the components of that model (C in the figure).
  • Extracting the model that most closely matches the image by superimposing the image on the models in this way is called matching.
  • the present invention uses the constraint that determines the relationship between the constituent elements of the articulated body in extracting the angle information.
  • In an articulated body, there is a relationship between the specific parts and each component according to the characteristics of the articulated body; once the positions of the specific parts are determined, the other components can only take postures within the range defined by this relationship, which acts as a constraint, and cannot take postures that deviate from it.
  • The motion capture of the present invention reduces the time required for matching by narrowing down the models to be matched using the position information of the specific parts and the above-mentioned constraints.
  • a multi-viewpoint image of the articulated body is acquired by a plurality of cameras arranged around the object, the articulated body.
  • Multi-viewpoint images can be acquired as synchronized frame images.
  • A three-dimensional position and angle are obtained from the plurality of images acquired as multi-viewpoint video, but Fig. 3(a) simply shows a single image acquired from the multi-view video.
  • the articulated body shown in the figure shows a configuration example in which three components are connected so as to be able to change the angular relationship of each other by joints.
  • FIG. 4 is a view for explaining the components of this articulated body.
  • The articulated body 10 is composed of components 11, 12 and 13; component 11 and component 12 are rotatably articulated at joint 14, and component 12 and component 13 are rotatably articulated at joint 15.
  • One end of component 11 is designated as specific site 16, and one end of component 13 is designated as specific site 17.
  • These specific parts are provided on components 11 and 13, which sandwich component 12, but as long as the components are connected, one or more components may be placed in between.
  • The lengths of the components 11, 12 and 13 and the distances between them are determined by the structure of the articulated body, such as a human body.
  • Likewise, the angular relationships among components 11, 12 and 13 are limited to certain ranges.
  • The range of the angle θ10 that component 11 can take at specific site 16 is −θ11 to θ12, and the range of the angle θ40 that component 13 can take at specific site 17 is −θ41 to θ42.
  • The range of the angle θ20 that component 12 can take with respect to component 11 at joint 14 is −θ21 to θ22, and the range of the angle θ30 that component 13 can take with respect to component 12 at joint 15 is −θ31 to θ32.
  • Here, clockwise angles are shown as negative.
  • A marker M1 is provided at specific site 16 and a marker M2 at specific site 17.
  • Thus, each component constituting an articulated body is limited in the postures it can take once the position of a specific part is determined, and these lengths and angles can be regarded as constraints on the possible postures.
  • Although the relationships between the components shown in Fig. 4 are expressed in two dimensions, these relationships and constraints are set in three dimensions using the images obtained from the multi-viewpoint video (step S1).
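  • As an illustration only, the length and angle constraints of FIG. 4 could be recorded as an admissible range per joint and a candidate posture checked against them, as in the sketch below; the symbol names follow the figure, but the numeric ranges and lengths are placeholders assumed for the example, not values from the specification.

```python
# Hypothetical constraint table for the articulated body of FIG. 4.
# Angles in degrees; negative values correspond to clockwise angles,
# as in the figure. The ranges and lengths are illustrative placeholders.
ANGLE_LIMITS = {
    "theta10": (-30.0, 45.0),   # component 11 at specific site 16
    "theta20": (-90.0, 10.0),   # component 12 relative to component 11 (joint 14)
    "theta30": (-10.0, 120.0),  # component 13 relative to component 12 (joint 15)
    "theta40": (-45.0, 45.0),   # component 13 at specific site 17
}

SEGMENT_LENGTHS = {"c11": 0.30, "c12": 0.28, "c13": 0.25}  # metres, illustrative

def satisfies_constraints(posture: dict) -> bool:
    """Return True if every joint angle of the candidate posture lies
    inside its admissible range."""
    return all(lo <= posture[name] <= hi
               for name, (lo, hi) in ANGLE_LIMITS.items())

candidate = {"theta10": 10.0, "theta20": -40.0, "theta30": 60.0, "theta40": 5.0}
print(satisfies_constraints(candidate))  # True for this example posture
```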
  • position information is obtained from the image acquired in the process of step S 1.
  • a specific part is extracted from the image (step S 2), and position information of the extracted specific part is obtained.
  • Figure 3 (b) simply shows the state in which the specific part is extracted from the image of the articulated body in Figure 3 (a).
  • The marker M1 provided at specific portion 16 is extracted to obtain its position information (x1, y1, z1), and the marker M2 provided at specific portion 17 is extracted to obtain its position information (x2, y2, z2).
  • An RGB signal represents signal strength by, for example, gradation values of 0 to 255, but since these gradation values include elements such as brightness and color tone in mixed form, the markers provided at the specific parts cannot be reliably identified and extracted by color from the RGB values alone.
  • The motion capture of the present invention identifies and extracts the markers provided at the specific sites by color-coding the markers, and thereby identifies which specific site each marker belongs to. Therefore, the markers in the image need to be identified by color.
  • The HSI color system has three attributes: hue (H), saturation (S), and intensity (I).
  • Figure 6 is a color model of the double hexagonal pyramid in the HSI color system.
  • Hue (H) can be represented by angle values in the order of yellow, green, cyan, blue and magenta, where red is 0°.
  • the number of hues is an example and may be another number of hues.
  • lightness (V) can also be expressed numerically.
  • Threshold values for hue (H) and lightness are determined in advance according to the colors emitted by the predetermined markers, and the image is converted into a hue signal and a lightness signal.
  • Fig. 7 (a) schematically shows an example of extracting a high-intensity part from the image (step S12).
  • The hue signal (H) is compared with the preset hue thresholds, and the region where a specific part (marker) exists is extracted from the high-brightness parts of the image.
  • Fig. 7 (b) schematically shows an example in which a specific part is extracted from the high-intensity part in the image (step S13).
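  • A minimal sketch of the colour-based extraction of steps S12 and S13, assuming a textbook RGB-to-hue conversion and illustrative threshold values (neither the formula nor the thresholds are taken from the specification):

```python
import numpy as np

def rgb_to_hue_intensity(img_rgb: np.ndarray):
    """Convert an H x W x 3 RGB image (0-255) to hue [0, 360) and
    intensity [0, 1]. Red maps to a hue of 0 degrees."""
    rgb = img_rgb.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc = rgb.max(axis=-1)
    minc = rgb.min(axis=-1)
    delta = np.where(maxc - minc == 0, 1e-9, maxc - minc)

    hue = np.zeros_like(maxc)
    hue = np.where(maxc == r, (60 * (g - b) / delta) % 360, hue)
    hue = np.where(maxc == g, 60 * (b - r) / delta + 120, hue)
    hue = np.where(maxc == b, 60 * (r - g) / delta + 240, hue)

    intensity = (r + g + b) / 3.0
    return hue, intensity

def extract_marker_region(img_rgb, hue_center, hue_tol=15.0, min_intensity=0.6):
    """Steps S12-S13: keep high-brightness pixels whose hue lies within
    hue_tol degrees of the marker's expected hue. Thresholds are
    illustrative assumptions."""
    hue, intensity = rgb_to_hue_intensity(img_rgb)
    hue_diff = np.abs((hue - hue_center + 180) % 360 - 180)  # wrap-around distance
    return (intensity >= min_intensity) & (hue_diff <= hue_tol)
```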
  • the color part of a specific part such as a marker has an area, and therefore, it is detected as an area across multiple pixels on the image. Therefore, in order to determine the position of the specific part, the position of the specific part is calculated from the area of the specific part detected in step S13.
  • any calculation method such as calculation of peak position of luminance, calculation of barycentric position of region, calculation of barycentric position weighted using luminance can be used.
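  • One of the calculation methods listed above, the luminance-weighted barycentric position, could be sketched as follows; the array layout is an assumption made for the example:

```python
import numpy as np

def weighted_centroid(mask: np.ndarray, luminance: np.ndarray):
    """Return the (x, y) position of a marker region in image coordinates
    as the luminance-weighted centre of gravity of the pixels in `mask`.

    mask      : H x W boolean array, True where the marker was detected
    luminance : H x W array of pixel brightness values
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # marker not visible in this view
    w = luminance[ys, xs].astype(np.float64)
    w_sum = w.sum()
    if w_sum == 0:
        return float(xs.mean()), float(ys.mean())  # fall back to plain centroid
    return float((xs * w).sum() / w_sum), float((ys * w).sum() / w_sum)
```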
  • a light emitting body such as a light emitting diode does not necessarily have a uniform light emitting state, and even within the same marker, there may be a difference in color between the light emitting portion and the periphery thereof. Due to this color change, the originally connected area may be divided. In order to compensate for this, processing to expand or contract the area on the image may be performed.
  • A plurality of position information items for these specific parts are obtained from the cameras at different positions in the multi-viewpoint video, and this is performed for each camera and each frame.
  • Figure 7 (c) schematically shows an example of the position of a specific part extracted in the image (step S14).
  • The position information obtained in this way is in the camera coordinate system.
  • the camera coordinate system and the real space coordinate system can be transformed by the combination of the rotation matrix and the translation matrix.
  • non-linearity caused by the optical system between the position on the camera coordinate system and the position projected on the real space coordinate system may be corrected by parameters such as focal length and lens distortion coefficient.
  • Tsai's method is known as a method of converting marker positions in an image from the camera coordinate system to the real coordinate system while taking such distortion into account.
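  • As a simplified illustration of relating the camera coordinate system to the real space coordinate system by a rotation and a translation, the sketch below back-projects a detected marker pixel into a viewing ray in world coordinates with a pinhole model; lens distortion is ignored, and the calibration parameters (for example from Tsai's method) are assumed to be given:

```python
import numpy as np

def pixel_to_world_ray(u, v, K, R, t):
    """Back-project pixel (u, v) into a ray in world coordinates.

    K : 3x3 intrinsic matrix (focal lengths and principal point)
    R : 3x3 rotation, t : 3-vector translation, with the assumed
        convention x_cam = R @ x_world + t

    Returns (origin, direction): the camera centre in world coordinates
    and a unit direction vector of the viewing ray.
    """
    # Ray direction in camera coordinates (pinhole model, no distortion).
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Camera centre and ray direction expressed in world coordinates.
    origin = -R.T @ t
    direction = R.T @ d_cam
    return origin, direction / np.linalg.norm(direction)
```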
  • the calculation of the three-dimensional position of the marker position in the real coordinate system corresponds to, for example, finding the intersection of a plurality of straight lines extending from each camera image in FIG. 7 (d).
  • errors in the camera parameters may cause a shift in the angle of the straight line, and therefore may not intersect at one point.
  • the middle point on the straight line at which the distance between the straight lines is minimum may be determined as the point of intersection of the straight lines.
  • The midpoint is determined for each combination of the n straight lines, and the average of these midpoints is set as the three-dimensional position of the marker (step S16). Thereby, position information is obtained for the specific part (step S3).
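  • The procedure just described, taking the midpoint of the shortest segment between each pair of viewing rays and averaging, might look as follows; this is a standard closest-point construction written out as an assumed illustration of step S16:

```python
import numpy as np
from itertools import combinations

def closest_midpoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between lines p1 + s*d1 and p2 + t*d2."""
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # nearly parallel rays
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

def marker_position_3d(rays):
    """rays: list of (origin, direction) tuples, one per camera view.
    Returns the average of the pairwise midpoints (step S16)."""
    midpoints = [closest_midpoint(o1, d1, o2, d2)
                 for (o1, d1), (o2, d2) in combinations(rays, 2)]
    return np.mean(midpoints, axis=0)
```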
  • In steps S4 to S8, the component angle information is determined.
  • a possible model is selected.
  • the model is obtained in advance for various postures of the articulated body as the object, and the sizes and tolerances of the lengths, the distance relationships, and the angular relationships of each component of the articulated body are also determined.
  • These constraints may, for example, define the range of mutually allowable angles of each component with respect to the position of a specific part.
  • the tolerance range can be expressed numerically or as a function representing a mutual relationship.
  • Figure 3(c) schematically shows the model selection. For example, if there are models c-1, c-2 and c-3 for the components, a model that satisfies the constraints is chosen from among them. Here, for example, a case is shown where models are selected with the possible angular range of each component as the constraint.
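  • The model selection described above can be pictured as filtering a pre-stored set of posture models by the angle ranges admitted once the marker positions are known; the dictionary layout, model names and numeric values below are assumptions for illustration only:

```python
# Hypothetical posture models c-1, c-2, c-3: each stores the joint angles
# of its components (degrees). allowed_ranges stands for the constraint
# derived from the marker position information (step S3).
models = {
    "c-1": {"theta20": -80.0, "theta30": 30.0},
    "c-2": {"theta20": -40.0, "theta30": 60.0},
    "c-3": {"theta20":  20.0, "theta30": 150.0},
}

allowed_ranges = {"theta20": (-60.0, 0.0), "theta30": (0.0, 90.0)}

def select_models(models, allowed_ranges):
    """Step S4: keep only the models whose every joint angle lies inside
    the range permitted by the constraints."""
    selected = {}
    for name, angles in models.items():
        if all(lo <= angles[j] <= hi for j, (lo, hi) in allowed_ranges.items()):
            selected[name] = angles
    return selected

print(select_models(models, allowed_ranges))  # only "c-2" survives here
```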
  • Next, the models selected in step S4 are superimposed on the image to match the image and the models.
  • For matching with a model, for example, a three-dimensional model is placed in a virtual space and the distribution of lightness and saturation on the model is determined as a histogram while changing the posture of the model; the corresponding histogram of the distribution of lightness and saturation is obtained from the image, and the degree of approximation between these histograms is evaluated (step S5).
  • A nonlinear least squares method, a genetic algorithm or the like can be used to minimize the evaluation function.
  • For example, an evaluation value can be obtained by assuming certain parameter values (angles or positions), changing the parameters, repeating the evaluation based on the result, and finding the minimum evaluation value.
  • This evaluation function represents the degree of deviation determined by comparing the silhouette image and optical flow results with the model (step S6). The model closest to the image is found by matching the image against all of the selected models.
  • Figure 3 (d) schematically shows this matching process (step S7).
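  • Steps S5 to S7 amount to scoring every selected model against the images and keeping the best one. The sketch below leaves the evaluation function abstract (it could be the silhouette or histogram comparison described above) and simply searches the selected models exhaustively; the nonlinear least squares and genetic-algorithm variants mentioned earlier are not reproduced here:

```python
def match_models(selected_models, images, evaluate):
    """Steps S5-S7: superimpose each candidate model on the multi-view
    images, compute an evaluation value (degree of deviation), and return
    the model with the smallest value.

    evaluate(model, images) -> float is assumed to be supplied by the
    caller, e.g. a silhouette-overlap or lightness/saturation-histogram
    distance; smaller means a better fit.
    """
    best_name, best_score = None, float("inf")
    for name, model in selected_models.items():
        score = evaluate(model, images)
        if score < best_score:
            best_name, best_score = name, score
    return best_name, best_score

# The angle information (step S8) is then read directly from the joint
# angles stored in the winning model.
```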
  • Figure 3(e) shows how angle information (θ1 to θ4) is obtained from the obtained model (step S8).
  • The posture of the articulated body is determined from the position information determined in step S3 and the angle information determined in step S8 (step S9). By performing this posture determination for each acquired frame, the motion of the articulated body can be determined in almost real time.
  • The motion capture device 1 comprises image acquisition means 2 that acquires multi-viewpoint images of an articulated body such as a human body, and calculation means 3 that obtains position information and angle information of the components from these images.
  • The posture and movement of the articulated body are determined from the position information and the angle information.
  • the image acquisition means 2 obtains an image of an articulated body including individually distinguishable markers provided at specific parts of constituent elements connected across at least one constituent element.
  • a plurality of cameras are arranged to acquire multi-view images, and each image frame is synchronized.
  • The calculation means 3 comprises detection means 3a for detecting the specific parts (markers) in the image, position information detection means 3b for obtaining position information of the specific parts (markers), storage means 3c for storing in advance constraints that relate the plurality of components to one another and a plurality of models representing the postures of the components of the articulated body, model selection means 3d for selecting, from the plurality of models, the models that fit the constraints of the articulated body determined by the positions of the specific parts, and matching means 3e for extracting, from among the selected models, the model that most closely approximates the image of the articulated body in the image and obtaining angle information of the components of the articulated body from the angles between the components of the extracted model.
  • Each means provided in the computing means 3 is for describing each function performed by the computing means, and is not necessarily provided with hardware for executing each function, and can be implemented by software.
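  • Since each of these means may be realised in software, one possible (purely illustrative) way to express them is as one processing step per frame; the class, method names and signatures below are hypothetical and only mirror means 3a to 3e:

```python
class MotionCaptureProcessor:
    """Software sketch of calculation means 3 (means 3a-3e become methods).
    The helper implementations are assumed to exist elsewhere."""

    def __init__(self, models, constraints):
        self.models = models            # storage means 3c: posture models
        self.constraints = constraints  # storage means 3c: constraint conditions

    def process_frame(self, frames, cameras):
        markers = self.detect_markers(frames)           # means 3a
        positions = self.triangulate(markers, cameras)  # means 3b
        candidates = self.select_models(positions)      # means 3d
        best_model = self.match(candidates, frames)     # means 3e
        # Posture = position information plus angle information.
        return positions, best_model.joint_angles

    # Stand-ins for means 3a, 3b, 3d and 3e; bodies intentionally omitted.
    def detect_markers(self, frames): ...
    def triangulate(self, markers, cameras): ...
    def select_models(self, positions): ...
    def match(self, candidates, frames): ...
```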
  • the human body can be regarded as an articulated body in which a plurality of components are connected by joints, and these components can be expressed hierarchically.
  • the components can be set arbitrarily.
  • Figures 9 and 10 show one model, which can be set by combining other components.
  • The lumbar region (waist) is the uppermost layer.
  • The upper body is connected below it in the order of the chest and head, the upper arms (left upper arm and right upper arm), the forearms (left forearm and right forearm), and the hands (left hand and right hand).
  • The lower body is connected below it in the order of the thighs (left thigh and right thigh), the shins (left shin and right shin), and the feet (left foot and right foot).
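  • For illustration, this hierarchy could be encoded as a simple parent-to-children tree as sketched below; the node names follow the description of FIGS. 9 and 10 above, while the data structure itself is an assumption, not the specification's format:

```python
# Hypothetical encoding of the hierarchical human-body model of FIGS. 9 and 10.
BODY_HIERARCHY = {
    "waist":           ["chest", "left thigh", "right thigh"],
    "chest":           ["head", "left upper arm", "right upper arm"],
    "left upper arm":  ["left forearm"],
    "right upper arm": ["right forearm"],
    "left forearm":    ["left hand"],
    "right forearm":   ["right hand"],
    "left thigh":      ["left shin"],
    "right thigh":     ["right shin"],
    "left shin":       ["left foot"],
    "right shin":      ["right foot"],
}

def descendants(part, hierarchy=BODY_HIERARCHY):
    """Components below `part` in the hierarchy (depth-first order)."""
    out = []
    for child in hierarchy.get(part, []):
        out.append(child)
        out.extend(descendants(child, hierarchy))
    return out

print(descendants("waist"))
```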
  • This model is prepared for the human body to be measured, and each layer represented by the model corresponds to a component of the articulated body described above. For each component, constraints on the length, the distance between components, and the angles are set. In motion capture, multi-view images of the human body are acquired, and the posture and motion of the human body are obtained by determining the position information and angle information of the specific parts of each component from the captured images according to the above-described process.
  • a marker is attached to a specific part of the human body in order to obtain position information of the specific part.
  • The markers are not attached to all components of the human body; a small number of markers are attached to components selected from among the components of the model.
  • The number of markers to be attached is chosen so as not to burden the subject, and the marker attachment positions are chosen at sites where the markers are always captured in the image even when the human body takes various postures.
  • Furthermore, an appropriate hierarchical spacing is provided between the components to which markers are attached, in consideration of the computational complexity of the model-image matching process. While matching between models and images reduces the number of candidate models through the constraints determined by the position information of the specific parts, the number of possible models also increases as the number of components between specific parts increases. Therefore, in determining the posture and motion of the human body, markers are placed on specific parts at appropriate intervals that reduce the total amount of computation.
  • For example, a marker is attached to the waist at the top of the hierarchy, markers are attached to the joints between the upper arm and the forearm in the upper body, and markers are attached to the joints between the thigh and the shin in the lower body.
  • The waist markers are attached to both sides, and the joints between the upper arm and forearm and between the thigh and shin are fitted with markers on both the left and the right.
  • The marker 20 comprises a plurality of light emitting diodes (LEDs) including one first light emitting diode 21 and a plurality of second light emitting diodes 22.
  • the first light emitting diode 21 is used to obtain position information of a specific site
  • the second light emitting diode 22 is used to identify each marker and distinguish the specific site from other specific sites. Use.
  • The first light emitting diode 21 and the second light emitting diodes 22 have different emission colors, and marker identification is performed by the combination of emission colors.
  • The combinations of emission colors are selected so as to make the hue angles symmetrical with each other in the HSI color system, and to be easily identified by the threshold values.
  • the first light emitting diode 21 is disposed above the base 23, and the plurality of second light emitting diodes 22 are disposed in a ring around the lower periphery thereof.
  • A power supply 25 for the light emitting diodes is provided on the base 23.
  • the power source 25 can use, for example, a battery.
  • Reference numeral 26 in the figure is a switch for controlling the connection between the power supply 25 and the light emitting diode.
  • For example, a removable insulator is used as the switch; removing this insulator causes the light emitting diodes to emit light.
  • the second light emitting diodes 22 are arranged at equal angular intervals around the first light emitting diode 21 such that the light emitting directions are equal angular intervals.
  • a light emitting diode usually has directivity. Since the human body takes various postures, it is desirable that the light emission of the marker be non-directional in order to make the marker well reflected to the camera. Therefore, the light emitting directions of a plurality of light emitting diodes are arranged at equal angular intervals.
  • The five second light emitting diodes 22 are arranged at angular intervals of, for example, 72°.
  • The number of second light emitting diodes 22 can be set arbitrarily, taking into consideration conditions such as the relationship between the physical size of the light emitting diodes and the size of the marker, which should be small and lightweight, and the detection conditions in the acquired image.
  • The marker 20 includes a light-transmitting cover 24 that internally houses the light emitting diodes 21 and 22, the power supply 25 and the like.
  • The light of the light emitting diodes 21 and 22 is emitted to the outside through the cover 24.
  • the inner surface or outer surface of the cover 24 may be a scattering surface, or the material constituting the cover 24 may be a light scatterer. By making the cover 24 scatter, the light emitted from the light emitting diode is scattered by the cover 24 and the reflection to the camera can be improved.
  • A light shielding body 28 is provided between the first light emitting diode 21 and the second light emitting diodes 22 so that its installation position can be changed.
  • The light shield 28 is provided so that the light emission of the first light emitting diode 21 and the light emission of the second light emitting diodes 22 do not mix on the image and produce a color different from the emission colors of the light emitting diodes. If different colors occur in the marker portion on the image, it may become difficult to identify the marker, or the position of the marker may be detected incorrectly.
  • The light shield 28 is positioned between the light emitted from the first light emitting diode 21 and the light emitted from the second light emitting diodes 22, thereby separating the images of the first light emitting diode 21 and the second light emitting diodes 22 projected onto the image and preventing the two lights from mixing.
  • The light shielding body 28 is an annular body having an opening at its central portion.
  • For example, an annular recessed portion 27 may be provided around the outer peripheral surface of the cover 24, and the light shield 28 may be fitted into the recessed portion 27.
  • The recessed portion 27 may be formed in multiple stages in the vertical direction of the cover 24 so that the mounting position of the light shield on the cover 24 can be changed.
  • Figures 12(a) and (c) show a state in which the light shield 28 is attached to the lower part of the recessed portion 27.
  • Figures 12(b) and (d) show the light shield 28 attached to the upper part of the recessed portion 27.
  • By changing the mounting position of the light shield, the area of the annular portion of the second light emitting diodes 22 displayed in the image can be increased or decreased.
  • Thereby, the marker image can be adjusted appropriately according to the imaging environment, such as the distance between the human body and the camera, the state of the background color, and the state of illumination.
  • The number of markers attached can be reduced, the burden on the subject can be reduced, and position information and angle information can be acquired with a small burden.
  • The processing time can be shortened, and real-time processing of posture and motion can be approached. Also, if the processing time is the same, the measurement accuracy can be improved.
  • the burden of attaching the marker can be reduced, the analysis processing time can be shortened, and high extraction accuracy can be obtained.
  • the present invention can be used for analysis of mobile objects such as people and objects, and formation of a virtual space, and can be applied to the fields of industry, medicine, sports and the like.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

According to the invention, in order to obtain position information and angle information of the components of an articulated body, for example a human body, by means of a small number of markers, the constraints of each component of the articulated body are used. A candidate posture of another component satisfying the constraints is extracted from the position information of certain components obtained from the small number of markers, and the angle information is then acquired by superimposing the images of the candidate posture and of the articulated body.
PCT/JP2003/016080 2003-04-22 2003-12-16 Marqueur pour capturer des images WO2004094943A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2004571106A JPWO2004094943A1 (ja) 2003-04-22 2003-12-16 モーションキャプチャ方法、モーションキャプチャ装置、及びモーションキャプチャ用マーカ
AU2003289108A AU2003289108A1 (en) 2003-04-22 2003-12-16 Motion capturing method, motion capturing device, and motion capturing marker

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2003-116631 2003-04-22
JP2003116631 2003-04-22

Publications (1)

Publication Number Publication Date
WO2004094943A1 true WO2004094943A1 (fr) 2004-11-04

Family

ID=33307995

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2003/016080 WO2004094943A1 (fr) 2003-04-22 2003-12-16 Marqueur pour capturer des images

Country Status (3)

Country Link
JP (1) JPWO2004094943A1 (fr)
AU (1) AU2003289108A1 (fr)
WO (1) WO2004094943A1 (fr)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007071660A (ja) * 2005-09-06 2007-03-22 Toshiba Corp 遠隔検査における作業位置計測方法およびその装置
JP2008537815A (ja) * 2005-03-17 2008-09-25 本田技研工業株式会社 クリティカルポイント解析に基づくポーズ推定
JP2010025855A (ja) * 2008-07-23 2010-02-04 Sakata Denki 軌道変位測定装置
JP2010524113A (ja) * 2007-04-15 2010-07-15 エクストリーム リアリティー エルティーディー. 人−機械インターフェース装置システム及び方法
JP2011007578A (ja) * 2009-06-24 2011-01-13 Fuji Xerox Co Ltd 位置計測システム、位置計測用演算装置およびプログラム
US8432390B2 (en) 2004-07-30 2013-04-30 Extreme Reality Ltd Apparatus system and method for human-machine interface
US8462199B2 (en) 2005-10-31 2013-06-11 Extreme Reality Ltd. Apparatus method and system for imaging
US8548258B2 (en) 2008-10-24 2013-10-01 Extreme Reality Ltd. Method system and associated modules and software components for providing image sensor based human machine interfacing
JP2014013256A (ja) * 2013-09-13 2014-01-23 Sakata Denki 軌道変位測定装置
US8872899B2 (en) 2004-07-30 2014-10-28 Extreme Reality Ltd. Method circuit and system for human to machine interfacing by hand gestures
US8878779B2 (en) 2009-09-21 2014-11-04 Extreme Reality Ltd. Methods circuits device systems and associated computer executable code for facilitating interfacing with a computing platform display screen
US8928654B2 (en) 2004-07-30 2015-01-06 Extreme Reality Ltd. Methods, systems, devices and associated processing logic for generating stereoscopic images and video
US9046962B2 (en) 2005-10-31 2015-06-02 Extreme Reality Ltd. Methods, systems, apparatuses, circuits and associated computer executable code for detecting motion, position and/or orientation of objects within a defined spatial region
US9177220B2 (en) 2004-07-30 2015-11-03 Extreme Reality Ltd. System and method for 3D space-dimension based image processing
US9218126B2 (en) 2009-09-21 2015-12-22 Extreme Reality Ltd. Methods circuits apparatus and systems for human machine interfacing with an electronic appliance
JP2017101961A (ja) * 2015-11-30 2017-06-08 株式会社ソニー・インタラクティブエンタテインメント 発光デバイス調整装置および駆動電流調整方法
JP2019141262A (ja) * 2018-02-19 2019-08-29 国立大学法人 筑波大学 武道動作解析方法
JP2020160568A (ja) * 2019-03-25 2020-10-01 日本電信電話株式会社 映像同期装置、映像同期方法、プログラム
JP2023521952A (ja) * 2020-07-27 2023-05-26 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド 3次元人体姿勢推定方法及びその装置、コンピュータデバイス、並びにコンピュータプログラム

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6629055B2 (ja) * 2015-11-30 2020-01-15 株式会社ソニー・インタラクティブエンタテインメント 情報処理装置および情報処理方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000258123A (ja) * 1999-03-12 2000-09-22 Sony Corp 画像処理装置および方法、並びに提供媒体
JP2003035515A (ja) * 2001-07-23 2003-02-07 Nippon Telegr & Teleph Corp <Ntt> 三次元位置検出方法,装置および三次元位置検出用のマーカ
JP2003109015A (ja) * 2001-10-01 2003-04-11 Masanobu Yamamoto 身体動作測定方式

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000258123A (ja) * 1999-03-12 2000-09-22 Sony Corp 画像処理装置および方法、並びに提供媒体
JP2003035515A (ja) * 2001-07-23 2003-02-07 Nippon Telegr & Teleph Corp <Ntt> 三次元位置検出方法,装置および三次元位置検出用のマーカ
JP2003109015A (ja) * 2001-10-01 2003-04-11 Masanobu Yamamoto 身体動作測定方式

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8432390B2 (en) 2004-07-30 2013-04-30 Extreme Reality Ltd Apparatus system and method for human-machine interface
US9177220B2 (en) 2004-07-30 2015-11-03 Extreme Reality Ltd. System and method for 3D space-dimension based image processing
US8928654B2 (en) 2004-07-30 2015-01-06 Extreme Reality Ltd. Methods, systems, devices and associated processing logic for generating stereoscopic images and video
US8872899B2 (en) 2004-07-30 2014-10-28 Extreme Reality Ltd. Method circuit and system for human to machine interfacing by hand gestures
JP2008537815A (ja) * 2005-03-17 2008-09-25 本田技研工業株式会社 クリティカルポイント解析に基づくポーズ推定
JP4686595B2 (ja) * 2005-03-17 2011-05-25 本田技研工業株式会社 クリティカルポイント解析に基づくポーズ推定
US8085296B2 (en) 2005-09-06 2011-12-27 Kabushiki Kaisha Toshiba Method and apparatus for measuring an operating position in a remote inspection
JP2007071660A (ja) * 2005-09-06 2007-03-22 Toshiba Corp 遠隔検査における作業位置計測方法およびその装置
US8462199B2 (en) 2005-10-31 2013-06-11 Extreme Reality Ltd. Apparatus method and system for imaging
US8878896B2 (en) 2005-10-31 2014-11-04 Extreme Reality Ltd. Apparatus method and system for imaging
US9131220B2 (en) 2005-10-31 2015-09-08 Extreme Reality Ltd. Apparatus method and system for imaging
US9046962B2 (en) 2005-10-31 2015-06-02 Extreme Reality Ltd. Methods, systems, apparatuses, circuits and associated computer executable code for detecting motion, position and/or orientation of objects within a defined spatial region
JP2010524113A (ja) * 2007-04-15 2010-07-15 エクストリーム リアリティー エルティーディー. 人−機械インターフェース装置システム及び方法
KR101379074B1 (ko) * 2007-04-15 2014-03-28 익스트림 리얼리티 엘티디. 인간 기계 인터페이스를 위한 장치 시스템 및 방법
JP2010025855A (ja) * 2008-07-23 2010-02-04 Sakata Denki 軌道変位測定装置
US8548258B2 (en) 2008-10-24 2013-10-01 Extreme Reality Ltd. Method system and associated modules and software components for providing image sensor based human machine interfacing
JP2011007578A (ja) * 2009-06-24 2011-01-13 Fuji Xerox Co Ltd 位置計測システム、位置計測用演算装置およびプログラム
US8928749B2 (en) 2009-06-24 2015-01-06 Fuji Xerox Co., Ltd. Position measuring system, processing device for position measurement, processing method for position measurement, and computer readable medium
US8878779B2 (en) 2009-09-21 2014-11-04 Extreme Reality Ltd. Methods circuits device systems and associated computer executable code for facilitating interfacing with a computing platform display screen
US9218126B2 (en) 2009-09-21 2015-12-22 Extreme Reality Ltd. Methods circuits apparatus and systems for human machine interfacing with an electronic appliance
JP2014013256A (ja) * 2013-09-13 2014-01-23 Sakata Denki 軌道変位測定装置
JP2017101961A (ja) * 2015-11-30 2017-06-08 株式会社ソニー・インタラクティブエンタテインメント 発光デバイス調整装置および駆動電流調整方法
JP2019141262A (ja) * 2018-02-19 2019-08-29 国立大学法人 筑波大学 武道動作解析方法
JP2020160568A (ja) * 2019-03-25 2020-10-01 日本電信電話株式会社 映像同期装置、映像同期方法、プログラム
WO2020195815A1 (fr) * 2019-03-25 2020-10-01 日本電信電話株式会社 Dispositif de synchronisation d'image, procédé de synchronisation d'image, et programme
JP7067513B2 (ja) 2019-03-25 2022-05-16 日本電信電話株式会社 映像同期装置、映像同期方法、プログラム
JP2023521952A (ja) * 2020-07-27 2023-05-26 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド 3次元人体姿勢推定方法及びその装置、コンピュータデバイス、並びにコンピュータプログラム

Also Published As

Publication number Publication date
JPWO2004094943A1 (ja) 2006-07-13
AU2003289108A1 (en) 2004-11-19

Similar Documents

Publication Publication Date Title
WO2004094943A1 (fr) Marqueur pour capturer des images
JP4282216B2 (ja) 3次元位置姿勢センシング装置
US9816809B2 (en) 3-D scanning and positioning system
US20160134860A1 (en) Multiple template improved 3d modeling of imaged objects using camera position and pose to obtain accuracy
EP3069100B1 (fr) Dispositif de mappage 3d
JP2014199584A (ja) 画像処理装置および画像処理方法
US10782780B2 (en) Remote perception of depth and shape of objects and surfaces
JP6255125B2 (ja) 画像処理装置、画像処理システム、および画像処理方法
JP7194015B2 (ja) センサシステム及び距離測定方法
WO2015054426A1 (fr) Système de capture de mouvement par caméra unique
JP2010256253A (ja) 三次元計測用画像撮影装置及びその方法
CN106546230B (zh) 定位点布置方法及装置、测定定位点三维坐标的方法及设备
JP2010256252A (ja) 三次元計測用画像撮影装置及びその方法
CN104680570A (zh) 一种基于视频的动作捕捉系统及方法
KR20180094253A (ko) 사용자 자세 추정 장치 및 방법
JP2005140547A (ja) 3次元計測方法、3次元計測装置、及びコンピュータプログラム
JP2004086929A5 (fr)
WO2019156990A1 (fr) Perception à distance de profondeur et de forme d&#39;objets et de surfaces
JP4590780B2 (ja) カメラ校正用立体チャート、カメラの校正用パラメータの取得方法、カメラの校正用情報処理装置、およびプログラム
JP6374812B2 (ja) 三次元モデル処理装置およびカメラ校正システム
CN109410272A (zh) 一种变压器螺母识别与定位装置及方法
JPH10151591A (ja) 識別装置及び方法、位置検出装置及び方法、ロボツト装置並びに色抽出装置
JP2003023562A (ja) 画像撮影システムおよびカメラシステム
JP3860287B2 (ja) 動き抽出処理方法,動き抽出処理装置およびプログラム記憶媒体
JP3052926B2 (ja) 三次元座標計測装置

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2004571106

Country of ref document: JP

122 Ep: pct application non-entry in european phase