WO2004042548A1 - Motion detection device - Google Patents
Motion detection device
- Publication number
- WO2004042548A1 (PCT/JP2003/014070)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- feature point
- information
- motion detection
- detection device
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/014—Hand-worn input/output arrangements, e.g. data gloves
Definitions
- the present invention relates to a motion detection device for detecting the position, posture and motion of an object.
- a technology has been known in which sensors are attached to an operator's hand to detect the shape and movement of the hand and generate a signal based on the detection result.
- a three-axis angular velocity sensor and a three-axis acceleration sensor for detecting the position and posture of the back of the hand are arranged, one-axis angular velocity sensors that detect finger bending are located at the index finger end, middle finger end, thumb end and center, and the shape of the hand is estimated from the position and posture of the back of the hand and the postures of the fingers.
- an operation input device has been proposed that generates a command signal based on a gesture such as the shape and movement of a hand.
- in the technique proposed in the above-mentioned Japanese Patent Application Laid-Open No. 2000-133205, the position and posture of the back of the operator's hand are determined by an inertial sensor including a three-axis angular velocity sensor and a three-axis acceleration sensor.
- the posture of each finger is detected by a one-axis angular velocity sensor at the fingertip, the shape of the hand is estimated from these measurements, and a command signal is generated based on the shape of the hand.
- since there is no need to determine sensor positions to suit the size of an individual's hand, the device is versatile and can be easily used by anyone.
- however, it was difficult to distinguish a gesture made with the hand stopped from a gesture in which the hand moves at a substantially constant speed. For this reason, an image sensor that detects the movement of natural feature points in two axis directions within the captured image
- has been combined with the inertial sensors, and a method has been considered in which the posture of the back of the hand is correctly obtained by comparing the movement of the surrounding natural images with the inertial sensor output.
- however, a system using such an image sensor is also constrained by the limited field of view of the image sensor.
- the present invention has been made to overcome the problems described above, and its object is to provide a motion detection device capable of accurately recognizing the spatial position, posture and motion of the object on which it is mounted.
- the motion detection device according to the first invention is a motion detection device for detecting the position and orientation of a measurement object, comprising: inertial motion detecting means for detecting the inertial motion of the object to be measured using at least one of an acceleration sensor and an angular velocity sensor; imaging means for capturing images around the object to be measured; image comparing means for comparing images captured at different points in time;
- camera position and attitude acquiring means for detecting the position and attitude of the object to be measured using the result of the comparison by the image comparing means; and
- spatial position and orientation acquisition means for detecting the position and orientation of the object in space from the outputs of the inertial motion detecting means and the camera position and attitude acquiring means.
- the signal of the angular velocity sensor or the acceleration sensor is fetched by the inertial motion detecting means, and the attitude information of these sensors is obtained.
- the surrounding image is photographed by the imaging means, and the feature points are extracted from the continuously captured images.
- the image comparing means determines the movement of the feature points by comparing their positions in temporally different frames; furthermore,
- the position and orientation of the object to be measured in space are determined based on the inertial motion of the measurement object determined by the inertial motion detecting means and the position and orientation information detected by the camera position and attitude acquiring means.
- the motion detection device according to the second invention is characterized in that, in the first invention, the imaging means further comprises an optical means for projecting an image, and
- the optical means includes at least four plane mirrors and has an optical system for projecting an image around the object to be measured onto the imaging means by the at least four plane mirrors.
- the peripheral image is projected onto the imaging means by using the at least four mirrors, whereby four images orthogonal to the lens optical axis and an image in the lens optical axis direction, five images in total, are captured simultaneously.
- the motion detection device according to the third invention is directed to the first invention, wherein the imaging means further includes optical means for projecting an image.
- the optical means has a curved mirror surface, and has an optical system for projecting an image around the object to be measured onto the imaging means.
- an image in all directions orthogonal to the optical axis of the imaging lens system, an image in the optical axis direction, and an image transmitted through the center are simultaneously projected.
- the motion detection device according to the fourth invention is characterized in that, in the first invention, the imaging means further comprises an optical means for projecting an image, the optical means comprising an optical system for projecting an image around the object to be measured onto the imaging means by a fisheye lens.
- the image of the entire sky is projected onto the imaging means by the fisheye lens.
- the motion detection device is characterized in that in the second to fourth inventions, the motion detection device is mounted on a body part and detects a position and a posture of the mounted portion.
- the motion detection device is characterized in that in the second to fourth inventions, the motion detection device is attached to the back of the hand to detect the position and posture of the hand.
- the motion detection device is characterized in that the motion detection device according to the fourth aspect of the present invention is mounted on the head and detects the position and posture of the head.
- the position and posture of the body wearing the motion detecting device are detected.
- the motion detection device according to the eighth invention is characterized in that the image comparing means uses the posture information at the time of registration and the current posture information from the inertial motion detecting means.
- it has feature point movement estimating means for estimating the movement position and search range of a feature point from the position of the registered feature point and the relationship of the projective transformation of the imaging means, and performs image matching processing to search for the feature points in the current frame.
- the feature point being tracked is searched for in the current frame by the feature point movement estimating means included in the image comparing means.
- the motion detection device according to the ninth invention is the motion detection device according to the eighth invention, wherein the feature point movement estimating means estimates the moved pixel position of each pixel from the position of the feature point at the time of registration, the image information of each pixel around the feature point, the current position and orientation information detected by the inertial motion detecting means, and the projective transformation of the imaging means.
- the estimated feature point image after movement is scanned within the search range to perform the image matching processing.
- the movement-estimated image position is obtained for each pixel, and the information of each pixel around the feature point is tracked.
- in the motion detection device according to the tenth invention, the feature point movement estimating means estimates, for the state at the time of registration,
- the position of each pixel from the relationship of the projective transformation of the imaging means, and the image matching processing is performed by scanning the estimated feature point image within the search range.
- the position of each pixel is estimated and its position coordinates are tracked.
- the search range is determined based on the current position and orientation information and the movement position of each pixel estimated from the information from the imaging means, and the scanning is performed within that range to carry out the matching processing.
- the motion detection device according to the eleventh invention is characterized in that, in the first invention, the camera position/posture acquiring means obtains the motion parameters of each feature point, and,
- based on the camera position and orientation information obtained from the depth information estimated for each feature point, computes the error between the motion vector measured from the image of each feature point and the motion vector calculated from the camera position and orientation information,
- and further comprises invalid feature point determining means for determining that a feature point whose error exceeds a threshold value is invalid.
- the camera position and posture acquiring means estimates
- the position and orientation information of the camera from the feature point information, its motion parameters, and the depth information derived based on each feature point.
- the motion parameters measured from the image of each feature point are compared with the motion parameters estimated from the camera position and orientation information; if the compared error exceeds a certain threshold, the feature point is determined to be an invalid feature point.
- the motion detection device according to the twelfth invention is the motion detection device according to the eleventh invention, further comprising invalid feature point tracking means for tracking the position information of a feature point determined to be invalid by the invalid feature point determining means until that feature point disappears.
- the position information of the invalid feature point is tracked until it disappears.
- the position of the invalid feature point is thus always known.
- the motion detection device according to the thirteenth invention is the motion detection device according to the first invention, wherein the camera position/posture acquisition means, at the time of new registration of a feature point,
- registers the three-dimensional information viewed from the current camera frame as the feature point information, and tracks the feature point between its appearance and disappearance.
- when another feature point appears, the current camera frame is acquired by inheriting this information.
- the three-dimensional information of the feature point is tracked and updated during the period from the appearance of the feature point to its disappearance.
- the three-dimensional information is inherited.
- the motion detection device according to the fourteenth invention is the motion detection device according to the first invention, wherein the imaging means further comprises optical means for projecting an image, and further comprises image projection section mask means for identifying and classifying the incident light direction of the optical means.
- a mask is provided for identifying and classifying the incident light direction of the optical system that projects the image onto the imaging means.
- the motion detection device according to the fifteenth invention is the motion detection device according to the first invention, wherein the image comparing means analyzes the depth information of the feature point image, and further comprises adaptive feature point image setting means for switching the size of the feature point image at the time of registration and the feature point search range or the number of registered feature points.
- the depth information of the feature point image is analyzed by the adaptive feature point image setting means, and the size of the feature point image at registration and the search range of the feature point or the number of registered feature points are switched according to the analyzed value.
- the motion detection device according to the sixteenth invention is characterized in that the motion detection device according to the fifth invention captures an image of a feature point identification mark of known size or interval at initialization,
- and further comprises initialization means for obtaining and registering the size information of the captured identification mark image and the depth information of the feature points.
- a feature point identification mark of known size or interval is captured at the time of initialization, and the depth information is obtained and registered.
- the motion detection device according to the seventeenth invention is characterized in that, in the fifth invention, it has initialization means for initially registering the depth information of feature points from an image of a feature point identification mark placed at a predetermined position, captured from a position at a known distance from the identification mark.
- the image of the feature point identification mark placed at a predetermined position is captured from a known distance from the identification mark.
- the motion detection device according to the eighteenth invention is characterized in that, in the fifth invention, an image of the body is captured at the time of initialization,
- and the depth information of the feature points is obtained and registered from the known shape and size of a part or the whole of the body.
- the motion detection device according to the nineteenth invention is characterized in that, in the fifth invention, at the time of initialization, an image of a characteristic part of the body is captured from a known distance, and it further comprises initialization means for initially registering the depth information of the feature points.
- an image of the body feature at a known distance is captured in the input image from the mounting portion of the motion detection device.
- the motion detection device according to the twentieth invention is the motion detection device according to the fifth invention, wherein, at the time of initialization, an image in a predetermined direction and at a predetermined distance is captured from the mounting portion, and it comprises initialization means for initially registering the depth information of the feature points in that image.
- an image in a predetermined direction and distance is captured, and the depth information of the feature points is registered based on the captured image.
- the motion detection device according to the twenty-first invention provides the motion detection device according to the fifth invention, in which, at the time of initialization, an operation of moving the mounting portion by a predetermined distance is performed.
- initialization means for initially registering the depth information of the feature points in the image captured at this time is further provided.
- the operation of moving the part equipped with the motion detection device by the predetermined distance is performed,
- and the depth information of the feature points is registered based on the images captured at this time.
- the motion detection device according to the second to fourth inventions may be fixed or attached to an object which is gripped or held by a hand and operated, and is characterized by detecting the position and orientation of that object.
- the motion detection device is mounted on an object, and the position and orientation of the object in space are detected based on the inertial motion information of the object and the camera position and orientation information obtained from the surrounding images.
- FIG. 1 is a block diagram for explaining an outline of a functional operation of the motion detection device according to the embodiment of the present invention.
- FIG. 2 is an external view of an operation input device including an imaging unit and the like.
- Figure 3 shows the relationship between the space sensor frame and the camera sensor frame in the world coordinate space.
- FIG. 4A is a diagram showing the relationship between a light ray incident on an equidistant projection lens and its projected image.
- FIG. 4B is a diagram showing the relationship between the feature point search start position and the image projection classification mask image.
- FIG. 5A is a configuration diagram of an optical system in the motion detection device.
- FIG. 5B is a diagram illustrating an image captured by the optical system of FIG. 5A.
- FIG. 6A is a configuration diagram of an optical system using the four-plane-mirror system.
- FIG. 6B is a diagram showing an image captured by a mirror type optical system.
- FIG. 7A is a configuration diagram of an optical system using a parabolic surface.
- FIG. 7B is a diagram showing an image captured by the optical system using a parabolic surface.
- FIG. 8 is a block diagram for illustrating a functional operation related to image processing in the motion detection device according to the embodiment of the present invention.
- Figure 9A is a diagram showing the appearance of the frame image captured through the optical system and the extracted feature point images.
- Fig. 9B is a diagram showing the state of the searched feature point images on the frame image captured after the space sensor has been rotated and moved.
- Fig. 10A is a conceptual diagram of the memory when registered feature point image information is stored.
- FIG. 10B is a diagram showing the state of the feature point image after the movement.
- FIG. 11A and FIG. 11B are examples of registered feature point images in the embodiment of the present invention.
- FIGS. 12A and 12B are examples of registered feature point images in another embodiment of the present invention.
- Fig. 13 is an image diagram of the behavior of a feature point on continuous frame images.
- Figure 14 is a conceptual diagram of the feature points obtained by the image sensor, the camera frame position and orientation, the depth information, and their uncertainty elements.
- Fig. 15 shows how the uncertainty of the posture information and the depth information associated with the feature point information gradually becomes smaller by repeating the feature point tracking and matching processes.
- Fig. 16 is a diagram showing the uncertainty of the motion vector gradually decreasing.
- Figure 17 shows the relationship between the spatial attitude information k at the time a particular feature point was registered, the previous attitude information (n-1), and the attitude information n to be obtained next.
- Figure 18 shows the relationship between the uncertainties in the motion vector.
- FIG. 1 shows the schematic configuration of a space sensor system to which the motion detection device according to this embodiment is applied.
- output signals from the angular velocity sensor 10 and the acceleration sensor 20 arranged in the XYZ axis directions on the space sensor frame {H} are taken in as inertial position and orientation information by the inertial motion detector 30.
- the peripheral image information continuously captured by the imaging unit 40 is input to the image comparison unit 50.
- the camera sensor frame {C} is obtained by the camera position/posture acquisition unit 60. Then, the outputs of the inertial motion detection unit 30 and the camera position and orientation acquisition unit 60 are used to calculate the position and orientation in the world coordinate space {W}.
- the image comparison unit 50 uses information from the inertial motion detection unit 30 in order to lower the calculation processing cost.
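The following is a schematic Python sketch of this data flow. The class, method names, and update rules are invented for illustration (the patent fuses the two estimates more carefully, e.g. with the Kalman filter described later), but it shows how the high-rate inertial integration (units 10/20/30) and the lower-rate camera-derived pose (units 40/50/60) combine into the pose of {H} in {W}:

```python
import numpy as np

def so3_exp(w):
    """Rotation matrix for a rotation vector w (Rodrigues' formula)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

class SpaceSensorPose:
    """Pose of the space sensor frame {H} in the world coordinate space {W}."""

    def __init__(self):
        self.R = np.eye(3)    # rotation of {H} in {W}
        self.t = np.zeros(3)  # position of {H} in {W}
        self.v = np.zeros(3)  # velocity

    def inertial_update(self, gyro, accel_world, dt):
        # Units 10/20/30: integrate angular velocity and the
        # gravity-compensated inertial acceleration at a high rate.
        self.R = self.R @ so3_exp(np.asarray(gyro) * dt)
        self.v = self.v + np.asarray(accel_world) * dt
        self.t = self.t + self.v * dt

    def camera_correction(self, R_cam, t_cam):
        # Units 40/50/60: a camera-derived pose replaces the drifted
        # inertial estimate (a crude stand-in for the patent's fusion).
        self.R, self.t = R_cam, t_cam
        self.v = np.zeros(3)  # accumulated velocity drift is discarded too
```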
- FIG. 2 shows a conventional inertial sensor group for detecting hand shape and posture,
- together with the space sensor 1 including the imaging unit 40 and the like described above, arranged on a hand.
- the inertial sensor group 2 includes the above-described three-axis angular velocity sensor 10 and three-axis acceleration sensor 20 to detect the position and posture of the operator's hand and its movement.
- the rotational motion and the translational motion of the back of the hand can be obtained from the information obtained from the acceleration sensor 20 and the angular velocity sensor 10.
- the output of the acceleration sensor 20 combines the gravitational acceleration due to gravity and the inertial acceleration due to inertial motion.
- the inertia acceleration information and the gravitational acceleration information are separated based on the angle information obtained by the calculation.
- the inertial acceleration output of the acceleration sensor 20 obtained in this way becomes zero both when the object is moving at a constant speed and when it is stopped, so it is difficult to distinguish between these motion states.
- the rotational attitude information in space from the angular velocity sensor 10 includes an error due to drift, so correction processing is performed using the gravitational acceleration measured by the acceleration sensor 20 as a reference. However, this correction cannot correct the rotation about the gravity axis.
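As a minimal illustration of why the gravity reference cannot correct rotation about the gravity axis: roll and pitch follow from a static accelerometer reading with standard formulas, but yaw does not appear in the equations at all. This is a generic sketch, not the patent's correction algorithm:

```python
import numpy as np

def tilt_from_gravity(accel):
    """Roll/pitch from a static accelerometer reading (gravity only).
    Yaw about the gravity axis is unobservable from gravity alone, which
    is why rotation about the gravity axis cannot be corrected this way."""
    ax, ay, az = accel
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch

print(tilt_from_gravity([0.0, 0.0, 9.81]))   # level: (0.0, 0.0)
print(tilt_from_gravity([0.0, 6.94, 6.94]))  # rolled about 45 degrees
```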
- the imaging unit 40 includes an optical system for projecting the surrounding image and the image sensor 40a, and detects motion information in six spatial degrees of freedom from the captured image.
- the optical axis direction coincides with the Z-axis direction of the back-of-hand coordinate system, that is, the direction perpendicular to the back of the hand.
- the lens of the image sensor 40a is a fisheye lens having an angle of view of 180°.
- however, the optical axis direction and angle of view are not limited to these.
- the space sensor 1 comprises the inertial sensor group 2, composed of the acceleration sensor 20 and the angular velocity sensor 10, and the image sensor that acquires a posture from peripheral images.
- FIG. 3 shows the relationship between the attitude frames of each sensor in the world coordinate space.
- the sensor that detects the attitude (pose) in the world coordinate space {W} is the space (posture) sensor 1.
- the frame that represents its position and orientation is the space sensor frame {H}.
- the inertial sensor group 2 is configured on the space sensor 1.
- it is assumed that the axes of attitude detection of the inertial sensor group 2 (the inertial sensor frame) match the frame of the space sensor 1, and that the image sensor 40a has the camera sensor frame {C}.
- the camera position/posture information obtained from it is the position/posture (pose) information of the camera sensor frame {C} with respect to the world coordinate space {W}.
- the two frames are related by a constant frame conversion matrix.
- Fig. 4A outlines the incident light and the outgoing light of the optical system of this embodiment.
- this optical system has an image height (y) at the image plane that is proportional to the incident angle (θ), that is, y = fθ.
- the image height increases linearly as the incident angle increases; the optical system
- can be configured as an equidistant projection lens such as a wide-angle fisheye lens, or in other forms.
- the optical system has an angle of view of 180 degrees, and if the optical axis is oriented in the zenith direction, the image of the entire sky is projected onto the image sensor 40a.
- as the angle of the incident ray from the zenith, that is, the angle (θ) with respect to the optical axis center (I) of the projected image shown in the lower part of Fig. 4A, increases,
- the projection position moves to a concentric circle of larger radius.
- the ray at an incident angle of 90 degrees is a ray from the horizon, and is projected onto the outermost circumference.
- light rays from a given azimuth direction are projected onto the radial line of the same azimuth in the projected image.
- FIG. 4A shows the relationship between the image height and the azimuth with respect to the incident ray angle on the projected image.
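A small sketch of this equidistant ("y = fθ") projection, with an assumed focal length and the optical axis taken as +Z (the zenith): rays from the zenith land at the image center, rays from the horizon land on the outermost circle, and azimuth is preserved as the radial direction:

```python
import numpy as np

def equidistant_project(ray, f=1.0):
    """Project a world ray onto the image plane of an equidistant
    fisheye: image height r = f * theta, azimuth preserved."""
    x, y, z = ray / np.linalg.norm(ray)
    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle from the optical axis
    phi = np.arctan2(y, x)                    # azimuth
    r = f * theta                             # radius grows linearly with theta
    return r * np.cos(phi), r * np.sin(phi)

print(equidistant_project(np.array([0.0, 0.0, 1.0])))  # zenith -> center (0, 0)
print(equidistant_project(np.array([1.0, 0.0, 0.0])))  # horizon -> r = f*pi/2
```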
- Figure 5A shows an example of another implementation.
- a super-wide-angle optical system lens 410 is installed above the cylindrical housing 46, and the image sensor 420 below it.
- FIG. 5B shows an image captured by the optical system of FIG. 5A. FIGS. 6A, 6B and FIGS. 7A, 7B show modified examples of other optical systems and the images they capture.
- Fig. 6A shows an example of a configuration in which four plane mirrors 6a, 6b, 6c, and 6d are attached to the side surfaces of the pyramid 411.
- it is also possible to obtain images in more directions by increasing the number of mirrors.
- since the projected image by this optical system is a central projection divided into five regions, the image information
- allows the spatial position and direction of a feature point to be easily obtained by a linear transformation formula.
- Fig. 7A shows a modified example using a parabolic surface. The peripheral image reflected by the parabolic mirror 7a is projected onto the image sensor 420 through the imaging lens system 47.
- both the top surface 7c and the bottom surface 7b are transparent.
- the peripheral image in the optical axis direction of the imaging lens system 47 is therefore transmitted and projected onto the image sensor 420, so that, as shown in Fig. 7B, the image 7A in all directions orthogonal to the optical axis and the image 7B in the optical axis direction are captured simultaneously.
- the reflected image at the parabolic mirror 7a is circular, but an image 7D in the same direction as the transmitted image at the center is projected outside the mirror region, so that more peripheral images can be captured than with the first type.
- FIG. 8 is a diagram functionally showing the processing performed inside the image comparison unit 50 and the camera position/posture acquisition unit 60 in FIG. 1 described above.
- the light is converted from an optical image into a signal by the imaging unit 40 through the projection optical system 410 and stored as image data. Next, this image data is input to the image comparison unit 50.
- mask data processing 51 is first performed in the image comparison unit 50.
- after the mask data processing 51, the peripheral image captured as a continuous frame image is processed together with the image of the previous frame 500, and the processed image is further subjected to edge extraction.
- in this edge extraction processing 52, in order to obtain the edge portions of the input image, differential operators in the X direction and the Y direction (for example, Sobel operators) are used.
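For illustration, edge extraction with X/Y Sobel operators might look like the following generic sketch (not the patent's implementation):

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def edge_magnitude(image):
    """Gradient magnitude from X/Y Sobel operators, as in edge extraction 52."""
    gx = convolve(image.astype(float), SOBEL_X)  # horizontal gradient
    gy = convolve(image.astype(float), SOBEL_Y)  # vertical gradient
    return np.hypot(gx, gy)
```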
- feature point image extraction processing 54 is performed to obtain the relative movement of the peripheral image from the image data.
- an edge of the input image around a predetermined coordinate position is detected and evaluated to search for and extract a feature point.
- feature point initial search position information is defined with a search start coordinate position as a black dot and a rectangular area around it as a search range.
- information used for feature point extraction, such as the feature point initial search position information, is stored in the new feature point search table 59.
- a feature point image of a certain rectangular area centered on the feature point coordinates (Uk, Vk) is registered.
- the space sensor frame, which is the attitude information with respect to the world coordinate space {W} already obtained at the time of registration, is also registered.
- in the feature point registration processing 56, registration is performed for all feature points that can be extracted around the initial search positions in FIG. 4A.
- the registered feature point information is used in the matching processing in the next input frame image.
- for the first frame, the matching processing 53 is not performed, and the processing of the image comparison unit 50 ends.
- initialization processing 64 is performed to initialize the position/posture information, and the next input frame image is processed in the same manner as described above: mask data processing 51, inter-frame processing and edge extraction processing 52 are performed.
- the registered flag of the feature point information is checked, and if a feature point is registered, that is, if its registered flag is set, a search is made in the vicinity of the registered feature point coordinates (Uk, Vk) in the current frame for the part with the highest correlation with the registered image. If the position is found, it is stored as the current feature point coordinates, and the feature point search flag is set. If it is not found, the registered flag and feature point search flag of the registration information are reset.
- in the normal matching processing 53, the matching is performed while scanning the feature point image within a certain range around the registered feature point position.
- the location with the highest correlation value is set as the matching position, and this point is set as the feature point matching coordinates (U', V').
- if a correlation value exceeding a certain reference value is found, it is determined that the feature point has been correctly located.
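A hedged sketch of such a matching scan follows. The correlation measure (here zero-mean normalized cross-correlation), the search radius, and the acceptance threshold are illustrative stand-ins for the patent's "certain range" and "reference value":

```python
import numpy as np

def match_template(frame, template, center, search_radius, min_score=0.7):
    """Scan the registered feature-point image around its registered
    coordinates and return the best-correlation position, or None if the
    correlation never exceeds the reference value."""
    th, tw = template.shape
    u0, v0 = center
    t = template - template.mean()
    best_score, best_uv = -1.0, None
    for dv in range(-search_radius, search_radius + 1):
        for du in range(-search_radius, search_radius + 1):
            u, v = u0 + du, v0 + dv
            if v - th // 2 < 0 or u - tw // 2 < 0:
                continue  # patch would fall outside the frame
            patch = frame[v - th // 2: v + th // 2 + 1,
                          u - tw // 2: u + tw // 2 + 1]
            if patch.shape != template.shape:
                continue
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_uv = score, (u, v)
    # Only accept the match if the correlation exceeds the reference value.
    return best_uv if best_score >= min_score else None
```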
- Figure 9A shows the state of a frame image captured through the projection optical system 410 of this embodiment and the extracted feature point images.
- Figure 9B shows the state of the searched feature point images on the frame image captured after the space sensor 1 has rotated and moved.
- Figure 10A is a conceptual diagram of the memory data in which the registered feature point image information is stored.
- the registered feature point image is image data of 15 x 15 pixels with 8-bit gradation; it is managed in memory as a series of data starting from the upper-left pixel, and the coordinate value of its center pixel is the set of feature point coordinates (Uk, Vk).
- in the matching processing 53, the feature point image that has moved as shown in Fig. 10B with respect to the input frame image is searched for.
- the matching correlates the image data of a registered feature point image with the input frame and searches for the point where the integrated absolute difference of the pixels is smallest. For this reason, in the correlation processing, it is necessary to add a translational address of the size of the search area to the feature point coordinates (Uk, Vk), which form the start address, and also to perform a rotation operation by affine transformation. A scan that combines translation and rotation must be performed to search for a single feature point image, which requires a great deal of calculation. Moreover, this must be performed for all registered feature points, resulting in an extremely large amount of computation.
- the position and orientation of the current spatial sensor frame n updated at this time (meaning the coordinate transformation to the current sensor frame n as viewed from the world coordinate system) is {0Hn}.
- the sensor frame at the time when each feature point was registered is k. Using the registered {0Hk} and its inverse {kH0}, the relative motion parameter {kHn}, or its inverse transform {nHk}, representing the coordinate transformation between the registration frame k and the current spatial sensor frame n, is estimated for each feature point.
- this parameter is expressed as follows: for example, if the coordinate value (x_k, y_k, z_k) in frame k corresponds to the coordinate value (x_n, y_n, z_n) in sensor frame n, then (x_n, y_n, z_n)^T = nRk (x_k, y_k, z_k)^T + nTk, where nRk is the rotation matrix.
- this matrix can be represented by three independent parameters (θx, θy, θz).
- nTk represents the translation vector, which also has three independent parameters.
- the coordinates of the registered feature point (Uk, Vk) in the current frame image are estimated from these motion parameters and the relational expression of the projection optical system.
- the search area can be reduced by searching near the obtained feature point movement estimation coordinates (Uprd, Vprd), which reduces the cost of the computational processing.
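A minimal sketch of this prediction step, assuming the relative motion {nHk} is available as a 4x4 homogeneous matrix and using the equidistant projection model from the earlier sketch; the function name and focal length are illustrative:

```python
import numpy as np

def predict_feature_coords(p_k, H_nk, f=1.0):
    """Estimate where a feature registered in frame k appears in frame n.
    p_k:  3D point in the frame-k sensor coordinates (requires depth z).
    H_nk: 4x4 homogeneous relative motion {nHk}, i.e. p_n = R p_k + T."""
    p_n = H_nk[:3, :3] @ p_k + H_nk[:3, 3]
    x, y, z = p_n / np.linalg.norm(p_n)
    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle from the optical axis
    phi = np.arctan2(y, x)                    # azimuth
    # Equidistant projection: the predicted coordinates (Uprd, Vprd).
    return f * theta * np.cos(phi), f * theta * np.sin(phi)
```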
- Fig. 11B is a diagram showing the state in which the position and area of the feature point image at the time of registration have been moved to the position based on the estimated movement pixel coordinates, with the coordinates transformed accordingly.
- the feature point image is deformed by the projective transformation and does not retain its original rectangular shape.
- when the change in the scene is large, for example when a moving object passes in front of the background or when a fluctuating image component is contained in the feature point image, the matching may fail with the conventional method.
- therefore, each registered pixel carries its recorded XY pixel coordinates. To obtain the coordinate value of each pixel at the time of registration, the pixel area of the registered feature point image is set as a rectangular area fixed vertically and horizontally around the feature point; the coordinate value of each pixel is obtained from the vertical and horizontal extent of this rectangular area, and the estimated coordinate value for the search is then obtained from that coordinate value by the transformation formula of the projective transformation.
- since the coordinate value of each registered pixel position can be registered, it is not necessary to limit the shape to a rectangle, and a feature point image of any shape can be registered. Even seemingly complicated processing for complicated shapes can be executed by repeating simple processes.
- FIG. 12B shows a state in which the movement position and rotation of the feature point image at the time of registration and the feature point image for search in the above modified example are estimated.
- the registered feature point images need not be rectangular, and even if the shape of the estimated search feature point image changes from the original, the matching processing can be performed with only simple calculations.
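A toy version of this per-pixel matching, where the registered feature image is just a set of (coordinate, value) pairs of arbitrary shape and each pixel's current position is estimated individually. The `predict` function stands in for the inertial-pose-plus-projective-transformation estimate described above; everything here is illustrative:

```python
import numpy as np

def match_pixel_set(frame, pixels, predict, candidates):
    """Match a registered feature image stored as (coordinate, value) pairs.
    pixels:     list of ((u, v), gray) registered at registration time.
    predict:    maps a registration-time (u, v) to its estimated current
                coordinates (per-pixel movement estimation).
    candidates: small (du, dv) offsets scanned around the prediction."""
    best_cost, best_off = np.inf, None
    for du, dv in candidates:
        cost = 0.0
        for (u, v), g in pixels:
            pu, pv = predict(u, v)                      # per-pixel estimate
            iu, iv = int(round(pu)) + du, int(round(pv)) + dv
            if not (0 <= iv < frame.shape[0] and 0 <= iu < frame.shape[1]):
                cost = np.inf
                break
            cost += abs(float(frame[iv, iu]) - g)       # absolute difference
        if cost < best_cost:
            best_cost, best_off = cost, (du, dv)
    return best_off, best_cost
```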
- the current spatial sensor frame {0Hn} used to estimate the movement position up to this point has been kept updated by the inertial sensor group 2 in the spatial position/posture acquisition unit 70. Its inverse {nH0} last incorporated the image sensor 420 result obtained in the previous image processing cycle; therefore, errors due to the drift of the inertial sensors and the like may have accumulated since then.
- however, the error that accumulates during one image update interval is a very small value.
- next, the current camera posture information {nC} is obtained by the camera position/posture acquisition unit 60.
- the camera position/posture acquisition unit 60 is formulated as tracking the movement (relative movement) of the world coordinate system as seen from the camera frame.
- the spatial sensor frame {H} and {Hw} coincide with each other.
- among the feature points obtained from the image sensor 420, those whose depth information z is known are registered. Note that this depth information is depth with respect to the sensor frame system. The initialization processing 64 will be described later.
- from this point on, {nC} and {nH} are treated as equivalent in the following description, which uses FIG. 13 to show how feature points appear and disappear over a plurality of frame images in the stream.
- the feature point i first appears in the frame image k, and disappears in the frame image (n-2). For each of these frame images, matching processing 53 and feature point registration processing 56 are performed.
- feature point coordinates (U, V) and depth information z are used as related parameters of the feature point.
- at this time, a maximum-value uncertainty element σz is assigned to the depth information z of the feature point coordinates (U, V).
- the contents of each feature point information are represented as follows.
- the above-mentioned parameters are associated with each feature point, using each feature point as a landmark.
- Figure 17 shows the relationship between each frame.
- k is the frame number at which this feature point first appeared in the image
- (u', v') are the coordinate values of the feature point in frame n.
- cov(u', v') is the covariance matrix at frame n.
- {nH0} is updated.
- once the update of {nHk} is obtained, {nH0} can be updated by using the Kalman filter again.
- from nH0 = nHk · kH0, the parameters of {nH0} are calculated by the Kalman filter from the measured {nHk} and {kH0}; these parameters are (θx, θy, θz, tx, ty, tz).
- a, b, p, and θ in (Equation 3) are vector quantities.
- a Kalman filter is used for the update.
- Figure 18 shows the state of p after updating.
- the depth information z defined in frame k is estimated. The Kalman filter is first applied only to the motion vector p; after p has been updated, the uncertainty σz of z can be reduced by applying the Kalman filter again. This is done as follows: after p is calculated, {nHk} is formulated again using the following (Equation 4).
- {nH0} expresses the position and orientation of the initial frame 0 as seen from the frame n indicating the current state.
- what is to be estimated (updated) is the position and orientation of the current frame with respect to the world coordinate system, {0Hn}.
- {0Hn} is calculated as the inverse matrix (inverse transformation) of {nH0}.
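As a stand-in for the alternating Kalman updates described above, the following scalar sketch shows how repeated measurements shrink a depth uncertainty σz that starts at its maximum value at registration; the measurement model and numbers are invented for illustration:

```python
def kalman_depth_update(z, sigma_z2, z_meas, sigma_meas2):
    """One scalar Kalman update of a feature point's depth z, shrinking its
    variance sigma_z^2 each time a new measurement (e.g. triangulated from
    {nHk} and the matched image coordinates) arrives."""
    K = sigma_z2 / (sigma_z2 + sigma_meas2)  # Kalman gain
    z_new = z + K * (z_meas - z)
    sigma_new = (1.0 - K) * sigma_z2
    return z_new, sigma_new

z, s2 = 1.0, 1e6  # maximum uncertainty, as assigned at registration
for z_meas in [2.1, 1.9, 2.05]:
    z, s2 = kalman_depth_update(z, s2, z_meas, sigma_meas2=0.05)
print(round(z, 3), round(s2, 5))  # converges toward ~2.0 as sigma_z shrinks
```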
- the camera position/posture acquisition unit 60 keeps track of the feature point information from the appearance of a feature point to its disappearance, and continuously updates the three-dimensional information of the feature point. Furthermore, when another feature point appears, the camera frame information can be continuously updated by inheriting the three-dimensional information of the current feature point information. However, when the space sensor is first activated, none of the feature points has three-dimensional information, so each feature point can only acquire relative depth information. Therefore, in the initialization processing 64, depth information is given to one or more known feature points. First, the first method of the initialization processing will be described.
- an identification mark of known size can be, for example, a feature point mark separated by a known interval, or an identification mark having a shape of a known size.
- the initialization processing 64 is performed in a state such as a power-on reset processing after power is input or a forced reset processing by a set switch.
- the wearer performs this initialization process 64 at the position where this identification mark is input to the image sensor 40a.
- the reset can also be performed by a hand gesture (for example, by defining a specific change of hand shape as the reset operation).
- the image sensor 40a first detects these identification marks, and initially registers depth information z of a feature point extracted from the identification marks based on the known size.
- the camera position and orientation acquisition unit 60 can keep updating the camera frame while associating the registered depth information z with other feature point information.
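A minimal sketch of this initial depth registration from a mark of known size; it uses a simple pinhole approximation (z = f·W/w) rather than the patent's fisheye optics, and all numbers are illustrative:

```python
def depth_from_known_size(width_world, width_pixels, f_pixels):
    """Initial depth for a feature extracted from an identification mark of
    known physical size: an object of width W at depth z subtends
    w = f * W / z pixels under a pinhole model, so z = f * W / w."""
    return f_pixels * width_world / width_pixels

# A mark 40 mm wide, imaged 80 px wide by a camera with f = 400 px:
print(depth_from_known_size(0.040, 80, 400))  # 0.2 m
```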
- in the second method of the initialization processing, the identification mark is attached to the wearer's own body,
- so no separately installed object is required.
- with the body in a predetermined pose (for example, a specified hand position/posture or head position/posture),
- the initialization processing is performed.
- a pendant serving as an identification mark is attached to the chest.
- a reset gesture operation is performed.
- the image sensor 40a on the back of the hand recognizes the feature points in the predetermined image input area, and registers their depth information z with a known value. For this, the wearer must measure in advance the distance from the image sensor 40a to the known feature points in the initialization pose, and input that value as the initial value.
- since it is sufficient that the position of the feature point can be recognized, it is not necessary to recognize the size, distance, shape, or the like of the feature point, and the identification mark can be made smaller.
- the third method of initialization processing does not attach a special identification mark to the body; instead, the known size and positional relationships of the body itself are used as feature points. There are two variations, similar to the methods described above.
- the first variation measures in advance the dimensions of the wearer's own body, which serve as the initial feature points, and registers them initially as known distances; for example, the width of the face or the width of the shoulders may be used as the known distance.
- the initialization processing is performed with a predetermined body pose, so that the feature point positions of a part of the body are always captured from a fixed distance and input to the image sensor 40a.
- for example, the distance to the head is registered as a known distance, or both hands are opened at a fixed interval and a gesture measuring the distance from one hand to the other is used as the initialization pose to register a known distance.
- a reset gesture operation is performed at a predetermined position in front of the body.
- the image sensor 40a on the back of the hand then always captures the image of the face within a certain range of azimuth and incident angle, and registers those feature points as known feature point information.
- the search range of known feature points during the initialization process can be limited.
- with the third method, it is possible to perform the initialization processing without wearing a special separate mark on the body, further improving operability.
- the fourth method of the initialization processing uses the relationship between the surrounding environment and the posture at the time of initialization. When the above-mentioned device is used for operation input, the posture during the reset gesture operation is determined in advance: for example, it is performed with the back of the hand facing down while the user stands upright, or with a hand posture such that the image sensor 40a can see the floor at the user's feet. The distance from the back of the hand to the floor in this posture is measured and registered in advance. The feature points extracted from the image of the feet can then be registered with this known distance information.
- for a motion detection device attached to the head or the like, the height of the person in a standing posture can be used as the distance information. Since the direction of gravity can be detected from the acceleration sensor 20 in the space sensor, the feature points corresponding to the image of the feet can always be identified. The distance to such a feature point can be estimated from its angle information and the distance from the space sensor to the feet. Therefore, feature points whose depth information is always known can be used not only at the time of initialization processing; in this case, however, the device must be used in a limited environment where the floor at the feet remains in view.
- the fifth method of the initialization processing has the wearer perform a predetermined operation.
- that is, the initialization processing is performed by carrying out a gesture that involves a known movement.
- the initialization processing is first started by an initialization start gesture operation (for example, a "rock" (goo) hand shape as the start operation).
- the hand is then moved from the preset initial position to the end position, and an initialization end gesture operation (for example, a "paper" (par) hand shape as the end operation)
- completes the input of the initialization information.
- for this movement, it is important that the straight-line distance between the first and last positions is always constant and known; for example, the moving distance of a series of movements from when the hand is fully extended forward to when it reaches the body can be used as almost constant distance information.
- in this way, depth information can be registered. This method does not directly use information about the dimensions and positional relationships of the body, but rather information on the range of motion of the operating part of the body.
- as described above, the initial registration of the depth information z for the initial feature points can be performed by various methods, and these can be used alone or in combination with each other.
- the initialization method to be used can be changed by the gesture operation at the time of initialization.
- next, the method of searching for new start positions for the feature point search will be described. If a registered feature point image cannot be found in the matching processing 53, its information is discarded; but if the number of feature points becomes small, the amount of information available for determining the position and posture parameters decreases, and the calculation accuracy falls.
- since the accuracy of the posture information increases when feature point images are input from as many directions as possible, whenever a feature point is discarded a new feature point must be found and registered, and the input direction of the new feature point image should differ from the directions of the feature points already registered and being tracked.
- Fig. 4B shows an image obtained by expanding the mask data used for the mask data processing 51 described above.
- in the projection classification mask data, the parts where peripheral image processing should be omitted are set to '0', while the inside of the circular projection image area is filled with numerical information other than '0'. Numerical data is embedded for each segmented area, and the input image direction of the feature point currently being searched is determined based on this data.
- the circular area is divided into two concentric rings, each of which is further divided into several sections in the azimuth direction.
- the outer ring has six regions, and the inner ring also has six regions, but these are divided at azimuths offset from the outer ones.
- in each of these regions, a classification number that serves as a symbol for identifying the area is embedded.
- each area's identification number is associated with the search start coordinates for a new feature point search.
- the search start coordinate value is indicated by the position of the black circle drawn at the approximate center of each image projection section mask data area in FIG. 4B.
- this section data corresponds to the image projection section data 58 in FIG. 8.
- the approximate incident direction can be known by looking up the section data of the image projection section mask image corresponding to the tracked coordinate value; it is not necessary to know the exact angle of the incident direction.
- the section data is checked for all feature points currently being searched, and for a section number in which no feature point is currently being searched,
- the search start coordinate value of that section number may be registered in the new feature point search table for searching, so that
- the incident directions of new search feature point images are dispersed.
- without the mask data, the incident ray angle (θ) and azimuth angle of each feature point would have to be calculated from the current feature point image coordinates by the inverse of the projective transformation,
- the incident directions of all the feature points analyzed, and the next search direction determined, which requires more complex processing such as partitioning the incident directions.
- with the mask data, the number and directions of the searches can be changed easily just by changing the contents of the mask data.
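A toy construction of such a section mask and the vacant-section lookup follows. The ring/sector layout and section numbering here are invented; only the idea (0 outside the projection, section numbers inside, pick a section with no tracked feature) follows the text:

```python
import numpy as np

def build_projection_mask(size=64, n_sectors=6):
    """Image-projection section mask: 0 outside the circular projection,
    a nonzero section number inside (two rings x six azimuth sectors,
    the inner ring offset by half a sector)."""
    mask = np.zeros((size, size), dtype=np.uint8)
    c = (size - 1) / 2.0
    step = 2 * np.pi / n_sectors
    for v in range(size):
        for u in range(size):
            r = np.hypot(u - c, v - c)
            if r >= c:
                continue  # outside the projected image: stays 0
            phi = np.arctan2(v - c, u - c) % (2 * np.pi)
            if r > c / 2:  # outer ring: section numbers 1..6
                mask[v, u] = 1 + int(phi / step)
            else:          # inner ring, offset azimuths: 7..12
                mask[v, u] = 7 + int(((phi + step / 2) % (2 * np.pi)) / step)
    return mask

def vacant_sections(mask, tracked_coords):
    """Section numbers with no currently tracked feature point: candidates
    for new search start coordinates (cf. new feature point search table 59)."""
    occupied = {int(mask[v, u]) for u, v in tracked_coords}
    return sorted(set(mask[mask > 0].tolist()) - occupied)
```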
- next, the function of determining and managing the validity of the objects used as feature points will be described.
- the camera frame {nH0} obtained from the relative movement of all the feature points is computed.
- the relative posture between the image processing cycles is obtained from the previous posture information.
- using this relative motion parameter, the difference between the estimated coordinate value of each feature point and the actual matching coordinate value is evaluated; if the evaluation value is larger than a certain threshold, the feature point is determined to be an invalid feature point.
- for such a feature point, the feature point registration flag is reset while the feature point search flag remains set.
- the feature point determined to be invalid is stored in the image comparison unit 50.
- the invalid feature point tracking processing 55 is performed so that the area where a feature point determined to be invalid exists is not registered again; that is, it is managed as invalid feature point information so that it is not added to the new feature point search information.
- an invalid feature point is one whose feature point registration flag has been reset while its feature point search flag remains set.
- feature points for which the feature point registration flag or feature point search flag is set are subjected to the normal search in the matching processing 53. The search flag is set again for a feature point that is correctly matched here; if the matching fails, both the feature point registration flag and the feature point search flag are reset, and the feature point information is discarded regardless of validity. Invalid feature points, whose feature point registration flag is not set, are not used in the subsequent calculations for obtaining the posture information in the camera position/posture acquisition unit 60.
- invalid feature points cause errors in the camera posture, and if they are simply discarded, there is a high possibility that they will be extracted again as feature points in the next image processing cycle and used again in the posture calculation. Therefore, tracking their positions and managing them as invalid feature points reduces the calculation processing and allows an accurate posture to be obtained.
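A minimal sketch of the invalid-feature-point test: per-feature motion vectors measured by matching are compared with those predicted from the estimated camera motion, and features whose error exceeds a threshold are flagged (threshold and names are illustrative):

```python
import numpy as np

def classify_feature_points(measured, predicted, threshold=3.0):
    """Flag feature points whose measured image motion disagrees with the
    motion predicted from the estimated camera pose.

    measured, predicted: (N, 2) arrays of per-feature motion vectors [pixels].
    Returns a boolean array: True = valid, False = invalid (e.g. a moving
    object in the scene rather than the static surroundings)."""
    error = np.linalg.norm(measured - predicted, axis=1)
    return error <= threshold

# Example: the third feature point moves against the predicted flow.
measured  = np.array([[1.0, 0.1], [0.9, -0.1], [5.0, 4.0]])
predicted = np.array([[1.0, 0.0], [1.0,  0.0], [1.0, 0.0]])
print(classify_feature_points(measured, predicted))  # [ True  True False]
```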
- in the above, the size of the registered image and the search range for feature point estimation are fixed as shown in Fig. 4A. For a translation of the measured object, the movement of each feature point of a distant object is small, while the movement of each feature point of a nearby object is larger than that of a distant one; depth information estimated from nearby feature points therefore has a large effect on the measurement accuracy.
- whether a feature point is near or distant is determined from its depth information.
- the process of reducing the size of the registered image and enlarging the search range for such nearby feature points corresponds to the processing of the adaptive feature point image setting means described above.
- unless the registered image size and search range are changed optimally according to the distance information,
- the deformation and movement range of the registered image become large with respect to a posture change, and correct matching cannot be performed in the feature point search.
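An illustrative switching rule for this adaptive setting; the depth breakpoints and sizes are invented, and only the direction of the change (near points get a smaller registered image and a wider search range) follows the text:

```python
def adapt_to_depth(z, near=0.5, far=3.0):
    """Switch template size and search range with feature-point depth,
    as the adaptive feature point image setting means does. Near points
    move more per pose change, so they get a smaller template (less
    deformation per patch) and a wider search range."""
    if z < near:
        return {"template": 9, "search_radius": 24}   # nearby feature point
    if z > far:
        return {"template": 15, "search_radius": 8}   # distant feature point
    return {"template": 11, "search_radius": 16}      # intermediate depth

print(adapt_to_depth(0.3))  # near: small template, wide search
print(adapt_to_depth(5.0))  # far: larger template, narrow search
```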
- as described above, the space sensor of this embodiment is a body motion detection device that can be attached to a part of the body to be measured and can measure the spatial posture of that part in six degrees of freedom, whether worn on the hand or the head. Unlike sensors that use light or magnetism, there is no need to install a reference signal source nearby, so it can be used anywhere. Even if multiple devices are used at the same time, there is no mutual interference or reduction in the data update rate.
- if the space sensor of this embodiment is attached to a digital camera or the like, the spatial attitude information can be recorded simultaneously while images are taken continuously. This can be used as information for reconstructing the 3D information of the target object from the captured images and the spatial attitude information, so the device can also serve as a camera for constructing 3D image data.
- a motion detection device capable of accurately recognizing the spatial position, posture, and motion of the object on which it is mounted, such a device being attachable to the body directly or indirectly;
- a motion detection device capable of recognizing the motion of a body part, such as a gesture;
- a motion detection device that detects motion, such as the position and posture, of a device to be operated.
- the position and orientation in space are obtained by using peripheral image information from the imaging means in addition to the inertial motion information obtained from the signals of the acceleration sensor and the angular velocity sensor, so that the position and orientation of the measured object are measured more accurately.
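One common way to realize such a combination is a complementary filter: the drift-free but noisy image-based estimate corrects the smooth but drifting inertial estimate. The sketch below is a minimal illustration; the blending weight `alpha` and the function names are assumptions, not the fusion method specified in the patent.

```python
import numpy as np

def fuse_position(inertial_pos: np.ndarray,
                  image_pos: np.ndarray,
                  alpha: float = 0.98) -> np.ndarray:
    """Blend a position integrated from inertial sensors with a
    position recovered from peripheral-image feature points."""
    return alpha * inertial_pos + (1.0 - alpha) * image_pos

# per update cycle (illustrative):
#   p_imu = integrate_inertial(p_prev, accel, gyro, dt)
#   p     = fuse_position(p_imu, p_from_features)
```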
- since the projected image is a central projection divided into five regions, the spatial position and direction of a feature point can easily be obtained by a linear transformation of the image information.
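For reference, central projection is linear in homogeneous coordinates, which is why the feature direction can be recovered by a simple transformation. The sketch below assumes a pinhole model with focal length `f` (an illustrative parameter, not a value from the patent).

```python
import numpy as np

def project(point_cam: np.ndarray, f: float = 1.0) -> np.ndarray:
    """Central projection of a 3D point (x, y, z) in camera
    coordinates: one linear map, then a perspective division."""
    P = np.diag([f, f, 1.0])
    u = P @ point_cam          # linear transformation
    return u[:2] / u[2]        # divide by depth

def ray_direction(uv: np.ndarray, f: float = 1.0) -> np.ndarray:
    """Inverse direction: an image point fixes the spatial direction
    of the feature as a unit view ray."""
    d = np.array([uv[0], uv[1], f])
    return d / np.linalg.norm(d)
```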
- since a plurality of plane mirrors is used, images from the entire sky can be captured at the same time.
- unlike sensors that use light or magnetism, the device can be used without installing a reference signal source nearby or in the surroundings. Even if multiple devices are installed at the same time, there is no mutual interference and no reduction in the data update rate.
- prediction of the current position of a feature point and rotation processing of the peripheral image are performed at very high speed by basing the prediction on the attitude information from the inertial sensor.
- the search range is narrowed when the amount of movement measured by the inertial sensor is small, and expanded when the moving distance is large, making it possible to switch between the two aims of improving the processing speed and improving the analysis accuracy.
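Taken together, the two mechanisms above amount to predicting each feature's next position from the gyro-integrated rotation and sizing the search window by the measured motion. The following sketch assumes `R_delta` (the rotation over one frame, integrated from the angular velocity sensor) and illustrative thresholds; none of these names or values come from the patent.

```python
import numpy as np

def predict_feature(prev_ray: np.ndarray,
                    R_delta: np.ndarray,
                    f: float = 1.0) -> np.ndarray:
    """Rotate the feature's previous view ray by the inertially
    measured rotation and re-project it to predict its new image
    position."""
    ray = R_delta @ prev_ray
    return f * ray[:2] / ray[2]

def search_radius(motion_magnitude: float) -> int:
    """Small inertial motion -> narrow search (favors speed);
    large motion -> wide search (favors accuracy)."""
    return 4 if motion_magnitude < 0.01 else 16
```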
- since each pixel is stored together with its coordinate value as corresponding information, the comparison process can be performed easily even if the pixel information is converted into discontinuous or multiply accumulated image information; matching can be performed on pixels in a scattered state, so processing is both more accurate and faster.
- since each pixel of the registered image is managed as feature image data, it can be treated either as a local correspondence within the frame image or as a feature pixel corresponding to the entire image.
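Because each stored pixel carries its own coordinate, a template can still be scored when its samples are scattered. The sketch below uses a sum of absolute differences over such (coordinate, value) pairs; the function names and the SAD criterion are assumptions for illustration, and `image` is assumed to be a 2D NumPy grayscale array.

```python
def scattered_sad(samples, image, offset):
    """samples: iterable of ((x, y), value) pairs taken from the
    registered image; score a candidate placement `offset` by summing
    absolute differences at the stored coordinates only."""
    dx, dy = offset
    return sum(abs(float(image[y + dy, x + dx]) - v)
               for (x, y), v in samples)

def best_match(samples, image, candidates):
    # each pixel stays tied to its coordinate, so the sampling can be
    # sparse no matter how the image has been warped or accumulated
    return min(candidates, key=lambda off: scattered_sad(samples, image, off))
```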
- the feature area can be widened or narrowed according to the complexity of the captured image, so the processing accuracy and the processing time can be controlled as required.
- a feature whose matching score exceeds the threshold value is determined to be an effective feature, and only fixed or stationary objects in the surrounding images are used as reference information, so the amount of motion can be obtained more accurately.
- the positions of effective feature points are tracked and their image positions are managed, so that the overall processing load is reduced.
- depth information with little error can be obtained continuously by always performing the feature point detection processing, without placing a reference mark, such as a known feature point, in the field of view.
- according to the fourteenth invention, the projected image direction and classification can easily be known from the current feature point position, and the calculation processing can be performed at high speed.
- the image size and the search range for matching are changed to optimal values based on the depth information in the registered feature point information, improving the accuracy of posture detection and optimizing the processing speed.
- the initialization process can be easily performed anytime and anywhere.
- the initialization process can be easily performed anytime and anywhere, and can cope with a change in the shape of the identification mark.
- according to the twenty-second invention, it is not necessary to attach a special identification mark at the time of initialization, and feature points can be corrected even during measurement after initialization.
- by attaching the motion detection device to an object gripped or held by the hand, it is possible to detect motion, such as the position and posture, of the device to be operated.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Position Input By Displaying (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP03770139A EP1594039A4 (en) | 2002-11-07 | 2003-11-04 | APPARATUS FOR MOTION DETECTION |
US11/113,380 US7489806B2 (en) | 2002-11-07 | 2005-04-22 | Motion detection apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002324014A JP4007899B2 (ja) | 2002-11-07 | 2002-11-07 | 運動検出装置 |
JP2002-324014 | 2002-11-07 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/113,380 Continuation US7489806B2 (en) | 2002-11-07 | 2005-04-22 | Motion detection apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004042548A1 true WO2004042548A1 (ja) | 2004-05-21 |
Family
ID=32310432
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/014070 WO2004042548A1 (ja) | 2002-11-07 | 2003-11-04 | 運動検出装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US7489806B2 (ja) |
EP (1) | EP1594039A4 (ja) |
JP (1) | JP4007899B2 (ja) |
KR (1) | KR100948704B1 (ja) |
CN (1) | CN1711516A (ja) |
WO (1) | WO2004042548A1 (ja) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101976330A (zh) * | 2010-09-26 | 2011-02-16 | 中国科学院深圳先进技术研究院 | 手势识别方法和系统 |
CN102402290A (zh) * | 2011-12-07 | 2012-04-04 | 北京盈胜泰科技术有限公司 | 一种肢体姿势识别方法及系统 |
CN102592331A (zh) * | 2012-02-14 | 2012-07-18 | 广州市方纬交通科技有限公司 | 一种车辆惯性运动数据采集器 |
WO2018196227A1 (zh) * | 2017-04-28 | 2018-11-01 | 王春宝 | 人体运动能力评价方法、装置及系统 |
CN110057352A (zh) * | 2018-01-19 | 2019-07-26 | 北京图森未来科技有限公司 | 一种相机姿态角确定方法及装置 |
CN110132272A (zh) * | 2019-06-20 | 2019-08-16 | 河北工业大学 | 一种用于空间碎片运动参数的测量方法及系统 |
CN111104816A (zh) * | 2018-10-25 | 2020-05-05 | 杭州海康威视数字技术股份有限公司 | 一种目标物的姿态识别方法、装置及摄像机 |
Families Citing this family (127)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2747236C (en) * | 2004-06-25 | 2013-08-20 | 059312 N.B. Inc. | Shape-acceleration measurement device and method |
JP4592360B2 (ja) * | 2004-09-02 | 2010-12-01 | 公立大学法人会津大学 | 身体状態監視装置 |
GB2419433A (en) * | 2004-10-20 | 2006-04-26 | Glasgow School Of Art | Automated Gesture Recognition |
EP1849123A2 (en) * | 2005-01-07 | 2007-10-31 | GestureTek, Inc. | Optical flow based tilt sensor |
JP5028751B2 (ja) * | 2005-06-09 | 2012-09-19 | ソニー株式会社 | 行動認識装置 |
KR100801087B1 (ko) * | 2006-07-05 | 2008-02-11 | 삼성전자주식회사 | 스트럭처드 라이트를 이용한 이동체 감지 시스템 및 방법,상기 시스템을 포함하는 이동 로봇 |
KR100814289B1 (ko) | 2006-11-14 | 2008-03-18 | 서경대학교 산학협력단 | 실시간 동작 인식 장치 및 그 방법 |
US8792005B2 (en) * | 2006-11-29 | 2014-07-29 | Honeywell International Inc. | Method and system for automatically determining the camera field of view in a camera network |
US7961109B2 (en) * | 2006-12-04 | 2011-06-14 | Electronics And Telecommunications Research Institute | Fall detecting apparatus and method, and emergency aid system and method using the same |
US8416851B2 (en) * | 2006-12-20 | 2013-04-09 | Intel Corporation | Motion detection for video processing |
JP4582116B2 (ja) * | 2007-06-06 | 2010-11-17 | ソニー株式会社 | 入力装置、制御装置、制御システム、制御方法及びそのプログラム |
JP4871411B2 (ja) * | 2007-07-26 | 2012-02-08 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 自動センサ位置認識システム及び方法 |
JP2009115621A (ja) * | 2007-11-06 | 2009-05-28 | Toshiba Corp | 移動体画像追尾装置 |
JP5385555B2 (ja) * | 2007-11-14 | 2014-01-08 | 日立コンシューマエレクトロニクス株式会社 | 生体検査システム、生体検査装置および生体検査方法 |
JP5233000B2 (ja) * | 2007-11-21 | 2013-07-10 | 株式会社国際電気通信基礎技術研究所 | 動き測定装置 |
KR100915525B1 (ko) * | 2007-12-18 | 2009-09-04 | 한국전자통신연구원 | 칼만 필터를 이용한 실시간 카메라 움직임 추정시스템에서의 움직임 추정 방법 및 장치 |
KR101483713B1 (ko) * | 2008-06-30 | 2015-01-16 | 삼성전자 주식회사 | 모션 캡쳐 장치 및 모션 캡쳐 방법 |
JP5305383B2 (ja) * | 2008-10-07 | 2013-10-02 | 国立大学法人秋田大学 | 手指関節位置推定装置、及び手指関節位置推定方法 |
US8310547B2 (en) | 2008-12-05 | 2012-11-13 | Electronics And Telecommunications Research Institue | Device for recognizing motion and method of recognizing motion using the same |
JP4810582B2 (ja) * | 2009-03-26 | 2011-11-09 | 株式会社東芝 | 移動体画像追尾装置および方法 |
US8848979B2 (en) * | 2009-03-31 | 2014-09-30 | Nec Corporation | Tracked object determination device, tracked object determination method and tracked object determination program |
US8630456B2 (en) * | 2009-05-12 | 2014-01-14 | Toyota Jidosha Kabushiki Kaisha | Object recognition method, object recognition apparatus, and autonomous mobile robot |
US9417700B2 (en) | 2009-05-21 | 2016-08-16 | Edge3 Technologies | Gesture recognition systems and related methods |
DE102009037316A1 (de) * | 2009-08-14 | 2011-02-17 | Karl Storz Gmbh & Co. Kg | Steuerung und Verfahren zum Betreiben einer Operationsleuchte |
JP5482047B2 (ja) * | 2009-09-15 | 2014-04-23 | ソニー株式会社 | 速度算出装置、速度算出方法及びナビゲーション装置 |
JP5445082B2 (ja) * | 2009-12-03 | 2014-03-19 | ソニー株式会社 | 速度算出装置及び速度算出方法並びにナビゲーション装置及びナビゲーション機能付携帯電話機 |
EP2354893B1 (en) * | 2009-12-31 | 2018-10-24 | Sony Interactive Entertainment Europe Limited | Reducing inertial-based motion estimation drift of a game input controller with an image-based motion estimation |
CN102122343A (zh) * | 2010-01-07 | 2011-07-13 | 索尼公司 | 躯干倾斜角度确定及姿势估计方法和装置 |
US9547910B2 (en) * | 2010-03-04 | 2017-01-17 | Honeywell International Inc. | Method and apparatus for vision aided navigation using image registration |
US8396252B2 (en) | 2010-05-20 | 2013-03-12 | Edge 3 Technologies | Systems and related methods for three dimensional gesture recognition in vehicles |
JP5628560B2 (ja) * | 2010-06-02 | 2014-11-19 | 富士通株式会社 | 携帯電子機器、歩行軌跡算出プログラム及び歩行姿勢診断方法 |
KR101699922B1 (ko) * | 2010-08-12 | 2017-01-25 | 삼성전자주식회사 | 하이브리드 사용자 추적 센서를 이용한 디스플레이 시스템 및 방법 |
CN102385695A (zh) * | 2010-09-01 | 2012-03-21 | 索尼公司 | 人体三维姿势识别方法和装置 |
US8582866B2 (en) | 2011-02-10 | 2013-11-12 | Edge 3 Technologies, Inc. | Method and apparatus for disparity computation in stereo images |
US8467599B2 (en) | 2010-09-02 | 2013-06-18 | Edge 3 Technologies, Inc. | Method and apparatus for confusion learning |
US8655093B2 (en) | 2010-09-02 | 2014-02-18 | Edge 3 Technologies, Inc. | Method and apparatus for performing segmentation of an image |
US8666144B2 (en) | 2010-09-02 | 2014-03-04 | Edge 3 Technologies, Inc. | Method and apparatus for determining disparity of texture |
JP5872829B2 (ja) * | 2010-10-01 | 2016-03-01 | 株式会社レイトロン | 動作解析装置 |
KR101364571B1 (ko) * | 2010-10-06 | 2014-02-26 | 한국전자통신연구원 | 영상 기반의 손 검출 장치 및 그 방법 |
GB2486445B (en) * | 2010-12-14 | 2013-08-14 | Epson Norway Res And Dev As | Camera-based multi-touch interaction apparatus system and method |
WO2012088285A2 (en) * | 2010-12-22 | 2012-06-28 | Infinite Z, Inc. | Three-dimensional tracking of a user control device in a volume |
US8948446B2 (en) * | 2011-01-19 | 2015-02-03 | Honeywell International Inc. | Vision based zero velocity and zero attitude rate update |
CN103415860B (zh) * | 2011-01-27 | 2019-07-12 | 苹果公司 | 确定第一和第二图像间的对应关系的方法以及确定摄像机姿态的方法 |
US8970589B2 (en) | 2011-02-10 | 2015-03-03 | Edge 3 Technologies, Inc. | Near-touch interaction with a stereo camera grid structured tessellations |
US8447116B2 (en) * | 2011-07-22 | 2013-05-21 | Honeywell International Inc. | Identifying true feature matches for vision based navigation |
JP5839220B2 (ja) * | 2011-07-28 | 2016-01-06 | ソニー株式会社 | 情報処理装置、情報処理方法、及びプログラム |
US9002099B2 (en) * | 2011-09-11 | 2015-04-07 | Apple Inc. | Learning-based estimation of hand and finger pose |
CN103017676B (zh) * | 2011-09-26 | 2016-03-02 | 联想(北京)有限公司 | 三维扫描装置和三维扫描方法 |
CN105651201B (zh) * | 2011-09-26 | 2018-08-31 | 联想(北京)有限公司 | 三维扫描装置和三维扫描方法 |
WO2013069048A1 (ja) | 2011-11-07 | 2013-05-16 | 株式会社ソニー・コンピュータエンタテインメント | 画像生成装置および画像生成方法 |
US9729788B2 (en) * | 2011-11-07 | 2017-08-08 | Sony Corporation | Image generation apparatus and image generation method |
JP5769813B2 (ja) | 2011-11-07 | 2015-08-26 | 株式会社ソニー・コンピュータエンタテインメント | 画像生成装置および画像生成方法 |
CN103099602B (zh) * | 2011-11-10 | 2016-04-06 | 深圳泰山在线科技有限公司 | 基于光学识别的体质检测方法与系统 |
US9672609B1 (en) | 2011-11-11 | 2017-06-06 | Edge 3 Technologies, Inc. | Method and apparatus for improved depth-map estimation |
DE102011118811A1 (de) * | 2011-11-15 | 2013-05-16 | Seca Ag | Verfahren und Vorrichtung zur Ermittlung von Bio-Impedanzdaten einer Person |
CN107835039A (zh) * | 2011-12-12 | 2018-03-23 | 株式会社尼康 | 电子设备 |
EP2634670A1 (en) * | 2012-03-01 | 2013-09-04 | Asplund Data AB | A data input device |
US8836799B2 (en) | 2012-03-30 | 2014-09-16 | Qualcomm Incorporated | Method to reject false positives detecting and tracking image objects |
KR101964861B1 (ko) | 2012-06-29 | 2019-04-02 | 삼성전자주식회사 | 카메라 장치 및 상기 카메라 장치에서의 물체 추적 방법 |
WO2014027500A1 (ja) * | 2012-08-15 | 2014-02-20 | インターナショナル・ビジネス・マシーンズ・コーポレーション | 特徴抽出方法、プログラム及びシステム |
CN109799900B (zh) | 2012-11-01 | 2023-02-28 | 艾卡姆有限公司 | 手腕可安装计算通信和控制设备及其执行的方法 |
US9740942B2 (en) * | 2012-12-12 | 2017-08-22 | Nissan Motor Co., Ltd. | Moving object location/attitude angle estimation device and moving object location/attitude angle estimation method |
US10444845B2 (en) * | 2012-12-21 | 2019-10-15 | Qualcomm Incorporated | Display of separate computer vision based pose and inertial sensor based pose |
CN103099622B (zh) * | 2013-01-15 | 2015-12-09 | 南昌大学 | 一种基于图像的身体稳定性评价方法 |
US10721448B2 (en) | 2013-03-15 | 2020-07-21 | Edge 3 Technologies, Inc. | Method and apparatus for adaptive exposure bracketing, segmentation and scene organization |
JP6273685B2 (ja) * | 2013-03-27 | 2018-02-07 | パナソニックIpマネジメント株式会社 | 追尾処理装置及びこれを備えた追尾処理システム並びに追尾処理方法 |
US9532032B2 (en) * | 2013-04-18 | 2016-12-27 | Ellis Amalgamated, LLC | Astigmatic depth from defocus imaging using intermediate images and a merit function map |
US9020194B2 (en) * | 2013-06-14 | 2015-04-28 | Qualcomm Incorporated | Systems and methods for performing a device action based on a detected gesture |
JP6312991B2 (ja) * | 2013-06-25 | 2018-04-18 | 株式会社東芝 | 画像出力装置 |
CN104296663B (zh) * | 2013-07-17 | 2017-09-19 | 英华达(上海)科技有限公司 | 物件尺寸测量系统及其方法 |
US9235215B2 (en) | 2014-04-03 | 2016-01-12 | Honeywell International Inc. | Feature set optimization in vision-based positioning |
JP6415842B2 (ja) * | 2014-04-16 | 2018-10-31 | 日本光電工業株式会社 | リハビリテーション支援システム |
US10281484B2 (en) * | 2014-05-02 | 2019-05-07 | Qualcomm Incorporated | Motion direction determination and application |
CN103961109B (zh) * | 2014-05-05 | 2016-02-24 | 北京航空航天大学 | 基于加速度信号和角速度信号的人体姿态检测装置 |
CN105377133A (zh) * | 2014-06-18 | 2016-03-02 | 兹克托株式会社 | 可穿戴设备的身体平衡测量方法及装置 |
CN104147770A (zh) * | 2014-07-24 | 2014-11-19 | 燕山大学 | 基于惯性传感器可穿戴式偏瘫康复设备及捷联姿态算法 |
JP2016045874A (ja) * | 2014-08-26 | 2016-04-04 | ソニー株式会社 | 情報処理装置、情報処理方法、及びプログラム |
KR102232517B1 (ko) | 2014-09-15 | 2021-03-26 | 삼성전자주식회사 | 이미지 촬영 방법 및 이미지 촬영 장치 |
US10262426B2 (en) | 2014-10-31 | 2019-04-16 | Fyusion, Inc. | System and method for infinite smoothing of image sequences |
US10176592B2 (en) | 2014-10-31 | 2019-01-08 | Fyusion, Inc. | Multi-directional structured image array capture on a 2D graph |
US9940541B2 (en) | 2015-07-15 | 2018-04-10 | Fyusion, Inc. | Artificially rendering images using interpolation of tracked control points |
US10275935B2 (en) | 2014-10-31 | 2019-04-30 | Fyusion, Inc. | System and method for infinite synthetic image generation from multi-directional structured image array |
US10726593B2 (en) | 2015-09-22 | 2020-07-28 | Fyusion, Inc. | Artificially rendering images using viewpoint interpolation and extrapolation |
JP6245671B2 (ja) * | 2015-02-02 | 2017-12-13 | オーシーアールシステム株式会社 | 光学端末装置及びスキャンプログラム |
US10075651B2 (en) * | 2015-04-17 | 2018-09-11 | Light Labs Inc. | Methods and apparatus for capturing images using multiple camera modules in an efficient manner |
WO2016187759A1 (en) * | 2015-05-23 | 2016-12-01 | SZ DJI Technology Co., Ltd. | Sensor fusion using inertial and image sensors |
CN113093808A (zh) | 2015-05-23 | 2021-07-09 | 深圳市大疆创新科技有限公司 | 使用惯性传感器和图像传感器的传感器融合 |
JP6700546B2 (ja) * | 2015-06-01 | 2020-05-27 | 富士通株式会社 | 負荷検出方法、負荷検出装置および負荷検出プログラム |
CN104951753B (zh) * | 2015-06-05 | 2018-11-27 | 张巍 | 一种有标识物6自由度视觉跟踪系统及其实现方法 |
CN105030244B (zh) * | 2015-06-29 | 2018-05-11 | 杭州镜之镜科技有限公司 | 一种眨眼的检测方法和检测系统 |
CN106170676B (zh) * | 2015-07-14 | 2018-10-09 | 深圳市大疆创新科技有限公司 | 用于确定移动平台的移动的方法、设备以及系统 |
US10750161B2 (en) | 2015-07-15 | 2020-08-18 | Fyusion, Inc. | Multi-view interactive digital media representation lock screen |
US10222932B2 (en) | 2015-07-15 | 2019-03-05 | Fyusion, Inc. | Virtual reality environment based manipulation of multilayered multi-view interactive digital media representations |
US10242474B2 (en) | 2015-07-15 | 2019-03-26 | Fyusion, Inc. | Artificially rendering images using viewpoint interpolation and extrapolation |
US11006095B2 (en) | 2015-07-15 | 2021-05-11 | Fyusion, Inc. | Drone based capture of a multi-view interactive digital media |
US11095869B2 (en) | 2015-09-22 | 2021-08-17 | Fyusion, Inc. | System and method for generating combined embedded multi-view interactive digital media representations |
US10147211B2 (en) | 2015-07-15 | 2018-12-04 | Fyusion, Inc. | Artificially rendering images using viewpoint interpolation and extrapolation |
US10698558B2 (en) | 2015-07-15 | 2020-06-30 | Fyusion, Inc. | Automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity |
US10852902B2 (en) | 2015-07-15 | 2020-12-01 | Fyusion, Inc. | Automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity |
CN105180804B (zh) * | 2015-08-10 | 2018-07-03 | 苏州优谱德精密仪器科技有限公司 | 一种光电检测装置 |
US11783864B2 (en) | 2015-09-22 | 2023-10-10 | Fyusion, Inc. | Integration of audio into a multi-view interactive digital media representation |
US10740907B2 (en) | 2015-11-13 | 2020-08-11 | Panasonic Intellectual Property Management Co., Ltd. | Moving body tracking method, moving body tracking device, and program |
JP6611376B2 (ja) * | 2015-12-03 | 2019-11-27 | アルプスアルパイン株式会社 | 位置検出システム |
JP6688990B2 (ja) * | 2016-04-28 | 2020-04-28 | パナソニックIpマネジメント株式会社 | 識別装置、識別方法、識別プログラムおよび記録媒体 |
JP2017224984A (ja) * | 2016-06-15 | 2017-12-21 | セイコーエプソン株式会社 | プログラム、装置、キャリブレーション方法 |
US11202017B2 (en) | 2016-10-06 | 2021-12-14 | Fyusion, Inc. | Live style transfer on a mobile device |
US10437879B2 (en) | 2017-01-18 | 2019-10-08 | Fyusion, Inc. | Visual search using multi-view interactive digital media representations |
US10313651B2 (en) | 2017-05-22 | 2019-06-04 | Fyusion, Inc. | Snapshots at predefined intervals or angles |
CN107316319B (zh) * | 2017-05-27 | 2020-07-10 | 北京小鸟看看科技有限公司 | 一种刚体追踪的方法、装置和系统 |
US11069147B2 (en) | 2017-06-26 | 2021-07-20 | Fyusion, Inc. | Modification of multi-view interactive digital media representation |
JP6933065B2 (ja) * | 2017-09-13 | 2021-09-08 | トヨタ自動車株式会社 | 情報処理装置、情報提供システム、情報提供方法、及びプログラム |
US11314399B2 (en) | 2017-10-21 | 2022-04-26 | Eyecam, Inc. | Adaptive graphic user interfacing system |
WO2019082376A1 (ja) | 2017-10-27 | 2019-05-02 | 株式会社アシックス | 動作状態評価システム、動作状態評価装置、動作状態評価サーバ、動作状態評価方法、および動作状態評価プログラム |
CN108413917B (zh) * | 2018-03-15 | 2020-08-07 | 中国人民解放军国防科技大学 | 非接触式三维测量系统、非接触式三维测量方法及测量装置 |
DE102018108741A1 (de) * | 2018-04-12 | 2019-10-17 | Klöckner Pentaplast Gmbh | Verfahren für optische Produktauthentifizierung |
US10592747B2 (en) | 2018-04-26 | 2020-03-17 | Fyusion, Inc. | Method and apparatus for 3-D auto tagging |
CN110617821B (zh) * | 2018-06-19 | 2021-11-02 | 北京嘀嘀无限科技发展有限公司 | 定位方法、装置及存储介质 |
CN109394226A (zh) * | 2018-09-05 | 2019-03-01 | 李松波 | 一种人体柔韧素质评测训练设备及评测方法 |
US10719944B2 (en) * | 2018-09-13 | 2020-07-21 | Seiko Epson Corporation | Dynamic object tracking |
CN109542215B (zh) * | 2018-10-09 | 2022-03-08 | 中国矿业大学 | 安全帽佩戴监测方法 |
JP7190919B2 (ja) * | 2019-01-25 | 2022-12-16 | 株式会社ソニー・インタラクティブエンタテインメント | 画像解析システム |
CN110163911B (zh) * | 2019-04-10 | 2022-07-19 | 电子科技大学 | 一种图像与惯性结合的头部姿态检测系统 |
US11029753B2 (en) * | 2019-11-05 | 2021-06-08 | XRSpace CO., LTD. | Human computer interaction system and human computer interaction method |
KR102405416B1 (ko) * | 2019-11-25 | 2022-06-07 | 한국전자기술연구원 | Hmd를 착용한 사용자의 자세 추정 방법 |
CN112927290A (zh) * | 2021-02-18 | 2021-06-08 | 青岛小鸟看看科技有限公司 | 基于传感器的裸手数据标注方法及系统 |
CN113639685B (zh) * | 2021-08-10 | 2023-10-03 | 杭州申昊科技股份有限公司 | 位移检测方法、装置、设备和存储介质 |
WO2023113694A2 (en) * | 2021-12-17 | 2023-06-22 | Refract Technologies Pte Ltd | Tracking system for simulating body motion |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5097252A (en) * | 1987-03-24 | 1992-03-17 | Vpl Research Inc. | Motion sensor which produces an asymmetrical signal in response to symmetrical movement |
US5673082A (en) * | 1995-04-10 | 1997-09-30 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Light-directed ranging system implementing single camera system for telerobotics applications |
JPH0962437A (ja) * | 1995-08-23 | 1997-03-07 | Nec Corp | コンピュータ入力装置 |
JPH10176919A (ja) * | 1996-12-18 | 1998-06-30 | Olympus Optical Co Ltd | 形状入力装置 |
JP2000132305A (ja) * | 1998-10-23 | 2000-05-12 | Olympus Optical Co Ltd | 操作入力装置 |
JP3412592B2 (ja) * | 2000-02-08 | 2003-06-03 | 松下電器産業株式会社 | 個人情報認証方法 |
JP2001344053A (ja) * | 2000-06-01 | 2001-12-14 | Olympus Optical Co Ltd | 操作入力装置 |
JP2002023919A (ja) * | 2000-07-07 | 2002-01-25 | Olympus Optical Co Ltd | 姿勢検出装置及び操作入力装置 |
US6744420B2 (en) | 2000-06-01 | 2004-06-01 | Olympus Optical Co., Ltd. | Operation input apparatus using sensor attachable to operator's hand |
2002
- 2002-11-07 JP JP2002324014A patent/JP4007899B2/ja not_active Expired - Fee Related

2003
- 2003-11-04 CN CNA2003801027463A patent/CN1711516A/zh active Pending
- 2003-11-04 WO PCT/JP2003/014070 patent/WO2004042548A1/ja active Application Filing
- 2003-11-04 KR KR1020057007946A patent/KR100948704B1/ko not_active IP Right Cessation
- 2003-11-04 EP EP03770139A patent/EP1594039A4/en not_active Withdrawn

2005
- 2005-04-22 US US11/113,380 patent/US7489806B2/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000097637A (ja) * | 1998-09-24 | 2000-04-07 | Olympus Optical Co Ltd | 姿勢位置検出装置 |
JP2000132329A (ja) * | 1998-10-27 | 2000-05-12 | Sony Corp | 面認識装置、面認識方法及び仮想画像立体合成装置 |
JP2002007030A (ja) * | 2000-06-16 | 2002-01-11 | Olympus Optical Co Ltd | 運動検出装置及び操作入力装置 |
JP2002259992A (ja) * | 2001-03-06 | 2002-09-13 | Mixed Reality Systems Laboratory Inc | 画像処理装置およびその方法並びにプログラムコード、記憶媒体 |
Non-Patent Citations (2)
Title |
---|
FUJII HIROFUMI ET AL.: "Kakucho genjitsu no tame no gyrosensor o heiyo shita stereocamera ni yoru ichi awase", THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS GIJUTSU KENKYU HOKOKU [PATARN NINSHIKI MEDIA RIKAI], vol. 99, no. 574, 20 January 2000 (2000-01-20), pages 1 - 8, XP002979194 *
YOKOKOHJI YASUYOSHI ET AL.: "Gazo to kasokudokei o mochiita HMD-jo deno eizo no seikaku na kasane awase", TRANSACTIONS OF THE VIRTUAL REALITY SOCIETY OF JAPAN, THE VIRTUAL REALITY SOCIETY OF JAPAN, vol. 4, no. 4, 31 December 1999 (1999-12-31), pages 589 - 598, XP002979195 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101976330A (zh) * | 2010-09-26 | 2011-02-16 | 中国科学院深圳先进技术研究院 | 手势识别方法和系统 |
CN102402290A (zh) * | 2011-12-07 | 2012-04-04 | 北京盈胜泰科技术有限公司 | 一种肢体姿势识别方法及系统 |
CN102592331A (zh) * | 2012-02-14 | 2012-07-18 | 广州市方纬交通科技有限公司 | 一种车辆惯性运动数据采集器 |
WO2018196227A1 (zh) * | 2017-04-28 | 2018-11-01 | 王春宝 | 人体运动能力评价方法、装置及系统 |
CN110057352A (zh) * | 2018-01-19 | 2019-07-26 | 北京图森未来科技有限公司 | 一种相机姿态角确定方法及装置 |
CN110057352B (zh) * | 2018-01-19 | 2021-07-16 | 北京图森智途科技有限公司 | 一种相机姿态角确定方法及装置 |
CN111104816A (zh) * | 2018-10-25 | 2020-05-05 | 杭州海康威视数字技术股份有限公司 | 一种目标物的姿态识别方法、装置及摄像机 |
CN111104816B (zh) * | 2018-10-25 | 2023-11-03 | 杭州海康威视数字技术股份有限公司 | 一种目标物的姿态识别方法、装置及摄像机 |
CN110132272A (zh) * | 2019-06-20 | 2019-08-16 | 河北工业大学 | 一种用于空间碎片运动参数的测量方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
KR100948704B1 (ko) | 2010-03-22 |
JP2004157850A (ja) | 2004-06-03 |
EP1594039A1 (en) | 2005-11-09 |
US7489806B2 (en) | 2009-02-10 |
CN1711516A (zh) | 2005-12-21 |
JP4007899B2 (ja) | 2007-11-14 |
EP1594039A4 (en) | 2006-11-08 |
US20050232467A1 (en) | 2005-10-20 |
KR20050072473A (ko) | 2005-07-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004042548A1 (ja) | 運動検出装置 | |
CN106643699B (zh) | 一种虚拟现实系统中的空间定位装置和定位方法 | |
JP4136859B2 (ja) | 位置姿勢計測方法 | |
JP3859574B2 (ja) | 3次元視覚センサ | |
JP4422777B2 (ja) | 移動体姿勢検出装置 | |
US7671875B2 (en) | Information processing method and apparatus | |
JP4739004B2 (ja) | 情報処理装置及び情報処理方法 | |
JP5205187B2 (ja) | 入力システム及び入力方法 | |
JP4898464B2 (ja) | 情報処理装置および方法 | |
JP5697590B2 (ja) | 拡張した被写体深度から抽出した三次元情報を用いたジェスチャ・ベース制御 | |
CN111353355B (zh) | 动作追踪系统及方法 | |
JP2000097637A (ja) | 姿勢位置検出装置 | |
JP2002213947A (ja) | ターゲット位置を測定するシステム及びその方法 | |
JP2015532077A (ja) | 少なくとも1つの画像を撮影する撮影装置に関連する装置の位置及び方向の決定方法 | |
JP2001283216A (ja) | 画像照合装置、画像照合方法、及びそのプログラムを記録した記録媒体 | |
JP2008046750A (ja) | 画像処理装置および方法 | |
JP7162079B2 (ja) | 頭部のジェスチャーを介してディスプレイ装置を遠隔制御する方法、システムおよびコンピュータプログラムを記録する記録媒体 | |
KR20190036864A (ko) | 가상현실 전망용 망원경, 이를 이용한 전망용 가상현실 구동 방법 및 매체에 기록된 어플리케이션 | |
CN109800645A (zh) | 一种动作捕捉系统及其方法 | |
JP3732757B2 (ja) | 画像認識方法および画像認識装置 | |
JP2004086929A (ja) | 画像照合装置 | |
JP5083715B2 (ja) | 三次元位置姿勢計測方法および装置 | |
CN104937608B (zh) | 道路区域检测 | |
GB2345538A (en) | Optical tracker | |
Mohareri et al. | A vision-based location positioning system via augmented reality: An application in humanoid robot navigation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states | Kind code of ref document: A1. Designated state(s): CN KR US |
AL | Designated countries for regional patents | Kind code of ref document: A1. Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
WWE | Wipo information: entry into national phase | Ref document number: 11113380. Country of ref document: US |
WWE | Wipo information: entry into national phase | Ref document number: 1020057007946. Country of ref document: KR |
WWE | Wipo information: entry into national phase | Ref document number: 20038A27463. Country of ref document: CN |
WWE | Wipo information: entry into national phase | Ref document number: 2003770139. Country of ref document: EP |
WWP | Wipo information: published in national office | Ref document number: 1020057007946. Country of ref document: KR |
WWP | Wipo information: published in national office | Ref document number: 2003770139. Country of ref document: EP |