WO2007110731A1 - Image processing unit and image processing method - Google Patents

Image processing unit and image processing method

Info

Publication number
WO2007110731A1
WO2007110731A1 (PCT/IB2007/000732)
Authority
WO
WIPO (PCT)
Prior art keywords
candidate
point
candidate point
movement
points
Prior art date
Application number
PCT/IB2007/000732
Other languages
French (fr)
Inventor
Masamichi Osugi
Original Assignee
Toyota Jidosha Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toyota Jidosha Kabushiki Kaisha filed Critical Toyota Jidosha Kabushiki Kaisha
Publication of WO2007110731A1 publication Critical patent/WO2007110731A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Multiple images are obtained (step S1), and the feature points/candidate points are detected in the images (step S2). Then, the feature points/candidate points in the frames are associated with each other (step S3). The movement of the feature point, which is known to be on the object, made between the frames is estimated (step S4), and the movement of the candidate point made between the frames is also estimated (step S6). The results of the estimations are compared with each other (step S7). The candidate point which moves in the manner similar to the movement of the feature point is newly identified as a feature point (step S8). Repeating these steps makes it possible to identify a point that newly appears on the object on the image as a feature point. Even if the region of the object, which is displayed on the image, changes due to a movement of the object, the image processing unit accurately traces the movement of the object.

Description

IMAGE PROCESSING UNIT AND IMAGE PROCESSING METHOD
BACKGROUND OF THE INVENTION
1. Field of the Invention [0001] The invention relates to an image processing unit and image processing method that traces an object displayed on obtained images to estimate the attitude, etc. of the object.
2. Description of the Related Art
[0002] Technologies for obtaining the images of a moving object and estimating an attitude change and movement of the object based on the obtained images have been developed. An example of such technologies is described in Japanese Patent Application Publication No. 2003-15816 (JP-A-2003-15816). According to this publication, the orientation of the face and the line of sight of a person are traced in real time using a stereo camera. In the method described in the publication, the feature points in the facial region of the image are detected and the positions of the feature points are traced, whereby the orientation of the face and the line of sight of the person are detected based on the positions of the traced feature points.
[0003] The region of the object, which can be seen from the camera, may change due to the positional relationship between the object and another object or the rotational movement of the object. In such a case, the feature points targeted for the trace may be hidden from the images, and therefore, it is impossible to continue tracing the feature points. Due to such inconvenience, the technology described in JP-A-2003-15816 is applied only to the trace of the movement of an object of which the region that can be seen from the camera changes only slightly.
SUMMARY OF THE INVENTION
[0004] The invention provides an image processing unit and image processing method that accurately traces a movement of an object even if the region of the object, which is displayed on the image, changes due to the movement thereof.
[0005] A first aspect of the invention relates to an image processing unit that includes image obtaining means (1) for obtaining multiple images of a recognition target object; feature point detecting means (2) for detecting a feature point, which is known to be on the recognition target object, in each of the images; candidate point detecting means (3) for detecting a candidate point, which is a candidate for a feature point, in each one of the multiple images independently of the other of the multiple images; checking means (4) for associating the candidate points in the multiple images with each other; and determining means (5) for determining whether the candidate point is on the recognition target object based on the association between the feature points in the multiple images and the association between the candidate points in the multiple images, and for newly identifying the candidate point as a feature point if it is determined that the candidate point is on the recognition target object.
[0006] A second aspect of the invention relates to an image processing method. According to the image processing method, multiple images of a recognition target object are obtained. A feature point, which is known to be on the recognition target object, is detected in each of the images, and a candidate point, which is a candidate for a feature point, is detected in each one of the multiple images independently of the other of the multiple images. Then, the candidate points in the multiple images are associated with each other. Whether the candidate point is on the recognition target object is determined based on the association between the feature points in the multiple images and the association between the candidate points in the multiple images. If it is determined that the candidate point is on the recognition target object, the candidate point is identified as a feature point.
[0007] According to the first and second aspects of the invention, in addition to tracing the known feature points, whether the candidate point, which is a candidate for a feature point and which is extracted from the images, is on the recognition target object is determined based on the association between the feature points in the images and the association between the candidate points in the images. If it is determined that the candidate point is on the recognition target object, the process for newly identifying the candidate point as a feature point is executed. Accordingly, even if the region of the recognition target object, which is displayed on the image, incessantly changes and the known feature point is hidden from the image, the candidate point that is present in the region newly displayed on the image is identified as a feature point. Thus, it is possible to trace the movement of the recognition target object.
[0008] In each of the first aspect and the second aspect of the invention, the degree of similarity between the movement of the feature point, which is made between the images, and the movement of the candidate point, which is made between the images, may be evaluated based on the degree of conformance of the positional change of the candidate point, made between the images, to the geometric correlation that defines the movements of the feature points made between the images. Alternatively, the base matrix that expresses the movements of the feature points, made between the images, may be calculated based on the association between positions of the feature points in the multiple images, and it may be determined that the candidate point is on the recognition target object if the difference between the position, which is estimated by transforming the position of the candidate point before the movement using the base matrix, and the position, which is reached by the candidate point after the movement and which is associated with the position of the candidate point before the movement, is equal to or less than the threshold value.
[0009] In each of the first aspect and the second aspect of the invention, the base matrix that defines the movements of the feature points, which are made between the images, may be calculated based on association between positions of the feature points in the multiple images, the base matrix that defines the movement of the candidate point, which is made between the images, may be calculated based on the association between positions of the candidate points in the multiple images, and it may be determined that the candidate point is on the recognition target object if the difference between the base matrixes is equal to or less than the threshold value. Whether the difference between the base matrixes is equal to or less than the threshold value may be determined based on the square sum of the differences between elements of the base matrixes.
[0010] In each of the first aspect and the second aspect of the invention, the three-dimensional position, which is estimated to be reached by the candidate point after the movement, may be estimated based on the three-dimensional position of the feature point and the manner in which the feature points move, and it may be determined whether the candidate point is on the recognition target object based on the difference between the estimated three-dimensional position of the candidate point and the three-dimensional position, which is reached by the candidate point after the movement and which is associated with a three-dimensional position of the candidate point before the movement.
[0011] It is possible to find a new feature point by detecting the candidate point that moves in a manner similar to the movements of the feature points in the above-described methods.
[0012] According to the aspects of the invention described above, the candidate point, which is in the region newly displayed on the images due to the change of the region displayed on the image and which is associated with the candidate points in the other image in the manner similar to the manner in which the feature points in the images known to be on the recognition target are associated with each other, is identified as a feature point. Accordingly, the feature point is set even in the region that is newly displayed on the image. As a result, it is possible to accurately trace the movement of the recognition target object.
[0013] As the association used to determine whether the candidate point is on the recognition target object, the movements of the feature points, the matrix that defines the movements of the positions of the feature points, etc. are used. Thus, it is possible to accurately and efficiently determine whether the candidate point is on the recognition target object.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The foregoing and further objects, features and advantages of the invention will become apparent from the following description of example embodiments with reference to the accompanying drawings, wherein the same or corresponding portions will be denoted by the same reference numerals and wherein:
FIG. 1 is the block diagram of an image processing unit according to the invention;
FIG. 2 is the flowchart describing the image processing method according to a first embodiment of the invention;
FIGs. 3A and 3B show an example of an image array targeted for processing;
FIG. 4 is the flowchart describing the image processing method according to a second embodiment of the invention; and
FIG. 5 is the flowchart describing the image processing method according to a third embodiment of the invention.
DETAILED DESCRIPTION OF THE EXAMPLE EMBODIMENTS
[0015] Hereafter, example embodiments of the invention will be described with reference to the accompanying drawings. To facilitate the understanding of the description, the same or corresponding portions will be denoted by the same reference numerals in the drawings. The portions denoted by the same reference numerals will be described only once.
[0016] FIG. 1 is the block diagram of an image processing unit according to the invention. The image processing unit includes image obtaining means 1 for obtaining an image array including the image of an object targeted for recognition (hereinafter referred to as a "recognition target object"); feature point detecting means 2 for detecting feature points, which are known to be on the recognition target object, in the image array; candidate point detecting means 3 for detecting candidate points, which are candidates for new feature points, in the image array; checking means 4 for checking the candidate points and feature points in one of the multiple images against the candidate points and feature points in each of the other images; and determining means 5 for determining whether the candidate points are on the recognition target object.
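The division into means 1 to 5 maps naturally onto a small processing pipeline. The following is a hypothetical Python skeleton of that structure; the class name, method names and promotion logic are illustrative assumptions and are not taken from the publication, and the placeholder methods would be filled in with the detection, association and similarity checks described in the embodiments below.

```python
from dataclasses import dataclass, field

@dataclass
class ImageProcessingUnit:
    """Skeleton of the unit in FIG. 1: each attribute/method plays the role of one 'means'."""
    image_source: object                                  # image obtaining means 1 (camera or capture device)
    known_features: dict = field(default_factory=dict)    # feature points known to be on the object

    def process_pair(self, frame_a, frame_b):
        feats_a, cands_a = self.detect_points(frame_a)    # feature/candidate point detecting means 2 and 3
        feats_b, cands_b = self.detect_points(frame_b)
        pairs = self.associate(cands_a, cands_b)          # checking means 4
        for i, j in pairs:                                # determining means 5
            if self.moves_with_object(feats_a, feats_b, cands_a[i], cands_b[j]):
                # Promote the candidate point to a newly identified feature point.
                self.known_features[len(self.known_features)] = cands_b[j]

    # Placeholder methods; bodies correspond to steps S2, S3 and S4-S8 described below.
    def detect_points(self, frame): ...
    def associate(self, cands_a, cands_b): ...
    def moves_with_object(self, feats_a, feats_b, ca, cb): ...
```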
[0017] The image obtaining means 1 is a device, for example, a television camera, which picks up a moving image and captures the moving image as an image array. Alternatively, the image obtaining means 1 may be, for example, a video capture device that captures an image array from the moving image data stored in, for example, a video recorder.
[0018] Each of the feature point detecting means 2, the candidate point detecting means 3, the checking means 4 and the determining means 5 is formed of a CPU, ROM, RAM, etc. Each of these means may be provided with an individual piece of hardware. Alternatively, some or all of the means may share the same piece of hardware, and each means may be implemented via software. In this case, each of these means may be implemented via an individual piece of software. Alternatively, some means may share the same piece of software. The configuration in which multiple means are incorporated in a single piece of software may also be employed.
[0019] Next, the image processing method according to a first embodiment of the invention will be described in detail with reference to the flowchart in FIG. 2. The flowchart in FIG. 2 shows the operation executed by the image processing unit according to the invention. First, an image array (a moving image) is input to the image obtaining means 1 (step S1). Here, two successive images are picked up. The obtained image array may be stored in, for example, temporary storage means (not shown).
[0020] Next, the feature point detecting means 2 and the candidate point detecting means 3 detect the feature points and the candidate points in each image, respectively. More specifically, the feature points and the candidate points may be detected by detecting the points having a distinctive luminance or luminance pattern. For example, the feature points and the candidate points may be detected by the method called the Harris operator (refer to page 147 to page 151 of C. Harris and M. Stephens, "A Combined Corner and Edge Detector", Proc. Alvey Vision Conf., 1988). The images and information of the feature points and the candidate points thus detected and the areas around these points are stored in the temporary storage means. Examples of the information stored in the temporary storage means include the positions, the luminance and the feature quantity of the texture of each of the feature points, the candidate points, and the areas around these points; the distances between the feature points/candidate points in one of the images and the feature points/candidate points in the other image, which are calculated based on the similarity in the positions between the feature points/candidate points in one of the images and the feature points/candidate points in the other image; the distances between the candidate points in one of the images and the candidate points in the other image, which are calculated based on the similarity in the luminance between the candidate points in one of the images and the candidate points in the other image; the distances between the candidate points in one of the images and the candidate points in the other image, which are calculated based on the similarity between the luminance pattern of the areas around the candidate points in one of the images and the luminance pattern of the areas around the candidate points in the other image; and the values calculated by linearly and non-linearly transforming these feature quantities. The feature points and the candidate points in each image are provided with the identification numbers.
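The Harris response can be computed directly from image gradients. A minimal sketch in Python/NumPy follows; the window size, the constant k, the relative threshold and the point cap are illustrative assumptions, not values taken from the publication.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_points(image, k=0.04, window=5, threshold=0.01, max_points=200):
    """Detect corner-like points usable as feature/candidate points with the Harris operator."""
    img = image.astype(np.float64)
    iy, ix = np.gradient(img)                      # image gradients (rows, columns)
    # Elements of the local structure tensor, averaged over a square window.
    ixx = uniform_filter(ix * ix, window)
    iyy = uniform_filter(iy * iy, window)
    ixy = uniform_filter(ix * iy, window)
    # Harris corner response R = det(M) - k * trace(M)^2.
    det = ixx * iyy - ixy * ixy
    trace = ixx + iyy
    response = det - k * trace * trace
    # Keep the strongest responses above a relative threshold.
    mask = response > threshold * response.max()
    ys, xs = np.nonzero(mask)
    order = np.argsort(response[ys, xs])[::-1][:max_points]
    return np.stack([xs[order], ys[order]], axis=1)   # (N, 2) array of (u, v) positions
```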
[0021] The feature point detecting means 2 and the checking means 4 associate the thus detected feature points and candidate points in one frame (image) with the thus detected feature points and candidate points in the other frame (image) (step S3). First, the feature point detecting means 2 extracts the feature points that are known to be on the target object from among the detected feature points and candidate points. For example, the feature points that are known to be on the target object and the images of the areas around these feature points are stored in advance, and then compared with the images of the areas around the extracted feature points and candidate points by pattern matching, whereby it is determined whether the extracted feature points match the feature points on the target object. The known feature points have identification numbers. The feature points that match the known feature points are associated with those identification numbers, and stored with the associated identification numbers. Thus, the known feature points in one of the images (frames) are associated with the matching feature points in the other image (frame).
[0022] The checking means 4 associates the candidate points in one of the frames with the candidate points in the other frame using, for example, the distances between the candidate points in one of the frames and the candidate points in the other frame, which are calculated based on the similarity in the feature quantity between the candidate points in one of the frames and the candidate points in the other frame. When the i-th candidate point in the image t is the candidate point p_t^i, and the distance between the candidate point p_t^i and the j-th candidate point p_s^j in the image s is the distance d(p_t^i, p_s^j), the distance d is expressed by the following formula.
[0023] Formula 1
d(p_t^i, p_s^j) = Σ_k w_k ( F_k(p_t^i) − F_k(p_s^j) )²
[0024] In the formula, F_k(a) is the k-th feature quantity of the candidate point a, and w_k is the weighting factor. The shorter the distance d is, the higher the degree of similarity between the candidate point p_t^i and the candidate point p_s^j is. Therefore, the candidate points that are apart from each other by a distance less than a predetermined threshold value and that are apart from each other by the shortest distance d are associated with each other.
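The association by the weighted feature-quantity distance d can be sketched as follows, assuming the feature quantities of each candidate point are stacked in a fixed-length vector (for example position, luminance and texture descriptors); the weights and the distance threshold are illustrative assumptions.

```python
import numpy as np

def match_candidates(feat_t, feat_s, weights, dist_threshold):
    """Associate candidate points in image t with candidate points in image s.

    feat_t : (Nt, K) array, K feature quantities per candidate point in image t
    feat_s : (Ns, K) array, feature quantities of the candidate points in image s
    weights: (K,) weighting factors w_k
    Returns a list of (i, j) index pairs of mutually nearest candidates within the threshold.
    """
    diff = feat_t[:, None, :] - feat_s[None, :, :]     # (Nt, Ns, K) pairwise differences
    dist = np.sum(weights * diff ** 2, axis=2)          # weighted distance d(p_t^i, p_s^j)
    pairs = []
    for i in range(dist.shape[0]):
        j = int(np.argmin(dist[i]))
        # Associate only the closest pair, and only if it is closer than the threshold.
        if dist[i, j] < dist_threshold and np.argmin(dist[:, j]) == i:
            pairs.append((i, j))
    return pairs
```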
[0025] Next, the movements of the feature points known to be on the target object, which are made between the frames, are estimated (step S4). The movements may be estimated by, for example, the factorization method (refer to page 137 to page 154 of C. Tomasi & T. Kanade, "Shape and motion from image streams under orthography: A factorization method", Int. J. Comput. Vision, Vol. 9, No. 2, 1992).
[0026] Next, the degree of similarity in the movement between a candidate point and the feature points is determined. First, the first candidate point is selected as the candidate point i (step S5). Only the candidate points that have not yet been identified as being on, or not on, the target object are targeted for the determination on the degree of similarity in the movement. Accordingly, not only the candidate points that are known to be on the target object but also the candidate points that are known not to be on the target object are excluded from the candidate points targeted for the determination. Thus, the number of the candidate points targeted for the determination is reduced, which increases the efficiency of the processing.
[0027] In step S6, the movement of the candidate point i is estimated in the same manner as in step S4. Then, the estimated movement of the candidate point i is compared with the movements of the feature points estimated in step S4 (step S7). More specifically, when the amount M1 by which the known feature points have moved and the amount M2 by which the candidate point i has moved satisfy the following conditions, it is determined that the movement of the candidate point i is similar to the movements of the known feature points.
[0028] Formula 2

|difference in rotational angle around the x-axis| + |difference in rotational angle around the y-axis| + |difference in rotational angle around the z-axis| ≤ threshold value

|difference in amount of translation in the x-axis direction| + |difference in amount of translation in the y-axis direction| + |difference in amount of translation in the z-axis direction| ≤ threshold value

Here, each difference is taken between the movement M1 of the known feature points and the movement M2 of the candidate point i.
[0029] Namely, when both the difference in the rotational angle and the difference in the translation amount between the candidate point i and the feature points are small, it is determined that the movement of the candidate point i is similar to the movements of the feature points.
[0030] When it is determined that the movement of the candidate point i is similar to the movements of the feature points, the candidate point i is newly identified as a feature point (step S8). On the other hand, when it is determined that the movement of the candidate point i is not similar to the movements of the feature points, step S9 is executed without identifying the candidate point i as a feature point. In step S9, it is determined whether the determinations on all the candidate points have been completed. If it is determined that the determinations on all the candidate points have been completed, the routine ends. On the other hand, if it is determined that at least one of the candidate points has not undergone the determination, step S10 is executed. The next candidate point is selected as the candidate point i in step S10, and steps S6 to S9 are executed on the newly selected candidate point i. In this manner, the degree of similarity in the movement between each one of the candidate points and the feature points is determined.
[0031] Executing the routine described above makes it possible to determine whether a point that newly appears in the image is on the recognition target object. Accordingly, even if the region of the recognition target object, which can be seen on the image, changes and therefore the feature point that has been traced is hidden from the image, a feature point is set in the region of the recognition target object that newly appears on the image, and the feature point is traced. As a result, it is possible to accurately trace the movement of the recognition target object.
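Once the two motions M1 (known feature points) and M2 (candidate point) have been estimated, for example by the factorization method cited above, the comparison in steps S7 and S8 reduces to checking rotation-angle and translation differences against thresholds. The sketch below assumes each motion is given as a rotation matrix and a translation vector; the Euler-angle convention and the threshold values are illustrative assumptions.

```python
import numpy as np

def rotation_angles(R):
    """x-, y-, z-axis rotation angles (radians) of a rotation matrix, assuming R = Rz * Ry * Rx."""
    ry = np.arcsin(-np.clip(R[2, 0], -1.0, 1.0))
    rx = np.arctan2(R[2, 1], R[2, 2])
    rz = np.arctan2(R[1, 0], R[0, 0])
    return np.array([rx, ry, rz])

def motions_similar(R1, t1, R2, t2, rot_threshold=0.05, trans_threshold=0.1):
    """Decide whether motion M2 (candidate point) is similar to motion M1 (known feature points)."""
    rot_diff = np.abs(rotation_angles(R1) - rotation_angles(R2)).sum()
    trans_diff = np.abs(np.asarray(t1) - np.asarray(t2)).sum()
    # The candidate point is treated as moving with the object when both differences are small.
    return rot_diff <= rot_threshold and trans_diff <= trans_threshold
```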
[0032] It may be determined in step S7 that the similarity between the movement of the candidate point i and the movement of the feature point is low. In such a case, it may be determined that the candidate point is not on the recognition target object only if it is determined that the similarity is low not just once but a predetermined number of times in a row. Likewise, it may be determined that the candidate point is on the recognition target object if it is determined that the similarity is high a predetermined number of times in a row.
[0033] In the images shown in FIGs. 3A and 3B, the vertexes of the objects are extracted as the feature points and the candidate points. In this case, if the three-dimensional object on the left side is the recognition target object, vertexes 7 and 15, which are shown in FIG. 3A, are hidden in FIG. 3B. Instead of the vertexes 7 and 15, vertexes 8 and 14 appear in FIG. 3B. According to the first embodiment of the invention, it is possible to appropriately determine that the vertexes 8 and 14 that newly appear in FIG. 3B are on the three-dimensional object based on the movements of the vertexes 8 and 14 and the movements of the other vertexes (for example, vertexes 9 to 12) in the images (not shown) present between the image shown in FIG. 3A and FIG. 3B. Meanwhile, it is determined that vertexes 21 to 24 of the triangular pyramid are not on the recognition target object based on the movements of the vertexes 21 to 24 made between FIG. 3A and FIG. 3B.
[0034] In the description above, it is determined whether the candidate point and the feature points are on the same recognition target object based on the similarity in the movement between the candidate point and the feature points. However, the determination may be made by another method. The flowchart shown in FIG. 4 shows the image processing method according to a second embodiment of the invention. Because steps S11 to S13 are the same as steps S1 to S3 in the flowchart shown in FIG. 2, respectively, the description of steps S11 to S13 will not be provided below.
[0035] In step S14, the checking means 4 estimates the inter-frame matrix F of the feature points. For example, the positions of eight pairs of feature points on the images are obtained in the frame A and the frame B. The two-dimensional coordinate of the i-th feature point on the image in the frame A is (u_a^i, v_a^i), and the two-dimensional coordinate of the i-th feature point on the image in the frame B is (u_b^i, v_b^i). The following matrix U is produced using the position coordinates of these feature points.
[0036] Formula 3
Each row of the matrix U corresponds to one pair of feature points (i = 1, …, 8):

( u_b^i u_a^i,  u_b^i v_a^i,  u_b^i,  v_b^i u_a^i,  v_b^i v_a^i,  v_b^i,  u_a^i,  v_a^i,  1 )
[0037] The transposed matrix U^T is obtained by transposing the matrix U. Then f is obtained as the eigenvector for the minimum eigenvalue of the matrix U^T U, that is, the product of the matrix U^T and the matrix U. The relationship between the eigenvector f and the inter-frame matrix F is as follows.
[0038] Formula 4

f = ( F_11, F_12, F_13, F_21, F_22, F_23, F_31, F_32, F_33 )^T

F = | F_11  F_12  F_13 |
    | F_21  F_22  F_23 |
    | F_31  F_32  F_33 |
[0039] Thus, the inter-frame matrix F of the feature points is calculated.
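The construction of U and the extraction of f as the eigenvector of U^T U for the minimum eigenvalue can be sketched as follows; the singular-value decomposition is used here only as a numerically convenient way to obtain that eigenvector, and the row ordering matches the reconstruction of Formula 3 above.

```python
import numpy as np

def interframe_matrix(pts_a, pts_b):
    """Estimate the inter-frame matrix F from eight (or more) pairs of corresponding points.

    pts_a, pts_b : (N, 2) arrays of (u, v) image coordinates in frame A and frame B, N >= 8.
    """
    ua, va = pts_a[:, 0], pts_a[:, 1]
    ub, vb = pts_b[:, 0], pts_b[:, 1]
    ones = np.ones(len(ua))
    # One row of U per correspondence (the constraint x_b^T F x_a = 0 written out element by element).
    U = np.stack([ub * ua, ub * va, ub,
                  vb * ua, vb * va, vb,
                  ua, va, ones], axis=1)
    # f is the eigenvector of U^T U belonging to the minimum eigenvalue;
    # it equals the right singular vector of U for the smallest singular value.
    _, _, vt = np.linalg.svd(U)
    f = vt[-1]
    return f.reshape(3, 3)   # F arranged as in Formula 4
```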
[0040] Next, the checking means 4 determines whether the movement of a candidate point is similar to the movements of the feature points based on the inter-frame matrix F. First, the first candidate point is selected as the candidate point i (step S15). As in the first embodiment of the invention, only the points that have not yet been identified as being on, or not on, the target object are targeted for the determination on the degree of similarity in the movement. Next, the value J that is used for an evaluation is calculated (step S16). When the coordinate positions of the candidate point on the images in the frames A and B are (u_a, v_a) and (u_b, v_b), the value J is expressed by the following formula.
[0041] Formula 5
J = | x_b^T F x_a | / √( l_1² + l_2² ),  where x_a = (u_a, v_a, 1)^T, x_b = (u_b, v_b, 1)^T and (l_1, l_2, l_3)^T = F x_a
[0042] If the value J is within a predetermined range, it is determined that the movement of the candidate point is similar to the movements of the feature points. The value J corresponds to the distance between the actual position of the candidate point on the frame B and the estimated position of the candidate point, which is obtained by transforming the candidate point on the frame A onto the frame B using the inter-frame matrix F that defines the movements of the feature points, which is made between the frames A and B.
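A sketch of the evaluation in step S16 follows, interpreting J as the distance in frame B between the observed candidate position and the epipolar line obtained by transferring the frame-A position with F; this geometric reading of Formula 5, and the threshold value, are assumptions for illustration.

```python
import numpy as np

def epipolar_distance(F, pt_a, pt_b):
    """Distance in frame B between the candidate point pt_b and the line F @ x_a."""
    x_a = np.array([pt_a[0], pt_a[1], 1.0])
    x_b = np.array([pt_b[0], pt_b[1], 1.0])
    line = F @ x_a               # epipolar line in frame B: line[0]*u + line[1]*v + line[2] = 0
    return abs(x_b @ line) / np.hypot(line[0], line[1])

def is_on_target(F, pt_a, pt_b, j_threshold=1.5):
    """The candidate is treated as being on the recognition target object when J is small (step S17)."""
    return epipolar_distance(F, pt_a, pt_b) <= j_threshold
```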
[0043] In step S17, it is determined whether the movement of the candidate point i is similar to the movements of the feature points by determining whether the value J is within the predetermined range. Only when it is determined that the movement of the candidate point i is similar to the movements of the feature points, the process for newly identifying the candidate point i as a feature point is executed (step S18). Then, it is determined whether the determinations on all the candidate points have been completed.
If it is determined that at least one of the candidate points has not undergone the determination, step S20 is executed. In step S20, the next candidate point is selected as the candidate point i, and steps S16 to S19 are executed on the newly selected candidate point i. In this manner, the degree of similarity in the movement between every candidate point and the feature points is determined.
[0044] As described above, the base inter-frame matrix F that defines the positional changes (movements) of the feature points, which is made between the images, is calculated, and the value J corresponding to the distance between the actual position of the candidate point on the frame B and the estimated position of the candidate point, which is obtained by moving the candidate point on the frame A onto the frame B using the inter-frame matrix F is calculated. Thus, the degree of similarity between the movement of the candidate point and the movements of the feature points is determined.
According to the second embodiment of the invention, as well as according to the first embodiment of the invention, it is possible to accurately determine whether the candidate point that newly appears on the image is on the recognition target object.
[0045] Step S16 and the following steps may be replaced with the steps described below. FIG. 5 is the flowchart showing the image processing method according to a third embodiment of the invention. Steps S21 to S25 in the flowchart in FIG. 5 are the same as steps S11 to S15 in FIG. 4, respectively.
[0046] In step S26, the checking means 4 replaces at least one of the feature points used to calculate the inter-frame matrix F in step S24 with a candidate point, and calculates the inter-frame matrix F'. Next, the calculated inter-frame matrix F' is compared with the inter-frame matrix F obtained in step S24 (step S27). For example, the following value J may be used for an evaluation.
[0047] Formula 6
J = √( Σ_m Σ_n (ΔF_mn)² )

Here, ΔF_mn = F_mn − F'_mn (m, n = 1, 2, 3), where F_mn and F'_mn are the elements of the inter-frame matrices F and F', respectively.
[0048] Each of the inter-frame matrix F and the inter-frame matrix F' defines the movement of the point, which is made between the frames. The value J corresponds to the distance between the inter-frame matrix F and the inter-frame matrix F' (the square root of the square sum). Accordingly, if the value J is within a predetermined range, it is determined that the movement of the candidate point that has replaced at least one of the feature points is similar to the movements of the feature points.
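Steps S26 and S27 can be sketched as follows, reusing the interframe_matrix helper from the earlier sketch: one of the eight feature-point pairs is replaced by the candidate-point pair, F' is re-estimated, and J is the square root of the sum of squared element differences. The normalization of F and F' to unit norm with a consistent sign is an added assumption, because the eight-point estimation only determines each matrix up to scale; the threshold is likewise illustrative.

```python
import numpy as np

def matrix_difference(F, F_prime):
    """J = sqrt(sum of squared element differences) between F and F' (Formula 6)."""
    def normalize(M):
        M = M / np.linalg.norm(M)
        # Fix the sign so that the largest-magnitude element is positive.
        return M if M.flat[np.argmax(np.abs(M))] >= 0 else -M
    d = normalize(F) - normalize(F_prime)
    return float(np.sqrt(np.sum(d * d)))

def check_candidate(pts_a, pts_b, cand_a, cand_b, replace_index, j_threshold=0.1):
    """Replace one feature pair with the candidate pair, recompute F', and compare it with F."""
    F = interframe_matrix(pts_a, pts_b)
    pts_a2, pts_b2 = pts_a.copy(), pts_b.copy()
    pts_a2[replace_index], pts_b2[replace_index] = cand_a, cand_b
    F_prime = interframe_matrix(pts_a2, pts_b2)
    return matrix_difference(F, F_prime) <= j_threshold
```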
[0049] In step S27, it is determined whether the movement of the candidate point i is similar to the movements of the feature points by determining whether the value J is within the predetermined range. Only when it is determined that the movement of the candidate point i is similar to the movement of the feature point, the process for newly identifying the candidate point i as a feature point is executed (step S28). Then, it is determined whether the determinations on all the candidate points have been completed.
If it is determined that at least one of the candidate points has not undergone the determination, step S30 is executed. In step S30, the next candidate point is selected as the candidate point i, and steps S26 to S29 are executed on the newly selected candidate point i. In this manner, the degree of similarity in the movement between every candidate point and the feature points is determined.
[0050] In the above description, only one of the eight pairs of known feature points is replaced with the candidate point, and the inter-frame matrix F' is calculated. Alternatively, multiple feature points may be replaced with the candidate points at the same time. The inter-frame matrix of the eight pairs of known feature points and the inter-frame matrix of the eight pairs of candidate points may be compared with each other. However, if the inter-frame matrix is obtained while the candidate points on the recognition target object and the candidate points that are not on the recognition target object are mixed with each other, it may be determined that the movement of the candidate point on the recognition target object is not similar to the movement of the feature points, and, therefore, it may be erroneously determined that the candidate point, which is actually on the recognition target object, is not on the recognition target object, or it may be erroneously determined that the candidate point, which is actually not on the recognition target object, is on the recognition target object. Accordingly, two or more known feature points may be replaced with the candidate points and the inter-frame matrix may be obtained only when all the candidate points to be replaced are close to each other or when all the candidate points to be replaced form the boundary of what is estimated to be the same face. Thus, the efficiency of the processing is higher than when the feature points are replaced with the candidate points one by one. In addition, it is possible to suppress erroneous determination, which increases the accuracy of the image recognition.
[0051] In the embodiments described above, the description is made on the assumption that the image obtaining means 1 is a monocular camera. If the image obtaining means 1 is a stereo camera, it is possible to obtain the spatial position of the recognition target object, especially the spatial positions of the feature points and the candidate points. In this case, the movement of the feature point (the movement of the target object) made between the frames is estimated based on the spatial positions. Then, it may be determined whether the candidate point is on the target object based on the distance between the actual spatial position of the candidate point in the latter frame and the spatial position of the candidate point that may be reached when the candidate point moves in the same manner as the feature point.
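With stereo input, the check reduces to rigid-motion estimation in 3D. The sketch below uses the Kabsch/Procrustes method to estimate a rotation R and translation t from the feature points' spatial positions in the two frames, then compares the predicted and observed candidate positions; the use of this particular estimator and the distance threshold are illustrative assumptions, not requirements of the publication.

```python
import numpy as np

def estimate_rigid_motion(points_prev, points_next):
    """Estimate rotation R and translation t mapping feature points of the earlier frame onto the later one."""
    c_prev, c_next = points_prev.mean(axis=0), points_next.mean(axis=0)
    H = (points_prev - c_prev).T @ (points_next - c_next)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_next - R @ c_prev
    return R, t

def candidate_on_target_3d(feat_prev, feat_next, cand_prev, cand_next, dist_threshold=0.02):
    """The candidate is judged to be on the target when it moves in 3D like the feature points do."""
    R, t = estimate_rigid_motion(feat_prev, feat_next)
    predicted = R @ cand_prev + t
    return np.linalg.norm(predicted - cand_next) <= dist_threshold
```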
[0052] While the invention has been described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the example embodiments or constructions. To the contrary, the invention is intended to cover various modifications and equivalent arrangements. In addition, while the various elements of the example embodiments are shown in various combinations and configurations, which are exemplary, other combinations and configurations, including more, less or only a single element, are also within the scope of the invention.

Claims

1. An image processing unit, characterized by comprising: image obtaining means for obtaining multiple images of a recognition target object; feature point detecting means for detecting a feature point, which is known to be on the recognition target object, in each of the images; candidate point detecting means for detecting a candidate point, which is a candidate for a feature point, in each one of the multiple images independently of the other of the multiple images; checking means for associating the candidate points in the multiple images with each other; and determining means for determining whether the candidate point is on the recognition target object based on association between the feature points in the multiple images and association between the candidate points in the multiple images, and for newly identifying the candidate point as a feature point if it is determined that the candidate point is on the recognition target object.
2. The image processing unit according to claim 1, wherein the determining means evaluates a degree of similarity between a movement of the feature point, which is made between the images, and a movement of the candidate point, which is made between the images, based on a degree of conformance of a positional change of the candidate point, made between the images, to a geometric correlation that defines the movement of the feature point made between the images.
3. The image processing unit according to claim 1, wherein the determining means calculates a base matrix that expresses a movement of the feature point, made between the images, based on association between positions of the feature points in the multiple images, and determines that the candidate point is on the recognition target object if a difference between a position, which is estimated by transforming a position of the candidate point before a movement using the base matrix, and a position, which is reached by the candidate point after the movement and which is associated with the position of the candidate point before the movement, is equal to or less than a threshold value.
4. The image processing unit according to claim 1, wherein the determining means calculates a base matrix that defines a movement of the feature point, which is made between the images, based on association between positions of the feature points in the multiple images, and a base matrix that defines a movement of the candidate point, which is made between the images, based on association between positions of the candidate points in the multiple images, and determines that the candidate point is on the recognition target object if a difference between the base matrixes is equal to or less than a threshold value.
5. The image processing unit according to claim 4, wherein the determining means determines whether the difference between the base matrixes is equal to or less than the threshold value based on a square sum of differences between elements of the base matrixes.
6. The image processing unit according to claim 1, wherein the determining means estimates a three-dimensional position, which is estimated to be reached by the candidate point after a movement based on a three-dimensional position of the feature point and a manner in which the feature point moves, and determines whether the candidate point is on the recognition target object based on a difference between the estimated three-dimensional position of the candidate point and a three-dimensional position, which is reached by the candidate point after the movement and which is associated with a three-dimensional position of the candidate point before the movement.
7. An image processing method, characterized by comprising: obtaining multiple images of a recognition target object; detecting a feature point, which is known to be on the recognition target object, in each of the images; detecting a candidate point, which is a candidate for a feature point, in each one of the multiple images independently of the other of the multiple images; associating the candidate points in the multiple images with each other; determining whether the candidate point is on the recognition target object based on association between the feature points in the multiple images and association between the candidate points in the multiple images; and newly identifying the candidate point as a feature point if it is determined that the candidate point is on the recognition target object.
8. The image processing method according to claim 7, wherein it is determined whether the candidate point is on the recognition target object based on the association between the feature points in the multiple images and the association between the candidate points in the multiple images, using a degree of conformance of a positional change of the candidate point, made between the images, to a geometric correlation that defines a movement of the feature point made between the images.
9. The image processing method according to claim 7, wherein whether the candidate point is on the recognition target object is determined based on the association between the feature points in the multiple images and the association between the candidate points in the multiple images, in a manner in which a base matrix that expresses a movement of the feature point, made between the images, is calculated based on association between positions of the feature points in the multiple images, and a difference between a position, which is estimated by transforming a position of the candidate point before a movement using the base matrix, and a position, which is reached by the candidate point after the movement and which is associated with the position of the candidate point before the movement, is compared with a predetermined threshold value.
10. The image processing method according to claim 7, wherein whether the candidate point is on the recognition target object is determined based on the association between the feature points in the multiple images and the association between the candidate points in the multiple images, in a manner in which a base matrix that defines a movement of the feature point, which is made between the images, is calculated based on association between positions of the feature points in the multiple images, a base matrix that defines a movement of the candidate point, which is made between the images, is calculated based on association between positions of the candidate points in the multiple images, and a difference between the base matrixes is compared with a threshold value.
11. The image processing method according to claim 10, wherein whether the difference between the base matrixes is equal to or less than the threshold value is determined based on a square sum of differences between elements of the base matrixes.
12. The image processing method according to claim 7, wherein whether the candidate point is on the recognition target object is determined based on the association between the feature points in the multiple images and the association between the candidate points in the multiple images, using a difference between a three-dimensional position, which is estimated to be reached by the candidate point after a movement based on a three-dimensional position of the feature point and a manner in which the feature point moves, and a three-dimensional position, which is reached by the candidate point after the movement and which is associated with a three-dimensional position of the candidate point before the movement.
13. An image processing unit, comprising: image obtaining apparatus that obtains multiple images of a recognition target object; feature point detecting apparatus that detects a feature point, which is known to be on the recognition target object, in each of the images; candidate point detecting apparatus that detects a candidate point, which is a candidate for a feature point, in each one of the multiple images independently of the other of the multiple images; checking apparatus that associates the candidate points in the multiple images with each other; and determining apparatus that determines whether the candidate point is on the recognition target object based on association between the feature points in the multiple images and association between the candidate points in the multiple images, and that newly identifies the candidate point as a feature point if it is determined that the candidate point is on the recognition target object.
PCT/IB2007/000732 2006-03-24 2007-03-22 Image processing unit and image processing method WO2007110731A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006083295A JP2007257489A (en) 2006-03-24 2006-03-24 Image processor and image processing method
JP2006-083295 2006-03-24

Publications (1)

Publication Number Publication Date
WO2007110731A1 true WO2007110731A1 (en) 2007-10-04

Family

ID=38318646

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2007/000732 WO2007110731A1 (en) 2006-03-24 2007-03-22 Image processing unit and image processing method

Country Status (2)

Country Link
JP (1) JP2007257489A (en)
WO (1) WO2007110731A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8855366B2 (en) * 2011-11-29 2014-10-07 Qualcomm Incorporated Tracking three-dimensional objects
JP6482505B2 (en) * 2016-08-04 2019-03-13 日本電信電話株式会社 Verification apparatus, method, and program


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08149460A (en) * 1994-11-18 1996-06-07 Sony Corp Moving image processor, moving image coder and moving image decoder
WO2004003849A1 (en) * 2002-06-28 2004-01-08 Seeing Machines Pty Ltd Tracking method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WEI DU ET AL: "Tracking by cluster analysis of feature points using a mixture particle filter", PROCEEDINGS. IEEE CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE, 2005. COMO, ITALY SEPT. 15-16, 2005, PISCATAWAY, NJ, USA,IEEE, 15 September 2005 (2005-09-15), pages 165 - 170, XP010881168, ISBN: 0-7803-9385-6 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009126273A1 (en) * 2008-04-09 2009-10-15 Cognex Corporation Method and system for dynamic feature detection
US8238639B2 (en) 2008-04-09 2012-08-07 Cognex Corporation Method and system for dynamic feature detection
US8411929B2 (en) 2008-04-09 2013-04-02 Cognex Corporation Method and system for dynamic feature detection

Also Published As

Publication number Publication date
JP2007257489A (en) 2007-10-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07734063

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07734063

Country of ref document: EP

Kind code of ref document: A1