WO2013125494A1 - Image verification device, image verification method, and program - Google Patents

Image verification device, image verification method, and program Download PDF

Info

Publication number
WO2013125494A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
partial
search
geometric transformation
registered
Prior art date
Application number
PCT/JP2013/053896
Other languages
French (fr)
Japanese (ja)
Inventor
達勇 秋山
Original Assignee
日本電気株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社
Publication of WO2013125494A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V10/7515 Shifting the patterns to accommodate for positional errors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/753 Transform-based matching, e.g. Hough transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/754 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries involving a deformation of the sample pattern or of the reference pattern; Elastic matching

Definitions

  • The present invention relates to an image collation device, an image collation method, and a program therefor, and in particular to an image collation device, an image collation method, and a program capable of detecting a plurality of instances of a registered image within a search image to be collated.
  • Image collation is a technique for comparing the feature points of a first image with the feature points of a second image to determine whether their degree of similarity or coincidence is high.
  • Using image collation, an image similar to a registered image (sample image) registered in advance can be retrieved from a group of search images (query images).
  • Patent Documents 1 to 6 shown below are known as examples of an image collating apparatus.
  • The image collating apparatus 501 disclosed in Patent Document 1 has, as shown in the accompanying figure, an image collating module 501A as its main part, which comprises: a projection affine parameter estimation module 250 that first estimates affine parameters for all registered images; a partial area projection module 262 that projects partial areas of the registered image RI onto the search image QI using the estimated affine parameters and associates the partial areas with each other; a projection image matching module 264 that obtains the feature amounts of the partial areas projected onto the search image QI and compares each partial area of the registered image RI with that of the search image QI to determine match or mismatch; and a unit that counts, for each registered image, the number of times the projection image matching module 264 determines a match.
  • The image collation apparatus 501 having such a configuration performs registration processing and search processing. In the registration processing, after the arrangement of the feature points of the registered image RI and the feature vectors of the registered image RI are calculated, the partial-region feature amounts of the registered image RI are calculated.
  • In the search processing, a projection image partial area is determined for each of the plurality of affine parameters, and it is determined whether the partial area of the registered image RI matches the partial area of the projection image obtained with any single affine parameter.
  • In other words, the projection image partial area is determined for each of a plurality of single affine parameters, and the partial areas are collated on that basis.
  • The affine parameters can be replaced with arbitrary geometric transformation parameters, such as a homography matrix representing a correspondence between two planes.
  • Patent Document 2 discloses a technique that uses features having invariance with respect to scale, affine transformation, and perspective distortion for the purpose of searching for a partial image within a search image. As the perspective transformation, homography transformation is given as an example.
  • Patent Document 3 discloses, for the purpose of searching for Japanese text in an image, a method in which some or all of the characters are processed so as to form connected components, and the center of gravity of each connected component is obtained as a feature point.
  • Patent Document 4 discloses a technique for preventing a registered image and a search image from being erroneously determined to match under a certain matching condition. Patent Document 4 further discloses an image collation technique that can collate with high accuracy even when a geometric transformation such as an affine transformation exists between the two images being compared.
  • Patent Document 5 discloses a method that enables searching for images that have been rotated or cropped within the search image while preventing a drop in accuracy when the number of local feature points is insufficient: candidate images are specified by extracting local features, a geometric transformation parameter that minimizes the distance between sample feature points and query feature points is obtained, the global features are compared, and an overall similarity combining the local features and the global features is calculated.
  • Patent Document 5 also discloses a method of obtaining geometric transformation parameters by RANSAC (RANdom SAmple Consensus), that is, by repeatedly evaluating and voting on the errors of parameters estimated from randomly sampled points, as in the sketch below.
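The following is a minimal sketch of the RANSAC-style estimation referred to above, written in Python with NumPy. It estimates affine parameters from point correspondences for simplicity; the function names, inlier threshold, and iteration count are illustrative and not taken from Patent Document 5.

```python
import numpy as np

def estimate_affine(src, dst):
    """Solve dst = A @ src + t exactly from 3 point correspondences."""
    M, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        M.append([x, y, 0, 0, 1, 0]); b.append(u)
        M.append([0, 0, x, y, 0, 1]); b.append(v)
    p = np.linalg.solve(np.array(M, float), np.array(b, float))
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = np.array([p[4], p[5]])
    return A, t

def ransac_affine(src_pts, dst_pts, iters=500, thresh=3.0, rng=None):
    """Estimate affine parameters by random sampling and inlier voting.
    src_pts, dst_pts: (N, 2) arrays of corresponding feature point coordinates."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = 0, None
    n = len(src_pts)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)
        try:
            A, t = estimate_affine(src_pts[idx], dst_pts[idx])
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample, try another
        proj = src_pts @ A.T + t
        err = np.linalg.norm(proj - dst_pts, axis=1)
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (A, t)
    return best_model, best_inliers
```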
  • However, in Patent Documents 1 to 5 and related technologies that combine them, when a plurality of instances of the same registered image are included in the search image, at most one instance can be detected; there has been the disadvantage that the position and number of the individual instances cannot be detected with high accuracy.
  • An object of the present invention is to provide an image collation device, an image collation method, and a program that can detect a plurality of instances when the same registered image is included in the search image more than once, thereby expanding the versatility of the apparatus.
  • To achieve this object, an image collation apparatus according to the present invention includes storage means that stores a registered image having predetermined feature points, and image processing means that collates the registered image against a search image.
  • The image processing means comprises: a geometric transformation parameter estimation unit that estimates geometric transformation parameters based on the feature points of the registered image and the search image and their arrangements; a partial area determination unit that, using the geometric transformation parameters estimated by the geometric transformation parameter estimation unit, determines a plurality of projected partial image areas in the search image corresponding to the partial area images in the registered image; a partial region feature amount calculation unit that calculates the feature amounts of the plurality of projected partial image areas of the search image and of the partial area images of the registered image; a partial image collation unit that determines a match between each partial search image and the partial registered image by comparing the feature amounts of the plurality of partial search images with the feature amount of the partial registered image specified in the partial region of the registered image; and a registered image position detection unit that, using the geometric transformation parameters, calculates and detects the position of each matched partial search image.
  • To achieve this object, an image collation method according to the present invention is performed in an apparatus that includes storage means storing a registered image having predetermined feature amounts and image processing means that collates the registered image against a search image.
  • In this method, a geometric transformation parameter estimation unit of the image processing means estimates geometric transformation parameters based on the feature points of the registered image and the search image and their arrangements; using the estimated geometric transformation parameters, a partial area determination unit of the image processing means determines a plurality of projected partial image areas in the search image corresponding to the partial image areas of the registered image; a partial region feature amount calculation unit of the image processing means calculates the feature amounts of the plurality of projected partial image areas of the search image and of the partial area images of the registered image; a partial image collation unit of the image processing means compares the feature amounts of the plurality of partial search images with the feature amount of the partial registered image specified in the partial region of the registered image to determine whether each partial search image matches the partial registered image; and, when a partial search image is determined to match the partial registered image, an image position detection unit of the image processing means calculates and detects the position of that partial search image based on the geometric transformation parameters of the search image with respect to the registered image.
  • To achieve this object, an image collation program according to the present invention is used in an apparatus that includes storage means storing a registered image having predetermined feature amounts and image processing means that collates the registered image against a search image, and causes a computer provided in the image processing means to execute: a geometric transformation parameter estimation procedure for estimating geometric transformation parameters based on the feature points of the registered image and the search image and their arrangements; a partial region determination procedure for determining, using the estimated geometric transformation parameters, a plurality of projected partial image regions in the search image corresponding to the partial region images in the registered image; a feature amount calculation procedure for calculating the feature amounts of the plurality of projected partial image regions of the search image and of the partial region images of the registered image; a partial image matching procedure for determining a match between each partial search image and the partial registered image; and an image position detection procedure for calculating and detecting, using the geometric transformation parameters, the position of a matched partial search image when the partial image matching procedure determines that the partial search image and the partial registered image match.
  • In the present invention, the geometric transformation parameter estimation unit estimates the geometric transformation parameters of the search image with respect to the registered image, the partial image matching unit determines a match by comparing the feature points of the partial registered image with those of the partial search image, and the position calculation unit calculates the position of each matched partial search image using the geometric transformation parameters. Therefore, even when a plurality of partial search images exist in the search image, their positions can be calculated individually; this expands the versatility of the apparatus and makes it possible to provide an image collation device, an image collation method, and a program superior to the related technologies described above.
  • FIG. 11 is an explanatory diagram illustrating an example of a table used when calculating the reliability of a geometric transformation parameter, in relation to FIG. 10. Further figures are block diagrams showing a configuration example of the third embodiment of the present invention and a detailed configuration example of the integration processing unit disclosed in the third embodiment.
  • An image collation apparatus 101 includes storage means 10 that stores a registered image RI having feature amounts, and image processing means 12 that collates the registered image RI against a search image QI.
  • The storage means 10 stores in advance the registered image RI having the feature amounts and various data calculated from the registered image RI (the partial registered images RP and the like).
  • The image processing means 12 includes: a geometric transformation parameter estimation unit 16 that estimates geometric transformation parameters based on the feature points of the registered image RI and the search image QI and their arrangements; a partial area determination unit 14 that, using the geometric transformation parameters estimated by the geometric transformation parameter estimation unit 16, determines the projected partial image areas in the search image QI corresponding to the partial area images in the registered image RI; a partial region feature amount calculation unit 18 that calculates the feature amounts of the plurality of projected partial image areas of the search image QI and of the partial area images of the registered image RI; a partial image collation unit 20 that determines a match between each partial search image and the partial registered image by comparing the feature amounts of the plurality of partial search images with the feature amount of the partial registered image specified by the partial region of the registered image RI; and an image position detection unit 22 that calculates and detects the position of a matched partial search image using the geometric transformation parameters.
  • The geometric transformation parameter estimation unit 16 includes a parameter estimation function (candidate parameter calculation function 16a) that calculates and estimates plural sets of candidate geometric transformation parameters for the partial search images.
  • The image position detection unit 22 includes a parameter reliability calculation function 22a that calculates, for each estimated geometric transformation parameter, the degree of coincidence between the partial search images and the partial registered image and assigns a high reliability to geometric transformation parameters with a high degree of coincidence, and an image position estimation function 22b that, using a highly reliable geometric transformation parameter, calculates and estimates in which area of the search image QI the image area corresponding to the registered image RI lies.
  • The image position detection unit 22 described above also includes a corresponding position calculation function 22c that calculates the position of a partial search image using the geometric transformation parameters and the preset coordinates of the partial registered image.
  • As described above, the geometric transformation parameter estimation unit estimates the geometric transformation parameters of the search image QI with respect to the registered image RI, the partial image collation unit determines a match by comparing the feature points of the partial registered image with those of the partial search image, and the position calculation unit calculates the position of each matched partial search image using the geometric transformation parameters. Therefore, even when a plurality of partial search images exist in the search image QI, their positions can be calculated individually; that is, the position of each of the plurality of partial search images included in the search image QI can be calculated, which increases the versatility of the apparatus.
  • The geometric transformation parameter estimation unit 16 described above first has a feature point extraction/specification function that extracts feature points from each of the registered image RI and the search image QI and specifies the arrangement positions of the extracted feature points.
  • The geometric transformation parameter estimation unit 16 also includes a feature vector calculation function that calculates a string of geometric invariants (an invariant feature vector) characterizing the arrangement of the feature points, and further a geometric transformation parameter estimation function that estimates geometric transformation parameters from the feature point arrangement information of the registered image RI and the search image QI.
  • A geometric transformation parameter is a parameter representing the kind and amount of geometric transformation between two images.
  • Examples of the geometric transformation include affine transformation and homography transformation.
  • The affine transformation can represent translation, scaling, rotation, and skew (oblique distortion) of features on the image.
  • The homography transformation can additionally represent trapezoidal distortion.
  • A geometric transformation parameter is therefore a set of parameters representing a geometric transformation between two images, such as an affine parameter set or a homography matrix (see the sketch below).
  • These parameters may also be approximated; as approximate values, the length ratio of characteristic portions, an area ratio, a ratio of some cumulative values, or the like can be used.
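As a concrete illustration of how these two kinds of parameters act on image coordinates, the following NumPy sketch maps a single point of the registered image with an affine parameter set (linear part plus translation) and with a homography matrix; all numeric values are made up for illustration.

```python
import numpy as np

# Affine parameters: 2x2 linear part A and translation t
# (translation, scaling, rotation, and skew).
A = np.array([[1.2, 0.1],
              [-0.1, 1.2]])
t = np.array([15.0, -8.0])

# Homography: 3x3 matrix acting on homogeneous coordinates
# (additionally represents trapezoidal / perspective distortion).
H = np.array([[1.2,  0.1, 15.0],
              [-0.1, 1.2, -8.0],
              [1e-4, 2e-4, 1.0]])

p = np.array([100.0, 50.0])             # a point in the registered image
p_affine = A @ p + t                     # affine mapping
ph = H @ np.array([p[0], p[1], 1.0])     # homography mapping
p_homog = ph[:2] / ph[2]                 # divide by the scale factor
```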
  • In some cases, the search image QI as a whole has non-linear distortion.
  • Even then, by obtaining and handling geometric transformation parameters for each partial search image QP individually, the non-linear transformation can be approximated by a linear transformation for each part.
  • The geometric transformation parameters between the registered image RI and the search image QI can be obtained by various methods. First, when the external parameters indicating the installation angle of the camera that captured the registered image RI and the external parameters of the camera that captured the search image QI are known in advance, the geometric transformation parameters can be calculated from the difference between the external parameters.
  • When the same object (including characters and codes) appears in both the registered image RI and the search image QI, and the object imaged in the search image QI has moved relative to the object imaged in the registered image RI, the geometric transformation parameters can be obtained by comparing characteristics of that same object (for example, the coordinates of its circumscribed rectangle), as in the sketch below.
  • When an index appears in the images, the geometric transformation parameters can be calculated using the image portion of the index.
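For instance, if the four corner coordinates of the circumscribed rectangle of the same object are known in both images, a homography relating them can be computed directly. The following OpenCV/NumPy sketch assumes such corner coordinates are available; the numeric values are illustrative.

```python
import numpy as np
import cv2

# Corner coordinates of the circumscribed rectangle of the same object,
# as observed in the registered image and in the search image (illustrative).
corners_registered = np.float32([[10, 10], [210, 10], [210, 110], [10, 110]])
corners_search     = np.float32([[52, 40], [248, 55], [240, 160], [45, 150]])

# The homography relating the two quadrilaterals serves as the
# geometric transformation parameter between the images.
H = cv2.getPerspectiveTransform(corners_registered, corners_search)
```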
  • A geometric transformation invariant is data indicating a feature amount of an image that is not lost by geometric transformation, i.e. a quantity that can be reproduced using the geometric transformation parameters.
  • The geometric transformation invariant is a feature of the image, and its concrete calculation may be executed by the partial region feature amount calculation unit 18 (one classic example is sketched below).
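As one concrete example of such an invariant (an illustrative choice; the text does not fix a particular invariant), the ratio of the areas of two triangles formed by four feature points is unchanged under any affine transformation:

```python
import numpy as np

def triangle_area(p, q, r):
    """Area of the triangle spanned by three 2-D points."""
    return 0.5 * abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1]))

def affine_invariant(points):
    """Affine-invariant quantity from four feature points: an affine map scales
    every area by |det A|, so the ratio of two triangle areas is unchanged.
    Assumes the four points are in general position (no collinear triple)."""
    p0, p1, p2, p3 = points
    return triangle_area(p0, p1, p2) / triangle_area(p0, p1, p3)
```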
  • The geometric transformation parameter estimation unit 16 estimates the geometric transformation parameters of the search image QI with respect to the registered image RI.
  • For example, the geometric transformation parameter estimation unit 16 determines whether the same or a similar object is imaged by checking whether a common geometric transformation invariant exists between the registered image RI and the search image QI.
  • The geometric transformation parameters can then be estimated by calculating parameters such that the feature portions corresponding to the geometric transformation invariants overlap.
  • Using the degree of coincidence of the geometric transformation invariants between the registered image RI and the search image QI as an index, the parameters can be calculated over all combinations or random combinations by voting, by the least squares method, by optimization calculation, or the like.
  • This estimation method can perform estimation even if the external parameters of the cameras are not known in advance and even if an object whose shape is specified in advance is not reliably captured in the search image QI. For this reason, it is suitable for collation processing that determines whether an image identical or similar to the registered image RI (or a part of it) is included in the search image QI.
  • The geometric transformation parameter estimation unit 16 estimates geometric transformation parameters for each partial search image QP. Moreover, instead of estimating only one geometric transformation parameter for one partial search image QP, a plurality of candidate geometric transformation parameters may be estimated.
  • In this way, geometric transformation parameters adapted to each partial search image QP can be obtained without specifying the coordinates and shape of the partial search image QP in advance.
  • The estimated geometric transformation parameters are thus values adapted to the respective partial search image QP, and geometric transformation parameters corresponding to each of the partial search images QP are obtained.
  • The partial area determination unit 14 described above has a function of calculating a plurality of partial areas of the search image QI that correspond to the partial registered images RP.
  • The partial areas may be obtained by applying image processing such as clustering, binarization, or discriminant analysis to the search image QI and taking circumscribed rectangles, or, when the geometric transformation parameters described above have been obtained in advance, by projecting the positions of the partial registered images RP onto the search image QI using those geometric transformation parameters.
  • In other words, the partial region determination unit 14 may specify a partial search image QP by image processing on the search image QI, or may specify it by projecting the coordinates of a partial registered image RP in the registered image RI onto the search image QI using the geometric transformation parameters, as in the sketch below.
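A minimal sketch of the projection-based approach, assuming the geometric transformation parameter is given as a 3x3 homography matrix H (as in the second embodiment described later) and using OpenCV's perspectiveTransform; the function name is illustrative.

```python
import numpy as np
import cv2

def project_partial_region(corners_rp, H):
    """Project the corner coordinates of a partial registered image RP onto
    the search image QI with homography H to obtain the region of the
    partial search image QP.
    corners_rp: (N, 2) array of corner points in the registered image."""
    pts = corners_rp.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```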
  • The partial region feature amount calculation unit 18 described above calculates the feature amount of each partial search image QP.
  • The feature amount may be a geometric transformation invariant or another type of feature that is not a geometric transformation invariant, as long as it is a feature amount suitable for determining a match between the partial registered image RP and the partial search image QP.
  • The features of the partial registered images RP are calculated in advance, for example using the partial region feature amount calculation unit 18, and are stored in the storage means 10.
  • The partial image collation unit 20 described above determines a match between a partial search image QP and the partial registered image RP by comparing the feature amounts of the plurality of partial search images QP with the feature amount of the partial registered image RP specified in the partial region of the registered image RI. This match determination can be performed by checking whether the difference between the feature amounts, or their ratio, falls within a range indicated by a static or dynamic threshold value, as in the sketch below.
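A minimal sketch of such a threshold-based match decision on feature vectors; the relative-difference criterion and the threshold value are illustrative choices, not prescribed by the text.

```python
import numpy as np

def partial_images_match(feat_rp, feat_qp, threshold=0.25):
    """Decide whether a partial registered image RP and a partial search image QP
    match by comparing their feature vectors: here, the norm of the difference
    relative to the norm of the RP feature against a fixed threshold."""
    diff = np.linalg.norm(feat_rp - feat_qp)
    scale = np.linalg.norm(feat_rp) + 1e-12  # avoid division by zero
    return diff / scale <= threshold
```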
  • The image position detection unit 22 has a function of calculating the position of each matched partial search image QP using the geometric transformation parameters.
  • The geometric transformation parameters are useful for specifying the partial search image QP.
  • In this way, a plurality of partial search images QP included in the search image QI can be found and their positions calculated. Furthermore, by determining the positions of the partial search images QP, the number of partial registered images RP present in the search image QI can be calculated.
  • FIG. 3 is a block diagram illustrating a configuration example for the case where the reliability of the geometric transformation parameters is calculated, although part of the description is given out of order.
  • In this configuration, the geometric transformation parameter estimation unit 16 includes a candidate parameter calculation function 16a.
  • The image position detection unit 22 described later includes a parameter reliability calculation function 22a for determining the reliability of the geometric transformation parameters, and an image position estimation function 22b that estimates and calculates the position of a partial search image QP using the geometric transformation parameters.
  • The candidate parameter calculation function 16a of the geometric transformation parameter estimation unit 16 calculates a plurality of sets of candidate geometric transformation parameters for the partial search images QP.
  • The parameter reliability calculation function 22a of the image position detection unit 22 calculates, for each candidate geometric transformation parameter, the degree of coincidence between the partial search images QP and the partial registered image RP, and assigns a high reliability to geometric transformation parameters with a high degree of coincidence.
  • The degree of coincidence may be the degree of coincidence of an individual partial search image QP, or, when there are a plurality of partial search images QP subjected to the same geometric transformation, the ratio of the number of matched partial search images QP to the total number of partial search images QP.
  • For example, the parameter reliability calculation function 22a determines, using a given geometric transformation parameter, whether each partial search image QP matches the partial registered image RP, and if the number of partial search images QP determined to match is at or above a certain level, it determines that the geometric transformation parameter is reliable.
  • The image position estimation function 22b is a function that calculates the position of a partial search image QP using a highly reliable geometric transformation parameter. That is, using the geometric transformation parameter determined by the parameter reliability calculation function 22a to have high reliability, the image position estimation function 22b can estimate at which position in the search image QI the partial search image QP corresponding to the partial registered image RP lies.
  • Here, the reliability is determined not by the geometric transformation parameters themselves, which are based on comparing geometric transformation invariants between the registered image RI and the search image QI, but by comparing the feature amounts of the partial registered image RP and the partial search image QP; therefore, the position of each partial search image QP can be calculated with high accuracy.
  • The geometric transformation parameters derived from the features of the respective partial registered images RP become the optimal geometric transformation parameters for the partial search images QP that match those partial registered images RP.
  • Accordingly, the optimal geometric transformation parameter for each partial search image QP can be selected based on the reliability, and the position of that partial search image QP can be obtained using the selected geometric transformation parameter.
  • The image position detection unit 22 preferably includes a corresponding position calculation function 22c that calculates the position of a partial search image QP using the geometric transformation parameters and the coordinates of the partial registered image RP (see FIG. 1).
  • In this way, the position of the partial search image QP can be calculated in the coordinate system of the registered image RI, and various kinds of image processing corresponding to the coordinates of the registered information can be performed.
  • In operation, the geometric transformation parameter estimation unit 16 first estimates geometric transformation parameters (FIG. 4: step S101).
  • These are geometric transformation parameters established between the registered image RI and the search image QI. If the geometric transformation parameters of the registered image RI with respect to portions of the search image QI (the portions that become partial search images QP) differ locally, different geometric transformation parameters may be obtained for each portion. Conversely, when the geometric transformation parameters of the search image QI with respect to the partial registered images RP differ locally, different geometric transformation parameters may likewise be obtained.
  • Next, the partial region determination unit 14 specifies a partial registered image RP in the registered image RI (FIG. 4: step S102), and the partial registration feature amount RC, which is the feature amount of this partial registered image RP, is read from the storage means 10 (FIG. 4: step S103).
  • The partial area determination unit 14 then determines a partial area of the search image QI and sets it as the partial search image QP (FIG. 4: step S104).
  • The region of the partial search image QP may be specified by projecting the coordinates of the partial registered image RP onto the search image QI using the geometric transformation parameters calculated in step S101.
  • Next, the partial region feature amount calculation unit 18 calculates the partial search feature amount QC of the partial search image QP (FIG. 4: step S105).
  • The partial search feature amount QC is a feature amount in a format comparable to the partial registration feature amount RC.
  • The partial image collation unit 20 then determines whether the partial registered image RP in the registered image RI matches the partial search image QP (FIG. 4: step S106).
  • One technical feature of the image position detection unit 22 is that it calculates the position of the matched partial search image QP (the area where the registered image RI exists) (FIG. 4: step S107).
  • The position of the partial search image QP is preferably calculated using the geometric transformation parameters calculated in step S101 of FIG. 4 themselves, or using a better geometric transformation parameter for specifying the partial search image QP.
  • The example shown in FIG. 5 illustrates calculating the position of a partial search image QP using a highly reliable geometric transformation parameter.
  • First, the parameter reliability calculation function 22a determines, for each of the plurality of geometric transformation parameters, whether each of the plurality of partial search images QP matches (FIG. 5: step S111).
  • In step S111 of FIG. 5, when there are three geometric transformation parameters and four partial search images QP, matching is determined for 12 combinations.
  • The parameter reliability calculation function 22a may tally the reliability using the reliability table TB shown in FIG. 11.
  • The parameter reliability calculation function 22a then determines that a geometric transformation parameter for which at least a certain number of partial search images QP are determined to match can be trusted (FIG. 5: step S112).
  • For example, the geometric transformation parameter with the largest number of matches is taken as the highly reliable geometric transformation parameter.
  • Suppose the first geometric transformation parameter matches in two partial search images QP, the second geometric transformation parameter matches in two partial search images QP, and the third geometric transformation parameter matches in three partial search images QP.
  • In this case, the parameter reliability calculation function 22a determines that the third geometric transformation parameter is the highly reliable geometric transformation parameter (see the sketch below).
  • Using this parameter, the partial search image QP is finally specified (FIG. 5: step S113), and the coordinates of the partial search image QP based on this geometric transformation parameter are set as the position of the partial search image QP (FIG. 5: step S114).
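A minimal sketch of this selection step; the match pattern below is illustrative, and only the per-parameter match counts (2, 2, and 3) are taken from the example above.

```python
import numpy as np

# Match matrix: rows = candidate geometric transformation parameters,
# columns = partial search images QP; True where the QP specified by that
# parameter matched the partial registered image RP.
matches = np.array([
    [True,  True,  False, False],   # 1st parameter: matches in 2 QPs
    [True,  False, True,  False],   # 2nd parameter: matches in 2 QPs
    [True,  True,  True,  False],   # 3rd parameter: matches in 3 QPs
])

match_counts = matches.sum(axis=1)            # [2, 2, 3]
most_reliable = int(np.argmax(match_counts))  # index 2: the 3rd parameter
```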
  • The operation content (information processing) of each component and the cooperative operation between the components during information processing may be programmed and realized by a computer provided in the image processing means. The same applies to each embodiment described later.
  • In this way, it is determined in which area of the search image QI the registered image RI is included, so that when a plurality of instances of the same registered image RI are included in the search image QI, a plurality of them can be detected.
  • Furthermore, when the same registered image RI is included in the search image QI, it is possible to detect at which position in the search image QI the registered image RI is included.
  • FIG. 6 is a diagram for illustrating the effect of the first embodiment.
  • In this example, two partial registered images RP are included in the search image QI.
  • Patent Document 1 described above discloses similar subject matter; however, collation there is performed without considering that the partial search image QP[1] and the partial search image QP[2] lie in different areas, so it is difficult to obtain with high accuracy where the partial registered image RP exists in the search image QI.
  • In the present embodiment, even when a plurality of instances of the same partial registered image RP are included in the search image QI, the plurality of partial search images QP[1] and QP[2] can be detected.
  • Likewise, when a plurality of instances of the same registered image RI are included in the search image QI, the plurality of instances can be detected, and when the same registered image RI is included in the search image QI, it is possible to detect at which position in the search image QI the registered image RI is included.
  • In the second embodiment, with reference to FIGS. 7 to 11, an example that uses feature vectors as the feature amounts and homography transformation as the geometric transformation will be described.
  • The same reference symbols are used for the same components as in the first embodiment described above.
  • The image collation apparatus 102 includes, in order to collate the registered image RI with the search image QI, a partial area determination unit 14, a geometric transformation parameter estimation unit 16, a partial region feature amount calculation unit 18, a partial image collation unit 20, and an image position detection unit 22, each configured substantially as in the first embodiment described above.
  • The partial region feature amount calculation unit 18 has a function of calculating the feature amounts of the partial registered image RP and the search image QI, and of calculating a feature vector having geometric transformation invariants as elements as the partial registration feature amount RC.
  • The partial region feature amount calculation unit 18 calculates a feature amount for each of the registered image RI and the search image QI.
  • The feature amount of the registered image RI may be calculated in advance and stored in the storage means 10.
  • The partial region feature amount calculation unit 18 generates the feature points of the image and their arrangement, and further calculates, from the generated arrangement of the feature points and their coordinates, a string of geometric invariants (a feature vector) characterizing the arrangement of the feature points.
  • The partial region feature amount calculation unit 18 also has a function of calculating the partial registration feature amount RC of the partial registered image RP.
  • The geometric transformation parameter estimation unit 16 has a function of determining whether a feature vector calculated from the partial registered image RP matches a feature vector calculated from the search image QI (invariant match determination function 16b). Based on this, the geometric transformation parameter estimation unit 16 estimates geometric transformation parameters for which the feature vectors match. As the feature points, the geometric transformation parameter estimation unit 16 can use, for example, the centers of gravity of connected regions after binarization, and as the feature point arrangement, a combination of neighboring feature points.
  • The partial area determination unit 14 described above has a function of calculating partial areas of the registered image RI and thereby calculating one or more partial registered images RP.
  • The partial registered images RP may be calculated in advance as part of the registration processing and stored in the storage means 10.
  • The partial region determination unit 14 further has a function of calculating partial search images QP of the search image QI by projection using given geometric transformation parameters.
  • That is, the partial region determination unit 14 uses the geometric transformation parameters estimated by the geometric transformation parameter estimation unit 16 to determine the partial search image QP in the search image QI corresponding to each partial registered image RP in the registered image RI. Specifically, the partial search image QP in the search image QI is determined by applying the geometric transformation parameters to the circumscribed rectangle and outer-periphery coordinates of the partial registered image RP. If n (n ≥ 1) geometric transformation parameters are obtained, n partial search images QP are obtained in this way.
  • The partial region feature amount calculation unit 18 includes a function (partial search feature amount calculation function 18d) for calculating the feature vector (partial search feature amount QC) of each partial search image QP in the search image QI.
  • When n partial search images QP are obtained, n feature vectors (partial search feature amounts QC) are likewise required.
  • This feature vector is, for example, a vector containing a feature point arrangement that is a geometric transformation invariant.
  • The partial image collation unit 20 described above has a function of determining whether the feature vector (partial registration feature amount RC) of the partial registered image RP and the feature vector (partial search feature amount QC) of the partial search image QP specified in the search image QI using the geometric transformation parameters match. For example, the partial image collation unit 20 can determine that the two images match when the difference or ratio between their feature vectors is within a predetermined threshold range.
  • The image position detection unit 22 then calculates the positions of the partial search images QP determined to match by the partial image collation unit 20, using the geometric transformation parameters. In this way, the image position detection unit 22 determines which partial registered image RP is included at which position in the search image QI from the collation results obtained by the partial image collation unit 20.
  • The partial region feature amount calculation unit 18 may include, for example, a feature point extraction function 18a, a feature point arrangement function 18b, and an invariant calculation function 18c.
  • The partial region feature amount calculation unit 18 calculates the feature vector of the registered image RI, the partial registered image RP, the search image QI, or the partial search image QP at the respective timing.
  • The feature vectors of the registered image RI and the partial registered images RP may be calculated in advance as part of the registration processing.
  • The feature vector of the search image QI may be calculated when the geometric transformation parameters are initially estimated by the geometric transformation parameter estimation unit 16, and the feature vector of a partial search image QP may be calculated after the partial search image QP has been specified by the partial region determination unit 14 using the geometric transformation parameters.
  • The feature point extraction function 18a of the partial region feature amount calculation unit 18 described above calculates the feature points of the registered image RI, the partial registered image RP, the search image QI, or the partial search image QP at the respective timing.
  • A feature point is, for example, the center or the center of gravity of a characteristic portion.
  • The feature point arrangement function 18b calculates the arrangement of a plurality of feature points.
  • The invariant calculation function 18c calculates a feature vector having geometric transformation invariants as elements from the arrangement of the feature points (a sketch of a typical feature point extraction step follows).
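A minimal sketch of one common realization of the feature point extraction function 18a, taking the centers of gravity of connected components after binarization (as also mentioned for Patent Document 3 and for unit 16 above); the Otsu thresholding and 8-bit grayscale input are illustrative assumptions.

```python
import cv2

def extract_feature_points(gray_image):
    """Extract feature points as the centers of gravity of connected components
    after binarization. Assumes an 8-bit single-channel grayscale image."""
    _, binary = cv2.threshold(gray_image, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    _, _, _, centroids = cv2.connectedComponentsWithStats(binary)
    # centroids[0] is the background component; drop it.
    return centroids[1:]
```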
  • The image position detection unit 22 may include a parameter reliability calculation function 22a and an image position estimation function 22b.
  • The parameter reliability calculation function 22a calculates the geometric transformation parameter reliability, which is the reliability of a geometric transformation parameter.
  • The image position estimation function 22b estimates, using the geometric transformation parameter reliability, the position at which the registered image RI is at least partially included in the search image QI.
  • In operation, the partial region determination unit 14 first calculates partial areas of the registered image RI and calculates one or more partial registered images RP (FIG. 8: step S201). Then, when the search image QI is received and the search processing is started, the partial region feature amount calculation unit 18 calculates feature vectors whose elements are geometric invariants of the search image QI and the partial registered images RP.
  • Specifically, the feature point extraction function 18a calculates the feature points of the partial registered image RP and the search image QI (FIG. 8: step S202), the feature point arrangement function 18b calculates the arrangement of the plurality of feature points (FIG. 8: step S203), and the invariant calculation function 18c calculates feature vectors whose elements are geometric transformation invariants from the arrangement and coordinates of the feature points (FIG. 8: step S204).
  • Next, the geometric transformation parameter estimation unit 16 determines whether a feature vector calculated from the partial registered image RP matches a feature vector calculated from the search image QI (FIG. 8: step S205), and estimates geometric transformation parameters based on the matching of the feature vectors (FIG. 8: step S206). In the determination of the match and of the geometric transformation parameters (FIG. 8: steps S205 and S206), the match determination may be repeated while changing the values of the geometric transformation parameters, or the parameters may be obtained as an optimization process using voting or the least squares method.
  • Once the geometric transformation parameters have been estimated, the partial region determination unit 14 calculates the partial search images QP of the search image QI by projection using the geometric transformation parameters (FIG. 8: step S207).
  • The partial region feature amount calculation unit 18 then calculates again: it calculates the feature vector of each partial search image QP (FIG. 8: step S208). The partial image collation unit 20 determines whether the feature vector of the partial search image QP specified using the geometric transformation parameters matches the feature vector of the partial registered image RP (FIG. 8: step S209). The image position detection unit 22 then calculates, using the geometric transformation parameters, the positions of the partial search images QP determined to match by the partial image collation unit 20.
  • At this time, the parameter reliability calculation function 22a calculates the geometric transformation parameter reliability, which is the reliability of the geometric transformation parameter (FIG. 8: step S210), and the image position estimation function 22b may estimate, using the geometric transformation parameter reliability, the position at which the registered image RI is at least partially included in the search image QI (FIG. 8: step S211).
  • In this embodiment, the geometric transformation parameter estimation unit 16 includes a homography matrix calculation function 16c that calculates a homography matrix as the geometric transformation parameter.
  • The partial region feature amount calculation unit 18 includes an invariant calculation function 18c that calculates, as a feature, a geometric transformation invariant, i.e. a quantity that remains unchanged even when the partial search image QP is geometrically transformed with the homography matrix relative to the partial registered image RP.
  • The image position detection unit 22 includes a projection position calculation function 22d that calculates the position of a partial search image QP by projecting a partial region of the registered image RI onto the search image QI using the homography matrix.
  • The image position detection unit 22 may also include a parameter reliability calculation function 22a and an image position estimation function 22b.
  • The homography matrix H is a geometric transformation matrix that associates coordinate values; it is a 3 x 3 matrix representing the relationship between a position (xr, yr) in the registered image RI and a position (xq, yq) in the search image QI. Specifically, the following expression (1) is satisfied (reproduced below in standard homogeneous-coordinate form).
  • Here, the symbol a is a constant determined according to the values of (xr, yr) and (xq, yq).
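Expression (1) is not reproduced in this text; from the definitions above it takes the standard homogeneous-coordinate form:

```latex
% Expression (1): homography relation between registered and search image coordinates,
% with a the scale factor determined by (x_r, y_r) and (x_q, y_q).
a \begin{pmatrix} x_q \\ y_q \\ 1 \end{pmatrix}
  = H \begin{pmatrix} x_r \\ y_r \\ 1 \end{pmatrix},
\qquad H \in \mathbb{R}^{3 \times 3}.
```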
  • In operation, the homography matrix calculation function 16c of the geometric transformation parameter estimation unit 16 calculates a homography matrix as the geometric transformation parameter. The invariant calculation function 18c of the partial region feature amount calculation unit 18 then calculates, as a feature, a geometric transformation invariant that remains unchanged even when the partial search image QP is geometrically transformed with the homography matrix relative to the partial registered image RP. Subsequently, the projection position calculation function 22d of the image position detection unit 22 calculates the position of the partial search image QP by projecting the partial area of the registered image RI onto the search image QI using the homography matrix. By using a homography matrix in this way, partial images can be collated using stable image processing techniques.
  • FIG. 10 illustrates an example of the match determination results between the partial registration feature amounts RC and the partial search feature amounts QC when two homography matrices (H1, H2) have been obtained.
  • The upper row shows the partial registered images RP (RP[1] to RP[8]) of the registered image RI (DB Image) read from the storage means 10.
  • The lower rows show the partial search images QP (QP[1] to QP[8], Compensated Image).
  • The feature amounts calculated from these are indicated by the symbol IF in FIG. 10.
  • The partial search images QP are collated [(number of partial search images QP) x (number of obtained geometric transformation parameters)] times; specifically, in the example shown in FIG. 10, 8 x 2 = 16 comparison results are obtained.
  • The parameter reliability calculation function 22a of the image position detection unit 22 fixes the geometric transformation parameter H1 and totals the collation results between the partial registered images RP calculated from the registered image RI and the partial search images QP calculated from the search image QI.
  • In this example, seven of the eight partial search images QP match, so the reliability is calculated as 7/8.
  • Here, the ratio of the number of matching partial search images QP is used as the reliability; alternatively, the reliability may be calculated after weighting by the area of each partial search image QP.
  • The parameter reliability calculation function 22a of the image position detection unit 22 arranges the collation results produced by the partial image collation unit 20 shown in FIG. 10 into the data structure of the reliability table TB in FIG. 11, calculates the parameter reliability from the total number of partial search images QP and the number of matches, and preferably determines from this reliability whether the homography matrix serving as the geometric transformation parameter is reliable (see the sketch below).
  • The homography matrix H1 is determined to be a reliable geometric transformation parameter because its reliability meets a predetermined criterion, for example a reliability of 5/8 or more.
  • For the homography matrix H2, only one of the eight partial search images QP matches, so the reliability is 1/8. Since this is smaller than the predetermined value, the homography matrix H2 is not determined to be a reliable parameter.
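A minimal sketch of this reliability tally, reproducing the 7/8 and 1/8 values of the example; which specific partial search images matched is illustrative, and the 5/8 acceptance threshold follows the figures discussed above.

```python
import numpy as np

# Collation results (True = match) for 8 partial search images QP under the
# two candidate homographies H1 and H2. Only the per-row match counts
# (7 and 1) are taken from the FIG. 10/11 example; the pattern is illustrative.
match_table = {
    "H1": np.array([True, True, True, True, True, True, True, False]),
    "H2": np.array([False, False, True, False, False, False, False, False]),
}

threshold = 5 / 8   # acceptance criterion used in the example
for name, results in match_table.items():
    reliability = results.sum() / results.size   # 7/8 for H1, 1/8 for H2
    print(f"{name}: reliability = {reliability:.3f}, "
          f"reliable = {reliability >= threshold}")
```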
  • Next, the image position estimation function 22b of the image position detection unit 22 estimates, using the reliable transformation parameter, at which position in the search image QI the region corresponding to the partial registered image RP of the registered image RI lies. In the example shown in FIG. 11, only the homography matrix H1 is used as the geometric transformation parameter.
  • The relationship between the coordinates (xr, yr) in the registered image RI and the point (xq, yq) in the search image QI is expressed by expression (1) described above.
  • From this, the position at which the registered image RI (partial registered image RP) exists in the search image QI can be obtained.
  • The homography matrix projects a line segment in the registered image RI onto a line segment in the search image QI. Therefore, when the partial area of the registered image RI is configured as a rectangle, projecting the four sides of that rectangle onto the search image QI yields the corresponding area in the search image QI.
  • In the third embodiment, an integration processing unit 24 is newly added to the image position detection unit 22 provided in advance in the image collation device 103. Further, the geometric transformation parameter estimation unit 16 of the image processing means 12 is equipped with a plural parameter calculation function 16d, and the image position detection unit 22 is newly provided with a position candidate calculation function 22e.
  • The plural parameter calculation function 16d of the geometric transformation parameter estimation unit 16 estimates a plurality of geometric transformation parameters for each partial search image QP when the partial region determination unit 14 calculates a plurality of partial regions. Through this processing, a plurality of geometric transformation parameters are estimated for one partial search image QP.
  • The image position detection unit 22 calculates, for each geometric transformation parameter, the position of the matched partial search image QP as a position candidate.
  • As a result, only the geometric transformation parameters that satisfy the condition of matching the partial registered image RP, and the partial search images QP that can be specified by these geometric transformation parameters, remain.
  • The newly added integration processing unit 24 integrates the position candidates of overlapping partial search images QP when a plurality of partial search images QP overlap in the search image QI, and calculates the position of the partial search image QP from the integrated candidates. Thereby, the number of partial search images QP can be calculated with high accuracy.
  • That is, the integration processing unit 24 integrates two or more position candidates (candidates for the position of a partial search image QP) output by the image position detection unit 22. This makes it possible to calculate a representative position from slightly different position candidates (geometric transformation parameters).
  • The integration processing unit 24 may be provided with any of a union position function 24a, a reliability position function 24b, or a voting position function 24c. The integration processing unit 24 may also include two or more of these three functions and perform the integration processing by combining a plurality of methods.
  • The union position function 24a calculates the union of a plurality of position candidates and takes that union as the position.
  • For example, as shown in FIG. 13 (1) "union", when a plurality of partial search images QP[10a], QP[10b], and QP[10c] have been calculated, the union position function 24a integrates the sum area (OR area) of their overlapping regions to obtain the outer peripheral shape of the partial search image QP[10A].
  • Similarly, the union position function 24a integrates the plurality of partial search images QP[11a] and QP[11b] into the partial search image QP[11A] (a simplified sketch follows).
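A minimal sketch of such an integration, approximating the OR area by the bounding box of axis-aligned candidate boxes for simplicity; the coordinates are illustrative.

```python
import numpy as np

def union_bounding_box(candidate_boxes):
    """Integrate overlapping position candidates, given as axis-aligned boxes
    [x_min, y_min, x_max, y_max], into one region by taking the bounding box
    of their union (a simplification of the OR-area integration)."""
    boxes = np.asarray(candidate_boxes, dtype=float)
    return [boxes[:, 0].min(), boxes[:, 1].min(),
            boxes[:, 2].max(), boxes[:, 3].max()]

# Three slightly different candidates for the same partial search image QP
merged = union_bounding_box([[40, 30, 140, 110],
                             [44, 28, 146, 112],
                             [38, 33, 139, 115]])
```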
The reliability position function 24b calculates the reliability of each geometric transformation parameter and determines the position using the parameter with high reliability. That is, when the partial search images QP[11a] and QP[11b] overlap each other, the reliability position function 24b compares the reliabilities of the geometric transformation parameters that specify QP[11a] and QP[11b], selects the position candidate based on the most reliable parameter, and adopts it as the partial search image QP[11B]. Likewise, when there are several overlapping partial search images QP[10a], QP[10b], and QP[10c], the position candidate given by the most reliable geometric transformation parameter is adopted as the partial search image QP[10B]. Instead of choosing a single candidate, the position may also be determined from a statistical value of the parameter values PM of all geometric transformation parameters whose reliability exceeds a certain level.
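A corresponding sketch of the reliability-based integration, assuming each position candidate carries the reliability score and the parameter values PM of the geometric transformation parameter that produced it; the dictionary layout and the floor argument are assumptions made purely for illustration.

```python
import numpy as np

def select_by_reliability(candidates, floor=None):
    """'candidates' is a list of dicts with keys:
         'position'    -- projected region of the partial search image QP
         'reliability' -- score of the geometric transformation parameter
         'parameters'  -- the parameter values PM (e.g. a flattened homography)
    Returns the position backed by the most reliable parameter or, if a
    reliability floor is given, the mean of all sufficiently reliable
    parameter vectors (a statistical value, as described above)."""
    if floor is not None:
        trusted = [c for c in candidates if c["reliability"] >= floor]
        if trusted:
            return np.mean([c["parameters"] for c in trusted], axis=0)
    return max(candidates, key=lambda c: c["reliability"])["position"]
```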
The voting position function 24c votes the geometric transformation parameters into a parameter space and determines the position from a statistical value of the geometric transformation parameters whose vote counts are equal to or greater than a threshold. Here, a statistical value is any value obtained by statistical processing, such as an average or a median. FIG. 15 shows an example of integration by the voting position function 24c: the reliable geometric transformation parameter values PM[1], PM[2], ..., PM[n] are voted into ballot boxes VT[1], VT[2], ..., VT[n], and for each ballot box VT whose vote count reaches a certain value, the average of the geometric transformation parameters belonging to that box is computed. In the example, the number of votes for ballot box VT[1] exceeds the threshold, so the position of the partial search image QP[10C] is specified using the resulting geometric transformation parameter; similarly, the position of the partial search image QP[11C] is specified using the geometric transformation parameter corresponding to ballot box VT[2].
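The voting-based integration can be sketched as quantizing each parameter vector PM into a ballot box and averaging the boxes that collect enough votes. The bin width and vote threshold below are arbitrary assumptions, not values taken from the patent.

```python
import numpy as np
from collections import defaultdict

def integrate_by_voting(parameter_values, bin_width=0.1, min_votes=3):
    """Vote geometric transformation parameter vectors PM[1..n] into a
    quantized parameter space (the 'ballot boxes' VT), then return one
    representative parameter per box that received at least 'min_votes'
    votes, computed as the mean (a statistical value) of that box."""
    boxes = defaultdict(list)
    for pm in parameter_values:
        key = tuple(np.round(np.asarray(pm) / bin_width).astype(int))
        boxes[key].append(np.asarray(pm, dtype=float))

    representatives = []
    for votes in boxes.values():
        if len(votes) >= min_votes:                  # voting result >= threshold
            representatives.append(np.mean(votes, axis=0))
    # each representative parameter then specifies the position of one partial search image QP
    return representatives
```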
FIG. 17 is a flowchart showing a detailed operation example of the position detection process for the partial search image QP (FIG. 16: step S301). Unlike step S107 of FIG. 4 described above, the process of detecting the region of the search image QI in which a partial registration image RP of the registered image RI exists (that is, the position of the partial search image QP) additionally includes a region integration step (FIG. 17: step S311), in which the integration processing unit 24 integrates the regions by one of the methods described above.
In the third embodiment, because a representative position is calculated from slightly different position candidates, the likelihood of obtaining exactly one region for each registered image RI contained in the search image QI increases, and the number of registered images RI contained in the search image QI can therefore be determined with high accuracy. Since the integration processing unit 24 integrates the overlapping partial search images QP whenever several of them overlap in the search image QI, search results that are presumed to refer to the same object can be merged into one. Furthermore, by averaging the parameter values PM of geometric transformation parameters with sufficient reliability, or by voting, a position close to the true value can be calculated. In this way, the number of partial registration images RP (corresponding to partial search images QP) appearing in the search image QI can be counted exactly.
As shown in the block diagram of the third embodiment, the image processing means 12 is further provided with a display device 95 and with a display control unit 26 that controls how information is displayed on the display device 95. The display control unit 26 displays display data DP related to the partial registration image RP at the position corresponding to the partial search image QP, which improves the convenience of image search. The display data DP may be any expression or description whose content conveys the result of a successful image match to the user. Various kinds of display data DP can be adopted, such as character data, its meaning, a link to data containing the character, or a translation into another language; if the search result is a signboard or a guide map, an explanation of its contents can be displayed.
The display control unit 26 may include a similarity display function 26a, which displays, as display data DP, the similarity between a partial registration image and the corresponding partial search image. FIG. 18 shows an example in which the positions of the partial registration images RP[1] and RP[2] detected from the search image QI are displayed so as to show their correspondence with the search image QI. The display control unit 26 displays the entire search image QI on the display device 95 and, in association with each partial search image QP, shows the matched partial registration images RP[1] and RP[2] together with the similarity value as display data DP. In this example, the display control unit 26 displays "similarity 0.9" as display data DP[1] in association with the partial registration image RP[1], and "similarity 0.8" as display data DP[2] in association with the partial registration image RP[2]. In this way, the certainty of the search for each partial search image QP is presented in an easily understandable manner, and the accuracy of the search result is conveyed to the user by the similarity display function 26a.
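A possible rendering of the similarity display, using OpenCV drawing primitives purely as an example; the patent does not prescribe any particular drawing API, and the box coordinates and scores are assumed inputs.

```python
import cv2

def draw_similarity(display_image, matches):
    """Overlay display data DP on the search image QI.  'matches' is a list
    of (box, similarity) pairs, where 'box' is (x0, y0, x1, y1) of a detected
    partial search image QP and 'similarity' is its score (e.g. 0.9, 0.8)."""
    for (x0, y0, x1, y1), similarity in matches:
        cv2.rectangle(display_image, (x0, y0), (x1, y1), (0, 255, 0), 2)
        cv2.putText(display_image, "similarity %.1f" % similarity,
                    (x0, max(0, y0 - 8)), cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 255, 0), 2)
    return display_image
```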
The display control unit 26 may also include a link display function 26b, which displays, as display data DP, a link to predetermined content in association with the partial registration image RP. The linked content may reside in the same image processing device, or, when the image matching device 103 is connected to the network 96, it may be content managed by another server device 70. The link may point to any information that a computer can normally handle, such as audio or a URL.
The example shown in FIG. 19 superimposes information stored in advance in the storage means 10 or elsewhere in association with the registered image RI (or partial registration image RP), according to the image matching result. The entire search image QI is shown on the display device 95, and because a position candidate for the registered image RI, or an integrated representative region, has been detected, information can be presented around that region. Here, the partial search image QP[3] corresponding to a partial registration image RP is detected in the search image QI, and the information "Explanation 1" (display data DP[3]) is displayed for it. The display data DP[3] is a sentence given as a text character string; if sentences expressing the same explanation in several languages are stored in advance, the language can be chosen to suit the user and displayed accordingly. The display control unit 26 likewise displays the display data DP[4] for the partial search image QP[4], so that the difference between the partial search images QP[3] and QP[4] is shown clearly. Furthermore, if the explanatory text used as display data DP also serves as a link to other content, the user can reach a more detailed explanation with a simple operation. As for the image itself, the partial registration image RP may be displayed as it is, or the registered image RI may be warped into a trapezoid using the values of the homography matrix (for example, as disclosed in FIG. 2 of Patent Document 1).
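A sketch of the trapezoidal overlay mentioned above, warping the registered image RI with the estimated homography before blending it onto the search image QI. The blending factor and the non-black mask heuristic are illustrative assumptions for three-channel images, not details taken from the patent.

```python
import cv2
import numpy as np

def overlay_registered_image(search_image, registered_image, homography, alpha=0.5):
    """Warp the registered image RI with the estimated homography matrix so
    that it occupies the trapezoid of the corresponding partial search image
    QP, then blend it onto the search image QI for display."""
    h, w = search_image.shape[:2]
    warped = cv2.warpPerspective(registered_image, homography, (w, h))
    mask = (warped.sum(axis=2) > 0)[..., None]        # pixels covered by the warp
    blended = np.where(
        mask,
        (alpha * warped + (1 - alpha) * search_image).astype(search_image.dtype),
        search_image)
    return blended
```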
The third embodiment of the present invention can thus be applied to uses such as an information providing apparatus that presents guidance information. When the registered image RI is a signboard or a guide text, it can also be applied to uses such as a parallel translation display system that shows a translated text for the content. The other configurations and their effects are the same as those of the second embodiment described above.

The configurations and operations of the image matching devices 101, 102, and 103 described above are examples of implementation, and the configuration and the order of operations can be changed within a range that does not impair the principle of image matching. In each of the embodiments of the image matching devices 101, 102, and 103, the case where the various means are implemented as software has been described as an example; however, part or all of each means may be configured as hardware.
The information processing performed by the image matching device 101 disclosed in FIGS. 1 to 7 represents concrete technical content in which software and hardware resources cooperate to calculate or process information according to the purpose of use. In practice, the image matching device 101 has a computer 80 for information processing, as illustrated in the hardware configuration example. The computer 80 includes calculation means 82, which is a central processing unit (CPU), and main storage means 86, which provides a working storage area for the calculation means 82. The computer 80 generally also has peripheral devices connected through a data bus and an input/output interface; typical peripheral devices are communication means 88, external storage means 90, input means 92, and output means 94. The whole, including the peripheral devices, may also be referred to as the computer 80.
The communication means 88 controls communication with the server device 70 over a wired or wireless network. The external storage means 90 is a storage medium, installable or portable, that stores the program file 100 and data. The input means 92 is a keyboard, touch panel, pointing device, scanner, or the like, and converts user operations into data that the computer 80 can read. The output means 94, such as a display or a printer, outputs data calculated by the computer 80.
The storage means 10 uses the external storage means 90 as its hardware resource and stores the registered images RI together with their feature amounts. The storage means 10 is not limited to external storage means 90 directly connected to the computer 80 serving as the image processing means 12; the database 72 of a server device 70 connected via the network 96 may also be used as the hardware resource for storing the registered images RI and related data. The external storage means 90 or the database 72, acting as the storage means 10, stores the registered image RI, the partial registration images RP, and the partial registration feature amounts RC. During the image matching process of each embodiment, it also stores the search image QI, the partial search images QP, the partial search feature amounts QC, the reliability table TB, and so on; in connection with the third embodiment, display data DP related to the partial registration images RP may be stored as well. The search image QI may be received from a scanner or facsimile receiving device serving as the input means 92 connected to the computer 80, or from another server device 70 or the like. The image matching result may be displayed directly on the display device 95 of the output means 94, stored in the external storage means 90 or the database 72, or transmitted as data in response to access from another server device 70 or the like.
The image processing means 12 described above compares the registered image RI with the search image QI to be matched using the calculation means 82, the CPU, as its hardware resource. The main storage means 86 temporarily stores the various data needed for the CPU's calculations: the data compared and generated at each step of the flowcharts illustrating the image processing of each embodiment, for example the partial registration images RP, the partial search images QP, the feature amounts, and the geometric transformation parameters such as the parameter values PM.
Each part and each function of the image matching device shown in FIG. 1 and elsewhere performs information processing that is concretely realized by software using these hardware resources. Information processing here is a combination of operations such as calculation (logical operations), comparison, conditional branching, repetition, and decision. The software consists of program procedures (instructions and code) suited to the execution environment of the computer 80. This code is generally stored in the external storage means 90 as a program file 100; the program file 100 may also be downloaded from the server device 70 in response to a request from the computer 80. A program for image matching is an image matching program. The information processing combined into the means, units, and functions of the image matching device can be realized, in cooperation with the hardware resources, by causing the computer 80 included in the image matching device to execute the image matching program. The same applies to the other image matching devices 102 and 103.
The present invention has been illustrated by the first to third embodiments, but it is not limited to the technical content of these embodiments; modifications are included as long as they produce equivalent effects.
  • An image matching device comprising storage means 10 that stores a registered image having predetermined feature points, and image processing means 12 that compares the registered image with a search image to be matched, wherein the image processing means 12 comprises: a geometric transformation parameter estimation unit 16 that estimates geometric transformation parameters based on the feature points of the registered image and the search image and their arrangement; a partial region determination unit 14 that uses the estimated geometric transformation parameters to determine projected partial image regions in the search image corresponding to the partial region images in the registered image; a partial region feature amount calculation unit 18 that calculates the feature amounts of the projected partial image regions of the search image and of the partial region images of the registered image; a partial image matching unit 20 that determines a match between a partial search image and the partial registration image by comparing the feature amounts of the plurality of partial search images with the feature amount of the partial registration image specified by the partial region of the registered image; and an image position detection unit 22 that calculates and detects the position of the matched partial search image using the geometric transformation parameters.
  • The image matching device above, wherein the geometric transformation parameter estimation unit 16 includes a parameter estimation function (candidate parameter calculation function 16a) that calculates and estimates plural sets of candidate geometric transformation parameters for the partial search image, and the image position detection unit 22 includes a parameter reliability calculation function 22a that calculates, for each estimated geometric transformation parameter, the degree of coincidence between the partial search image and the partial registration image and assigns a high reliability to parameters with a high degree of coincidence, and a whole image position estimation function 22b that uses the highly reliable geometric transformation parameter to calculate and estimate in which region of the search image the image region corresponding to the registered image is located.
  • The image matching device above, wherein the image position detection unit 22 includes a corresponding position calculation function 22c that calculates the position of the partial search image using the geometric transformation parameter and preset coordinates of the partial registration image.
  • The image matching device above, wherein the geometric transformation parameter estimation unit 16 includes a multiple parameter estimation function 16d that estimates a plurality of geometric transformation parameters for each partial search image when a plurality of partial regions are calculated by the partial region determination unit 14, the image position detection unit 22 includes a position candidate calculation function 22e that calculates the position of the matched partial search image as a position candidate for each geometric transformation parameter, and the device further comprises an integration processing unit 24 that, when a plurality of partial search images overlap in the search image, integrates the position candidates of the overlapping partial search images and calculates the position of the partial search image.
  • The image matching device above, wherein the integration processing unit 24 includes a reliability position function 24b that calculates the reliability of the geometric transformation parameters and determines the position of the partial search image using the geometric transformation parameter with high reliability.
  • The image matching device above, wherein the integration processing unit 24 votes the geometric transformation parameters into the parameter space and determines the position of the partial search image using a statistical value of the plurality of geometric transformation parameters whose voting results are equal to or greater than a threshold.
  • The image matching device according to any one of the above items (appendices 1 to 7), wherein the image processing means 12 includes a display control unit 26 that displays display data DP, prepared in advance in relation to the partial registration image, at the position of the display device 95 corresponding to the partial search image.
  • An image matching method performed in an image matching device comprising storage means that stores a registered image having predetermined feature amounts and image processing means that compares the registered image with a search image to be matched, wherein: the geometric transformation parameter estimation unit 16 of the image processing means estimates geometric transformation parameters based on the feature points of the registered image and the search image and their arrangement; the partial region determination unit 14 of the image processing means determines, using the estimated geometric transformation parameters, a plurality of projected partial image regions in the search image corresponding to the partial region images in the registered image; the partial region feature amount calculation unit 18 of the image processing means calculates the feature amounts of the projected partial image regions of the search image and of the partial region images of the registered image; the partial image matching unit 20 of the image processing means determines whether each partial search image matches the partial registration image by comparing the feature amounts of the plurality of partial search images with the feature amount of the partial registration image specified by the partial region of the registered image; and, when a match is determined, the image position detection unit 22 of the image processing means calculates and detects the position of the matched partial search image based on the geometric transformation parameters of the search image with respect to the registered image.
  • An image matching program for an image matching device comprising storage means that stores a registered image having predetermined feature amounts and image processing means that compares the registered image with a search image to be matched, the program causing a computer provided in the image processing means to execute: a geometric transformation parameter estimation procedure that estimates geometric transformation parameters based on the feature points of the registered image and the search image and their arrangement; a partial region determination procedure that determines, using the estimated geometric transformation parameters, a plurality of projected partial image regions in the search image corresponding to the partial region images in the registered image; a feature amount calculation procedure that calculates the feature amounts of the projected partial image regions of the search image and of the partial region images of the registered image; a partial image matching procedure that determines a match between a partial search image and the partial registration image by comparing the feature amounts of the plurality of partial search images with the feature amount of the partial registration image specified by the partial region of the registered image; and an image position detection procedure that, when the partial image matching procedure determines that a partial search image matches the partial registration image, calculates and detects the position of the matched partial search image using the geometric transformation parameters.
The present invention can be applied to any information processing apparatus that searches a search image for a portion that is the same as or similar to part of a registered image RI.

Abstract

[Problem] To make possible the detection of the position of each of a plurality of the same partial image among search images and to increase the versatility of a device. [Solution] An image verification device comprising: a geometric transformation parameter estimation unit (16) that estimates geometric transformation parameters for a search image (QI) relative to a recorded image (RI); a partial region calculation unit (14) that utilizes the estimated geometric transformation parameters and calculates a plurality of partial regions inside the search image (QI); a partial region characteristics value calculation unit (18) that calculates the characteristics of each partial search image (QP) specified by the plurality of partial regions in the search image (QI); a partial image verification unit (20) that determines matches between the partial search images (QP) and a partial recorded image (RP), by comparing the characteristics of each of the plurality of partial search images (QP) to the characteristics of the partial recorded image (RP) specified by the partial region in the recorded image (RI); and an image position detection unit (22) that calculates the position of the partial search image (QP) matched using the geometric transformation parameters.

Description

Image matching device, image matching method, and program

The present invention relates to an image matching device, an image matching method, and a program therefor, and in particular to an image matching device, an image matching method, and a program capable of detecting a plurality of registered images to be detected even when several of them appear in the search image to be matched.

Image matching is a technique that compares the feature points of a first image with those of a second image to determine whether their similarity or degree of coincidence is high. With this image matching, an image similar to a registered image (or sample image) registered in advance can be retrieved from a group of search images (query images). In such a comparison of feature points, using feature amounts that are invariant to geometric transformations between the images makes it possible to retrieve identical or similar images even when, for example, the three-dimensional shooting angles differ.
In relation to this, Patent Documents 1 to 6 listed below are known as examples of image matching devices.

Among these, the image matching device 501 disclosed in Patent Document 1 has, as its main part, an image matching module 501A which, as shown in FIG. 20, comprises: a projection affine parameter estimation module 250 that first estimates projection affine parameters for all registered images; a partial region projection module 262 that projects partial regions of the registered image RI onto the search image QI using the estimated affine parameters and associates the partial regions with each other; a projection image matching module 264 that obtains feature amounts for the partial regions projected onto the search image QI and compares each partial region of the registered image RI and the search image QI to determine whether they match; and an image matching result calculation module 290 that identifies the registered image RI with respect to the search image QI using the number of times the projection image matching module 264 determined a match for each registered image.
The image matching device 501 having such a configuration operates so as to perform a registration process and a search process. In the registration process, after calculating the arrangement of the feature points of the registered image RI and the feature vectors of the registered image RI, the partial region feature amounts of the registered image RI are calculated.
In the search process, projection affine parameters are first estimated for all registered images RI, the partial regions of the registered image RI are projected onto the search image QI, the feature amounts within the projected partial regions are calculated, and each partial region is checked to determine whether the partial regions match. A final image matching result is then obtained.

Here, when there are a plurality of affine parameters used for determining a projected image partial region, the projected image partial region is determined for each of the plurality of affine parameters. In that case, the partial region of the registered image RI is judged to match the partial region of the projected image if they match for any one of the affine parameters. When a single affine parameter is determined for use, the projected image partial region is determined from that single parameter and the partial regions are matched on that basis. Note that the affine parameters described above can be replaced by arbitrary geometric transformation parameters, such as a homography matrix representing a correspondence between two planes.
Patent Document 2 discloses, for the purpose of retrieving a partial image of a search image, a technique that uses features having arbitrary invariance to scale, affine transformation, and perspective distortion; homography transformation is given as an example of a perspective transformation. Patent Document 3 discloses, for the purpose of retrieving Japanese text in an image, a technique in which part or all of a character is adjusted so as to form a connected component and the center of gravity of the connected component is taken as a feature point.
Patent Document 4 discloses a technique for preventing a registered image and a search image from being erroneously judged to match merely because certain conditions happen to be met. Patent Document 4 also discloses an image matching technique capable of accurate matching even when there is a geometric transformation, such as an affine transformation, between the two images being compared.
Patent Document 5 discloses a method of retrieving images that, while allowing retrieval of images that have been rotated or cropped within the search image, aims to prevent a drop in accuracy when the number of local feature points is insufficient: candidate images are identified by extracting local features, a geometric transformation parameter that minimizes the distance between sample feature points and query feature points is obtained, the global features are then compared, and an overall similarity combining the local and global features is calculated. Patent Document 5 also discloses a method of obtaining the geometric transformation parameters by RANSAC (RANdom SAmple Consensus), that is, by repeatedly evaluating the error of randomly extracted parameters and voting.
Patent Document 1: International Publication No. WO 2010/053109 A1
Patent Document 2: Japanese Translation of PCT Application Publication No. 2010-530998
Patent Document 3: Japanese Unexamined Patent Application Publication No. 2009-32109
Patent Document 4: International Publication No. WO 2009/060975
Patent Document 5: Japanese Unexamined Patent Application Publication No. 2010-266964
However, with Patent Documents 1 to 5 and related techniques combining them, when the same registered image appears more than once in the search image, at most one occurrence can be detected, and the positions or the number of the other occurrences cannot be detected accurately.
[Object of the Invention]
An object of the present invention is to provide an image matching device, an image matching method, and a program therefor that can detect a plurality of images when the same registered image appears more than once in the search image, thereby expanding the versatility of the device.
To achieve the above object, an image matching device according to the present invention comprises storage means that stores a registered image having predetermined feature points, and image processing means that compares the registered image with a search image to be matched, wherein the image processing means comprises: a geometric transformation parameter estimation unit that estimates geometric transformation parameters based on the feature points of the registered image and the search image and their arrangement; a partial region determination unit that uses the geometric transformation parameters estimated by the geometric transformation parameter estimation unit to determine projected partial image regions in the search image corresponding to the partial region images in the registered image; a partial region feature amount calculation unit that calculates the feature amounts of the plurality of projected partial image regions of the search image and of the partial region images of the registered image; a partial image matching unit that determines a match between a partial search image and the partial registration image by comparing the feature amounts of the plurality of partial search images with the feature amount of the partial registration image specified by the partial region of the registered image; and a registered image position detection unit that calculates and detects the position of the matched partial search image using the geometric transformation parameters.
To achieve the above object, an image matching method according to the present invention is carried out in an image matching device comprising storage means that stores a registered image having predetermined feature amounts and image processing means that compares the registered image with a search image to be matched, wherein: the geometric transformation parameter estimation unit of the image processing means estimates geometric transformation parameters based on the feature points of the registered image and the search image and their arrangement; the partial region determination unit of the image processing means determines, using the estimated geometric transformation parameters, a plurality of projected partial image regions in the search image corresponding to the partial region images in the registered image; the partial region feature amount calculation unit of the image processing means calculates the feature amounts of the plurality of projected partial image regions of the search image and of the partial region images of the registered image; the partial image matching unit of the image processing means determines whether each partial search image matches the partial registration image by comparing the feature amounts of the plurality of partial search images with the feature amount of the partial registration image specified by the partial region of the registered image; and, when the partial image matching unit determines that a partial search image matches the partial registration image, the image position detection unit of the image processing means calculates and detects the position of the partial search image based on the geometric transformation parameters of the search image with respect to the registered image.
To achieve the above object, an image matching program according to the present invention is for an image matching device comprising storage means that stores a registered image having predetermined feature amounts and image processing means that compares the registered image with a search image to be matched, and causes a computer provided in the image processing means to execute: a geometric transformation parameter estimation procedure that estimates geometric transformation parameters based on the feature points of the registered image and the search image and their arrangement; a partial region determination procedure that determines, using the estimated geometric transformation parameters, a plurality of projected partial image regions in the search image corresponding to the partial region images in the registered image; a feature amount calculation procedure that calculates the feature amounts of the plurality of projected partial image regions of the search image and of the partial region images of the registered image; a partial image matching procedure that determines a match between a partial search image and the partial registration image by comparing the feature amounts of the plurality of partial search images with the feature amount of the partial registration image specified by the partial region of the registered image; and an image position detection procedure that, when the partial image matching procedure determines that a partial search image matches the partial registration image, calculates and detects the position of the matched partial search image using the geometric transformation parameters.
Since the present invention is configured and functions as described above, the geometric transformation parameter estimation unit estimates the geometric transformation parameters of the search image with respect to the registered image, the partial image matching unit determines whether there is a match by comparing the feature points of the partial registration image with those of the partial search image, and the position calculation unit calculates and determines the position of each matched partial search image using the geometric transformation parameters. Consequently, even when the search image contains a plurality of partial search images, their positions can be calculated individually, which expands the versatility of the device. The present invention therefore provides an image matching device, an image matching method, and a program that are superior to the related techniques described above.
Brief description of the drawings:
A block diagram showing the configuration of the first embodiment of the present invention.
A block diagram showing an example of the basic configuration of the first embodiment.
A block diagram showing a configuration example of the registered image position detection unit disclosed in the first embodiment.
A flowchart showing the procedure of the image matching process disclosed in the first embodiment.
A flowchart showing the procedure of the position specification process for a partial search image disclosed in the first embodiment.
An explanatory diagram showing an example of an image matching result produced by the image processing means disclosed in the first embodiment.
A block diagram showing a configuration example of the second embodiment of the present invention.
A flowchart showing the procedure of the image matching process in the second embodiment.
A block diagram showing a specific example in which a homography matrix is used in the geometric transformation parameter estimation unit disclosed in the second embodiment.
An explanatory diagram showing an example of calculating the reliability when the two homography matrices disclosed in FIG. 9 are used as geometric transformation parameters.
An explanatory diagram showing an example of a table used when calculating the reliability of geometric transformation parameters, in relation to FIG. 10.
A block diagram showing a configuration example of the third embodiment of the present invention.
A block diagram showing a detailed configuration example of the integration processing unit disclosed in the third embodiment.
An explanatory diagram showing an example of integration processing using the union or the reliability, in relation to FIG. 13.
An explanatory diagram showing an example of integration processing by voting, in relation to FIG. 13.
A flowchart showing the procedure of the image matching process in the third embodiment.
A flowchart showing an example of the image position calculation process for a partial search image in the third embodiment.
An explanatory diagram showing an example in which similarity is displayed as display data of the third embodiment.
An explanatory diagram showing an example in which text data is displayed as display data of the third embodiment.
An explanatory diagram showing an example of the hardware resources underlying each configuration of each embodiment of the present invention.
A block diagram showing a related technique.
[First Embodiment]
(Basic configuration)
A first embodiment of the image matching device according to the present invention will now be described with reference to FIGS. 1 to 6. The basic configuration of the image matching device according to the present invention is described first, followed by its specific content.
First, in FIG. 1, the image matching device 101 comprises storage means 10 that stores registered images RI having feature amounts, and image processing means 12 that performs certain image processing by comparing a registered image RI with a search image QI to be matched. The storage means 10 stores in advance the registered images RI having feature amounts and various data calculated from the registered images RI (such as partial registration images RP).
The image processing means 12 comprises: a geometric transformation parameter estimation unit 16 that estimates geometric transformation parameters based on the feature points of the registered image RI and the search image QI and their arrangement; a partial region determination unit 14 that uses the geometric transformation parameters estimated by the geometric transformation parameter estimation unit 16 to determine projected partial image regions in the search image QI corresponding to the partial region images in the registered image RI; a partial region feature amount calculation unit 18 that calculates the feature amounts of the plurality of projected partial image regions of the search image QI and of the partial region images of the registered image RI; a partial image matching unit 20 that determines a match between a partial search image and the partial registration image by comparing the feature amounts of the plurality of partial search images with the feature amount of the partial registration image specified by the partial region of the registered image RI; and an image position detection unit 22 that calculates and detects the position of the matched partial search image using the geometric transformation parameters.

Of these, the geometric transformation parameter estimation unit 16 has a parameter estimation function (candidate parameter calculation function 16a) that calculates and estimates plural sets of candidate geometric transformation parameters for the partial search image. The image position detection unit 22 has a parameter reliability calculation function 22a that calculates, for each estimated geometric transformation parameter, the degree of coincidence between the partial search image and the partial registration image and assigns a high reliability to parameters with a high degree of coincidence, and a whole image position estimation function 22b that uses the highly reliable geometric transformation parameter to calculate and estimate in which region of the search image QI the image region corresponding to the registered image RI is located. The image position detection unit 22 further has a corresponding position calculation function 22c that calculates the position of the partial search image using the geometric transformation parameter and preset coordinates of the partial registration image.

Thus, in the first embodiment, the geometric transformation parameter estimation unit estimates the geometric transformation parameters of the search image QI with respect to the registered image RI, the partial image matching unit determines whether there is a match by comparing the feature points of the partial registration image with those of the partial search image, and the position calculation unit calculates and determines the position of each matched partial search image using the geometric transformation parameters. Therefore, even when the search image QI contains a plurality of partial search images, their positions can be calculated individually; that is, the position of each of the plurality of partial search images contained in the search image QI can be calculated, which makes it possible to expand the versatility of the device.
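The interplay of the units 14 to 22 can be pictured as the following data-flow sketch. The five callables stand in for the units of the image processing means 12; their interfaces, and the idea of passing the registered image's partial regions explicitly, are assumptions made purely for illustration, not the patent's implementation.

```python
def match_registered_image(reg_image, reg_partial_regions, query_image,
                           estimate_parameters,     # geometric transformation parameter estimation unit 16
                           project_partial_region,  # partial region determination unit 14
                           compute_feature,         # partial region feature amount calculation unit 18
                           features_match,          # partial image matching unit 20
                           locate_partial_image):   # image position detection unit 22
    """Minimal data-flow sketch of the first embodiment under assumed
    interfaces.  Returns one detection per matched partial search image QP."""
    detections = []
    for params in estimate_parameters(reg_image, query_image):
        for reg_region in reg_partial_regions:
            query_region = project_partial_region(reg_region, params)
            if features_match(compute_feature(query_image, query_region),
                              compute_feature(reg_image, reg_region)):
                detections.append(locate_partial_image(params, reg_region))
    return detections
```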
This is described in more detail below.
(Specific configuration)
The geometric transformation parameter estimation unit 16 first has a feature point extraction and localization function that extracts feature points from each of the registered image RI and the search image QI and identifies the positions at which the extracted feature points are arranged. The geometric transformation parameter estimation unit 16 also has a feature vector calculation function that computes a sequence of geometric invariants (an invariant feature vector) characterizing the arrangement of the feature points, and a geometric transformation parameter estimation function that estimates the geometric transformation parameters from the feature point arrangement information of the registered image RI and the search image QI.
Here, a geometric transformation parameter is a parameter that represents the kind and amount of geometric transformation between two images; examples of geometric transformations are affine transformations and homography transformations. An affine transformation can represent the translation, scaling, rotation, and skew (oblique distortion) of a feature in an image, while a homography transformation can also represent trapezoidal distortion. A geometric transformation parameter is therefore a set of parameters, such as affine parameters or a homography matrix, that expresses the geometric transformation between two images. Besides affine or homography parameters themselves, values that approximate them may be used; possible approximations include the length ratio of feature parts, an area ratio, or the ratio of some accumulated values. Not only linear transformations but also non-linear transformations can be handled by existing image processing techniques. In the first embodiment, even when the search image QI as a whole contains non-linear distortion but can locally be approximated well by a linear geometric transformation, the non-linear transformation can be handled by piecewise linear transformations, obtained by estimating a geometric transformation parameter for each partial search image QP.
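For the linear case, applying an estimated parameter set to image coordinates amounts to a matrix product. A minimal sketch, treating the parameters as a 3x3 homography matrix (an affine transform being the special case whose last row is [0, 0, 1]); the function name is hypothetical.

```python
import numpy as np

def apply_homography(H, points):
    """Map 2-D points with a 3x3 homography matrix H.  'points' is an (N, 2)
    array in the registered image RI; the result lies in the search image QI."""
    pts = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coordinates
    projected = pts @ H.T
    return projected[:, :2] / projected[:, 2:3]            # perspective divide
```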
The geometric transformation parameters between the registered image RI and the search image QI can be obtained in various ways. First, if the extrinsic parameters of the camera that captured the registered image RI (its installation angle and so on) and those of the camera that captured the search image QI are known in advance, the geometric transformation parameters can be calculated from the difference between these extrinsic parameters.

Next, if the same object (including characters, codes, and the like) appears in both the registered image RI and the search image QI, and the object captured in the search image QI has been translated, scaled, or deformed relative to the object captured in the registered image RI, the geometric transformation parameters can be obtained by comparing the characteristics of that object in the two images (for example, the coordinates of its circumscribed rectangle). When it is known in advance that the same object appears in both images as a marker, the image portion of that marker can be used to calculate the geometric transformation parameters.

Even when it is unknown whether the same object has been captured, it can be determined whether the same or a similar object appears by checking whether quantities that are invariant to geometric transformation are shared by the registered image RI and the search image QI. A geometric transformation invariant is data representing an image feature that is not lost under geometric transformation and can be reproduced using the geometric transformation parameters. Since it is an image feature, its concrete calculation is preferably carried out by the partial region feature amount calculation unit 18.

In the first embodiment, the geometric transformation parameter estimation unit 16 estimates the geometric transformation parameters of the search image QI with respect to the registered image RI. For example, the geometric transformation parameter estimation unit 16 can determine whether the same or a similar object has been captured by checking for shared geometric transformation invariants between the registered image RI and the search image QI, and can then estimate the geometric transformation parameters by computing parameters that bring the invariant feature parts into registration. The geometric transformation parameters can be estimated by voting or by optimization calculations such as least squares over all combinations, or over random combinations, of parameters, using the degree of agreement of the geometric transformation invariants between the registered image RI and the search image QI as the criterion. This estimation approach works even when the extrinsic camera parameters are not known in advance and no object of predetermined shape is guaranteed to appear in the search image QI, which makes it suitable for the matching process that determines whether an image identical or similar to (part of) the registered image RI is contained in the search image QI.
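One concrete way to realize such an estimation, assuming putative feature-point correspondences are already available, is sequential RANSAC with OpenCV's findHomography: after each estimate, the inliers it explains are removed so that several instances of the registered image RI can each receive their own parameter set. The number of rounds and the reprojection threshold are arbitrary choices, and this is only one possible realization, not the patent's method.

```python
import cv2
import numpy as np

def estimate_candidate_homographies(reg_points, query_points, rounds=5, thresh=3.0):
    """Estimate candidate geometric transformation parameters (homographies)
    from putative feature-point correspondences between the registered image
    RI and the search image QI.  A sketch only; thresholds are arbitrary."""
    reg = np.asarray(reg_points, dtype=np.float32)
    qry = np.asarray(query_points, dtype=np.float32)
    candidates = []
    for _ in range(rounds):
        if len(reg) < 4:                        # a homography needs >= 4 correspondences
            break
        H, inliers = cv2.findHomography(reg, qry, cv2.RANSAC, thresh)
        if H is None or inliers.sum() < 4:
            break
        candidates.append(H)
        keep = inliers.ravel() == 0             # drop the inliers already explained by H
        reg, qry = reg[keep], qry[keep]
    return candidates
```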
In the first embodiment, when the search image QI contains a plurality of partial images, the geometric transformation parameter estimation unit 16 estimates a geometric transformation parameter for each partial search image QP. Moreover, rather than estimating only one geometric transformation parameter per partial search image QP, a plurality of candidate geometric transformation parameters may be estimated. When the geometric transformation parameters are estimated by voting or optimization using geometric transformation invariants, a geometric transformation parameter adapted to each partial search image QP can be obtained without specifying the coordinates or shape of the partial search images QP in advance. In other words, even without directly computing a geometric transformation parameter for each partial search image QP, estimating the geometric transformation parameters between the registered image RI and the search image QI by voting or optimization yields, in effect, parameter values adapted to the individual partial search images QP; when there are several partial search images QP, a geometric transformation parameter corresponding to each of them is obtained.
The partial region determination unit 14 described above also has a function of specifying, by calculating a plurality of partial regions of the search image QI, the partial images corresponding to the partial registered images RP.
These partial regions may be calculated by applying image processing such as clustering, binarization or discriminant analysis to the search image QI to obtain circumscribed rectangles or the like, or by obtaining the geometric transformation parameters described above in advance and projecting the positions of the partial registered images RP onto the search image QI with those parameters to obtain the partial regions on the search image QI.
When calculating the partial search images QP from the search image QI, the partial region determination unit 14 may specify the partial search images QP by image processing applied to the search image QI, but it is preferable to specify them by projecting the coordinates of the partial registered images RP in the registered image RI onto the search image QI using the geometric transformation parameters.
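The projection just described can be pictured with the following minimal sketch, which maps the corner coordinates of a partial registered region into the search image with a 3 × 3 homography; the example matrix, the corner values and the function name are hypothetical illustrations, not part of this description.

```python
import numpy as np

def project_partial_region(H, rp_corners):
    """Project the corner coordinates of a partial registered image RP into the
    search image QI with a 3x3 homography H.

    rp_corners : (N, 2) array of corner points in registered-image coordinates.
    Returns the projected corners in search-image coordinates.
    """
    pts = np.hstack([rp_corners, np.ones((len(rp_corners), 1))])   # homogeneous
    proj = (H @ pts.T).T
    return proj[:, :2] / proj[:, 2:3]                              # divide by the scale a

if __name__ == "__main__":
    # Hypothetical homography: scale by 2 and translate by (10, 5).
    H = np.array([[2.0, 0.0, 10.0],
                  [0.0, 2.0,  5.0],
                  [0.0, 0.0,  1.0]])
    rp = np.array([[0, 0], [40, 0], [40, 20], [0, 20]], dtype=float)
    qp_region = project_partial_region(H, rp)
    print(qp_region)   # corners of the corresponding partial search region QP
```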
Furthermore, when the partial search images QP have been specified, the partial region feature amount calculation unit 18 described above calculates a feature amount for each partial search image QP. This feature amount may be a geometric transformation invariant or a different kind of feature that is not a geometric transformation invariant, as long as it can be used to judge whether a partial registered image RP and a partial search image QP coincide. In the first embodiment, the features of the partial registered images RP are calculated in advance, using the partial region feature amount calculation unit 18 or the like, and stored in the storage means 10.
The partial image matching unit 20 described above has a function of judging whether a partial search image QP coincides with a partial registered image RP by comparing the feature amounts of the plurality of partial search images QP with the features of the partial registered image RP specified by the partial region of the registered image RI. This coincidence judgment can be carried out by information processing that determines whether the difference between the feature amounts, or their ratio, falls within a range given by a static or dynamic threshold value.
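A minimal sketch of such a threshold-based coincidence judgment is shown below; the vector norm, the threshold values and the function name are assumptions made only for illustration, since this description leaves the concrete feature comparison open.

```python
import numpy as np

def partial_images_match(rc, qc, abs_thresh=0.5, ratio_thresh=0.2):
    """Judge whether a partial registered feature RC and a partial search
    feature QC coincide, using either the difference or the ratio of the
    feature vectors against (hypothetical) threshold values."""
    rc, qc = np.asarray(rc, float), np.asarray(qc, float)
    diff = np.linalg.norm(rc - qc)
    denom = np.linalg.norm(rc) + 1e-12
    return diff <= abs_thresh or diff / denom <= ratio_thresh

if __name__ == "__main__":
    print(partial_images_match([1.0, 2.0, 3.0], [1.05, 1.95, 3.1]))   # True
    print(partial_images_match([1.0, 2.0, 3.0], [4.0, 0.0, 1.0]))     # False
```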
The image position detection unit 22 has a function of calculating, using the geometric transformation parameters, the positions of the partial search images QP judged to coincide. When a partial search image QP coincides with a partial registered image RP, this suggests that the geometric transformation parameter used was effective for specifying that partial search image QP; accordingly, calculating the position corresponding to that partial registered image RP with the same geometric transformation parameter gives the position of the partial search image QP accurately.
With the configuration described above, the first embodiment can retrieve a plurality of partial search images QP contained in the search image QI and calculate their positions. Moreover, by fixing the positions of the partial search images QP, the number of partial registered images RP appearing in the search image QI can also be counted.
Although the description is partly out of order, FIG. 3 is a block diagram showing a configuration example for the case where the reliability of the geometric transformation parameters is calculated. In this example, the geometric transformation parameter estimation unit 16 includes a candidate parameter calculation function 16a. In FIG. 3, the image position detection unit 22 described later includes a parameter reliability calculation function 22a that judges the reliability of the geometric transformation parameters, and an image position estimation function 22b that estimates the positions of the partial search images QP using those geometric transformation parameters.
The candidate parameter calculation function 16a of the geometric transformation parameter estimation unit 16 calculates a plurality of sets of candidate geometric transformation parameters for the partial search images QP.
The parameter reliability calculation function 22a of the image position detection unit 22, on the other hand, calculates, for each candidate geometric transformation parameter, the degree of coincidence between the partial search images QP and the partial registered images RP, and assigns a higher reliability to parameters with a higher degree of coincidence. The degree of coincidence may be the degree of coincidence of an individual partial search image QP, or, when there are a plurality of partial search images QP under the same geometric transformation, the ratio of the number of coinciding partial search images QP to the total number of partial search images QP.
For example, the parameter reliability calculation function 22a judges, using a geometric transformation parameter, whether each partial search image QP coincides with the corresponding partial registered image RP, and judges that the geometric transformation parameter is reliable when the number of partial search images QP judged to coincide is equal to or greater than a certain value.
The image position estimation function 22b, in turn, is a function that calculates the positions of the partial search images QP using geometric transformation parameters of high reliability. That is, the image position estimation function 22b can estimate, using the geometric transformation parameters judged to be highly reliable by the parameter reliability calculation function 22a, at which positions in the search image QI the partial search images QP corresponding to the partial registered images RP are located.
In this way, the position of each partial search image QP is obtained not with a geometric transformation parameter derived only from the comparison of geometric transformation invariants between the registered image RI and the search image QI, but with a geometric transformation parameter judged to be highly reliable through the comparison of feature amounts between the partial registered images RP and the partial search images QP; the position of each partial search image QP can therefore be calculated with high accuracy.
In particular, when a plurality of geometric transformation parameters have been obtained from the image as a whole, the parameter derived from the features of a given partial registered image RP is the optimal parameter for the partial search image QP that coincides with that partial registered image RP. The optimal geometric transformation parameter for each individual partial search image QP can therefore be selected on the basis of the reliability, and the position of that partial search image QP can be obtained using the selected parameter.
As a preferable example, the image position detection unit 22 may include a corresponding position calculation function 22c that calculates the position of a partial search image QP using the geometric transformation parameter and the coordinates of the partial registered image RP (see FIG. 1). This makes it possible to express the position of the partial search image QP in the coordinate system of the registered image RI, which in turn enables various kinds of image processing associated with the coordinates of the registered information.
(Operation of the first embodiment)
Next, the overall operation of the first embodiment will be described in detail with reference to FIG. 4.
In the example shown in FIG. 4, it is assumed that the registered image RI, the partial registered images RP and the partial registered feature amounts RC are stored in the storage means 10 in advance. When a search image QI is received, search processing is started to check whether an image identical or similar to a partial registered image RP exists in the search image QI.
First, the geometric transformation parameter estimation unit 16 estimates geometric transformation parameters (FIG. 4: step S101). In this example, it is preferable to obtain the geometric transformation parameters that hold between the registered image RI and the search image QI. When the geometric transformation parameters of the registered image RI with respect to parts of the search image QI (the parts that will become the partial search images QP) differ locally, the different parameters are preferably obtained individually. Conversely, when the geometric transformation parameters of the search image QI with respect to the partial registered images RP differ locally, the different parameters are likewise obtained individually.
Next, the partial region determination unit 14 specifies the partial registered images RP in the registered image RI (FIG. 4: step S102), and the partial registered feature amounts RC, which are the feature amounts of these partial registered images RP, are read from the storage means 10 (FIG. 4: step S103).
The partial region determination unit 14 then determines partial regions of the search image QI and takes them as the partial search images QP (FIG. 4: step S104). To specify a partial search image QP, the region of the partial search image QP is preferably specified by projecting the coordinates of the partial registered image RP onto the search image QI using the geometric transformation parameters calculated in step S101.
Next, the partial region feature amount calculation unit 18 calculates the partial search feature amount QC of each partial search image QP (FIG. 4: step S105). The partial search feature amount QC is a feature amount in a format comparable with the partial registered feature amount RC.
The partial image matching unit 20 then judges whether the partial registered images RP in the registered image RI coincide with the partial search images QP (FIG. 4: step S106).
One technical feature of the first embodiment lies particularly in the point that the image position detection unit 22 calculates the positions of the partial search images QP (the regions where the registered image RI exists) (FIG. 4: step S107). In calculating the position of a partial search image QP, it is preferable to use either the geometric transformation parameter itself calculated in step S101 of FIG. 4 or a geometric transformation parameter that specifies the partial search image QP even better.
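Purely as an illustration of how steps S101 to S107 fit together, the following skeleton strings the units together for one registered image. Every callable passed in is a hypothetical stand-in for the corresponding unit, and the loop structure is only one possible reading of the flow, not a definitive implementation.

```python
def search_partial_images(estimate_params, get_rp_regions, load_rc, project,
                          compute_qc, match):
    """Condensed sketch of steps S101-S107 for one registered image.

    Every argument is a hypothetical callable standing in for one unit of the
    first embodiment (16, 14, the storage means 10, 18 and 20 respectively);
    the return value pairs each matched region with the parameter that found it.
    """
    results = []
    for H in estimate_params():                      # S101: candidate parameters
        for rp_region in get_rp_regions():           # S102: partial registered images
            rc = load_rc(rp_region)                  # S103: stored feature amount RC
            qp_region = project(H, rp_region)        # S104: partial search region QP
            qc = compute_qc(qp_region)               # S105: partial search feature QC
            if match(rc, qc):                        # S106: coincidence judgment
                results.append((rp_region, qp_region, H))   # S107: position via H
    return results
```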
The example shown in FIG. 5 illustrates calculating the positions of the partial search images QP using geometric transformation parameters of high reliability.
In the example of FIG. 5, the parameter reliability calculation function 22a first judges, for each of the plurality of geometric transformation parameters, whether each of the plurality of partial search images QP coincides (FIG. 5: step S111). In step S111 of FIG. 5, when there are three geometric transformation parameters and four partial search images QP, coincidence is judged for the twelve combinations.
The parameter reliability calculation function 22a may tally the reliabilities using the reliability table TB shown in FIG. 11.
Subsequently, the parameter reliability calculation function 22a judges that a geometric transformation parameter is reliable when the number of partial search images QP judged to coincide with it is equal to or greater than a certain value (FIG. 5: step S112). When coincidence of the four partial search images QP has been judged for each of the three geometric transformation parameters, the geometric transformation parameter with the largest number of coincidences is taken as the geometric transformation parameter of high reliability.
For example, when the first geometric transformation parameter coincides for two partial search images QP, the second for two, and the third for three, the parameter reliability calculation function 22a judges the third geometric transformation parameter to be the geometric transformation parameter of high reliability.
The positions of the partial search images QP are then finally specified using the geometric transformation parameter of high reliability (FIG. 5: step S113), and the coordinates of each partial search image QP given by that geometric transformation parameter are taken as the position of that partial search image QP (FIG. 5: step S114).
In the first embodiment, the operation contents (information processing) of the respective constituent elements and their coordinated operation in processing information between one another may be implemented as a program and executed by the computer provided in the image processing means. The same applies to each of the embodiments described later.
(Effects of the first embodiment)
In the first embodiment, as described above, the configuration determines in which region of the search image QI the registered image RI is located. Consequently, when the same registered image RI appears more than once in the search image QI, the plurality of occurrences can be detected. In addition, when one instance of the registered image RI is contained in the search image QI, the position in the search image QI at which the registered image RI is contained can also be detected.
FIG. 6 is a diagram for illustrating the effects of the first embodiment. In the example shown in FIG. 6, two partial registered images RP are contained in the search image QI.
Patent Document 1 mentioned above discloses something close to this; in the prior art of Patent Document 1, however, matching is performed without taking into account that the partial search image QP[1] and the partial search image QP[2] are separate regions, so it was difficult to determine with high accuracy where in the search image QI the partial registered image RP exists.
In the first embodiment, by contrast, because the region of the search image QI in which the partial registered image RP is located is calculated, the plurality of partial images QP[1] and QP[2] can be detected even when the same partial registered image RP appears more than once in the search image QI.
As described above, the first embodiment calculates in which region of the search image QI the partial registered image RP is located, so that when the same registered image RI appears more than once in the search image QI, the plurality of occurrences can be detected.
When one instance of the registered image RI is contained in the search image QI, the position in the search image QI at which the registered image RI is contained can also be detected.
[Second Embodiment]
Next, a second embodiment of the present invention will be described with reference to FIGS. 7 to 11.
The second embodiment shown in FIGS. 7 to 11 describes an example in which feature vectors are used as the feature amounts and an example in which a homography transformation is used as the geometric transformation.
The same reference numerals are used for the same constituent members as in the first embodiment described above.
In FIG. 7, the image matching device 102 of the second embodiment includes, in order to match the registered image RI against the search image QI, a partial region determination unit 14, a geometric transformation parameter estimation unit 16, a partial region feature amount calculation unit 18, a partial image matching unit 20 and an image position detection unit 22, each configured substantially as in the first embodiment described above.
The partial region feature amount calculation unit 18 has a function of calculating the feature amounts of the partial registered images RP and the search image QI, and of calculating, as the partial registered feature amount RC, a feature vector whose elements are geometric transformation invariants. The partial region feature amount calculation unit 18 calculates feature amounts for the registered image RI and for the search image QI. The feature amounts of the registered image RI may be calculated in advance and stored in the storage means 10.
The partial region feature amount calculation unit 18 generates feature points of an image and arrangements of those feature points, and further calculates, from the generated feature-point arrangements and their coordinates, a sequence of geometric invariants (a feature vector) characterizing each arrangement.
When the partial registered images RP are calculated in advance by the partial region determination unit 14 described above, the partial region feature amount calculation unit 18 has a function of calculating the partial registered feature amounts RC of those partial registered images RP.
The geometric transformation parameter estimation unit 16 has a function (an invariant coincidence determination function 16b) of judging whether a feature vector calculated from a partial registered image RP coincides with a feature vector calculated from the search image QI. On this basis, the geometric transformation parameter estimation unit 16 estimates the geometric transformation parameters under which the feature vectors coincide. As the feature points, the geometric transformation parameter estimation unit 16 can use the centroids of the connected regions obtained after binarization, and as the feature-point arrangements, it can use combinations of feature points that lie near one another.
The partial region determination unit 14 described above has a function of calculating partial regions of the registered image RI and thereby calculating one or more partial registered images RP. The calculation of the partial registered images RP is preferably performed in advance as registration processing, with the results stored in the storage means 10 beforehand.
The partial region determination unit 14 further has a function of calculating the partial search images QP of the search image QI by projection using given geometric transformation parameters.
Using the geometric transformation parameters estimated by the geometric transformation parameter estimation unit 16, the partial region determination unit 14 also determines, for each partial registered image RP in the registered image RI, the corresponding partial search image QP in the search image QI.
Specifically, the partial search image QP in the search image QI is determined by applying the geometric transformation parameters to the circumscribed rectangle or outer-periphery coordinates of the partial registered image RP. When n (n ≥ 1) geometric transformation parameters have been obtained, n partial search images QP are obtained, as described above.
The partial region feature amount calculation unit 18 further has a function (the partial registered feature amount calculation function 18d) of calculating the feature vector (partial search feature amount QC) of each partial search image QP in the search image QI. Since one partial search image QP is obtained for each partial search region of the search image QI, when n geometric transformation parameters have been obtained, n feature vectors (partial search feature amounts QC) are also obtained. Each feature vector is, for example, a vector containing feature-point arrangements that are geometric transformation invariants.
The partial image matching unit 20 described above has a function of judging whether the feature vector (partial registered feature amount RC) of a partial registered image RP of the registered image RI, specified using the geometric transformation parameter, coincides with the feature vector (partial search feature amount QC) of the corresponding partial search image QP. The partial image matching unit 20 can judge, for example, that the two images coincide when the difference or the ratio between their feature vectors falls within a predetermined threshold range.
The image position detection unit 22 calculates, using the geometric transformation parameters, the positions of the partial search images QP judged to coincide by the partial image matching unit 20. In this way, the image position detection unit 22 determines, from the matching results obtained by the partial image matching unit 20, which partial registered image RP is contained at which position in the search image QI.
To calculate the feature vectors in the second embodiment, the partial region feature amount calculation unit 18 preferably includes, for example, a feature point extraction function 18a, a feature point arrangement function 18b and an invariant calculation function 18c.
The partial region feature amount calculation unit 18 calculates the feature vectors of the registered image RI, the partial registered images RP, the search image QI and the partial search images QP, each at its respective timing.
For example, the feature vectors of the registered image RI and of the partial registered images RP are preferably calculated in advance as registration processing. The feature vector of the search image QI is preferably calculated when the geometric transformation parameters are first estimated by the geometric transformation parameter estimation unit 16. The feature vectors of the partial search images QP are preferably calculated after the partial search images QP have been specified by the partial region determination unit 14 using the geometric transformation parameters.
The feature point extraction function 18a of the partial region feature amount calculation unit 18 calculates the feature points of the registered image RI, the partial registered images RP, the search image QI or the partial search images QP, each at its respective timing. A feature point is, for example, the center or centroid of a feature portion. Once the feature points have been obtained, the feature point arrangement function 18b calculates arrangements of a plurality of feature points. The invariant calculation function 18c then calculates, from the feature-point arrangements, feature vectors whose elements are geometric transformation invariants.
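A minimal sketch of this kind of feature-point extraction and arrangement is given below, assuming SciPy is available. Binarizing at the mean intensity and taking the k nearest neighbours as the "nearby" feature points are illustrative assumptions, since this description does not fix those details.

```python
import numpy as np
from scipy import ndimage

def extract_feature_points(gray, thresh=None):
    """Sketch of the feature point extraction function 18a: binarize the image
    and take the centroid of every connected region as a feature point.
    Thresholding at the mean intensity is an assumption made for brevity."""
    if thresh is None:
        thresh = gray.mean()
    binary = gray < thresh                       # dark marks on a light background
    labels, n = ndimage.label(binary)
    centroids = ndimage.center_of_mass(binary.astype(float), labels,
                                       list(range(1, n + 1)))
    return np.array(centroids)

def nearest_neighbour_arrangements(points, k=3):
    """Sketch of the feature point arrangement function 18b: for every feature
    point, collect the indices of its k nearest neighbours."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

if __name__ == "__main__":
    img = np.ones((32, 32))
    img[4:8, 4:8] = 0.0                          # two synthetic blobs
    img[20:26, 18:24] = 0.0
    pts = extract_feature_points(img)
    print(pts)                                   # one centroid per blob
    print(nearest_neighbour_arrangements(pts, k=1))
```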
As in the first embodiment described above, the image position detection unit 22 preferably includes a parameter reliability calculation function 22a and an image position estimation function 22b. The parameter reliability calculation function 22a calculates the geometric transformation parameter reliability, that is, the reliability of each geometric transformation parameter. The image position estimation function 22b then uses the geometric transformation parameter reliability to estimate the position of the search image QI at least partially contained in the registered image RI.
Next, an example of image processing using feature vectors in the second embodiment will be described with reference to FIG. 8.
In the example shown in FIG. 8, the partial region determination unit 14 first calculates partial regions of the registered image RI and thereby calculates one or more partial registered images RP (FIG. 8: step S201). When the search image QI is received and the search processing starts, the partial region feature amount calculation unit 18 calculates feature vectors whose elements are geometric transformation invariants of the search image QI and of the partial registered images RP.
In the feature vector calculation, the feature point extraction function 18a calculates the feature points of the partial registered images RP and of the search image QI (FIG. 8: step S202). The feature point arrangement function 18b then calculates arrangements of a plurality of feature points (FIG. 8: step S203). The invariant calculation function 18c further calculates, from the feature-point arrangements and coordinates, feature vectors whose elements are geometric transformation invariants (FIG. 8: step S204).
Subsequently, the geometric transformation parameter estimation unit 16 judges whether a feature vector calculated from a partial registered image RP coincides with a feature vector calculated from the search image QI (FIG. 8: step S205), and estimates the geometric transformation parameters on the basis of the coincidence of those feature vectors (FIG. 8: step S206). This coincidence and parameter determination (FIG. 8: steps S205, S206) may be performed by repeating the coincidence judgment while changing the parameter values, or as optimization processing such as voting or the least-squares method.
Once the geometric transformation parameters have been estimated, the partial region determination unit 14 calculates the partial search images QP of the search image QI by projection using those parameters (FIG. 8: step S207). When a plurality of geometric transformation parameters have been estimated, the same number of partial search images QP as geometric transformation parameters are calculated.
When the partial search images QP have been calculated, the partial region feature amount calculation unit 18 performs calculation again; that is, it calculates the feature vector of each partial search image QP (FIG. 8: step S208). The partial image matching unit 20 then judges whether the feature vector of each partial search image QP specified using the geometric transformation parameter coincides with the feature vector of the corresponding partial registered image RP (FIG. 8: step S209). The image position detection unit 22 calculates, using the geometric transformation parameters, the positions of the partial search images QP judged to coincide by the partial image matching unit 20.
In calculating the positions of the partial search images QP, the parameter reliability calculation function 22a preferably calculates the geometric transformation parameter reliability, that is, the reliability of each geometric transformation parameter (FIG. 8: step S210), and the image position estimation function 22b preferably estimates, using the geometric transformation parameter reliability, the position of the search image QI at least partially contained in the registered image RI (FIG. 8: step S211).
(Homography matrix)
Next, an example in which a homography matrix is used as the geometric transformation will be described with reference to FIG. 9.
In the example shown in FIG. 9, the geometric transformation parameter estimation unit 16 includes a homography matrix calculation function 16c that calculates a homography matrix as the geometric transformation parameter.
The partial region feature amount calculation unit 18 includes an invariant calculation function 18c that calculates, as features, geometric transformation invariants, that is, quantities that remain unchanged even when a partial search image QP has been geometrically transformed by a homography matrix relative to the partial registered image RP.
Furthermore, the image position detection unit 22 includes a projection position calculation function 22d that calculates the position of a partial search image QP by projecting a partial region of the registered image RI onto the search image QI using the homography matrix.
As in the first embodiment described above, the image position detection unit 22 preferably also includes a parameter reliability calculation function 22a and an image position estimation function 22b.
The homography matrix H is a geometric transformation matrix that associates coordinate values; it is a 3 × 3 matrix expressing the relationship between a position (xr, yr) in the registered image RI and a position (xq, yq) in the search image QI. Specifically, it satisfies the following equation (1), where the symbol a is a constant determined by the values of (xr, yr) and (xq, yq).
  a (xq, yq, 1)^T = H (xr, yr, 1)^T    ... (1)
In this example, the homography matrix calculation function 16c of the geometric transformation parameter estimation unit 16 calculates a homography matrix as the geometric transformation parameter. The invariant calculation function 18c of the partial region feature amount calculation unit 18 then calculates, as features, geometric transformation invariants, that is, quantities unchanged even when the partial search image QP has been geometrically transformed by the homography matrix relative to the partial registered image RP. Subsequently, the projection position calculation function 22d of the image position detection unit 22 operates and calculates the position of the partial search image QP by projecting the partial region of the registered image RI onto the search image QI using the homography matrix.
Using a homography matrix in this way makes it possible to match the partial images by exploiting a stable image processing technique.
This will now be described in detail.
FIG. 10 illustrates an example of the coincidence judgment results between the partial registered feature amounts RC and the partial search feature amounts QC when two homography matrices (H1, H2) have been obtained.
On the left side of FIG. 10, for the circumscribed rectangles surrounding the character "あ", the upper row shows the partial registered images RP (RP[1] to RP[8]) of the registered image RI (DB Image) read from the storage means 10, and the lower row shows the partial search images QP (QP[1] to QP[8], Compensated Image) obtained by projecting the search image QI.
The feature amounts calculated from these are indicated by the symbol IF in FIG. 10; they are calculated for each partial registered image RP and each partial search image QP.
The right side of FIG. 10 shows an example of the comparison results between the partial registered images RP and the partial search images QP. In the figure, ○ denotes a coincidence and × denotes a non-coincidence.
In this example it is assumed that, because of noise and the like, non-coincidence judgments occur for some partial regions that should actually coincide.
As the figure shows, the matching of the partial search images QP is executed (number of partial search images QP) × (number of obtained geometric transformation parameters) times. Specifically, in the example shown in FIG. 10, 8 × 2 = 16 comparison results are obtained.
When the comparisons are finished, the parameter reliability calculation function 22a of the image position detection unit 22 fixes the geometric transformation parameter H1 and tallies the matching results between the partial registered images RP calculated from the registered image RI and the partial search images QP calculated from the search image QI.
In the example shown in FIG. 11, for the homography matrix H1, seven of the eight partial search images QP calculated over the whole image are judged to coincide.
As a result, the reliability is calculated as 7/8. Here the proportion of coinciding partial search images QP is used as the reliability, but the reliability may also be calculated after weighting by the area of each partial search image QP.
The parameter reliability calculation function 22a of the image position detection unit 22 preferably organizes the matching results of the partial image matching unit 20 shown in FIG. 10 into the data structure of the reliability table TB of FIG. 11, calculates the parameter reliability from the number of coincidences relative to the total number of partial search images QP, and judges on the basis of that reliability whether the homography matrix, as a geometric transformation parameter, can be trusted.
In the example shown in FIG. 11, a geometric transformation parameter whose reliability is equal to or greater than a predetermined value, for example 5/8, is judged to be a trusted geometric transformation parameter, so the homography matrix H1 is judged to be a reliable geometric transformation parameter.
For the homography matrix H2, on the other hand, only one of the eight partial search images QP coincides, giving a reliability of 1/8. Since this is smaller than the predetermined value, the homography matrix H2 is not judged to be a reliable parameter.
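The reliability calculation of FIGS. 10 and 11 can be summarized in a few lines. The sketch below reproduces the 7/8 and 1/8 figures of the example above and also shows the optional area weighting; the function name and interface are hypothetical.

```python
def parameter_reliability(match_flags, areas=None):
    """Sketch of the parameter reliability of FIGS. 10 and 11: the ratio of
    coinciding partial search images QP, optionally weighted by their areas.
    `match_flags` holds one boolean per partial search image QP."""
    if areas is None:
        areas = [1.0] * len(match_flags)
    total = sum(areas)
    matched = sum(a for m, a in zip(match_flags, areas) if m)
    return matched / total if total else 0.0

if __name__ == "__main__":
    # The example of FIG. 11: H1 matches 7 of 8 regions, H2 only 1 of 8.
    h1 = parameter_reliability([True] * 7 + [False])
    h2 = parameter_reliability([True] + [False] * 7)
    threshold = 5 / 8
    print(h1, h1 >= threshold)   # 0.875 True  -> H1 judged reliable
    print(h2, h2 >= threshold)   # 0.125 False -> H2 rejected
```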
Next, the image position estimation function 22b of the image position detection unit 22 uses the trusted transformation parameter to estimate at which position in the search image QI the region corresponding to each partial registered image RP of the registered image RI lies. In the example shown in FIG. 11, only the homography matrix H1 is used as the geometric transformation parameter.
The relationship between the coordinates (xr, yr) in the registered image RI and the point (xq, yq) in the search image QI is expressed by equation (1) above.
Using this relationship, the position of the registered image RI (partial registered image RP) present in the search image QI can be obtained.
Note that a homography matrix projects a line segment in the registered image RI onto a line segment in the search image QI. Therefore, when the partial regions of the registered image RI are configured to be rectangles, projecting the four sides of each rectangular partial region of the registered image RI onto the search image QI yields the corresponding region in the search image QI.
[Third Embodiment]
Next, a third embodiment of the image matching device according to the present invention will be described with reference to FIGS. 12 to 19. The same reference numerals are used for the same constituent members as in the embodiments described above.
In the third embodiment, as shown in FIG. 12, an integration processing unit 24 is newly provided alongside the image position detection unit 22 of the image matching device 103. In addition, the geometric transformation parameter estimation unit 16 of the image processing means 12 is equipped with a multiple parameter calculation function 16d, and the image position detection unit 22 is newly provided with a position candidate calculation function 22e.
The multiple parameter calculation function 16d of the geometric transformation parameter estimation unit 16 estimates a plurality of geometric transformation parameters for each partial search image QP when a plurality of partial regions have been calculated by the partial region determination unit 14. Through this processing, a plurality of geometric transformation parameters are estimated for one partial search image QP.
With the position candidate calculation function 22e, the image position detection unit 22 calculates candidates for the position of each coinciding partial search image QP as position candidates, one per geometric transformation parameter. Through this processing, the geometric transformation parameters that satisfy the condition of coinciding with a partial registered image RP, and the partial search images QP that can be specified by those parameters, remain.
In the third embodiment, the newly provided integration processing unit 24 calculates the position of a partial search image QP by integrating the position candidates of partial search images QP that overlap one another in the search image QI. This allows the number of partial search images QP to be calculated with high accuracy.
The integration processing unit 24 integrates two or more of the position candidates (candidates for the positions of the partial search images QP) output by the image position detection unit 22. This makes it possible to calculate a representative position from slightly differing position candidates (geometric transformation parameters).
Referring to FIG. 13, the integration processing unit 24 may include any one of a union position function 24a, a reliability position function 24b and a voting position function 24c. The integration processing unit 24 may also include two or more of these three functions and perform the integration processing using a plurality of methods in combination.
The union position function 24a calculates the union of a plurality of position candidates and takes the position of that union as the position.
In the example shown in FIG. 14, when the plurality of partial search images QP[10a], QP[10b] and QP[10c] have been calculated, the union position function 24a integrates the union region (OR region) of the mutually overlapping regions, as indicated by the union (1) in FIG. 13, and takes the resulting outer peripheral shape as the position of the partial search image QP[10A].
In this way, the feature portions of the partial search images QP can be detected without omission. The union position function 24a similarly integrates the plurality of partial search images QP[11a] and QP[11b] into the partial search image QP[11A].
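A simple way to picture the union position function 24a is the sketch below, which merges overlapping position candidates simplified to axis-aligned bounding boxes; the box representation and all names are assumptions made for illustration, since the description speaks more generally of OR regions and outer peripheral shapes.

```python
def overlaps(a, b):
    """Axis-aligned boxes (x0, y0, x1, y1); True when they intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def merge_overlapping_boxes(boxes):
    """Sketch of the union position function 24a, simplified to axis-aligned
    bounding boxes: position candidates that overlap are merged into the
    bounding box of their union."""
    merged = []
    for box in boxes:
        box = list(box)
        changed = True
        while changed:
            changed = False
            for i, m in enumerate(merged):
                if overlaps(box, m):
                    box = [min(box[0], m[0]), min(box[1], m[1]),
                           max(box[2], m[2]), max(box[3], m[3])]
                    del merged[i]
                    changed = True
                    break
        merged.append(tuple(box))
    return merged

if __name__ == "__main__":
    # Three slightly shifted candidates for one region plus a distant one.
    candidates = [(10, 10, 50, 40), (12, 11, 52, 42), (9, 12, 49, 41),
                  (200, 80, 240, 110)]
    print(merge_overlapping_boxes(candidates))   # two integrated regions
```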
The reliability position function 24b calculates the reliability of the geometric transformation parameters and determines the position using a geometric transformation parameter of high reliability. That is, when there are mutually overlapping partial search images QP[11a] and QP[11b], the reliability position function 24b selects the position candidate given by the geometric transformation parameter with the highest reliability among the parameters that specified QP[11a] and QP[11b].
In the example shown in FIG. 14, the partial search image QP[11B] is thereby selected. Similarly, the reliability position function 24b uses the most reliable geometric transformation parameter among those of the plurality of partial search images QP[10a], QP[10b] and QP[10c] to obtain the partial search image QP[10B]. The position candidate may also be selected using a statistic of the parameter values PM of the geometric transformation parameters whose reliability is equal to or greater than a certain value.
The voting position function 24c controls voting of the geometric transformation parameters into a parameter space and determines the position using a statistic of the plurality of geometric transformation parameters whose voting results are equal to or greater than a threshold value. The statistic is a value that can be calculated by statistical processing, for example a mean or a median.
FIG. 15 shows an example of the integration processing performed by the voting position function 24c. The voting position function 24c votes the parameter values PM[1], PM[2], ..., PM[n] of the reliable geometric transformation parameters into the ballot boxes VT[1], VT[2], ..., VT[n], calculates the average of the geometric transformation parameters associated with each ballot box VT that received at least a certain number of votes, and takes this as the result.
In the example shown in FIG. 15, as a result of voting the parameter values PM of the partial search images QP[10a], QP[10b] and QP[10c], the votes for one ballot box VT[1] exceed a certain number, so the corresponding geometric transformation parameter is used to specify the position of the partial search image QP[10C]. Similarly, the geometric transformation parameter corresponding to the ballot box VT[2] is used to specify the position of the partial search image QP[11C].
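The ballot-box integration of FIG. 15 can be sketched as follows, with the parameter values reduced to short numeric vectors; the bin size, vote threshold and example values are hypothetical, and only the overall vote-then-average structure follows the description above.

```python
import numpy as np
from collections import defaultdict

def integrate_by_parameter_voting(param_values, bin_size=1.0, min_votes=2):
    """Sketch of the voting position function 24c: parameter values PM are
    voted into coarse bins of the parameter space (the 'ballot boxes' VT),
    and every bin that collects at least `min_votes` votes is replaced by the
    average of the parameter values that fell into it."""
    boxes = defaultdict(list)
    for pm in np.asarray(param_values, float):
        key = tuple(np.floor(pm / bin_size).astype(int))   # ballot box index
        boxes[key].append(pm)
    return [np.mean(v, axis=0) for v in boxes.values() if len(v) >= min_votes]

if __name__ == "__main__":
    # Hypothetical 2-D parameter values: three near (3.2, 7.2), two near (9.65, 1.2).
    pms = [(3.1, 7.1), (3.2, 7.3), (3.4, 7.2), (9.7, 1.3), (9.6, 1.1)]
    print(integrate_by_parameter_voting(pms))   # one averaged parameter per box
```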
(Operation of the third embodiment)
Next, the operation of the third embodiment will be described with reference to FIGS. 16 and 17.
The difference from the image processing of the first embodiment described above (see FIG. 4) is that the process of detecting the positions of the partial search images QP (FIG. 4: step S107) is replaced in the third embodiment by the position detection process for the partial search images QP (FIG. 16: step S301).
FIG. 17 is a flowchart showing a detailed operation example of the position detection process for the partial search images QP (FIG. 16: step S301).
The process of detecting the regions of the search image QI in which the partial registered images RP of the registered image RI exist (the positions of the partial search images QP) (FIG. 16: step S301) differs from step S107 of FIG. 4 described above in that a region integration process (FIG. 17: step S311) is added after the position candidate estimation process (FIG. 5: step S114). In this region integration process (FIG. 17: step S311), the integration processing unit 24 integrates the regions by the methods described above.
As described above, the third embodiment is configured to calculate a representative position from slightly differing position candidates, so a single region is more likely to be obtained for each registered image RI contained in the search image QI. Consequently, the number of registered images RI contained in the search image QI can be determined with high accuracy.
In particular, because the integration processing unit 24 performs the integration processing on partial search images QP that overlap one another in the search image QI, search results that, owing to the overlap, are assumed to refer to the same image can be integrated cleanly into one. Furthermore, by using a value such as the average of the parameter values PM of the geometric transformation parameters whose reliability or vote count is equal to or greater than a certain number, a position close to the true value can be calculated.
Moreover, by integrating the overlapping partial search images QP by one of these methods, the number of partial registered images RP (that is, of the corresponding partial search images QP) in the search image QI can be counted strictly.
(Information display)
The image processing means 12 in the third embodiment is provided with a display device 95, as shown in FIG. 12. The image processing means 12 is also provided with a display control unit 26 that controls the information display operation on the display device 95.
The display control unit 26 controls the display of display data DP related to a partial registered image RP at the position corresponding to the partial search image QP. This improves the convenience of image retrieval.
The display data DP may be any expression or description, as long as its content shows the user the result of a successful image match. When the shapes of printed or handwritten characters are searched for, various kinds of display data DP can be adopted according to the application, such as the character data, its meaning, a link to data containing the character, or a translation into another language. When the search result is a sign or a guide map, an explanation of its contents can also be displayed.
The display control unit 26 may include a similarity display function 26a. The similarity display function 26a controls the display of the similarity between a partial registered image and a partial search image as display data DP.
FIG. 18 shows an example in which the positions of the partial registered images RP[1] and RP[2] detected in the search image QI are displayed to show their correspondence with the search image QI.
In the example of FIG. 18, the display control unit 26 displays the entire search image QI on the display device 95, shows the type and content of the partial registered images RP[1] and RP[2] that were found in association with the corresponding partial search images QP, and displays the similarity value as the display data DP.
Accordingly, the display control unit 26 displays "similarity 0.9", which is the display data DP[1], in association with the partial registered image RP[1], and "similarity 0.8", which is the display data DP[2], in association with the partial registered image RP[2]. In this way, the certainty of the search for each partial search image QP can be presented in an easy-to-understand manner, and the similarity display function 26a conveys the accuracy of the search result to the user.
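As a rough sketch of this kind of overlay, the snippet below draws each detected region and its similarity value onto the search image with OpenCV. The detection list, box coordinates, and label format are assumptions made for illustration; the patent does not prescribe this particular rendering.

```python
import cv2
import numpy as np


def draw_similarity(search_img: np.ndarray, detections) -> np.ndarray:
    """detections: iterable of ((x0, y0, x1, y1), similarity) in search-image coordinates."""
    canvas = search_img.copy()
    for (x0, y0, x1, y1), sim in detections:
        cv2.rectangle(canvas, (x0, y0), (x1, y1), (0, 255, 0), 2)
        cv2.putText(canvas, f"similarity {sim:.1f}", (x0, max(15, y0 - 8)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return canvas


# e.g. draw_similarity(qi, [((30, 40, 180, 200), 0.9), ((220, 60, 360, 210), 0.8)])
```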
The display control unit 26 may also include a link display function 26b. The link display function 26b displays, as the display data DP, a link to content predetermined in association with the partial registered image RP. The link may point to content in the same image processing apparatus, or, when the image matching device 103 is connected to the network 96, to content managed by another server device 70.
The link can point to any information, or link to any information, that a computer can ordinarily handle, such as audio or a URL. This provides not only a simple image display but also an environment in which the user can easily use various media in an integrated manner.
The example shown in FIG. 19 superimposes information associated with a registered image RI (or partial registered image RP) stored in advance in the storage means 10 or the like, according to the image matching result. In this example, the entire search image QI is displayed on the entire display device 95. Since a position candidate for the registered image RI, or an integrated representative region, has been detected, information can be presented around that region.
As shown in FIG. 19, the partial search image QP[3] corresponding to a partial registered image RP is detected in the search image QI, and the information "Explanation 1, 説明文1" (display data DP[3]) is displayed. The display data DP[3] is a sentence consisting of a text character string. If sentences expressing the same explanation in various languages are stored in advance, the language in which the explanation is presented to the user can also be selected and displayed.
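A trivial sketch of selecting a stored explanation by language follows; the dictionary contents and function name are invented for illustration only.

```python
# Hypothetical per-language display data DP stored for one registered image.
explanations = {
    "en": "Explanation 1",
    "ja": "説明文1",
}


def pick_explanation(lang: str, fallback: str = "en") -> str:
    """Return the explanation in the requested language, falling back to a default."""
    return explanations.get(lang, explanations[fallback])


print(pick_explanation("ja"))  # -> 説明文1
```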
The display control unit 26 then controls the display of the display data DP[4] corresponding to the partial search image QP[4], so that the difference between the partial search image QP[3] and the partial search image QP[4] can be shown clearly. Furthermore, if the explanatory text serving as the display data DP is made a link to other content, the user can access a more detailed explanation with a simple operation.
Furthermore, an image may be displayed instead of mere text. For the image display, the partial registered image RP may be displayed as it is, or the registered image RI may be deformed into a trapezoid using the values of the homography matrix before being displayed (for example, FIG. 2 of Patent Document 1).
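A minimal sketch of that kind of warped overlay is shown below, assuming OpenCV and a 3x3 homography H mapping registered-image coordinates into the search image; the blending weight is an arbitrary choice for illustration, not part of the patent.

```python
import cv2
import numpy as np


def overlay_registered(search_img: np.ndarray, registered_img: np.ndarray,
                       H: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Warp the registered image with homography H and blend it onto the search image."""
    h, w = search_img.shape[:2]
    warped = cv2.warpPerspective(registered_img, H, (w, h))  # registered image drawn as a trapezoid
    mask = cv2.warpPerspective(np.ones(registered_img.shape[:2], np.uint8), H, (w, h)) > 0
    out = search_img.copy()
    out[mask] = (alpha * warped[mask] + (1 - alpha) * search_img[mask]).astype(search_img.dtype)
    return out
```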
If a speaker (not shown) is provided, audio associated with the registered image RI can also be played back automatically. The display and automatic playback described above can be combined in any number.
The third embodiment of the present invention can be applied to uses such as an information providing apparatus that provides guidance information. When the registered image RI is a signboard or a guide sentence, it can also be applied to a parallel translation display system that displays a translation of its contents.
The other configurations and their effects are the same as those of the second embodiment described above.
Here, the configurations and operations of the image matching devices 101, 102, and 103 described above are merely examples of implementation, and the configuration and the order of operations can be changed as long as the principle of image matching is not impaired.
In each of the embodiments of the arrangement matching device and the image matching devices 101, 102, and 103 described above, the case where the various means are implemented in software has been described as an example. However, part or all of each means can also be configured in hardware.
[Relationship with hardware resources]
Next, the relationship between the information processing common to the image matching devices 101, 102, and 103 of the first to third embodiments and the hardware resources it requires will be described, taking the image matching device 101 as an example (see FIG. 20).
The information processing performed by the image matching device 101 disclosed in FIGS. 1 to 7 represents concrete technical content in which software and hardware resources cooperate to compute or process information according to the purpose of use.
As hardware resources, the image matching device 101 actually includes a computer 80 that performs information processing, as shown in FIG. 20. The computer 80 includes computing means 82, which is a central processing unit (CPU), and main storage means 86 that provides a storage area to the computing means 82. The computer 80 generally has peripheral devices connected through a data bus and an input/output interface. The peripheral devices are typically communication means 88, external storage means 90, input means 92, and output means 94. The whole including the peripheral devices may also be referred to as the computer 80.
The communication means 88 controls communication with the server device 70 via a wired or wireless network. The external storage means 90 is a fixed or portable storage medium that stores the program file 100 and data. The input means 92 is a keyboard, a touch panel, a pointing device, a scanner, or the like, and inputs data readable by the computer 80 in accordance with a user operation. The output means 94, such as a display or a printer, outputs data computed by the computer 80.
In the first embodiment, in particular, the storage means 10 uses the external storage means 90 as a hardware resource and stores registered images RI having feature amounts. The storage means 10 may use as hardware resources not only the external storage means 90 directly connected to the computer 80 serving as the image processing means 12, but also the database 72 of the server device 70 connected via the network 96, from which the registered images RI and the like are obtained.
As the storage means 10, the external storage means 90 or the database 72 stores the registered images RI, the partial registered images RP, and the partial registered feature amounts RC. In the image matching process of each embodiment, it also stores the search image QI, the partial search images QP, the partial search feature amounts QC, the reliability table TB, and the like. In relation to the third embodiment, the display data DP associated with the partial registered images RP may also be stored.
The search image QI may be received from a scanner or a facsimile receiving device serving as the input means 92 connected to the computer 80, or from another server device 70 or the like. The image matching result may be displayed directly on the display device 95 of the output means 94, or may be stored in the external storage means 90 or the database 72 and transmitted as data in response to access from another server device 70 or the like.
The image processing means 12 described above uses the computing means 82, which is a CPU, as a hardware resource to compare the registered image RI with the search image QI to be matched. The main storage means 86 temporarily stores the various data necessary for the CPU's computation. Such data are the data compared and generated in each step of the flowcharts showing the image processing examples of the embodiments, for example the partial registered images RP and the partial search images QP, as well as the feature amounts and the parameter values PM of the geometric transformation parameters.
Each unit and each function of the image matching device shown in FIG. 1 and elsewhere performs information processing concretely realized by software using hardware resources. Information processing is a combination of operations such as computation (logical operations), comparison, conditional branching, repetition, and decision. The software has program procedures (instructions, code) corresponding to the execution environment of the computer 80. The code group is generally stored in the external storage means 90 as the program file 100. The program file 100 may also be downloaded from the server device 70 in response to a request from the computer 80. Of the code group contained in the program file 100, the program for performing image matching is the image matching program.
The information processing organized as the means, units, and functions of the image matching device can be realized in cooperation with the above hardware resources by causing the computer 80 included in the image matching device to execute the image matching program.
The same applies to the other image matching devices 102 and 103.
As described above, the present invention has been illustrated by way of the first to third embodiments, but its technical content is not limited to these embodiments; the present invention encompasses any variations as long as they achieve equivalent effects.
The main points of the novel technical content of each of the embodiments described above are summarized as follows. Part or all of the above embodiments can be summarized as the following novel techniques, but the present invention is not necessarily limited thereto.
[Appendix 1]
An image matching device comprising storage means 10 that stores a registered image having predetermined feature points, and image processing means 12 that compares the registered image with a search image to be matched, wherein
the image processing means 12 comprises:
a geometric transformation parameter estimation unit 16 that estimates geometric transformation parameters based on the feature points of the registered image and the search image and their arrangement;
a partial region determination unit 14 that determines, using the geometric transformation parameters estimated by the geometric transformation parameter estimation unit 16, projected partial image regions in the search image corresponding to the respective partial region images in the registered image;
a partial region feature amount calculation unit 18 that calculates feature amounts of each of the plurality of projected partial image regions of the search image and of each partial region image of the registered image;
a partial image matching unit 20 that determines a match between a partial search image and a partial registered image by comparing the feature amounts of the plurality of partial search images with the feature amount of the partial registered image specified by a partial region of the registered image; and
a registered image position detection unit 22 that calculates and detects the position of the matched partial search image using the geometric transformation parameters.
[Appendix 2]
The image matching device according to Appendix 1, wherein
the geometric transformation parameter estimation unit 16 includes a parameter estimation function (candidate parameter calculation function 16a) that calculates and estimates a plurality of candidate sets of the geometric transformation parameters for the partial search image, and
the image position detection unit 22 includes a reliability calculation function 22a that calculates, for each estimated set of geometric transformation parameters, the degree of match between the partial search image and the partial registered image and sets a higher reliability for geometric transformation parameters with a higher degree of match, and a whole image position estimation function 22b that calculates and estimates, using the geometric transformation parameters with high reliability, in which region of the search image the image region corresponding to the registered image is located.
[Appendix 3]
The image matching device according to Appendix 1 or 2, wherein
the image position detection unit 22 includes a corresponding position calculation function 22c that calculates the position of the partial search image using the geometric transformation parameters and preset coordinates of the partial registered image.
[Appendix 4]
The image matching device according to Appendix 1, 2, or 3, wherein
the geometric transformation parameter estimation unit 16 includes a multiple parameter estimation function 16d that estimates a plurality of geometric transformation parameters for each partial search image when a plurality of partial regions are calculated by the partial region determination unit 14,
the image position detection unit 22 includes a position candidate calculation function (22e) that calculates candidates for the position of the matched partial search image as position candidates for each set of geometric transformation parameters, and
the image position detection unit 22 is provided with an accompanying integration processing unit 24 that, when a plurality of partial search images overlap in the search image, calculates the position of the partial search image by integrating the position candidates of the overlapping partial search images.
[Appendix 5]
The image matching device according to Appendix 4, wherein
the integration processing unit 24 includes a union position determination function 24a that calculates the union of the plurality of position candidates and determines the position of the union as the position of the partial search image.
[Appendix 6]
The image matching device according to Appendix 4, wherein
the integration processing unit 24 includes a search image position determination function 24b that calculates the reliability of the geometric transformation parameters and determines the position of the partial search image using the geometric transformation parameters with high reliability.
[Appendix 7]
The image matching device according to Appendix 4, wherein
the integration processing unit 24 includes a second search image position determination function 24b that controls voting of the geometric transformation parameters into a parameter space and determines the position of the partial search image using statistics of the plurality of geometric transformation parameters whose voting results are equal to or greater than a threshold value.
[Appendix 8]
The image matching device according to any one of Appendices 1 to 7, wherein
the image processing means 12 includes a display control unit 26 that controls the display of display data DP related to the partial registered image at a position, corresponding to the partial search image, on a display device 95 provided in advance.
[Appendix 9]
An image matching method for an image matching device comprising storage means that stores a registered image having predetermined feature amounts and image processing means that compares the registered image with a search image to be matched, wherein
the geometric transformation parameter estimation unit 16 of the image processing means estimates geometric transformation parameters based on the feature points of the registered image and the search image and their arrangement;
the partial region determination unit 14 of the image processing means determines, using the estimated geometric transformation parameters, a plurality of projected partial image regions in the search image corresponding to the respective partial region images in the registered image;
the partial region feature amount calculation unit 18 of the image processing means calculates feature amounts of each of the plurality of projected partial image regions of the search image and of each partial region image of the registered image;
the partial image matching unit 20 of the image processing means compares the corresponding feature amounts of the plurality of partial search images with the feature amount of the partial registered image specified by a partial region of the registered image, and determines whether the partial search image matches the partial registered image; and
when the partial image matching unit 20 determines that the partial search image matches the partial registered image, the image position detection unit 22 of the image processing means calculates and detects the position of the partial search image based on the geometric transformation parameters of the search image with respect to the registered image.
[Appendix 10]
An image matching processing program for an image matching device comprising storage means that stores a registered image having predetermined feature amounts and image processing means that compares the registered image with a search image to be matched, the program causing a computer provided in the image processing means to execute:
a geometric transformation parameter estimation procedure of estimating geometric transformation parameters based on the feature points of the registered image and the search image and their arrangement;
a partial region determination procedure of determining, using the estimated geometric transformation parameters, a plurality of projected partial image regions in the search image corresponding to the respective partial region images in the registered image;
a feature amount calculation procedure of calculating feature amounts of each of the plurality of projected partial image regions of the search image and of each partial region image of the registered image;
a partial image matching procedure of determining a match between a partial search image and a partial registered image by comparing the corresponding feature amounts of the plurality of partial search images with the feature amount of the partial registered image specified by a partial region of the registered image; and
an image position detection procedure of, when it is determined in the partial image matching procedure that the partial search image matches the partial registered image, calculating and detecting the position of the matched partial search image using the geometric transformation parameters.
This application claims priority based on Japanese Patent Application No. 2012-038255 filed on February 24, 2012, the entire disclosure of which is incorporated herein.
The present invention is applicable to any information processing apparatus that searches a search image for a part identical or similar to a part of a registered image RI.
DESCRIPTION OF SYMBOLS
10 storage means
12 image processing means
14 partial region determination unit
16 geometric transformation parameter estimation unit
16a candidate parameter calculation function
16d multiple parameter calculation function
18 partial region feature amount calculation unit
20 partial image matching unit
22 image position detection unit
22a parameter reliability calculation function
22b image position estimation function
22c corresponding position calculation function
22d projection position calculation function
22e position candidate calculation function
24 integration processing unit
24a union position determination function
24b search image position determination function
24c other image position determination function
26 display control unit
95 display device

Claims (10)

1.  An image matching device comprising storage means that stores a registered image having predetermined feature points, and image processing means that compares the registered image with a search image to be matched, wherein
    the image processing means comprises:
    a geometric transformation parameter estimation unit that estimates geometric transformation parameters based on the feature points of the registered image and the search image and their arrangement;
    a partial region determination unit that determines, using the geometric transformation parameters estimated by the geometric transformation parameter estimation unit, projected partial image regions in the search image corresponding to the respective partial region images in the registered image;
    a partial region feature amount calculation unit that calculates feature amounts of each of the plurality of projected partial image regions of the search image and of each partial region image of the registered image;
    a partial image matching unit that determines a match between a partial search image and a partial registered image by comparing the feature amounts of the plurality of partial search images with the feature amount of the partial registered image specified by a partial region of the registered image; and
    a registered image position detection unit that calculates and detects the position of the matched partial search image using the geometric transformation parameters.
2.  The image matching device according to claim 1, wherein
    the geometric transformation parameter estimation unit includes a parameter estimation function that calculates and estimates a plurality of candidate sets of the geometric transformation parameters for the partial search image, and
    the image position detection unit includes a parameter reliability calculation function that calculates, for each estimated set of geometric transformation parameters, the degree of match between the partial search image and the partial registered image and sets a higher reliability for geometric transformation parameters with a higher degree of match, and an image position estimation function that calculates and estimates, using the geometric transformation parameters with high reliability, in which region of the search image the image region corresponding to the registered image is located.
3.  The image matching device according to claim 1 or 2, wherein
    the image position detection unit includes a corresponding position calculation function that calculates the position of the partial search image using the geometric transformation parameters and preset coordinates of the partial registered image.
4.  The image matching device according to claim 1, 2, or 3, wherein
    the geometric transformation parameter estimation unit includes a multiple parameter estimation function that estimates a plurality of geometric transformation parameters for each partial search image when a plurality of partial regions are calculated by the partial region determination unit,
    the image position detection unit includes a position candidate calculation function that calculates candidates for the position of the matched partial search image as position candidates for each set of geometric transformation parameters, and
    the image position detection unit is provided with an accompanying integration processing unit that, when a plurality of partial search images overlap in the search image, calculates the position of the partial search image by integrating the position candidates of the overlapping partial search images.
5.  The image matching device according to claim 4, wherein
    the integration processing unit includes a union position determination function that calculates the union of the plurality of position candidates and determines the position of the union as the position of the partial search image.
6.  The image matching device according to claim 4, wherein
    the integration processing unit includes a search image position determination function that calculates the reliability of the geometric transformation parameters and determines the position of the partial search image using the geometric transformation parameters with high reliability.
7.  The image matching device according to claim 4, wherein
    the integration processing unit includes a second search image position determination function that controls voting of the geometric transformation parameters into a parameter space and determines the position of the partial search image using statistics of the plurality of geometric transformation parameters whose voting results are equal to or greater than a threshold value.
8.  The image matching device according to any one of claims 1 to 7, wherein
    the image processing means includes a display control unit that controls the display of display data DP related to the partial registered image at a position, corresponding to the partial search image, on a display device provided in advance.
9.  An image matching method for an image matching device comprising storage means that stores a registered image having predetermined feature amounts and image processing means that compares the registered image with a search image to be matched, wherein
    a geometric transformation parameter estimation unit of the image processing means estimates geometric transformation parameters based on the feature points of the registered image and the search image and their arrangement;
    a partial region determination unit of the image processing means determines, using the estimated geometric transformation parameters, a plurality of projected partial image regions in the search image corresponding to the respective partial region images in the registered image;
    a partial region feature amount calculation unit of the image processing means calculates feature amounts of each of the plurality of projected partial image regions of the search image and of each partial region image of the registered image;
    a partial image matching unit of the image processing means compares the corresponding feature amounts of the plurality of partial search images with the feature amount of the partial registered image specified by a partial region of the registered image, and determines whether the partial search image matches the partial registered image; and
    when the partial image matching unit determines that the partial search image matches the partial registered image, an image position detection unit of the image processing means calculates and detects the position of the partial search image based on the geometric transformation parameters of the search image with respect to the registered image.
10.  An image matching processing program for an image matching device comprising storage means that stores a registered image having predetermined feature amounts and image processing means that compares the registered image with a search image to be matched, the program causing a computer provided in the image processing means to execute:
    a geometric transformation parameter estimation procedure of estimating geometric transformation parameters based on the feature points of the registered image and the search image and their arrangement;
    a partial region determination procedure of determining, using the estimated geometric transformation parameters, a plurality of projected partial image regions in the search image corresponding to the respective partial region images in the registered image;
    a partial region feature amount calculation procedure of calculating feature amounts of each of the plurality of projected partial image regions of the search image and of each partial region image of the registered image;
    a partial image matching procedure of determining a match between a partial search image and a partial registered image by comparing the corresponding feature amounts of the plurality of partial search images with the feature amount of the partial registered image specified by a partial region of the registered image; and
    an image position detection procedure of, when it is determined in the partial image matching procedure that the partial search image matches the partial registered image, calculating and detecting the position of the matched partial search image using the geometric transformation parameters.
PCT/JP2013/053896 2012-02-24 2013-02-18 Image verification device, image verification method, and program WO2013125494A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-038255 2012-02-24
JP2012038255 2012-02-24

Publications (1)

Publication Number Publication Date
WO2013125494A1 true WO2013125494A1 (en) 2013-08-29

Family

ID=49005679

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/053896 WO2013125494A1 (en) 2012-02-24 2013-02-18 Image verification device, image verification method, and program

Country Status (1)

Country Link
WO (1) WO2013125494A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006269651A (en) * 2005-03-23 2006-10-05 Canon Inc Image extraction device and its method, and exposure apparatus provided therewith and its method
JP2009087087A (en) * 2007-09-28 2009-04-23 Toshiba Corp License plate information processor and license plate information processing method
JP2009251892A (en) * 2008-04-04 2009-10-29 Fujifilm Corp Object detection method, object detection device, and object detection program
JP2010103877A (en) * 2008-10-27 2010-05-06 Sony Corp Image processing apparatus, image processing method, and program
WO2010053109A1 (en) * 2008-11-10 2010-05-14 日本電気株式会社 Image matching device, image matching method, and image matching program



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13751836

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13751836

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP