WO2012029662A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2012029662A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
template
eigenfunctions
similarity
geometric transformation
Application number
PCT/JP2011/069320
Other languages
English (en)
Japanese (ja)
Inventor
上瀧剛 (Go Koutaki)
内村圭一 (Keiichi Uchimura)
Original Assignee
国立大学法人熊本大学 (Kumamoto University)
Application filed by 国立大学法人熊本大学 (Kumamoto University)
Priority to JP2012531845A (granted as JP5317250B2)
Publication of WO2012029662A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/24: Aligning, centring, orientation detection or correction of the image
    • G06V10/242: Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees

Definitions

  • The present invention relates to an image processing method and an image processing apparatus that perform image matching using a template image.
  • Image matching is used, for example, in software for inspecting parts.
  • One approach to image matching is to use a template image.
  • For example, a component that is known to be a non-defective product is imaged, and a template image is prepared in advance from the captured image. The input image obtained by imaging the part to be inspected is then collated with the template image, and the part is inspected by comparing the two images.
  • In order to detect the position and direction angle of a part to be inspected with high accuracy, processing is performed in which a template image given in advance and an input image are collated with sub-pixel accuracy (Non-Patent Document 1).
  • A rotation collation method is known in which collation is performed while rotating a template image with respect to an input image.
  • The rotation collation method is an image matching technique using geometric transformation, in which geometric transformation is applied to the template image.
  • An image similarity indicating the degree of similarity to the input image is calculated while the template image is rotated and translated, and the direction angle and coordinate values of the input image are detected from the peak value of the image similarity. That is, according to the rotation collation method, the template image is rotated and translated at a predetermined pitch, the rotation angle and XY coordinate values that bring the template image closest to the input image are searched for, and the direction angle and coordinate values of the input image are thereby detected.
  • As another image matching technique, the phase-only correlation method is known (see Non-Patent Documents 4 to 6).
  • The phase-only correlation method has attracted attention as a technique that enables high-speed and high-precision alignment.
  • However, the phase-only correlation method does not detect the rotation angle of an image with very high accuracy, and cannot sufficiently cope with image deformations other than misalignment, rotation, and enlargement/reduction.
  • There is also a technique that performs image matching after converting the template image and the input image into an edge direction image or an incremental sign image, in order to make the matching robust against brightness fluctuations and occlusion of the input image (Non-Patent Document 7).
  • However, this method does not support rotation of the input image.
  • The present invention has been made in view of the background art described above. It is an object of the present invention to provide an image processing method and an image processing apparatus that, compared with conventional image collation techniques using geometric transformation, such as the rotation collation method that collates while rotating a template image, can ensure the same accuracy with fewer collations and thereby reduce the processing time.
  • The image processing method of the present invention is an image processing method for determining the value of a geometric transformation parameter of an input image with reference to a predetermined template image by performing collation using the template image.
  • The method comprises: a first image group generation step of generating a first image group composed of a plurality of geometrically transformed template images having different transformation amounts, by applying geometric transformation to the template image; an eigenvalue decomposition step of performing eigenvalue decomposition on the first image group to calculate the same number of eigenvalues and eigenfunctions as the geometrically transformed template images; a second image group generation step of generating a second image group by performing an inner product operation of the plurality of eigenfunctions and the first image group; an image similarity calculation step of calculating an image similarity, represented by a continuous curve, indicating the degree of similarity between the input image and the second image group; and a collation step of determining the value of the geometric transformation parameter based on the image similarity calculated in the image similarity calculation step.
  • Preferably, the method further comprises a selection step of selecting, from the plurality of eigenfunctions calculated in the eigenvalue decomposition step, a predetermined number of eigenfunctions in descending order of their eigenvalues.
  • In that case, the second image group generation step generates the second image group by performing an inner product operation of the plurality of eigenfunctions selected in the selection step and the first image group.
  • Preferably, the geometric transformation parameter includes a direction angle and a two-dimensional coordinate value, and the geometric transformation includes rotation that changes the direction angle and translation that changes the two-dimensional coordinate value.
  • Alternatively, the geometric transformation parameters may include the parameters necessary for projective transformation, and the geometric transformation may include projective transformation.
  • Likewise, the geometric transformation parameters may include the parameters necessary for an image blurring process, and the geometric transformation may include the image blurring process.
  • The image processing apparatus of the present invention comprises input image acquisition means for acquiring an input image, and image processing means for determining the value of a geometric transformation parameter of the input image with reference to a predetermined template image by performing collation using the template image.
  • The image processing means comprises: a first image group generation unit that generates a first image group composed of a plurality of geometrically transformed template images having different transformation amounts, by applying geometric transformation to the template image; an eigenvalue decomposition unit that calculates the same number of eigenvalues and eigenfunctions as the geometrically transformed template images by performing eigenvalue decomposition on the first image group; a second image group generation unit that generates a second image group by performing an inner product operation of the plurality of eigenfunctions and the first image group; an image similarity calculation unit that calculates an image similarity, represented by a continuous curve, indicating the degree of similarity between the input image and the second image group; and a collation unit that determines the value of the geometric transformation parameter based on the image similarity calculated by the image similarity calculation unit.
  • Preferably, the image processing means further comprises a selection unit that selects, from the plurality of eigenfunctions calculated by the eigenvalue decomposition unit, a predetermined number of eigenfunctions in descending order of their eigenvalues.
  • In that case, the second image group generation unit generates the second image group by performing an inner product operation of the plurality of eigenfunctions selected by the selection unit and the first image group.
  • According to the present invention, compared with conventional image matching techniques using geometric transformation, such as the rotation collation method that performs matching while rotating a template image, the same accuracy can be ensured with fewer collations, and the processing time can be shortened.
  • FIG. 1 is a schematic explanatory diagram of an image processing method according to an embodiment of the present invention. The other drawings include an explanatory diagram of the image processing method, a diagram illustrating the configuration of the image processing apparatus, a control block diagram of the image processing apparatus, an explanatory diagram of the image processing performed by the image processing apparatus, a flowchart showing an example of the processing sequence performed by the image processing apparatus, and an explanatory diagram of an example of that processing sequence, each according to an embodiment of the present invention.
  • The present invention focuses on the fact that the result obtained by collating a template image with an input image while rotating the template image can equally be obtained by collating the input image with new template images obtained by eigenvalue decomposition of the rotated template images.
  • The present invention thereby performs high-speed, high-precision collation by collating the input image with these new template images. Embodiments of the present invention will be described below.
  • The image processing method of this embodiment determines the value of a geometric transformation parameter of the input image 2 with reference to the template image 1 by performing collation using the predetermined template image 1.
  • As the geometric transformation parameters, for example, a direction angle and a two-dimensional coordinate value (XY coordinate value) are used.
  • That is, the template image 1 is collated with the input image 2, and the direction angle and two-dimensional coordinate value at which the image similarity, i.e. the degree to which the template image 1 approximates the input image 2, is highest are calculated.
  • As the direction angle included in the collation result, for example, the inclination (rotation angle) θ of the input image 2 with respect to the reference line O1 indicating the direction angle of the template image 1 is obtained.
  • As the two-dimensional coordinate value included in the collation result, for example, the two-dimensional coordinate value (XY coordinate value) of the center position C1 of the input image 2 when the center position of the template image 1 is set as the origin O2 is obtained.
  • The image processing method of the present embodiment is used, for example, for industrial image processing for inspecting parts and the like.
  • The template image 1 is acquired, for example, by imaging the same type of component as the component to be inspected with an imaging unit such as a camera, and is prepared in advance as an image indicating the reference for the direction angle and the two-dimensional coordinate value.
  • The input image 2 is acquired by imaging the component to be inspected with an imaging unit such as a camera, and is the target against which the template image 1 is collated.
  • The specific processing content of the image processing method of this embodiment will now be described.
  • First, a process for generating a post-conversion template image group as a first image group by applying geometric transformation to the template image 1 is performed.
  • As the geometric transformation applied to the template image 1, for example, rotation that changes the direction angle and translation that changes the two-dimensional coordinate value are performed.
  • When rotation that changes the direction angle is applied at a 1° pitch, 360 geometrically transformed template images 1 (hereinafter referred to as "post-conversion template images") having direction angles that differ at a 1° pitch are obtained as the post-conversion template image group generated in this process.
  • When translation that changes the two-dimensional coordinate value is applied as the geometric transformation, for example for a template image 1 of 128 × 128 (pixels), the post-conversion template image group is generated by translating the center position (central two-dimensional coordinate value) of the template image 1 within a range of 8 × 8 (pixels) at a 1 (pixel) pitch.
  • In this case, 64 post-conversion template images whose center positions (central two-dimensional coordinate values) differ at a 1 (pixel) pitch are obtained as the post-conversion template image group generated by this process.
  • In this way, a first image group generation step is performed that generates a post-conversion template image group composed of a plurality of geometrically transformed template images 1 having different amounts of geometric transformation, by applying geometric transformation to the template image 1.
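  • The following is a minimal sketch of the first image group generation step in Python (NumPy/SciPy); the function name and the use of scipy.ndimage.rotate are illustrative assumptions, not part of the patent.

```python
import numpy as np
from scipy.ndimage import rotate

def build_rotated_template_group(template, pitch_deg=1.0):
    # Rotate the template at a fixed pitch; a 1 degree pitch yields
    # 360 post-conversion template images T(x, y; theta).
    angles = np.arange(0.0, 360.0, pitch_deg)
    group = np.stack([
        rotate(template, angle, reshape=False, order=1, mode="nearest")
        for angle in angles
    ])
    return angles, group  # group shape: (N, H, W)
```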
  • The amount of geometric transformation corresponds to the rotation amount of the image with reference to the direction angle of the template image 1 when the geometric transformation is rotation, and to the translation amount of the image with reference to the two-dimensional coordinate value of the template image 1 when the geometric transformation is translation. The amount of geometric transformation therefore corresponds to the magnitude of the value of the geometric transformation parameter.
  • The geometric transformation applied to the template image 1 may also be, in addition to rotation and translation, an image deformation such as image distortion, blur amount, depth of focus, or trapezoidal distortion.
  • Any one of the geometric transformations described above may be used, or a plurality of geometric transformations may be used in combination.
  • Next, a process of calculating eigenvalues and eigenfunctions by performing eigenvalue decomposition on the post-conversion template image group is performed.
  • In this process, the same number of eigenvalues and eigenfunctions as the post-conversion template images constituting the post-conversion template image group are calculated.
  • For example, when rotation is used as the geometric transformation and a post-conversion template image group consisting of 360 images is generated, eigenvalue decomposition is performed on that group, and 360 eigenvalues and eigenfunctions are obtained.
  • Likewise, when translation of the two-dimensional coordinate value is used as the geometric transformation applied to the template image 1 and a post-conversion template image group consisting of 64 images is generated, eigenvalue decomposition is performed on that group, and 64 eigenvalues and eigenfunctions are obtained.
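  • As a sketch of this step under the same assumptions, each post-conversion template image can be treated as one sample and the N × N covariance matrix over the transformation axis decomposed; the eigenvectors are then the eigenfunctions φ_n(θ) sampled at the N transformation amounts (cf. the covariance matrix of equation (9) described later).

```python
def eigen_decompose_group(group):
    # group: (N, H, W) stack of post-conversion template images.
    n = group.shape[0]
    flat = group.reshape(n, -1).astype(np.float64)
    flat -= flat.mean(axis=1, keepdims=True)  # subtract each image's average density
    cov = flat @ flat.T                       # N x N covariance matrix C
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]         # largest eigenvalue first
    return eigvals[order], eigvecs[:, order]  # columns: sampled phi_n(theta)
```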
  • In the present embodiment, eigenfunctions φ_1(θ) to φ_3(θ) represented by waveforms p1 to p3 are obtained, for example, as shown in FIG. 2.
  • These eigenfunctions can be regarded as the principal eigenfunctions.
  • In FIG. 2, the horizontal axis represents the direction angle (θ) with reference to the template image 1, and the vertical axis represents the eigenfunction value.
  • The waveform obtained as each eigenfunction is a continuous waveform in which eigenfunction values exist continuously with respect to the direction angle, and the eigenfunction value varies periodically as the direction angle changes.
  • In the image processing method of the present embodiment, these eigenfunctions are multiplied by correlation values (r_n, described later) to give functions (P_n(θ)) representing the image similarity.
  • In this way, an eigenvalue decomposition step is performed in which eigenvalue decomposition is applied to the post-conversion template image group, so that the same number of eigenvalues and eigenfunctions as the plurality of post-conversion template images are calculated.
  • Next, an eigenvalue decomposition template image group, which corresponds to the second image group in the present embodiment, is generated.
  • Each eigenvalue decomposition template image E1 to E3 corresponds to one of the waveforms p1 to p3 representing the eigenfunctions.
  • That is, after the eigenvalue decomposition step, a second image group generation step is performed in which the eigenvalue decomposition template image group is generated by performing the inner product operation of the plurality of eigenfunctions and the post-conversion template image group.
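  • A sketch of the second image group generation step, building on the helpers above: the inner product of each eigenfunction with the post-conversion group over θ, i.e. E_n(x, y) = Σ_θ T(x, y; θ) φ_n(θ), which is the verbal definition of equation (4) given below.

```python
def build_eigen_templates(group, eigvecs, m):
    # E_n(x, y): inner product of the post-conversion template images
    # with the n-th eigenfunction, for the first m eigenfunctions.
    n, h, w = group.shape
    flat = group.reshape(n, -1).astype(np.float64)
    eigen_templates = eigvecs[:, :m].T @ flat  # shape (m, H*W)
    return eigen_templates.reshape(m, h, w)
```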
  • Next, a process of calculating the image similarity between the input image 2 and the eigenvalue decomposition template image group is performed.
  • Let the template image 1 be T(x, y), the input image 2 be f(x, y), and the template image rotated by θ, that is, the post-conversion template image, be T(x, y; θ).
  • The image similarity g(θ) between the template image 1 and the input image 2 is calculated by equation (1). Note that x and y indicate the two-dimensional coordinate values of the pixels in each image.
  • E_n(x, y), defined by equation (4), is the image calculated by the inner product of the post-conversion template image T(x, y; θ) and the eigenfunction φ_n(θ), that is, the eigenvalue decomposition template image.
  • The image given by equation (4) can be calculated independently of the input image f(x, y). Therefore, by calculating E_n(x, y) in advance, once the input image f(x, y) is given, the image similarity (g(θ)) for all direction angles can be calculated from the result of matching with E_n(x, y).
  • The image similarity (g(θ)) for all direction angles is calculated, for example, as a waveform qs obtained by adding together the eigenfunctions φ_1(θ) to φ_3(θ) represented by the waveforms p1 to p3 at a predetermined ratio, as shown in FIG. 2.
  • The waveform obtained as the image similarity (g(θ)) for all direction angles is a continuous waveform (continuous curve) in which the image similarity exists continuously with respect to the direction angle. That is, the image similarity (g(θ)) represents the distribution of the image similarity along the direction angle (θ).
  • Specifically, the image similarity (g(θ)) is calculated as follows.
  • First, the eigenfunctions φ_1(θ) to φ_3(θ) are multiplied by the correlation values r_1 to r_3, respectively, to give the functions P_1(θ) to P_3(θ) representing the image similarity (P_n(θ) = r_n φ_n(θ)).
  • The functions P_1(θ) to P_3(θ) are the waveforms q1 to q3 in which the waveforms p1 to p3 of the eigenfunctions φ_1(θ) to φ_3(θ) are scaled by a predetermined ratio; this predetermined ratio corresponds to the correlation values r_1 to r_3.
  • In FIG. 2, the horizontal axis indicates the direction angle (θ) with reference to the template image 1, and the vertical axis indicates the image similarity.
  • The correlation value r_n, a predetermined real number representing the correlation between the input image and each eigenvalue decomposition template image of the eigenvalue decomposition template image group, is defined by equation (5).
  • In this way, an image similarity calculation step is performed in which the image similarity, represented by a continuous curve and indicating the degree of similarity between the input image 2 and the eigenvalue decomposition template image group, is calculated as the sum of the functions P_n(θ) (the waveform qs).
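  • Putting these definitions together, the similarity curve at one image position is the eigenfunction series weighted by the correlation values. In the sketch below, the plain inner products stand in for equations (1) and (5), whose exact normalization this text does not reproduce.

```python
def image_similarity_curve(input_patch, eigen_templates, eigvecs, m):
    # r_n: correlation of the input with each eigenvalue decomposition
    # template image (equation (5), up to normalization).
    f = input_patch.astype(np.float64).ravel()
    r = eigen_templates.reshape(m, -1) @ f
    # g(theta) = sum_n r_n * phi_n(theta): the continuous-curve image
    # similarity, sampled at the N direction angles.
    return eigvecs[:, :m] @ r
```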
  • In the image processing method, based on the image similarity (g(θ)) calculated as described above, a process of determining the value (a real value) of the geometric transformation parameter of the input image 2 with reference to the template image 1 is performed. That is, in this process the template image 1 and the input image 2 are collated using the calculated image similarity (g(θ)) as an index.
  • Specifically, the direction angle θa at which the image similarity is maximal is determined as the value of the geometric transformation parameter of the input image 2 with reference to the template image 1. That is, as the collation result of the image processing method of the present embodiment, the direction angle of the input image 2 with reference to the template image 1 is calculated as the direction angle θa.
  • The direction angle θa obtained as the matching result corresponds to the rotation angle of the image when the direction angle of the template image 1 is taken as the reference (0°). That is, within the range in which the template image 1 is rotated by 360°, the template image 1 rotated by the angle θa has the highest image similarity to the input image 2.
  • In this way, after the image similarity calculation step, a collation step is performed in which the value of the geometric transformation parameter of the input image 2 with reference to the template image 1 is determined based on the image similarity (g(θ)) calculated in the image similarity calculation step. In the collation step, the real value of the geometric transformation parameter is determined by exploiting the continuity of the image similarity, which follows from the continuity of the eigenfunctions.
  • In the image processing method of the present embodiment, a selection step is performed in which a predetermined number of eigenfunctions are selected in descending order of their eigenvalues.
  • As described above, the same number of eigenvalues and eigenfunctions as the plurality of post-conversion template images are calculated in the eigenvalue decomposition step. That is, when 360 post-conversion template images are obtained by applying rotation at a 1° pitch to the template image 1, 360 eigenvalues and eigenfunctions are calculated. In this case, if all the eigenfunctions were used in generating the eigenvalue decomposition template image group, the group would consist of 360 eigenvalue decomposition template images.
  • In the selection step, therefore, a predetermined number of eigenfunctions are selected from the eigenfunctions calculated in the eigenvalue decomposition step, which are equal in number to the post-conversion template images.
  • The eigenfunctions are selected starting from those with the largest eigenvalues. That is, in the selection step, a predetermined number of the eigenfunctions calculated in the eigenvalue decomposition step are selected from the larger-eigenvalue side. For example, when 360 eigenfunctions are calculated as described above, the 60 eigenfunctions with the largest eigenvalues are selected. This is based on the fact that an eigenfunction with a larger eigenvalue has a greater influence on (a higher contribution to) the image similarity between the input image 2 and the eigenvalue decomposition template image group that is finally calculated.
  • In the present embodiment, the eigenvalue decomposition template image group is then generated by performing the inner product operation of the plurality of eigenfunctions selected in the selection step and the post-conversion template image group.
  • That is, the eigenvalue decomposition template image group is generated using only a part of the eigenfunctions calculated in the eigenvalue decomposition step, namely the part with the larger eigenvalues.
  • Such a selection step corresponds to truncating the series of equation (3) above after a predetermined number of terms.
  • With this truncation, equation (3) becomes equation (7), and the image similarity for all angles is expressed using the M images E_n(x, y). As the method of selecting the eigenfunction sequence φ_n(θ), the eigenvalues λ_n and eigenfunctions φ_n(θ) of the post-conversion template image sequence T(x, y; θ) are calculated, and the eigenfunctions are selected in descending order of eigenvalue.
  • Here, the matrix C is the covariance matrix whose element in the i-th row and j-th column is given by equation (9), where μ_j is the average density value of T_j(x, y).
  • The image E_n(x, y) obtained using each eigenvector corresponds to an eigenvalue decomposition template image.
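  • The patent fixes the number M directly (for example 60 of 360). As an illustrative alternative for choosing M, the truncation point can be taken where the cumulative eigenvalues cover a given fraction of the total; this criterion is an assumption of the sketch, not part of the patent.

```python
def select_count(eigvals, fraction=0.99):
    # Truncate the series after the smallest M whose eigenvalues
    # (sorted in descending order) cover the given fraction of the total.
    c = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(c, fraction) + 1)
```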
  • As described above, the image processing method of the present embodiment can ensure accuracy equivalent to that of a conventional image matching technique using geometric transformation, such as the rotation collation method that performs matching while rotating a template image, with fewer collations, so that the processing time can be shortened.
  • Here, the rotation collation method, which is a conventional image collation technique, will be described with reference to FIG. 12.
  • In the rotation collation method, the image similarity to the input image 102 is calculated while the rotation angle (direction angle) and position (two-dimensional coordinate value) of the template image 101 are changed. The direction angle and two-dimensional coordinate value that maximize the image similarity are then detected as the direction angle and two-dimensional coordinate value of the input image 102.
  • In the graph G1 shown in the lower part of FIG. 12(a), the horizontal axis represents the direction angle of the template image 101, that is, the rotation angle of the rotated template image 101, and the vertical axis represents the image similarity between the template image 101 and the input image 102.
  • The graph G1 shows the change in the image similarity of the template image 101 to the input image 102 as the direction angle of the template image 101 changes.
  • The middle part of FIG. 12(a) shows the template image 101 rotating in accordance with the direction angle in the graph G1.
  • In this example, the template image 101 is rotated through 360° at a rotation pitch of 1° (only four images are shown in the figure for convenience).
  • The upper part shows the input image 102, which is the detection target in the rotation collation method.
  • In the rotation collation method, the direction angle P1 that maximizes the image similarity is detected as the direction angle of the input image 102. That is, according to the rotation collation method, the template image 101 is rotated at a predetermined rotation pitch, the rotation angle (direction angle) at which the template image 101 best approximates the input image 102 is searched for, and the direction angle and the like are thereby detected.
  • In the rotation collation method, to shorten the processing time, the matching may be performed at a coarse rotation pitch of the template image 101, and the image similarity between the sampled angles interpolated with a quadratic surface.
  • For example, as shown in FIG. 12(b), the template image 101 and the input image 102 are collated at nine direction angles R1 to R9 at a pitch of 36°. The image similarity is calculated at each of R1 to R9, and based on these values the image similarity in the ranges between them is interpolated by a quadratic surface or the like to obtain estimated values.
  • As a result, a graph G2 is obtained, and from this graph G2 the direction angle P2 with the maximum image similarity is detected as the estimated value of the direction angle of the input image 102.
  • However, the direction angle P2 obtained as an estimated value deviates from the direction angle P1 shown in FIG. 12(a), which was obtained with a relatively fine rotation pitch.
  • That is, an error due to the interpolation occurs, and the detection accuracy decreases.
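  • The coarse-pitch estimate described above amounts to fitting a quadratic through the best coarse sample and its neighbours; a sketch follows (a standard three-point parabola fit on periodic angle data, and its bias is exactly the interpolation error discussed here).

```python
def quadratic_peak(angles, similarities):
    # Parabola through the best coarse sample and its two neighbours;
    # returns the interpolated peak angle (the estimate P2).
    i = int(np.argmax(similarities))
    y0 = similarities[i - 1]                  # wraps at i = 0 (periodic data)
    y1 = similarities[i]
    y2 = similarities[(i + 1) % len(similarities)]
    offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return angles[i] + offset * (angles[1] - angles[0])
```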
  • As another conventional technique, in the image matching method shown in FIG. 13 (hereinafter referred to as the "conventional method"), a plurality of registered images 201 having different direction angles are registered in advance.
  • In the conventional method, four registered images 201a, 201b, 201c, and 201d are converted into feature vectors v_1 to v_4 by eigenvalue decomposition (principal component analysis).
  • Each feature vector is an information-compressed representation of a registered image.
  • The input image 202 is likewise converted into a feature vector v_in by eigenvalue decomposition. Then, for each of the feature vectors v_1 to v_4 based on the registered images 201, the difference from the feature vector v_in based on the input image 202 is obtained, and the registered image 201 for which this difference is minimal is detected.
  • In the example shown in FIG. 13, the registered image 201c having a direction angle of 45° is detected as the registered image 201 closest to the input image 202, and the direction angle of 45° of the registered image 201c is determined as the direction angle of the input image 202.
  • In contrast to conventional image matching techniques such as the rotation collation method and the conventional method, in which it is difficult to shorten the processing time and improve the detection accuracy at the same time, the image processing method of the present embodiment can shorten the processing time.
  • Moreover, the geometric transformation parameter is determined not from a comparison of feature vectors as in the conventional method described above, but from an image similarity that appears as a continuous waveform over the geometric transformation parameter to be determined, so highly accurate detection processing can be performed.
  • Furthermore, in the image processing method of the present embodiment, the selection step of selecting a part of the eigenfunctions in descending order of their eigenvalues further reduces the processing time while maintaining the detection accuracy. That is, in the selection step of the present embodiment, of the eigenfunctions equal in number to the post-conversion template images, only those with a large influence on (contribution to) the image similarity are used. Compared with using all the calculated eigenfunctions, sufficient accuracy can thus be ensured for the determined geometric transformation parameters, while the eigenfunctions that are not selected no longer need to be processed, so the processing can be made faster and the processing time further shortened.
  • To further improve the accuracy of the value of the geometric transformation parameter determined for the input image 2, the following method can be adopted.
  • As the number of post-conversion template images obtained by applying geometric transformation to the template image 1 increases, so does the continuity of the eigenfunctions. In the image processing method of this embodiment it is therefore preferable to increase the number of post-conversion template images without bound; that is, in the series of processes described above, the number of post-conversion template images constituting the post-conversion template image group is mathematically taken to infinity.
  • In this continuous formulation, the kernel K(s, t) is given by equation (11), and equation (10) is transformed into a differential equation problem.
  • In this case, the eigenfunctions become continuous functions in the true sense.
  • Each such continuous function can be approximated by a polynomial and expressed analytically.
  • Consequently, the image similarity function (the image similarity g), which is a weighted sum of the eigenfunctions using the correlation values, is also expressed by a polynomial.
  • The maximum of an image similarity function expressed as a polynomial can be found by differentiating the polynomial and locating the positions where the derivative becomes zero. In this way, by increasing the number of post-conversion template images to infinity, the estimated value of the parameter can be determined analytically, and the estimation accuracy of the value of the geometric transformation parameter can be greatly increased.
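  • A sketch of this analytic maximization: approximate the sampled g(θ) by a polynomial, differentiate it, and keep the best real zero of the derivative. The polynomial degree here is an illustrative assumption.

```python
def analytic_peak(angles, g, degree=8):
    # Fit g(theta) with a polynomial, then locate the zeros of its
    # derivative and return the critical point with the largest value.
    coeffs = np.polyfit(angles, g, degree)
    crit = np.roots(np.polyder(coeffs))
    crit = crit[np.isreal(crit)].real
    crit = crit[(crit >= angles.min()) & (crit <= angles.max())]
    return crit[np.argmax(np.polyval(coeffs, crit))]
```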
  • As described above, in the image processing method of the present embodiment, it is preferable that the geometric transformation parameters determined for the input image 2 include a direction angle and a two-dimensional coordinate value, and that the geometric transformation applied to the template image 1 include rotation that changes the direction angle and translation that changes the two-dimensional coordinate value.
  • The direction angle and the two-dimensional coordinate value are the geometric transformation parameters used relatively frequently in industrial image matching processing, and for these two-dimensional geometric transformation parameters the image processing method of this embodiment can be performed directly, so high versatility is obtained.
  • In the image processing method of the present embodiment, the geometric transformation parameters determined for the input image 2 may include the parameters necessary for projective transformation, and the geometric transformation applied to the template image 1 may include projective transformation.
  • The projective transformation here is a general projective transformation including affine transformation, and converts the coordinates (x, y) of the original image into the coordinates (x', y') on the projection plane by a predetermined transformation formula. The parameters necessary for the projective transformation are accordingly the coefficients in the general formula of the projective transformation.
  • When projective transformation is used as the geometric transformation applied to the template image 1, the post-conversion template image group is generated by applying projective transformation to the template image 1 in the first image group generation step. In the collation step, the values of the coefficients in the general formula of the projective transformation are then determined as the values of the geometric transformation parameters of the input image 2.
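  • A sketch of generating one projectively transformed template: warp by a 3 × 3 homography H, whose eight free coefficients are the geometric transformation parameters referred to above. The inverse-mapping implementation is an assumption of the sketch, not taken from the patent.

```python
from scipy.ndimage import map_coordinates

def projective_warp(image, H):
    # (x', y', w')^T = H (x, y, 1)^T defines the projective transform;
    # inverse-map each output pixel through H^-1 and sample the image.
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    u, v, s = np.linalg.inv(H) @ pts
    coords = np.stack([(v / s).reshape(h, w), (u / s).reshape(h, w)])
    return map_coordinates(image, coords, order=1, mode="nearest")
```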
  • In the image processing method of the present embodiment, the geometric transformation parameters determined for the input image 2 may include the parameters necessary for an image blurring process, and the geometric transformation applied to the template image 1 may include the image blurring process.
  • The image blurring process is a process of blurring the image, and is performed, for example, as follows.
  • In the image blurring process, for example, the average of the colors (color values) of the 3 × 3 pixels (9 pixels) centered on the pixel (x, y) to be processed is adopted as the color (color value) of the pixel to be processed. Such processing is also referred to as smoothing.
  • The image blurring process is performed, for example, for each of the R (red), G (green), and B (blue) components constituting the pixel color. When the average of the colors of the 3 × 3 pixels is adopted as the color of the pixel to be processed as described above, the weights of the pixels in the averaging of the 9 pixels are all the same ("1").
  • Alternatively, the weight of each of the surrounding 8 pixels may be doubled with respect to the pixel to be processed, or the average of the colors of only the four pixels above, below, left, and right of the pixel to be processed may be adopted as its color.
  • In the former case, the weight of the pixel to be processed is "1" and the weight of each surrounding pixel is "2".
  • In the latter case, the weight of each of the pixels above, below, left, and right of the pixel to be processed is "1", and the weight of each of the remaining 5 pixels is "0".
  • The weight of each pixel in the averaging performed in the image blurring process is thus a parameter necessary for the image blurring process.
  • The number of pixels (N × N) in the range averaged to give the color of the pixel to be processed is also a parameter necessary for the image blurring process; the greater the value of N, the greater the degree of image blur. The parameters necessary for such an image blurring process are included in the geometric transformation parameters determined for the input image 2.
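  • A sketch of the weighted-average blurring described above, with the three weight patterns from the text as kernels (single-channel; for RGB the filter would be applied per component):

```python
from scipy.ndimage import convolve

def blur(image, kernel):
    # Weighted average of each pixel's neighbourhood; the kernel
    # weights are the parameters of the image blurring process.
    k = np.asarray(kernel, dtype=np.float64)
    return convolve(image.astype(np.float64), k / k.sum(), mode="nearest")

box3x3  = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]  # uniform 3 x 3 average
doubled = [[2, 2, 2], [2, 1, 2], [2, 2, 2]]  # surrounding pixels weighted "2"
cross   = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # four-neighbour average only
```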
  • When the image blurring process is used as the geometric transformation applied to the template image 1, the post-conversion template image group is generated by applying the image blurring process to the template image 1 in the first image group generation step.
  • In the collation step, the values of the parameters necessary for the image blurring process as described above are then determined as the values of the geometric transformation parameters of the input image 2.
  • The image blurring process is not limited to the process described above, and a well-known image blurring process can be employed as appropriate.
  • When the imaging direction of the imaging unit such as a camera is not perpendicular to the imaging plane set on the inspection object, the part to be inspected may appear distorted into a trapezoid (so-called trapezoidal distortion).
  • Such trapezoidal distortion occurring in the input image 2 causes detection errors.
  • The input image 2 may also include "blur" caused by the distance between the imaging means such as a camera and the inspection object.
  • Ordinarily, focusing is adjusted by moving the camera or the lens, for example by autofocus processing.
  • However, adjustment involving movement of the camera and lens is a mechanical process, so the processing time becomes long.
  • In practice, the input image 2 suffers, in addition to noise and geometric deformation such as rotation of the inspection object, image blurring caused by a focus shift of the lens of the imaging unit.
  • When image collation using the template image 1 is performed on such a blurred input image 2, the positional deviation becomes large. That is, when the input image 2 is blurred, matching an unblurred template image 1 against the blurred input image 2 does not succeed, and a positional deviation occurs.
  • Such misalignment in the collation result is a serious problem.
  • In the image processing method of the present embodiment, a template image 1 that has been intentionally blurred is used for image matching, so that even if the input image 2 is blurred, the input image 2 and the template image 1 can be matched. The problem of misalignment due to blurring of the input image 2 can thereby be avoided. That is, the matching accuracy can be increased by adding blur to the template image 1 in accordance with the blurred input image 2.
  • Conventionally, the image blurring process has mainly been used for noise removal.
  • In the present embodiment, by contrast, the blurring process is applied to the template image 1 that is the target of matching with the input image 2, in order to create a template image 1 to which blur has been added.
  • The image processing method of the present embodiment is characterized by this use of the image blurring process.
  • Specifically, a group of template images with various amounts of blur is used as the post-conversion template image group, and these are compressed using eigenvalue decomposition, which reduces the calculation time as described above. That is, since the post-conversion template images are information-compressed by eigenvalue decomposition, an increase in calculation time can be suppressed even if there are many variations of image blur.
  • The image processing apparatus 10 of this embodiment includes a computer 11 and a camera 12.
  • The image processing apparatus 10 images an IC chip 20 as a work with the camera 12, and performs image collation in the computer 11 between the captured image, used as the input image, and a template image prepared in advance.
  • The image processing apparatus 10 according to the present embodiment detects, as the image matching result, the direction angle and two-dimensional coordinate value of the input image with reference to the template image.
  • On the basis of this result, the IC chip 20 picked by an arm 17 from an IC chip supply tray 16 accommodating a plurality of IC chips 20 is positioned at a predetermined position on an IC substrate.
  • The arm 17 is a component-extracting mechanism, such as a robot arm or a picking arm, that constitutes part of a device that transports the IC chip 20, for example.
  • The positioning of the IC chip 20 is performed as follows. First, a predetermined IC chip 20 in the IC chip supply tray 16 is picked by the arm 17 (see arrow A1). The picked IC chip 20 is imaged by the camera 12 while being held by the arm 17. The captured image of the IC chip 20 is transmitted to the computer 11 as the input image, and image collation is performed using the template image.
  • The direction angle and two-dimensional coordinate value of the input image obtained as the image collation result by the image processing apparatus 10 correspond to the angle and position of the IC chip 20 being imaged. Therefore, when positioning the IC chip 20, the arm 17 holding the IC chip 20 is rotated and translated based on the result of the image collation by the image processing apparatus 10. After the arm 17 is rotated and translated, the IC chip 20 is positioned at the predetermined position on the IC substrate as the arm 17 moves vertically.
  • The computer 11 includes an arithmetic control unit 13, an input unit 14, and a display unit 15.
  • The arithmetic control unit 13 controls the series of operations of the image processing apparatus 10.
  • The arithmetic control unit 13 includes a storage unit that stores programs and the like, a development unit that expands them, a calculation unit that performs predetermined calculations according to the programs, and a storage unit that stores the calculation results.
  • Specifically, the arithmetic control unit 13 may be a configuration in which a CPU, ROM, RAM, HDD, and the like are connected by a bus, or a configuration comprising a one-chip LSI or the like.
  • As the arithmetic control unit 13, in addition to a dedicated product, a commercially available personal computer, workstation, or the like storing the above programs may be used.
  • The input unit 14 is connected to the arithmetic control unit 13 and inputs various information and instructions related to the image processing to the arithmetic control unit 13. For example, a keyboard, mouse, pointing device, button, or switch is used as the input unit 14.
  • The display unit 15 is connected to the arithmetic control unit 13 and displays the operation status of the image processing, the content input from the input unit 14 to the arithmetic control unit 13, the results of the image processing, and the like.
  • As the display unit 15, a liquid crystal display, a CRT (cathode ray tube), or the like is used.
  • The camera 12 is, for example, a CCD camera, and is connected to the computer 11 via a connection cable.
  • The camera 12 moves relative to the IC chips 20 in the IC chip supply tray 16 and images the IC chip 20 to be picked by the arm 17.
  • The camera 12 images the IC chip 20 from above and captures a planar image of the IC chip 20.
  • The imaging data of the IC chip 20 captured by the camera 12 is sent to the computer 11 and input to the arithmetic control unit 13 as an input image 22 (see FIG. 5).
  • In the image processing apparatus 10, the camera 12 thus functions as the input image acquisition means for acquiring the input image 22.
  • The arithmetic control unit 13 provided in the computer 11 functions as the image processing means that determines the value of the geometric transformation parameter of the input image 22 with reference to a predetermined template image 21 by performing collation using the template image 21. The programs stored in the storage unit of the arithmetic control unit 13 therefore include a program for performing image matching between the template image 21 and the input image 22 captured by the camera 12.
  • As the storage unit, a storage device built into the computer 11, such as a RAM, or a storage device such as a CD (Compact Disc), FD (Flexible Disk), MO (Magneto-Optical disk), DVD (Digital Versatile Disc), or HD (Hard Disk) is used as appropriate.
  • The arithmetic control unit 13 includes an eigenvalue decomposition parameter image creation processing unit 18 and a detection processing unit 19.
  • The eigenvalue decomposition parameter image creation processing unit 18 generates a post-conversion template image group based on the template image 21 stored in advance, performs eigenvalue decomposition on the generated group, and creates an eigenvalue decomposition template image group from the calculated eigenfunctions and the post-conversion template image group.
  • The detection processing unit 19 calculates the correlation values between the input image 22 acquired by the camera 12 and the eigenvalue decomposition template image group generated by the eigenvalue decomposition parameter image creation processing unit 18.
  • The detection processing unit 19 then calculates the image similarity from the calculated correlation values, and determines the value of the geometric transformation parameter of the input image 22 with reference to the template image 21 based on the calculated image similarity.
  • The arithmetic control unit 13 includes a template image storage unit 30, a post-conversion template image generation unit 31, an eigenvalue decomposition unit 32, an eigenfunction selection unit 33, an eigenvalue decomposition template image generation unit 34, an image similarity calculation unit 35, and a collation unit 36.
  • The template image storage unit 30 stores the template image 21 (see FIG. 5).
  • The template image 21 is obtained by imaging the same type of component as the IC chip 20 imaged by the camera 12, from the same angle as the camera 12.
  • The template image 21 is the target against which the input image 22 is collated, as the image indicating the reference for the direction angle and the two-dimensional coordinate value in the step of positioning the IC chip 20 on the IC substrate.
  • The template image 21 is created so that the portion (size) occupied by the IC chip 20 in the image is approximately the same as in the input image 22 acquired by the camera 12.
  • In other words, the imaging position of the IC chip 20 is set so that the portion occupied by the IC chip 20 in the acquired input image 22 is the same as the portion occupied by the IC chip 20 in the template image 21.
  • Such a template image 21 is set and stored in advance in the template image storage unit 30 of the arithmetic control unit 13.
  • The post-conversion template image generation unit 31 performs the process corresponding to the first image group generation step of the image processing method described above. That is, the post-conversion template image generation unit 31 generates a post-conversion template image group as the first image group by applying geometric transformation to the template image 21 stored in the template image storage unit 30.
  • In the present embodiment, the post-conversion template image generation unit 31 applies, as the geometric transformation to the template image 21, rotation that changes the direction angle and translation that changes the two-dimensional coordinate value. In the image processing apparatus 10, the post-conversion template image generation unit 31 thus functions as the first image group generation unit that generates a post-conversion template image group composed of a plurality of post-conversion template images having different amounts of geometric transformation, by applying geometric transformation to the template image 21.
  • The eigenvalue decomposition unit 32 performs the process corresponding to the eigenvalue decomposition step of the image processing method described above. That is, the eigenvalue decomposition unit 32 calculates eigenvalues and eigenfunctions by performing eigenvalue decomposition on the post-conversion template image group.
  • The eigenvalue decomposition unit 32 calculates the same number of eigenvalues and eigenfunctions as the post-conversion template images constituting the post-conversion template image group. In the image processing apparatus 10, the eigenvalue decomposition unit 32 thus performs eigenvalue decomposition on the post-conversion template image group to calculate the same number of eigenvalues and eigenfunctions as the plurality of post-conversion template images.
  • The eigenfunction selection unit 33 performs the process corresponding to the selection step of the image processing method described above. That is, in the image processing apparatus 10, the eigenfunction selection unit 33 functions as the selection unit that selects a predetermined number of eigenfunctions in descending order of eigenvalue from among the plurality of eigenfunctions calculated by the eigenvalue decomposition unit 32. In other words, the eigenfunction selection unit 33 selects the eigenfunctions on the larger-eigenvalue side from among the plurality of eigenfunctions calculated by the eigenvalue decomposition unit 32.
  • The eigenvalue decomposition template image generation unit 34 performs the process corresponding to the second image group generation step of the image processing method described above. That is, the eigenvalue decomposition template image generation unit 34 generates an eigenvalue decomposition template image group by performing the inner product operation of the eigenfunctions calculated by the eigenvalue decomposition unit 32 and the post-conversion template image group generated by the post-conversion template image generation unit 31.
  • The eigenvalue decomposition template image generation unit 34 thus functions as the second image group generation unit that generates the eigenvalue decomposition template image group by performing the inner product operation of the plurality of eigenfunctions and the post-conversion template image group.
  • In the present embodiment, the eigenvalue decomposition template image generation unit 34 generates the eigenvalue decomposition template image group by performing the inner product operation of the plurality of eigenfunctions selected by the eigenfunction selection unit 33 and the post-conversion template image group.
  • The image similarity calculation unit 35 performs the process corresponding to the image similarity calculation step of the image processing method described above. That is, the image similarity calculation unit 35 calculates the image similarity (g(θ)) between the input image 22 and the eigenvalue decomposition template image group based on equations (1) to (9) above. In the image processing apparatus 10, the image similarity calculation unit 35 thus calculates the image similarity, represented by a continuous curve, indicating the degree of similarity between the input image 22 and the eigenvalue decomposition template image group.
  • The collation unit 36 performs the process corresponding to the collation step of the image processing method described above. That is, the collation unit 36 determines the value of the geometric transformation parameter of the input image 22 with reference to the template image 21, based on the image similarity (g(θ)) calculated by the image similarity calculation unit 35.
  • The collation unit 36 collates the template image 21 and the input image 22 using the image similarity calculated by the image similarity calculation unit 35 as an index, and thus functions as the collation unit that determines the value of the geometric transformation parameter based on that image similarity.
  • Specifically, the collation unit 36 determines the direction angle and two-dimensional coordinate value that maximize the image similarity as the values of the geometric transformation parameters of the input image 22 with reference to the template image 21.
  • The direction angle and two-dimensional coordinate value of the input image 22 are thereby obtained as the matching result.
  • As the direction angle included in the collation result, for example, the inclination (rotation angle) θ of the input image 22 with respect to the reference line O10 indicating the direction angle of the template image 21 is obtained.
  • As the two-dimensional coordinate value included in the collation result, for example, the two-dimensional coordinate value (XY coordinate value) of the center position C10 of the input image 22 when the center position of the template image 21 is taken as the origin O20 is obtained.
  • First, the template image 21 is input (S10).
  • In step S10, a template image 21 obtained by imaging the IC chip 20 to be inspected by a predetermined method is input to the template image storage unit 30 and stored in advance.
  • Next, a post-conversion template image group is generated (S20).
  • Next, a plurality of eigenvalues and eigenfunctions are calculated (S30).
  • In step S30, the eigenvalue decomposition unit 32 calculates eigenvalues and eigenfunctions by performing eigenvalue decomposition on the post-conversion template image group (the N post-conversion template images).
  • Here, the same number (N) of eigenvalues and eigenfunctions as the post-conversion template images are calculated by the eigenvalue decomposition unit 32. Therefore, when 360 post-conversion template images are generated as described above, 360 eigenvalues and eigenfunctions are calculated.
  • Next, eigenfunctions are selected based on the magnitude of the eigenvalues (S40).
  • In step S40, the eigenfunction selection unit 33 selects a predetermined number (M) of the eigenfunctions having the largest eigenvalues from among the N eigenfunctions calculated by the eigenvalue decomposition unit 32. For example, when 360 (N) eigenfunctions are calculated as described above, the 60 (M) eigenfunctions having the largest eigenvalues are selected. In the example shown in FIG. 2, a group of M eigenfunctions φ_1(θ), φ_2(θ), ..., φ_M(θ) is selected from the N eigenfunctions φ_n(θ).
  • Next, an eigenvalue decomposition template image group is generated (S50).
  • In step S50, based on equation (4) above, the eigenvalue decomposition template image generation unit 34 performs the inner product operation of the M eigenfunctions selected by the eigenfunction selection unit 33 and the post-conversion template image group generated by the post-conversion template image generation unit 31, and generates the eigenvalue decomposition template image group (M eigenvalue decomposition template images).
  • In the example shown in FIG. 2, an eigenvalue decomposition template image group composed of the eigenvalue decomposition template images E1 to E3 (E_1(x, y) to E_3(x, y)) is obtained.
  • Next, the input image 22 is acquired (S60).
  • In step S60, the IC chip 20 picked and held by the arm 17 is imaged by the camera 12, and the imaging data is input to the arithmetic control unit 13 of the computer 11, whereby the input image 22 is acquired and input.
  • Next, the position of the input image 22 is set (S70).
  • In step S70, a position in the image, that is, the two-dimensional coordinate value (x, y) of a pixel, is specified for the input image 22 acquired in step S60.
  • Next, the correlation values r_n(x, y) between the input image 22 and each of the M eigenvalue decomposition template images at the position (x, y) of the input image 22 are calculated (S80).
  • In step S80, the correlation values r_n are calculated by equation (5) above.
  • Next, the image similarity between the input image 22 and the eigenvalue decomposition template image group at the position (x, y) of the input image 22 is calculated (S90).
  • In step S90, the image similarity calculation unit 35 calculates the image similarity (g(θ, x, y)) of the eigenvalue decomposition template image group at the position (x, y) of the input image 22 set in step S70, according to equations (3) and (6) above.
  • The calculation of the correlation values r_n and of the image similarity g is performed for all the two-dimensional coordinate values of the input image 22. That is, positions (x, y) are specified for all pixels of the input image 22, and the calculation of the correlation values r_n and of the image similarity g is performed at each of them.
  • In step S70, the position (x, y) of the input image 22 is set starting at the upper-left pixel 22a of the input image 22 and ending at the lower-right pixel 22b, shifting one pixel at a time toward the right in each row, and the correlation values rn and the image similarity g are calculated for each pixel. That is, in the flowchart, the setting of the position of the input image 22 (S70), the calculation of the correlation values rn (S80), and the calculation of the image similarity g (S90) are scanned from the start-point pixel 22a to the end-point pixel 22b and performed for all pixels (S100).
  • In other words, the calculation of the correlation values rn (S80) and the calculation of the image similarity g (S90) are repeated for every pixel of the input image 22 in sequence along the pixel arrangement (S70 to S100).
  • Next, the direction angle (θ) and the position (x, y) at which the image similarity g(θ, x, y) is maximized are determined as the direction angle (θ) and the two-dimensional coordinate value (x, y) of the input image 22 (S110).
  • That is, the direction angle (θ) of the input image 22 with the template image 21 as a reference and the two-dimensional coordinate value (x, y) of its center position are determined. The direction angle (θ) and the two-dimensional coordinate value (x, y) of the center position at which the image similarity is maximized are determined as the values (real values) of the geometric transformation parameters of the input image 22.
  • In step S110, specifically, the following processing is performed. The calculation results of the image similarity g for all the pixels of the input image 22 in steps S70 to S100, that is, the image similarity for all positions (x, y) and all direction angles (θ), are stored in a predetermined portion of the arithmetic control unit 13 as an image similarity table 50, which is three-dimensional array data having θ, x, and y as axes.
  • The image similarity table 50 represents the three-dimensional distribution of high and low image similarity in three-dimensional coordinates with θ, x, and y as axes.
  • Then, the image similarity table 50 is referred to, and the direction angle (θ) and the position (x, y) at which the image similarity g(θ, x, y) is maximized are adopted as the direction angle (θ) and the two-dimensional coordinate value (x, y) of the input image 22.
  • The direction angle (θ) determined in step S110 corresponds to the inclination (rotation angle) θ of the input image 22 with respect to the reference line O10 indicating the direction angle of the template image 21, as shown in FIG. Likewise, the two-dimensional coordinate value (x, y) determined here corresponds to the two-dimensional coordinate value of the center position C10 of the input image 22 when the center position of the template image 21 is taken as the origin O20, as shown in FIG.
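  • Putting steps S70 to S110 together and reusing the helper functions sketched above, the scan and the search for the maximum of the image similarity table might look as follows; this is again a sketch, not the code of the embodiment.

    def detect(input_image, E, psi, angles):
        # S70-S100: scan every pixel position and fill the image
        # similarity table 50 with g(theta, x, y);
        # S110: take the parameters at which the table is maximized.
        M, H, W = E.shape
        ys = input_image.shape[0] - H + 1
        xs = input_image.shape[1] - W + 1
        table = np.empty((len(angles), ys, xs))
        for y in range(ys):
            for x in range(xs):
                r = correlations_at(input_image, E, x, y)
                table[:, y, x] = similarity_curve(r, psi)
        k, y, x = np.unravel_index(np.argmax(table), table.shape)
        return angles[k], x, y              # direction angle and position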
  • In this way, based on the image similarity (g(θ)) represented by the waveform qs calculated as described above, the direction angle θa that maximizes the image similarity g is obtained with the template image 1 as a reference.
  • Steps S10 to S50, which are the processes for creating the eigenvalue decomposition template image group based on the template image 21 stored in advance, are performed by the eigenvalue decomposition parameter image creation processing unit 18. Steps S60 to S110, in which the image similarity is calculated based on the eigenvalue decomposition template image group and the input image and the direction angle and the two-dimensional coordinate value are determined, are performed by the detection processing unit 19.
  • When positioning a plurality of IC chips 20, the generation of the eigenvalue decomposition template image group by the eigenvalue decomposition parameter image creation processing unit 18 is performed once in advance. The processing for determining the direction angle and the two-dimensional coordinate value by the detection processing unit 19 is then performed repeatedly, for each IC chip 20 to be positioned, using the eigenvalue decomposition template image group generated in advance.
  • In the above description, the direction angle and the two-dimensional coordinate value are used as the geometric transformation parameters determined for the input image 22. However, as described above, projective transformation or image blurring processing may be used as the geometric transformation applied to the template image 21, in which case the parameters required for the projective transformation or the image blurring processing are used as the geometric transformation parameters.
  • In the experiment, a 128 × 128 pixel image was used as the template image, and a template image group consisting of a total of 360 post-conversion template images was generated by rotating the template image at a pitch of 1°.
  • Test images were obtained by applying to the above-described template image a positional shift of −0.5 to 0.5 pixels in the x and y directions and a rotation of 0 to 180°. A total of 43,560 test images were generated by changing the positional shift at a 0.1-pixel pitch and the rotation at a 0.5° pitch.
  • The rotation matching method was adopted as a comparative example for the proposed method. In the rotation matching method, the template image was collated while being rotated through a total of 360° at a pitch of 6°. Therefore, in both the proposed method and the rotation matching method, collation is performed 60 times per search, so the amount of calculation is the same for the two methods.
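  • For reference, a minimal sketch of the rotation matching baseline as it is compared here, assuming the same rotate helper as above (the comparative implementation itself is not reproduced in this document):

    def rotation_matching_angle(patch, template, pitch_deg=6.0):
        # Baseline: exhaustively correlate the patch with the template
        # rotated at a fixed pitch (360 / 6 = 60 collations per search)
        # and keep the best-scoring angle.
        best_angle, best_score = 0.0, float("-inf")
        for a in np.arange(0.0, 360.0, pitch_deg):
            t = rotate(template, a, reshape=False, order=1)
            score = float(t.ravel() @ patch.ravel())
            if score > best_score:
                best_angle, best_score = a, score
        return best_angle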
  • FIG. 8 shows the relationship between the direction angle of the test image and the errors in the direction angle and the positional deviation. In the plot for the direction angle, the horizontal axis represents the direction angle of the test image and the vertical axis represents the error in the direction angle; in the plot for the positional deviation, the horizontal axis likewise represents the direction angle of the test image and the vertical axis represents the error in the positional deviation.
  • The graph drawn with a solid line shows the data for the proposed method (Proposed), and the graph drawn with a broken line shows the data for the rotation matching method (Rotational matching).
  • For both the direction angle error and the positional deviation error, the error value at each angle of the test image (the value on the horizontal axis), rotated from 0° to 180° at a 1° pitch, is plotted on the vertical axis as the average of the errors over the positions at a 0.1-pixel pitch in the range of −0.5 to 0.5 pixels.
  • Although the number of collations is 60 for both methods and the processing time is therefore the same, the error of the proposed method is far smaller. Specifically, the direction angle error averaged 0.03° for the proposed method against 0.36° for the rotation matching method, and the positional deviation error averaged 0.04 pixels for the proposed method against 0.09 pixels for the rotation matching method.
  • FIG. 9 shows the relationship between the number of collations and the error: the horizontal axis represents the number of collations required for one search, and the vertical axis represents the estimated direction angle error.
  • The number of collations corresponds to the truncation order M in the case of the proposed method (see equation (7)); in the case of the rotation matching method, it is determined by the rotation pitch and equals 360° divided by the rotation pitch.
  • The graph indicated by a solid line shows the data for the proposed method, namely the average direction angle error when the order M is varied from 10 to 360. The graph indicated by a broken line shows the data for the rotation matching method, namely the average direction angle error when the rotation pitch is varied.
  • As comparative examples, the phase-only correlation method (see Non-Patent Documents 4 to 6) was adopted in addition to the rotation matching method.
  • FIGS. 10(a) and 10(b) show the estimation errors of the direction angle and the positional deviation for the proposed method, the rotation matching method, and the phase-only correlation method. FIG. 10(a) shows the estimation error of the direction angle (°) when white noise is added, and FIG. 10(b) shows the estimation error of the positional deviation amount (pixels) when white noise is added.
  • FIGS. 10(a) and 10(b) show that, among the proposed method, the rotation matching method, and the phase-only correlation method, the proposed method has the smallest error values and the best detection accuracy.
  • The phase-only correlation method has a small positional deviation error and relatively good accuracy, but its direction angle accuracy cannot be said to be better than that of the other two methods. Furthermore, the phase-only correlation method gave relatively good results when the noise was small, but its performance deteriorated significantly as the noise increased. This is considered to be because the phase-only correlation method includes a process of dividing each frequency component by its spectral intensity, so that high-frequency noise is amplified and adversely affects the matching. These experimental results show that the proposed method achieves high accuracy compared with the rotation matching method and the phase-only correlation method, with an estimation accuracy that is hardly affected by the magnitude of the noise.
  • Next, an example in which the present invention is applied to SIFT (Scale-Invariant Feature Transform) will be described. SIFT is used in fields such as AR (augmented reality). In SIFT, DOG (Difference of Gaussian) processing is performed: Gaussian filters with different amounts of blur are applied to the input image to create a plurality of blurred images, and feature points are detected from these image groups (DOG image groups). In this way, feature points that are robust to scale changes can be detected. In SIFT, increasing the number of blurred images improves the detection accuracy but also increases the amount of calculation.
  • Here, the operation of blurring an image can be performed by collating the image with a template image given by a Gaussian function. Therefore, by using for the above-mentioned DOG processing an eigenvalue decomposition template image group obtained by compressing a template image group of Gaussian functions with different amounts of blur, in the same manner as the eigenvalue decomposition template images of the above-described embodiment, the amount of calculation can be reduced while the detection accuracy is maintained.
  • Moreover, since the Gaussian-function template image can be described continuously with respect to the blur amount parameter, the eigenvalue decomposition template image can be described by an analytical expression by solving the problem of equation (11). Because the eigenvalue decomposition template image is expressed analytically, the peak position in the DOG image group can also be obtained analytically.
  • Accordingly, the method of the present invention can detect feature points at high speed and with high accuracy, free of numerical errors.
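  • A hedged sketch of this idea, with a numerical SVD standing in for the analytical solution of equation (11); the kernel size, the sigma values, and the function names are assumptions for illustration.

    def gaussian_kernel(sigma, size=21):
        # A template image given by a Gaussian function with blur sigma.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
        return k / k.sum()

    def gaussian_eigentemplates(sigmas, M=5, size=21):
        # Compress the family of Gaussian templates over the blur
        # parameter, exactly as the rotated templates were compressed
        # over the angle parameter.
        X = np.stack([gaussian_kernel(s, size).ravel() for s in sigmas])
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        psi = U[:, :M]                      # eigenfunctions over sigma
        E = (psi.T @ X).reshape(M, size, size)
        return psi, E

  • Convolving the input image once with each of the M eigen-templates and recombining the responses with the eigenfunctions then approximates the whole blurred image group, from which the DOG images and their peak positions follow.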
  • FIG. 11 shows the results of feature point detection in this example. Feature points are detected for an input image 301 as shown in FIG. 11(a). If the feature points are detected correctly, a total of 16 feature points 305 are detected, as shown in the image 304 in FIG. 11(d).
  • In FIG. 11, each detected feature point is represented by a mark in which an × is enclosed in a circle.
  • In the image 302 shown in FIG. 11(b), there are a total of eight feature points 305a; these are the feature points detected by creating five DOG images with conventional SIFT processing.
  • In the image 303 shown in FIG. 11(c), there are a total of 12 feature points 305b; these are the feature points detected by creating 15 DOG images. These detection results show that the five DOG images normally used in SIFT do not provide good detection accuracy, and that not all feature points can be detected even when the number of DOG images is increased to 15.
  • The result of detecting the feature points by the method according to the present invention is the image 304 shown in FIG. 11(d). Here, the number of eigenvalue decomposition template images is five, so the amount of calculation is the same as for the image 302 shown in FIG. 11(b). Nevertheless, when the method according to the present invention is used, all 16 feature points 305 are correctly detected. This detection result shows that the method of the present invention can correctly detect all feature points without increasing the number of eigenvalue decomposition template images or the amount of calculation.
  • As another modification, instead of performing the eigenvalue decomposition on the gray values of the template image group, the template image and the input image may first be converted into edge direction images or increment sign images, after which the template image group is converted into an eigenvalue decomposition template image group according to the present invention and image collation is performed. That is, by performing image matching in consideration of the edge direction, the matching becomes robust against fluctuations in brightness and against occlusion, such as part of the object being hidden.
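  • A minimal sketch of this preprocessing, assuming a simple gradient-orientation encoding (the present description does not fix a particular formula):

    def edge_direction_image(image):
        # Replace each gray value with the local gradient orientation,
        # so that matching compares edge directions rather than brightness.
        gy, gx = np.gradient(image.astype(float))
        return np.arctan2(gy, gx)

  • The post-conversion template image group is then built from such images and compressed into an eigenvalue decomposition template image group in the same way as before.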
  • As described above, the present invention proposes a new method for rapidly detecting, from an input image, geometric transformation parameters such as the position and direction angle that match a template image.
  • The proposed method calculates a correlation value waveform by collating the input image with images obtained by eigenvalue decomposition of the template image. That is, the present invention performs eigenvalue decomposition on a template image group for image matching, extracts the principal component information, and calculates from it a continuous similarity curve for estimating the similarity of images, thereby shortening the matching time and improving the collation accuracy.
  • In other words, the method can be regarded as a compressed template matching method that estimates the geometric difference (the geometric transformation parameter values) between an inspection target image (input image) and a template image, without loss of accuracy, by using a new set of template images whose number has been reduced by eigenvalue decomposition.
  • In the present invention, the second image group (eigenvalue decomposition template image group) is generated by performing an inner product operation between the eigenfunctions calculated by eigenvalue decomposition and the first image group (post-conversion template image group), which is the original template image group; the eigenfunctions are thus used twice, once to generate the second image group and once to calculate the image similarity with the input image. Thereby, an image similarity represented by a continuous curve is obtained, and the values of the geometric transformation parameters can be estimated with a high accuracy that the conventional methods cannot reach.
  • Applications of the present invention include the inspection and measurement of various machine parts such as automobile parts, biometrics (fingerprint authentication), the positioning of semiconductor wafers and IC substrates that require high-precision alignment (semiconductor inspection, mask exposure alignment, rotation angle alignment), appearance inspection, and the like; the present invention is thus applicable in a wide variety of fields.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a method that can guarantee an accuracy equivalent to that achieved by the rotation matching method, which performs matching while rotating a template image, with fewer collations than the rotation matching method, and that can thereby reduce the processing time. The method comprises the steps of: subjecting a template image (1) to a geometric transformation so as to generate a first image group comprising a plurality of geometrically transformed template images with different amounts of geometric transformation; subjecting this image group to eigenvalue decomposition so as to calculate as many eigenvalues and eigenfunctions as there are transformed template images; performing an inner product operation between the plurality of eigenfunctions and the image group so as to generate a second image group; calculating the degree of similarity between an input image and the second image group; and determining the value of a geometric transformation parameter on the basis of the calculated degree of similarity.
PCT/JP2011/069320 2010-08-31 2011-08-26 Image processing method and apparatus WO2012029662A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012531845A JP5317250B2 (ja) Image processing method and image processing apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010-193160 2010-08-31
JP2010193160 2010-08-31

Publications (1)

Publication Number Publication Date
WO2012029662A1 true WO2012029662A1 (fr) 2012-03-08

Family

ID=45772752

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/069320 WO2012029662A1 (fr) Image processing method and apparatus

Country Status (2)

Country Link
JP (1) JP5317250B2 (fr)
WO (1) WO2012029662A1 (fr)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1125269A (ja) * 1997-07-02 1999-01-29 Sanyo Electric Co Ltd Face image recognition apparatus and method
JP2002170120A (ja) * 2000-12-01 2002-06-14 Honda Motor Co Ltd Motion information recognition system

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013254332A (ja) * 2012-06-06 2013-12-19 Kumamoto Univ Image processing method and image processing apparatus
JPWO2014061372A1 (ja) * 2012-10-18 2016-09-05 コニカミノルタ株式会社 Image processing apparatus, image processing method, and image processing program
WO2014105724A1 (fr) * 2012-12-28 2014-07-03 Intel Corporation Generation of canonical imaging functions
JP2015176248A (ja) * 2014-03-13 2015-10-05 富士通株式会社 License plate recognition device, license plate recognition system, license plate recognition program, and license plate recognition method
CN105809815A (zh) * 2015-01-19 2016-07-27 株式会社巨晶片 Feature image generation device and determination device, and feature image generation method and determination method
CN105809815B (zh) * 2015-01-19 2019-01-04 株式会社巨晶片 Feature image generation device and determination device, and feature image generation method and determination method
JP2016164737A (ja) * 2015-03-06 2016-09-08 富士通株式会社 Recognition device, recognition system, recognition program, and recognition method
JP2017037584A (ja) * 2015-08-14 2017-02-16 株式会社デンソーアイティーラボラトリ Keypoint detector, keypoint detection method, and keypoint detection program
JP2017134825A (ja) * 2015-12-30 2017-08-03 Thomson Licensing Method for selecting content including audiovisual data, and corresponding electronic device, system, computer-readable program, and computer-readable storage medium
JP2018004389A (ja) * 2016-06-30 2018-01-11 鹿島建設株式会社 Crack analysis method and crack analysis system
CN117349027A (zh) * 2023-12-04 2024-01-05 环球数科集团有限公司 Multimodal large model construction system and method for reducing computing power requirements
CN117349027B (zh) * 2023-12-04 2024-02-23 环球数科集团有限公司 Multimodal large model construction system and method for reducing computing power requirements

Also Published As

Publication number Publication date
JP5317250B2 (ja) 2013-10-16
JPWO2012029662A1 (ja) 2013-10-28

Similar Documents

Publication Publication Date Title
JP5317250B2 (ja) Image processing method and image processing apparatus
CN110689579B (zh) Fast monocular vision pose measurement method and measurement system based on cooperative targets
JP5699788B2 (ja) Screen area detection method and system
JP6899189B2 (ja) System and method for efficiently scoring probes in an image with a vision system
KR100305419B1 (ko) Image alignment method, image alignment apparatus, and recording medium
JP4603512B2 (ja) Abnormal region detection apparatus and abnormal region detection method
JP7188201B2 (ja) Image processing apparatus, image processing method, and image processing program
US11094082B2 (en) Information processing apparatus, information processing method, robot system, and non-transitory computer-readable storage medium
CN112815843B (zh) Online monitoring method for printing deviation of workpiece surface during 3D printing
JP6880618B2 (ja) Image processing program, image processing apparatus, and image processing method
JP2007128373A (ja) Image processing method, image processing program and storage medium therefor, and image processing apparatus
TW201731004A (zh) Reducing registration and design vicinity induced noise for intra-die inspection
JP6778997B2 (ja) Alignment of CAD data and SEM images
JP5945823B2 (ja) Image processing method and image processing apparatus
JP2008116207A (ja) Image measurement apparatus, image measurement method, and program
JP2009146150A (ja) Feature position detection method and feature position detection apparatus
JP2008052598A (ja) Image position measurement method, image position measurement apparatus, and image position measurement program
JP4097255B2 (ja) Pattern matching apparatus, pattern matching method, and program
Liang et al. An integrated camera parameters calibration approach for robotic monocular vision guidance
JP4525526B2 (ja) Pattern matching method and apparatus
JP4953211B2 (ja) Feature extraction apparatus and feature extraction method
US20240233177A9 (en) System and method for efficiently scoring probes in an image with a vision system
US20220358631A1 (en) Optical Measurement of Workpiece Surface using Sharpness Maps
JP5355377B2 (ja) Image pattern matching apparatus and method
JP5378340B2 (ja) Strain measurement apparatus and strain measurement method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11821679

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2012531845

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11821679

Country of ref document: EP

Kind code of ref document: A1