US8478051B2 - Generalized statistical template matching under geometric transformations - Google Patents
- Publication number
- US8478051B2 (Application No. US 12/595,456)
- Authority
- US
- United States
- Prior art keywords
- template
- image
- line segments
- regions
- representation
- Prior art date
- Legal status: Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
- G06V10/507—Summing image-intensity values; Histogram projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/758—Involving statistics of pixels or of feature values, e.g. histogram matching
Definitions
- the present invention relates to a method and apparatus for detecting or locating an object in an image.
- the present invention relates to a method and apparatus for matching a template to an image, to locate an object corresponding to the template, when the object has been subject to a geometric transformation.
- the present invention further relates to a method for determining a geometric transformation of an object in an image.
- Template matching is a standard computer vision tool for finding objects or object parts in images. It is used in many applications including remote sensing, medical imaging, and automatic inspection in industry. The detection of real-world objects is a challenging problem due to the presence of illumination and colour changes, partial occlusions, noise and clutter in the background, and dynamic changes in the object itself.
- Works such as S. Yoshimura and T. Kanade, “Fast template matching based on the normalized correlation by using multiresolution eigenimages”, IEEE/RSJ/GI Int. Conf. on Intelligent Robots and Systems (IROS'94), Vol. 3, pp. 2086-2093, 1994 (reference 4, infra) describe fitting rigidly or non-rigidly deformed templates to image data.
- the general strategy of template matching is the following: for every possible location, rotation, scale, or other geometric transformation, compare each image region to a template and select the best matching scores.
- This computationally expensive approach requires O(Nl·Ng·Nt) operations, where Nl is the number of locations in the image, Ng is the number of transformation samples, and Nt is the number of pixels used in the matching score computation.
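- By way of illustration, the sketch below (Python/NumPy, not code from the patent) spells out this triple loop; normalized cross-correlation is used only as a stand-in for whatever matching score a particular method employs.

```python
import numpy as np

def brute_force_match(image, transformed_templates):
    """Exhaustive template matching: O(Nl * Ng * Nt) operations.

    image                 -- 2-D float array
    transformed_templates -- list of (params, template) pairs, one per
                             transform sample (Ng of them)
    Returns, per location, the best score and the winning parameters.
    """
    H, W = image.shape
    best_score = np.full((H, W), -np.inf)
    best_params = np.empty((H, W), dtype=object)
    for params, t in transformed_templates:          # Ng transform samples
        th, tw = t.shape
        tn = (t - t.mean()) / (t.std() + 1e-9)
        for y in range(H - th + 1):                  # Nl candidate locations
            for x in range(W - tw + 1):
                patch = image[y:y + th, x:x + tw]
                pn = (patch - patch.mean()) / (patch.std() + 1e-9)
                s = float((tn * pn).mean())          # Nt per-pixel operations
                if s > best_score[y, x]:
                    best_score[y, x], best_params[y, x] = s, params
    return best_score, best_params
```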
- Many methods try to reduce the computational complexity.
- N l and N g are usually reduced by the multiresolution approach (e.g., such as in reference 4, infra).
- In some approaches, the geometric transformations are not included in the matching strategy at all, and it is assumed that the template and the image patch differ by translation only (such as in reference 11, infra).
- Another way to perform template matching is direct fitting of the template using gradient descent or gradient ascent optimization methods to iteratively adjust the geometric transformation until the best match is found.
- Such a technique is described in B. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision”, Proc. of Imaging Understanding Workshop, pp. 121-130, 1981 (reference 10, infra). These techniques need initial approximations that are close to the right solution.
- Nt in the computational complexity defined above is reduced by template simplification, e.g., by representing the template as a combination of rectangles. Using so-called integral images and computing a simplified similarity score (the normalized contrast between “positive” and “negative” image regions defined by the template), the computational speed of rapid template matching is independent of the template size and depends only on the template complexity (the number of rectangles comprising the template).
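- The rectangle trick above relies on integral images: a cumulative-sum table from which the sum over any axis-aligned rectangle is obtained with four lookups. A minimal sketch of the standard construction (illustrative, not taken from the patent) is:

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[0:y, 0:x] (exclusive upper corner)."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) via four table lookups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

# Contrast between a "positive" and a "negative" rectangle of a Haar-like
# template then costs a handful of lookups, independent of template size.
img = np.random.rand(16, 16)
ii = integral_image(img)
assert np.isclose(box_sum(ii, 2, 1, 6, 5), img[2:6, 1:5].sum())
```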
- Haar-like features are not rotation-invariant, and a few extensions of this framework have been proposed to handle image rotation, for example by M. Jones and P. Viola (reference 5, infra) and in references 6 and 7, infra.
- A first drawback is that it is not easy to generalize two-region Haar-like features to the case of three or more pixel groups.
- A second drawback is that a rectangle-based representation is redundant for curvilinear object shapes, e.g. circles. Using curved templates instead of rectangular ones should, in such cases, result in higher matching scores and, therefore, in better detector performance.
- The present application proposes a new approach that can be placed in between rapid template matching methods and standard correlation-based template matching methods in terms of computational complexity and thus matching speed.
- the proposed approach addresses some of the limitations of existing techniques described above and, optionally, can also be extended to an iterative refinement framework for precise estimation of object location and transformation.
- a new template matching framework is proposed, in which a template is a simplified representation of the object of interest by a set of pixel groups of any shape, and the similarity between the template and an image region is derived from the so-called F-test statistic.
- a set of geometrically transformed versions of the template (e.g. resulting from rotation and scaling using a predetermined discrete set of parameters) is applied at each location in the image, and the geometric parameters of the geometrically transformed template giving the best matching score are associated with the corresponding location.
- The template and each geometrically transformed version of the template are rasterized into sets of line segments, where each set of segments is the rasterized version of one region of the template.
- One or more complex regions, having the largest number of segments, are excluded from the computation of a similarity score, such as the similarity score defined by equation (9), below.
- the similarity score may be further simplified by storing intermediate terms, computed for the outer region.
- a discrete set of geometrically transformed versions of the template are used in calculating the similarity score.
- an adaptive subpixel refinement method may be used to enhance the accuracy of matching of an object under arbitrary parametric 2D-transformations.
- the parameters maximizing the matching score may be found by a so-called “gradient ascent/descent method”. In one embodiment, this can be reduced to solving an equivalent eigenvalue problem.
- FIG. 1 is a flow diagram illustrating the steps of a method according to an embodiment of the present invention
- FIG. 2( a ) shows a template consisting of three regions of circular shape
- FIG. 2( b ) shows a 1 st region of interest (R 1 ) in an image
- FIG. 2( c ) shows a 2nd region in the image (R 2 )
- FIG. 2( d ) shows the decomposition of R 1 into three regions by the template, where the pixel groups are similar
- FIG. 3 illustrates object transformation in a perspective model
- FIG. 4 illustrates rotation of a two region template by 45°, and its representation by a set of lines, in accordance with the present invention
- FIG. 5( a ) illustrates a test image
- FIG. 5( b ) illustrates a two region template
- FIG. 5( c ) illustrates a matching score map, when the two region template of FIG. 5( b ) is applied to the image of FIG. 5( a ) in accordance with a method of an embodiment of the present invention
- FIG. 6( a ) illustrates a test image that has undergone perspective transformation
- FIG. 6( b ) illustrates a two region template
- FIG. 6( c ) illustrates the initial approximation obtained for the test image of FIG. 6( a ) in accordance with a method of an embodiment of the present invention
- FIG. 6( d ) illustrates iterations of image patch transformations, in accordance with a method of an embodiment of the present invention.
- template matching involves the processing of signals corresponding to images and templates of objects to be detected in images.
- the processing can be performed by any suitable system or apparatus, and can be implemented in the form of software.
- the template matching process produces a “matching score”, also called a “similarity score,” for locations of the template in an image.
- a method according to the present invention is based on so-called Statistical Template Matching (STM), first introduced in EP-A-1 693 783 (reference 2, infra), the contents of which are hereby incorporated by reference.
- the framework of Statistical Template Matching is very similar to the rapid template matching framework discussed above; the main difference is that Statistical Template Matching uses a different matching score derived from the F-test statistic, thereby supporting multiple pixel groups.
- the Statistical Template Matching method is overviewed below.
- a first embodiment of the present invention concerns a new extension of Statistical Template Matching for matching rotated and scaled objects.
- the extension is based on using “integral lines”, as described in more detail below.
- a second embodiment concerns another new extension, termed “Adaptive Subpixel (AS) STM”, suitable for accurate estimation of parametric 2D-transformation of the object.
- a third embodiment concerns an efficient solution for a particular case of Haar-like templates.
- the name Statistical Template Matching originates from the fact that only statistical characteristics of pixel groups, such as mean and variance, are used in the analysis. These pixel groups are determined by a topological template, which is the analogue of the Haar-like feature in a two-group case.
- FIG. 2( a ) shows a template consisting of three regions of circular shape, T 1 , T 2 and T 3 .
- FIGS. 2( b ) and 2 ( c ) show first and second regions of interest R 1 and R 2 , respectively.
- In R 2 , the pixel groups are different (black, dark-gray and light-gray mean colours), from which it is possible to conclude that image region R 2 is similar to the template.
- ANOVA: Analysis Of Variance
- the matching score in equation (3) can also be derived from the squared t-test statistic, which is the squared signal-to-noise ratio (SNR), ranging from 1 (noise), corresponding to the case when all groups are similar, to infinity (pure signal), corresponding to the case when the template strictly determines the layout of pixel groups and all pixels in a group are equal.
- the distribution of pixel values in image patches can be arbitrary and usually does not satisfy the above assumptions (normal distribution, equal variances); therefore, in practice, it is convenient to interpret the matching score in equation (3) as SNR. Instead of using statistical tables for the F-variable, a reasonable SNR threshold above 1 can determine if the similarity in equation (3) between the template and the image region is large enough.
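- Equation (3) itself is not reproduced in this extract. The sketch below computes an ANOVA-style score from group statistics, assuming the score is the total variation of the patch divided by the pooled within-group variation; this is an illustrative reading of the F-test derivation, not necessarily the exact form of equation (3).

```python
import numpy as np

def stm_score(patch, region_masks):
    """ANOVA-style matching score for a patch under a topological template.

    patch        -- 2-D array of pixel values (the image region)
    region_masks -- list of boolean masks, one per template region T1..TN
    Close to 1 when the pixel groups look alike (noise); large when the
    template's group layout explains the patch well (signal).
    """
    n0 = patch.size
    total_variation = n0 * patch.var()          # n0 * sigma0^2
    within = 0.0
    for mask in region_masks:
        group = patch[mask]
        within += group.size * group.var()      # ni * sigmai^2
    return total_variation / (within + 1e-12)
```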
- FIG. 1 shows a method of Statistical Template Matching, which generalises the above described principles, according to an embodiment of the present invention.
- a template for an object of interest is received at step 100 and a predetermined set of geometric transforms are applied to derive a set of geometrically transformed templates.
- each of the geometrically transformed templates is rasterised to produce a set of line segments for each geometrically transformed template, each region of the template comprising a subset of the line segments.
- One or more most complex regions of the rasterised template (e.g. corresponding to a region with a largest or threshold number of line segments) may then be excluded from the matching score computation.
- A test image is scanned pixel by pixel in step 200, and template matching is performed at the current location of the image as follows.
- Step 130 determines whether the template is a circular template. If the template is circular, template matching is performed using a simplified matching score computation, which does not take into account rotation of the entire template. Alternatively, if the template is not circular, template matching is performed using a standard matching score computation, as described below.
- a simplified matching score computation is performed in step 140 at the current location of the test image, to produce a set of matching scores for the templates (i.e. a matching score for each geometrically transformed version of the template) at the current location of the image.
- a standard matching score computation is performed at step 150 at the current location of the test image, to produce a set of matching scores at the current location of the image.
- Step 160 receives the set of matching scores from either step 140 or step 150, selects the best matching score (e.g. a maximum score), and outputs the best geometric parameters, corresponding to the geometric transformation of the template with the best matching score, for the current location of the image.
- Statistical template matching is then performed, in accordance with steps 140 to 160 as described above, for all image locations, and a matching score map and a geometric parameters map for all locations of the image are output to step 170.
- In step 170, local maxima of the matching score are selected, and object locations and transformations corresponding to the maxima are output.
- In step 180, location and transformation refinement is performed by adaptive subpixel statistical template matching, in accordance with an alternative embodiment of the present invention. Step 180 enables accurate object locations and transformations to be obtained in the case of more complex geometric transformations, as will be appreciated from the following description.
- the method of the embodiment of the present invention as illustrated in FIG. 1 may be performed by any suitable apparatus including a processor, for processing signals corresponding to images, and memory for storing data for images and templates.
- the method may be implemented in the form of a computer program stored on a computer readable medium, having instructions, executable by a processor.
- Techniques for performing matching score computation, as in step 150, and simplified matching score computation, as in step 140, are described below. The adaptive subpixel statistical template matching technique of the alternative embodiment is described thereafter.
- In generalized STM (GSTM), if the object in the image has undergone a geometric transformation P, the template should be transformed using the same model P.
- a predetermined set of similarity transforms is applied to the template at step 100 , and, for each location, the templates and corresponding rotation and scale parameters are selected that give the best matching score using equations (5)-(6), above.
- an iterative technique is used for recovering a full parametric 2D-transformation, which uses the similarity transform of the first embodiment as an initial approximation.
- Each transformed template is rasterized at step 110, and each template region is represented by a set of line segments {s i,j }
- each line segment is a rectangle of one-pixel height, and thus, the integral images technique can be used to compute the variances, as in equation (3), using Statistical Template Matching.
- A more efficient way of computation that handles segments involves the use of a one-dimensional analogue of integral images, so-called integral lines, defined as follows:
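- The defining equation (equation (8)) is not reproduced in this extract; the sketch below assumes the usual per-row cumulative-sum form, consistent with the boundary condition I1(−1,y)=I2(−1,y)=0 quoted later in the description.

```python
import numpy as np

def integral_lines(img):
    """Per-row running sums of pixel values (I1) and squared values (I2).

    I1[y, x+1] = sum_{t<=x} img[y, t],  I2[y, x+1] = sum_{t<=x} img[y, t]^2;
    the extra leading column encodes I1(-1, y) = I2(-1, y) = 0.
    """
    f = img.astype(np.float64)
    pad = ((0, 0), (1, 0))
    I1 = np.pad(np.cumsum(f, axis=1), pad)
    I2 = np.pad(np.cumsum(f * f, axis=1), pad)
    return I1, I2

def segment_sums(I1, I2, y, x1, x2):
    """Sum and sum of squares over the horizontal segment [x1, x2] of row y,
    obtained with two lookups per table instead of touching every pixel."""
    return I1[y, x2 + 1] - I1[y, x1], I2[y, x2 + 1] - I2[y, x1]
```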
- equation (3) can be expressed in a more convenient form according to the following equation (9) using the definitions of equation (8):
- The algorithm does not require multiple sums of squared pixel values v i to compute the matching score. It is sufficient to compute only the sum of squared pixel values in the entire template T 0 and N sums of pixels in T 0 , T 1 , . . . , T N−1 .
- v 0 and u 0 remain constant for each rotation angle, and only u 1 , . . . , u M−1 need recomputing.
- T N is the most complex region, consisting of the largest number of line segments. Line configurations change during template rotation, so the most complex region may be different at each rotation angle.
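- A small sketch of this bookkeeping is given below: the region with the most line segments is never summed directly; its pixel sum is recovered from the whole-template sum minus the other regions. The segment and region structures are illustrative, and segment sums are computed as in the integral-lines sketch above.

```python
def region_sums(template_segments, region_segments, I1):
    """template_segments -- segments (y, x1, x2) covering the whole template T0
    region_segments      -- list of per-region segment lists for T1..TN
    I1                   -- integral-lines table of pixel sums (see above)
    """
    def seg_sum(segments):
        return sum(I1[y, x2 + 1] - I1[y, x1] for (y, x1, x2) in segments)

    u0 = seg_sum(template_segments)              # sum over the whole template
    skip = max(range(len(region_segments)),      # most complex region: most segments
               key=lambda i: len(region_segments[i]))
    sums = [seg_sum(s) if i != skip else 0.0
            for i, s in enumerate(region_segments)]
    sums[skip] = u0 - sum(sums)                  # recovered without summing its segments
    return u0, sums
```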
- FIG. 5 illustrates an example of image matching using the Generalised Statistical Template Matching (GSTM) technique of the present invention, as described above.
- FIG. 5( a ) shows an image of interest, which includes a large number of different, geometrically transformed versions of an object, an elephant, as represented by the template shown in FIG. 5( b ).
- the GSTM technique outputs a similarity map giving the best similarity score for all locations in the image.
- FIG. 5( c ) illustrates such a similarity map for the image of FIG. 5( a ) using the template of FIG. 5( b ), with large values for the similarity score represented in white and small values in black.
- peak values in the similarity map are identified, which correspond to locations of the object in the original image.
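- One standard way to pick such peaks is non-maximum suppression over a small window; the sketch below is illustrative, and the window size and threshold are application choices rather than values from the patent.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_peaks(score_map, threshold, window=5):
    """Locations that are local maxima of the similarity map within a
    (window x window) neighbourhood and exceed the given threshold."""
    is_peak = (score_map == maximum_filter(score_map, size=window))
    ys, xs = np.nonzero(is_peak & (score_map > threshold))
    return list(zip(ys.tolist(), xs.tolist()))
```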
- An alternative embodiment of the present invention is not restricted to rotation and scale only, and uses the full transformation P ( FIG. 3 ) to iteratively estimate object location and transformation with high accuracy.
- The perspective model, which has eight transformation parameters p, is used for all simulations, but any other parametric transformation is also applicable.
- the goal of this iterative STM method is to compute transformation parameters p adaptively from image data, maximizing the matching score S(x,p) at a particular object location x.
- Equation (12) includes partial derivatives of the image function on coordinates.
- Equation (12) includes also partial derivatives of the transformed coordinates on parameters of transformation. They have an analytic representation provided that the transformation model is given. In this embodiment the perspective model is used and such derivatives are presented in Appendix 1.
- Any state-of-the-art method from linear algebra can be used to find the largest eigenvalue S (which is also the maximized matching score) and corresponding eigenvector ⁇ p (the amendments to the image transformation parameters). Examples of such methods are power iterations and inverse iterations (see reference 8, infra, for a detailed review).
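- As a minimal sketch (assuming B is symmetric and positive-definite, as the description notes), the generalized problem of equation (16) can be reduced to a standard eigenproblem and solved with a dense solver; power or inverse iterations would be used instead when only the dominant eigenpair of a large problem is needed.

```python
import numpy as np

def solve_transform_update(A, B):
    """Largest eigenvalue S and eigenvector dp of A dp = S * B dp (eq. (16))."""
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))   # reduce to inv(B) @ A
    k = int(np.argmax(vals.real))
    S, dp = float(vals[k].real), vecs[:, k].real
    return S, dp / (np.linalg.norm(dp) + 1e-12)         # scale is fixed later by linesearch
```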
- the eigenvector ⁇ p is found, any vector of the form ⁇ p is also a solution of equation (16). It is possible to select an optimal ⁇ that improves the convergence and prevents the solution from oscillations around the maximum. This Linesearch strategy has been found to provide a robust solution. A detailed review of this and other strategies can be found in reference 9, infra.
- the original non-linear problem can be solved by iteratively applying the linearized solution.
- the iterations stop when the matching score, the centre of the image patch and/or parameter amendments do not change significantly.
- Steps 2, 3 of this algorithm perform image processing, and are presented in Appendix 1 in detail. Other steps perform only numerical operations based on prior art linear algebra methods.
- the above algorithm provides just one example of implementing the ASSTM method, using the result of GSTM as an initial approximation of the geometric transformation in a gradient ascent method.
- Other examples are possible.
- In other examples, a gradient descent method would be used, for instance when the problem is formulated as a minimization.
- For the particular case of Haar-like templates, step 4 of the ASSTM algorithm is implemented as follows:
- FIG. 6 illustrates an example of using the Adaptive Subpixel Statistical Template Matching (ASSTM) technique of the present invention, as described above, on synthetic image data.
- FIG. 6( a ) shows an image of interest, which includes a single, geometrically transformed version of an object, an elephant, as represented by the template shown in FIG. 6( b ).
- the GSTM technique outputs an initial approximation of the geometric transformation, corresponding to the geometrically transformed template that produces the best similarity score for the image.
- the Adaptive Subpixel Statistical Template Matching (ASSTM) technique is then applied to the initial approximation of FIG. 6( c ), and iterations of the geometric transformation of the object are derived.
- FIG. 6( d ) shows image patches derived using these iterations of the geometric transformation, in which the image of FIG. 6( c ) is transformed. It can be seen that the 18th iteration corresponds to the template of FIG. 6( b ).
- The proposed methods can also be used to generalize the rapid object detection framework of reference 1, infra, to non-Haar-like features, features of complex shape, and arbitrarily oriented features.
- the method can be applied in any circumstance in which standard template matching methods are usually applied, using appropriate designs for the application-specific topological template.
- Another application is video coding, in which the local motion vectors are extracted by block matching methods. These methods are variants of the correlation-based template matching framework with quadratic complexity on template size. Replacing the templates by their equivalent topological template and computing the matching score of equation 9 will result in linear complexity of the algorithm on template size, and therefore will enable faster and more accurate video coding.
- Another application is registration of multimodal images.
- Examples of such data are images of the same scene taken by different sensors, e.g. optical cameras and Synthetic Aperture Radars (SAR).
- In such multimodal data, the majority of corresponding pixels are often uncorrelated, and standard template matching techniques fail.
- It is possible for the analyst to detect some high-level structures, consisting of multiple regions, that are present in both kinds of data.
- these can be rivers, lakes, fields, roads and so on.
- By defining a topological template as a collection of regions present in one image, it is possible to register, by the proposed method, the other image, transformed by a geometric transformation with unknown parameters.
- In step 2, the image patch centred at the current position is transformed using equation (20). This is a forward transformation and it is less suitable for computing the transformed image, because integer coordinates are mapped to floating-point coordinates.
- Pixel values f(x′,y′) at integer coordinates (x′,y′) are found by using inverted transformation equation (20) and by interpolating known pixels f(x,y) at integer coordinates (x,y). This is a well-known inverse mapping method.
- The image derivatives included in equation (12) are obtained by their discrete approximations, as in equation (21).
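- As a rough illustration of these two Appendix 1 steps, the sketch below resamples a patch by inverse mapping with bilinear interpolation and then forms central-difference derivatives; the inverse_transform argument stands in for the inverted perspective model of equation (20) and is an assumption of this sketch.

```python
import numpy as np

def resample_patch(image, inverse_transform, half):
    """Inverse-mapping resampling: for each integer output coordinate, find the
    source position with `inverse_transform(x, y)` and interpolate bilinearly."""
    size = 2 * half + 1
    patch = np.zeros((size, size))
    H, W = image.shape
    for j, yo in enumerate(range(-half, half + 1)):
        for i, xo in enumerate(range(-half, half + 1)):
            xs, ys = inverse_transform(xo, yo)        # floating-point source coords
            x0, y0 = int(np.floor(xs)), int(np.floor(ys))
            if 0 <= x0 < W - 1 and 0 <= y0 < H - 1:
                ax, ay = xs - x0, ys - y0             # bilinear weights
                patch[j, i] = ((1 - ax) * (1 - ay) * image[y0, x0]
                               + ax * (1 - ay) * image[y0, x0 + 1]
                               + (1 - ax) * ay * image[y0 + 1, x0]
                               + ax * ay * image[y0 + 1, x0 + 1])
    return patch

def image_derivatives(patch):
    """Central-difference approximations of df/dx and df/dy (cf. equation (21))."""
    fx = (np.roll(patch, -1, axis=1) - np.roll(patch, 1, axis=1)) / 2.0
    fy = (np.roll(patch, -1, axis=0) - np.roll(patch, 1, axis=0)) / 2.0
    return fx, fy
```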
- The partial derivatives of the transformation model in equation (12) are computed by differentiating equation (20), for example:
Abstract
Description
Taking into account the degrees of freedom of VBG and VWG and the relationship VBG+VWG=n0σ0², and applying equivalent transformations, the F-variable becomes:
Removing constant terms in equation (2), the expression for the matching score (or similarity score) is obtained as:
Computed for all pixels x, the matching scores derived using equation (3) form a confidence map, in which the local maxima correspond to likely object locations. Application-dependent analysis of statistics mi, σi helps to reduce the number of false alarms. When photometric properties of the object parts are given in advance, e.g., some of the regions are darker or less textured than the others, additional constraints, such as relation (4), reject false local maxima:
mi<mj, σi<σj (4)
For Haar-like features (N=2), the matching score in equation (3) can also be derived from the squared t-test statistic, which is the squared signal-to-noise ratio (SNR), ranging from 1 (noise), corresponding to the case when all groups are similar, to infinity (pure signal), corresponding to the case when the template strictly determines the layout of pixel groups and all pixels in a group are equal. The distribution of pixel values in image patches can be arbitrary and usually does not satisfy the above assumptions (normal distribution, equal variances); therefore, in practice, it is convenient to interpret the matching score in equation (3) as SNR. Instead of using statistical tables for the F-variable, a reasonable SNR threshold above 1 can determine if the similarity in equation (3) between the template and the image region is large enough.
It is possible to recover an approximated object pose. The number of parameter combinations and computational time grow exponentially with the number of parameters; therefore, it is essential to use a minimal number of parameters. Many approaches, such as those in references 4-7, infra, use the fact that moderate affine and perspective distortions are approximated well by the similarity transform, requiring only two additional parameters for rotation and scale. In a method according to an embodiment of the present invention, as shown in FIG. 1, a predetermined set of similarity transforms is applied to the template, and, for each location, the rotation and scale parameters that give the best matching score are selected.
Ti = si,1 ∪ si,2 ∪ si,3 ∪ . . .
where I1(−1,y)=I2(−1,y)=0. Thus, the number of memory references is reduced from the number of pixels to the number of lines in the rasterized template.
where Δp=(1, Δp1, . . . Δpk)T is a vector of parameter amendments and
Equation (12) includes partial derivatives of the image function on coordinates. In this embodiment they are computed using discrete approximations, as shown in equation (21).
where A=V0−U0, B=V0−U1− . . . −Uk. The matrices A and B are rank-one modifications of the same covariance matrix V0. They are symmetric by definition and positive-definite, which follows from the fact that both numerator and denominator in quotient (15) are image variances.
AΔp=SBΔp, (16)
1. Start at iteration n=0 from initial values S0, x0, p0 obtained by GSTM.
2. Resample the image patch centered at coordinates xn using current parameters pn.
3. Compute image derivatives from the resampled image patch f(xn′,yn′); compute partial derivatives of the transformation model P in (12) using current values of {pi}.
4. Compute matrices V0, U1,...,Uk, A, B and solve the optimization problem (15) by finding the maximal eigenvalue Smax and eigenvector Δpn of (16).
5. Use the Linesearch strategy to find αn maximizing Smax(pn+αnΔpn) ≡ Sn+1.
6. Update parameters: pn+1 = pn+αnΔpn and the new object location xn+1 = P(xn, pn+1).
7. If |αnΔpn|<ε1 and/or |Sn+1−Sn|<ε2 then stop; else go to step 2 for the next iteration n=n+1.
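The Linesearch of step 5 can be as simple as evaluating the score at a few candidate step lengths and keeping the best one. The sketch below shows only that selection logic; score_fn is a placeholder for re-evaluating the matching score at given parameters (in the real algorithm this means resampling the patch and recomputing the score), and the strategies of reference 9, infra, are more elaborate.

```python
import numpy as np

def linesearch(score_fn, p, dp, alphas=(0.125, 0.25, 0.5, 1.0)):
    """Pick the step length alpha maximizing the score along direction dp."""
    scores = [score_fn(p + a * dp) for a in alphas]
    best = int(np.argmax(scores))
    return alphas[best], scores[best]

# Toy usage with a concave stand-in score whose maximum is at p = (0.3, 0.3).
p, dp = np.zeros(2), np.ones(2)
alpha, s = linesearch(lambda q: -float(np.sum((q - 0.3) ** 2)), p, dp)
```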
Δpmax = B−1w (18)
Smax = a·wTΔpmax + 1 (19)
4.1) Compute matrices V0, U1, U2, B and vector w;
4.2) Solve the system BΔpn=w by the efficient Cholesky decomposition method as follows:
4.2.1) Apply the Cholesky decomposition B=LLT, where L is a lower-triangular matrix
4.2.2) Solve a simplified linear system Lz=w, to find an intermediate vector z
4.2.3) Solve a simplified linear system LTΔpn=z, to find the required vector Δpn
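A minimal sketch of this step, under the stated assumption that B is symmetric and positive-definite, is:

```python
import numpy as np
from scipy.linalg import solve_triangular

def solve_haar_update(B, w):
    """Steps 4.1-4.2: solve B dp = w via the Cholesky factorization B = L L^T."""
    L = np.linalg.cholesky(B)                    # lower-triangular factor
    z = solve_triangular(L, w, lower=True)       # forward solve: L z = w
    dp = solve_triangular(L.T, z, lower=False)   # back solve: L^T dp = z
    return dp
```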
-
- Non-Haar-like features,
- Features of complex shape
- Arbitrarily oriented features.
- 1. P. Viola, M. Jones, Rapid object detection using a boosted cascade of simple features, IEEE CVPR, pp. 511-518, 2001
- 2. A. Sibiryakov, M. Bober, Fast method of object detection by statistical template matching, European Patent Application No. 05250973.4 (EP-A-1 693 783)
- 3. A. Jain, Y. Zhong, S. Lakshmanan, Object Matching Using Deformable Templates, IEEE TPAMI, Vol. 18(3), pp. 267-278, 1996
- 4. S. Yoshimura, T. Kanade, Fast template matching based on the normalized correlation by using multiresolution eigenimages, IEEE/RSJ/GI Int. Conf. on Intelligent Robots and Systems (IROS '94), Vol. 3, pp. 2086-2093, 1994
- 5. M. Jones, P. Viola, Fast Multi-view Face Detection, IEEE CVPR, June 2003
- 6. R. Lienhart, J. Maydt, An extended set of Haar-like features for rapid object detection, ICIP'02, Vol. 1, pp. 900-903, 2002
- 7. C. H. Messom, A. L. Barczak, Fast and Efficient Rotated Haar-like Features using Rotated Integral Images, Australasian Conf. on Robotics and Automation, 2006
- 8. G. Golub, C. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md., 1996, ISBN 0-8018-5414-8
- 9. N. Gould, S. Leyffer, An introduction to algorithms for nonlinear optimization, in J. F. Blowey, A. W. Craig, and T. Shardlow (eds.), Frontiers in Numerical Analysis, pp. 109-197, Springer Verlag, Berlin, 2003
- 10. B. Lucas, T. Kanade, An iterative image registration technique with an application to stereo vision, Proc. of Imaging Understanding Workshop, pp. 121-130, 1981
- 11. B. Zitova, J. Flusser, Image Registration Methods: a Survey, Image and Vision Computing, Vol. 24, pp. 977-1000, 2003
where coordinates (x′, y′) are replaced by (x,y) for simplicity.
where a=p7x+p8y+1
Claims (24)
Ti = si,1 ∪ si,2 ∪ si,3 ∪ . . .
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0707192.1 | 2007-04-13 | ||
GBGB0707192.1A GB0707192D0 (en) | 2007-04-13 | 2007-04-13 | Generalized statistical template matching |
PCT/GB2008/001006 WO2008125799A2 (en) | 2007-04-13 | 2008-03-20 | Generalized statistical template matching under geometric transformations |
Publications (2)
Publication Number | Publication Date |
---|---|
US20100119160A1 (en) | 2010-05-13 |
US8478051B2 (en) | 2013-07-02 |
Family
ID=38116716
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/595,456 Active 2030-06-10 US8478051B2 (en) | 2007-04-13 | 2008-03-20 | Generalized statistical template matching under geometric transformations |
Country Status (5)
Country | Link |
---|---|
US (1) | US8478051B2 (en) |
EP (1) | EP2153379A2 (en) |
JP (1) | JP5063776B2 (en) |
GB (1) | GB0707192D0 (en) |
WO (1) | WO2008125799A2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120182318A1 (en) * | 2011-01-18 | 2012-07-19 | Philip Andrew Mansfield | Transforming Graphic Objects |
US20120328160A1 (en) * | 2011-06-27 | 2012-12-27 | Office of Research Cooperation Foundation of Yeungnam University | Method for detecting and recognizing objects of an image using haar-like features |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8483489B2 (en) | 2011-09-02 | 2013-07-09 | Sharp Laboratories Of America, Inc. | Edge based template matching |
US8651615B2 (en) | 2011-12-19 | 2014-02-18 | Xerox Corporation | System and method for analysis of test pattern image data in an inkjet printer using a template |
US8774510B2 (en) * | 2012-09-11 | 2014-07-08 | Sharp Laboratories Of America, Inc. | Template matching with histogram of gradient orientations |
US9569501B2 (en) * | 2013-07-12 | 2017-02-14 | Facebook, Inc. | Optimizing electronic layouts for media content |
EP3048558A1 (en) * | 2015-01-21 | 2016-07-27 | Application Solutions (Electronics and Vision) Ltd. | Object detecting method and object detecting apparatus |
WO2017142448A1 (en) * | 2016-02-17 | 2017-08-24 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods and devices for encoding and decoding video pictures |
CN106296706B (en) * | 2016-08-17 | 2018-12-21 | 大连理工大学 | A kind of depth calculation method for reconstructing for combining global modeling and non-local filtering |
CN107066989B (en) * | 2017-05-04 | 2020-04-24 | 中国科学院遥感与数字地球研究所 | Method and system for identifying accumulated snow of geostationary satellite remote sensing sequence image |
WO2019220622A1 (en) * | 2018-05-18 | 2019-11-21 | 日本電気株式会社 | Image processing device, system, method, and non-transitory computer readable medium having program stored thereon |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6249608B1 (en) * | 1996-12-25 | 2001-06-19 | Hitachi, Ltd. | Template matching image processor utilizing sub image pixel sums and sum of squares thresholding |
US6330353B1 (en) | 1997-12-18 | 2001-12-11 | Siemens Corporate Research, Inc. | Method of localization refinement of pattern images using optical flow constraints |
US6546137B1 (en) | 1999-01-25 | 2003-04-08 | Siemens Corporate Research, Inc. | Flash system for fast and accurate pattern localization |
US20040081360A1 (en) | 2002-10-28 | 2004-04-29 | Lee Shih-Jong J. | Fast pattern searching |
US20060029291A1 (en) | 2004-08-09 | 2006-02-09 | Eastman Kodak Company | Multimodal image registration using compound mutual information |
US20060088189A1 (en) * | 2000-02-18 | 2006-04-27 | Microsoft Corporation | Statistically comparing and matching plural sets of digital data |
EP1693783A1 (en) | 2005-02-21 | 2006-08-23 | Mitsubishi Electric Information Technology Centre Europe B.V. | Fast method of object detection by statistical template matching |
-
2007
- 2007-04-13 GB GBGB0707192.1A patent/GB0707192D0/en not_active Ceased
-
2008
- 2008-03-20 EP EP08718843A patent/EP2153379A2/en not_active Withdrawn
- 2008-03-20 WO PCT/GB2008/001006 patent/WO2008125799A2/en active Application Filing
- 2008-03-20 US US12/595,456 patent/US8478051B2/en active Active
- 2008-03-20 JP JP2010502558A patent/JP5063776B2/en active Active
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120182318A1 (en) * | 2011-01-18 | 2012-07-19 | Philip Andrew Mansfield | Transforming Graphic Objects |
US20120182317A1 (en) * | 2011-01-18 | 2012-07-19 | Philip Andrew Mansfield | Adaptive Graphic Objects |
US8963959B2 (en) * | 2011-01-18 | 2015-02-24 | Apple Inc. | Adaptive graphic objects |
US9111327B2 (en) * | 2011-01-18 | 2015-08-18 | Apple Inc. | Transforming graphic objects |
US20120328160A1 (en) * | 2011-06-27 | 2012-12-27 | Office of Research Cooperation Foundation of Yeungnam University | Method for detecting and recognizing objects of an image using haar-like features |
Also Published As
Publication number | Publication date |
---|---|
WO2008125799A3 (en) | 2009-03-19 |
WO2008125799A2 (en) | 2008-10-23 |
US20100119160A1 (en) | 2010-05-13 |
JP5063776B2 (en) | 2012-10-31 |
JP2010524111A (en) | 2010-07-15 |
EP2153379A2 (en) | 2010-02-17 |
GB0707192D0 (en) | 2007-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8478051B2 (en) | Generalized statistical template matching under geometric transformations | |
US11615262B2 (en) | Window grouping and tracking for fast object detection | |
US8260059B2 (en) | System and method for deformable object recognition | |
US8452096B2 (en) | Identifying descriptor for person or object in an image | |
US7035465B2 (en) | Systems and methods for automatic scale selection in real-time imaging | |
EP1693783B1 (en) | Fast method of object detection by statistical template matching | |
US7881531B2 (en) | Error propogation and variable-bandwidth mean shift for feature space analysis | |
Murase et al. | Detection of 3D objects in cluttered scenes using hierarchical eigenspace | |
US20130089260A1 (en) | Systems, Methods, and Software Implementing Affine-Invariant Feature Detection Implementing Iterative Searching of an Affine Space | |
US20030059124A1 (en) | Real-time facial recognition and verification system | |
US20060291696A1 (en) | Subspace projection based non-rigid object tracking with particle filters | |
US7227977B1 (en) | Lighting correction for the outdoor environment with extension to the self adjusting algorithm for general lighting conditions | |
US20150030231A1 (en) | Method for Data Segmentation using Laplacian Graphs | |
Du et al. | Infrared and visible image registration based on scale-invariant piifd feature and locality preserving matching | |
Nayar et al. | Image spotting of 3D objects using parametric eigenspace representation | |
Talker et al. | Efficient sliding window computation for nn-based template matching | |
Lindner | Automated image interpretation using statistical shape models | |
EP2672425A1 (en) | Method and apparatus with deformable model fitting using high-precision approximation | |
US7907777B2 (en) | Manifold learning for discriminating pixels in multi-channel images, with application to image/volume/video segmentation and clustering | |
Cootes | Statistical shape models | |
Sibiryakov | Statistical template matching under geometric transformations | |
Crandall et al. | Object recognition by combining appearance and geometry | |
Kovalenko et al. | The Effect of Entropy Order in Image Alignment by Maximum Mutual Information Criterion |
Jang et al. | Lip localization based on active shape model and gaussian mixture model | |
Bayık | Automatic target recognition in infrared imagery |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: MITSUBISHI ELECTRIC CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIBIRYAKOV, ALEXANDER;REEL/FRAME:033111/0331 Effective date: 20041027 |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |