WO2018068304A1 - Method and device for image matching - Google Patents

Method and device for image matching

Info

Publication number
WO2018068304A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
foreground
template
pixel
coordinate
Prior art date
Application number
PCT/CN2016/102129
Other languages
English (en)
French (fr)
Inventor
王少飞
Original Assignee
深圳配天智能技术研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳配天智能技术研究院有限公司 filed Critical 深圳配天智能技术研究院有限公司
Priority to CN201680039124.8A priority Critical patent/CN109348731B/zh
Priority to PCT/CN2016/102129 priority patent/WO2018068304A1/zh
Publication of WO2018068304A1 publication Critical patent/WO2018068304A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; using context analysis; selection of dictionaries
    • G06V10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis

Definitions

  • The invention belongs to the technical fields of image processing and computer technology, and in particular relates to a method and a device for image matching.
  • Pattern matching is one of the main research areas in the field of computer (machine) vision and graphic image processing.
  • In pattern matching, the template image carries the image information to be found, and the target image is the image being searched.
  • The position of a sub-image similar to the template can be determined by calculating the similarity between the template image and each sub-image of the searched image.
  • In the matching process, if the template image is highly similar to the sub-image, the matching succeeds; otherwise it fails.
  • Pattern matching technology has a wide range of industrial applications, mainly detection, recognition and segmentation, for example automatic monitoring of industrial production lines and dicing of semiconductor wafers.
  • Gray-value pattern matching is one of the earliest and most widely used pattern matching algorithms. It uses the gray values of the images to measure the similarity between two images and uses this similarity measure to determine the correspondence between them; algorithms that use normalized cross-correlation as the similarity measure are adopted by most machine vision software.
  • In the prior art, pattern matching is performed between the rectangular template image and sub-images of the target image. Because the acquired rectangular template image contains both the foreground image of the main object and the background image outside the main object, the background image also participates in the matching. If the image quality is poor and the target image contains many similar regions, the background pixels may cause misjudgments during the similarity measurement, greatly degrading the final matching accuracy.
  • The present invention provides a method and apparatus for image matching, which determine whether the foreground image matches a sub-image of the target image by performing the normalized cross-correlation calculation only between the foreground image in the template image and the sub-image in the target image, thereby improving the accuracy of image matching.
  • the first aspect of the present invention provides a method for image matching, including:
  • the foreground image being a collection of pixel points of an actual object in the template image
  • calculating a series of features of the pixel gray values of the foreground image comprises the following, where:
  • (u, v) represents the coordinate value on the target image to which a reference point on the template image corresponds, and the reference point may be the upper-left corner;
  • s is the set of pixel points of the foreground image;
  • t(x-u, y-v) represents the gray value of the pixel at coordinate (x-u, y-v) on the template image, where (x-u, y-v) is the coordinate on the template image in the coordinate system established with the reference point (u, v) on the target image as the origin.
  • calculating the grayscale features of the pixel points of the target image includes the following, where:
  • f(x, y) represents the gray value of the pixel at coordinate (x, y), where (x, y) is a coordinate in the coordinate system established with its origin on the target image;
  • (u, v) represents the coordinate value on the target image to which a reference point on the template image corresponds, and the reference point may be the upper-left corner;
  • s is the set of pixel points of the foreground image.
  • performing mask processing on the template image to obtain a foreground mask includes:
  • calculating the normalized cross-correlation of the foreground image and the sub-image by using the gray values of the foreground image and the gray values of the sub-image includes evaluating

    ρ(u, v) = Σ_{(x-u, y-v)∈S} [f(x, y) - f̄(u, v)]·[t(x-u, y-v) - t̄] / sqrt( Σ_{(x-u, y-v)∈S} [f(x, y) - f̄(u, v)]² · Σ_{(x-u, y-v)∈S} [t(x-u, y-v) - t̄]² )

  • ρ(u, v) represents the normalized cross-correlation of the foreground image and the corresponding sub-image in the target image when the template image reference point is aligned with the coordinate (u, v) on the target image;
  • f(x, y) represents the gray value of the pixel point at coordinate (x, y), where (x, y) is a coordinate in the coordinate system established with its origin on the target image;
  • t(x-u, y-v) represents the gray value of the pixel at coordinate (x-u, y-v) on the template image, where (x-u, y-v) is the coordinate on the template image in the coordinate system established with the reference point (u, v) on the target image as the origin;
  • f̄(u, v) represents the mean gray value of the pixel points in the target-image sub-image corresponding to the (u, v) coordinate, and t̄ represents the mean gray value of the foreground pixels of the template image.
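This masked normalized cross-correlation can be sketched in a few lines of NumPy. The sketch below is illustrative only, not the patent's implementation: the function name `masked_ncc`, its argument names, and the boolean `mask` array standing in for the foreground set S are all assumptions.

```python
import numpy as np

def masked_ncc(target, template, mask, u, v):
    """Normalized cross-correlation between the foreground pixels of the
    template and the sub-image the template covers when its upper-left
    corner is aligned with coordinate (u, v) on the target image."""
    n = template.shape[0]
    s = mask.astype(bool)                                # foreground set S
    sub = target[v:v + n, u:u + n].astype(float)[s]      # sub-image foreground pixels
    fg = template.astype(float)[s]                       # template foreground pixels
    sub_c = sub - sub.mean()                             # subtract mean gray values
    fg_c = fg - fg.mean()
    den = np.sqrt((sub_c ** 2).sum() * (fg_c ** 2).sum())
    return float((sub_c * fg_c).sum() / den) if den else 0.0
```

Where the template's foreground exactly overlays an identical region, the correlation evaluates to 1; background pixels never enter the sums.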
  • a second aspect of the present invention provides an apparatus for image matching, including:
  • a first determining module configured to determine a foreground image in the template image, where the foreground image is a set of pixel points of an actual object in the template image;
  • a first calculating module configured to calculate a grayscale feature of the foreground image when the template image covers a position on the target image
  • a second calculating module configured to calculate the grayscale features of the sub-image, where the sub-image is the image on the target image corresponding to the foreground image when the template image is overlaid on the target image;
  • a third calculating module configured to calculate a normalized cross-correlation between the foreground image and the sub-image by using a gray value of the template image and a gray value of the target image;
  • a second determining module configured to determine that the foreground image matches the sub image when the normalized cross correlation is greater than a preset value.
  • The first calculating module is further configured to calculate the mean of the pixel gray values of the foreground image.
  • The first calculating module is further configured to calculate the product of the gray-value variance of the foreground pixels and the area of s, where:
  • s is the set of pixel points of the foreground image;
  • t(x-u, y-v) represents the gray value of the pixel at coordinate (x-u, y-v) on the template image, where (x-u, y-v) is the coordinate on the template image in the coordinate system established with the reference point (u, v) on the target image as the origin.
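The two template-side quantities the first calculating module precomputes can be sketched as follows (illustrative names; a NumPy template and boolean mask are assumed). Note that variance times area equals the sum of squared deviations over the foreground, which is exactly the template-side term of the correlation's denominator.

```python
import numpy as np

def foreground_stats(template, mask):
    """Mean gray value of the foreground pixels, and the product of their
    gray-value variance and the area of s (equal to the sum of squared
    deviations of the foreground gray values)."""
    fg = template.astype(float)[mask.astype(bool)]   # foreground gray values
    mean = float(fg.mean())
    var_times_area = float(((fg - mean) ** 2).sum())  # variance * |s|
    return mean, var_times_area
```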
  • the second calculation module includes:
  • a first calculating unit configured to calculate a mean value of gray values of pixel points in the sub image
  • a mask processing unit configured to perform mask processing on the template image to obtain a foreground mask
  • a second calculating unit configured to obtain, for an arbitrary position (u, v), the sum of the pixel gray values of the sub-image in the target image by frequency-domain dot multiplication of the foreground mask and the target image, where:
  • f(x, y) represents the gray value of the pixel at coordinate (x, y), where (x, y) is a coordinate in the coordinate system established with its origin on the target image;
  • (x-u, y-v) represents the coordinate on the template image in the coordinate system established with the reference point (u, v) on the target image as the origin;
  • the mask processing unit is further configured to perform mask processing on the template image, setting the pixels of the foreground image to 1 and the pixels of the background image to 0 to obtain a foreground mask, where the background image is the image on the template image other than the foreground image.
  • The third calculating module is further configured to calculate the normalized cross-correlation as follows:
  • ρ(u, v) represents the normalized cross-correlation of the foreground image and the corresponding sub-image in the target image when the template image reference point is aligned with the coordinate (u, v) on the target image;
  • f(x, y) represents the gray value of the pixel point at coordinate (x, y), where (x, y) is a coordinate in the coordinate system established with its origin on the target image;
  • t(x-u, y-v) represents the gray value of the pixel at coordinate (x-u, y-v) on the template image, where (x-u, y-v) represents the coordinate on the template image in the coordinate system established with the reference point (u, v) on the target image as the origin;
  • f̄(u, v) represents the mean gray value of the pixel points in the target-image sub-image corresponding to the (u, v) coordinate.
  • a third aspect of the invention provides an apparatus for image matching, comprising:
  • the memory is used to store a program
  • the processor is configured to execute a program in the memory such that the image matching device performs the method of image matching in the first aspect of the invention.
  • a fourth aspect of the present invention provides a storage medium storing one or more programs, including:
  • the one or more programs include instructions that, when executed by an image matching device including one or more processors, cause the image matching device to perform the method of image matching described in the first aspect of the invention.
  • Different from the prior art, in which the template image and the target image undergo the normalized cross-correlation calculation over all gray values, the template image is divided into a foreground image and a background image, and the normalized cross-correlation calculation is performed only between the foreground image of arbitrary shape in the template image and the sub-image in the target image to determine whether the foreground image matches the sub-image of the target image. The pixels of the background image do not need to be calculated, which avoids misjudgment and effectively improves the accuracy of image matching.
  • FIG. 1 is a schematic diagram of an embodiment of a method for image matching according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of matching foreground images and sub-images according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of an embodiment of an image matching apparatus according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of another embodiment of an image matching apparatus according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of another embodiment of an image matching apparatus according to an embodiment of the present invention.
  • Embodiments of the present invention provide a method and apparatus for image matching, which are used to improve the accuracy of image matching.
  • an embodiment of a method for image matching according to the present invention includes:
  • the template image is obtained by a sensor, the template image is a rectangle, the template image includes a foreground image and a background image, and the foreground image is a collection of pixel points of an actual object in the template image, and the background image is an image other than the foreground image on the template image.
  • The foreground image may include an image of at least one actual object. In practical applications the actual object may be a product, a mark, a number, a letter, etc.; the actual object may have various shapes, and the shape of the foreground image is not limited in this method.
  • the foreground image in the template image may be determined according to a selection instruction input by the user, and the foreground image is a set of pixel points of the actual object in the template image, and the set of pixel points of the foreground image is represented by s.
  • the image corresponding to the foreground image on the target image is a sub-image, and the grayscale feature of the sub-image and the grayscale feature of the foreground image are calculated.
  • The template image and the target image may be rectangular or square; in this embodiment the template image and the target image are described by taking squares as an example.
  • The size of the target image is M × M, the size of the template image is N × N, and M ≥ N.
  • The template image is placed on the target image, a coordinate system is established with the top-left vertex of the target image as the origin, and the reference point on the template image corresponds to the coordinate (u, v) on the target image.
  • The reference point may be the lower-left corner, the upper-left corner, or the center point of the template image; in this embodiment the upper-left corner is taken as an example.
  • Calculating the grayscale features of the template image and the grayscale features of the target image may include:
  • f(x, y) represents the gray value of the pixel at coordinate (x, y), which is a coordinate in the coordinate system established with its origin on the target image.
  • t(x-u, y-v) represents the gray value of the pixel at coordinate (x-u, y-v) on the template image, where (x-u, y-v) is the coordinate on the template image in the coordinate system established with the reference point (u, v) on the target image as the origin, and (x-u, y-v) ∈ S.
  • The normalized cross-correlation is calculated as follows:

    ρ(u, v) = Σ_{(x-u, y-v)∈S} [f(x, y) - f̄(u, v)]·[t(x-u, y-v) - t̄] / sqrt( Σ_{(x-u, y-v)∈S} [f(x, y) - f̄(u, v)]² · Σ_{(x-u, y-v)∈S} [t(x-u, y-v) - t̄]² )

  • ρ(u, v) represents the normalized cross-correlation of the foreground image with the sub-image when the upper-left corner of the template image corresponds to the coordinate (u, v) on the target image;
  • f(x, y) represents the gray value of the pixel at coordinate (x, y), which is a coordinate in the coordinate system established with its origin on the target image;
  • t(x-u, y-v) represents the gray value of the pixel at coordinate (x-u, y-v) on the template image, where (x-u, y-v) is the coordinate on the template image in the coordinate system established with the reference point (u, v) on the target image as the origin;
  • f̄(u, v) represents the mean of the gray values of the pixel points in the sub-image.
  • In step 103 and step 104, the normalized cross-correlation may be calculated directly by the normalized cross-correlation formula from the relevant gray features of the template image and the relevant gray features of the target image.
  • The calculation of the normalized cross-correlation may also be performed in separate steps.
  • calculating the grayscale features of the template image may include:
  • (x, y) represents a coordinate value on the target image when the template image is overlaid on the target image, x ∈ [u, u + N - 1], y ∈ [v, v + N - 1].
  • (u, v) indicates the coordinate value on the target image to which the reference point on the template image corresponds; the reference point may be the lower-left corner, the upper-left corner, the center point of the template image, etc. In this embodiment the upper-left corner is taken as an example. If the coordinate of the reference point of the template image relative to the upper-left corner of the template is (m, n), then correspondingly x ∈ [u - m, u - m + N - 1], y ∈ [v - n, v - n + N - 1].
  • s represents the set of pixel points of the foreground image.
  • t(x-u, y-v) represents the gray value of the pixel at coordinate (x-u, y-v) on the template image, where (x-u, y-v) is the coordinate on the template image in the coordinate system established with the reference point (u, v) on the target image as the origin.
  • calculating grayscale features of the target image includes:
  • The template image is mask-processed: the pixels of the foreground image are set to 1 and the pixels of the background image are set to 0 to obtain a foreground mask.
  • f(x, y) represents the gray value of the pixel corresponding to the coordinate (x, y) on the target image.
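The mask-processing step can be sketched as follows, assuming the foreground set is supplied as a list of (x, y) pixel coordinates (an assumed input format; in the patent the set comes from a user selection instruction):

```python
import numpy as np

def make_foreground_mask(shape, foreground_points):
    """Mask processing: pixels belonging to the foreground image are set
    to 1, background pixels to 0."""
    mask = np.zeros(shape, dtype=np.uint8)
    for x, y in foreground_points:
        mask[y, x] = 1   # row index is y, column index is x
    return mask
```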
  • ρ(u, v) represents the normalized cross-correlation of the foreground image with the sub-image when the upper-left corner of the template image corresponds to the coordinate (u, v) on the target image;
  • f(x, y) represents the gray value of the pixel point at coordinate (x, y), where (x, y) is a coordinate in the coordinate system established with its origin on the target image;
  • t(x-u, y-v) represents the gray value of the pixel at coordinate (x-u, y-v) on the template image, where (x-u, y-v) is the coordinate on the template image in the coordinate system established with the reference point (u, v) on the target image as the origin;
  • f̄(u, v) represents the mean of the gray values of the pixel points in the sub-image.
  • The normalized cross-correlation can be calculated directly according to the normalized cross-correlation formula from the grayscale features of the target image and the grayscale features of the template image, or some parameters can be calculated in advance in separate steps; the specific method is not limited.
  • To make the step-by-step calculation easier to understand, refer to the following specific description:
  • After expansion, the third term is 0; if the background pixels of the template image are all set to 0, the first term simplifies accordingly.
  • For the second term, the step of calculating the grayscale feature of the target image has already been given in the step-by-step description of step 103 and is not repeated here.
  • For the second term of the denominator of the normalized cross-correlation formula, the step of calculating the product of the gray-value variance of the foreground pixels and the area of s has already been given in the step-by-step description of step 103 and is not repeated here.
  • N_S represents the number of points in the set S.
  • The foreground mask and the square of the target image are dot-multiplied in the frequency domain, and the second term is calculated in this way.
  • The optimal size of the Fourier transform is determined by the target image, so masking the template image does not change the size of the Fourier transform. The big-O complexity, taking the side length of the target image as the parameter, is expressed as O(M² log₂ M), where M is the side length of the target image and is constant. Therefore, the big-O complexity of the algorithm is the same as the big-O complexity of the normalized cross-correlation over the full rectangular region of the template image.
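The frequency-domain dot multiplication can be sketched with the FFT cross-correlation theorem: zero-pad the mask to the target's size, multiply the target's spectrum by the conjugate of the mask's spectrum, and invert. The sketch below (illustrative names; NumPy assumed) returns, for every placement (u, v), the sum of the target gray values under the foreground mask; passing the squared target instead gives the squared-gray-value sums.

```python
import numpy as np

def foreground_sums(target, mask):
    """Sum of target gray values under the foreground mask for every
    placement (u, v), via frequency-domain dot multiplication."""
    M, N = target.shape[0], mask.shape[0]
    F = np.fft.rfft2(target.astype(float))
    K = np.fft.rfft2(mask.astype(float), s=target.shape)  # zero-pad mask to M x M
    # IFFT(F * conj(K))[v, u] = sum over the mask of target[v + y, u + x]
    corr = np.fft.irfft2(F * np.conj(K), s=target.shape)
    return corr[:M - N + 1, :M - N + 1]   # keep only valid placements
```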
  • In this embodiment, if the image is a grayscale image, the gray value in the normalized cross-correlation can be used directly; the gray value ranges from 0 to 255. If the image is a color image, it is represented, for example, by the three channels of RGB (English full name: Red Green Blue, abbreviation: RGB): red, green and blue.
  • For example, if the color of a pixel is (123, 104, 238), it can be handled by a floating-point algorithm: the original R, G, B values can be replaced by a single gray value, or the values of R, G and B can each be taken as gray values and substituted into the above normalized cross-correlation formula, calculating three ρ values, ρ1, ρ2 and ρ3 respectively, and then taking the mean of the three values ρ1, ρ2 and ρ3.
  • For CMYK (English full name: Cyan Magenta Yellow Black, abbreviation: CMYK) images, the processing method is the same as for RGB, and the specific method is not limited herein.
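The per-channel treatment of color images can be sketched as follows: one correlation per R, G, B channel, then the mean of the three ρ values. Illustrative only; 8-bit RGB arrays and a boolean foreground mask are assumed.

```python
import numpy as np

def color_match_score(target_rgb, template_rgb, mask, u, v):
    """Masked correlation computed per R, G, B channel; returns the mean
    of the three rho values."""
    s = mask.astype(bool)
    n = template_rgb.shape[0]
    rhos = []
    for c in range(3):                                   # R, G, B in turn
        sub = target_rgb[v:v + n, u:u + n, c].astype(float)[s]
        fg = template_rgb[..., c].astype(float)[s]
        sub = sub - sub.mean()
        fg = fg - fg.mean()
        den = np.sqrt((sub ** 2).sum() * (fg ** 2).sum())
        rhos.append(float((sub * fg).sum() / den) if den else 0.0)
    return sum(rhos) / 3.0                               # mean of rho1, rho2, rho3
```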
  • Step 105 When the normalized cross-correlation is greater than a preset value, determining that the foreground image matches the sub-image.
  • If the normalized cross-correlation of the foreground image and the sub-image, calculated from the grayscale features of the foreground image and the grayscale features of the sub-image, is greater than the preset value, it is determined that the foreground image matches the sub-image.
  • To determine whether the foreground image on the template image matches anywhere in the target image, the foreground image needs to be compared with different parts of the target image; that is, the position the template image covers on the target image changes, i.e. (u, v) changes. At each position, i.e. after each change of (u, v), steps 103 to 105 are repeated, and whether the foreground image at the current position matches the sub-image is determined by judging whether the normalized cross-correlation is greater than the preset value.
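Repeating steps 103 to 105 over every (u, v) can be sketched as a brute-force search (illustrative names; a real implementation would use the frequency-domain speedups described above):

```python
import numpy as np

def find_matches(target, template, mask, preset=0.9):
    """For every placement (u, v), compute the masked normalized
    cross-correlation and keep placements whose correlation exceeds
    the preset value."""
    M, N = target.shape[0], template.shape[0]
    s = mask.astype(bool)
    fg = template.astype(float)[s]
    fg = fg - fg.mean()                       # template side, computed once
    fg_norm = np.sqrt((fg ** 2).sum())
    matches = []
    for v in range(M - N + 1):
        for u in range(M - N + 1):
            sub = target[v:v + N, u:u + N].astype(float)[s]
            sub = sub - sub.mean()
            den = fg_norm * np.sqrt((sub ** 2).sum())
            rho = float((sub * fg).sum() / den) if den else 0.0
            if rho > preset:
                matches.append((u, v, rho))
    return matches
```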
  • An image matching method provided by an embodiment of the present invention is applied to an image matching device, and can be implemented in the following scenarios.
  • The device acquires an image of a product on an assembly line through a sensor; the product may have an irregular shape. The image acquired by the device is the template image 20, which is a square; the image of the actual product in the template image 20 is the foreground image 21, and the part other than the foreground image 21 is the background image 22. The acquired template image 20 is overlaid on the target image 10, which may be an image pre-stored by the device, and the template image 20 is moved over the target image 10 with the upper-left corner of the template image 20 as the reference point.
  • Different from the prior art, in which the template image and the target image undergo the normalized cross-correlation calculation over all gray values, the template image is divided into a foreground image and a background image, and only the foreground image of arbitrary shape in the template image and the sub-image in the target image undergo the normalized cross-correlation calculation to determine whether the foreground image matches the sub-image of the target image. The background image does not need to be calculated; in cases where the image quality is poor and the target image contains many similar regions, misjudgments are reduced, the matching accuracy is effectively improved, and the big-O complexity is not increased.
  • an embodiment of the image matching device provided by the present invention includes:
  • the obtaining module 301 is configured to acquire a template image.
  • the first determining module 302 is configured to determine a foreground image in the template image acquired by the acquiring module 301, where the foreground image is a set of pixel points of an actual object in the template image.
  • the first calculating module 303 is configured to calculate a grayscale feature of the pixel of the foreground image.
  • a second calculating module 304 configured to calculate the grayscale features of the pixels of the sub-image, where the sub-image is the image on the target image corresponding to the foreground image when the template image covers a position on the target image.
  • the third calculating module 305 is configured to calculate a normalized cross-correlation between the foreground image and the sub-image by using a grayscale feature of the template image and a grayscale feature of the target image.
  • the second determining module 306 is configured to determine that the foreground image matches the sub image when the normalized cross correlation is greater than a preset value.
  • another embodiment of the image matching apparatus provided by the present invention includes:
  • The first calculating module 303 is further configured to calculate the mean of the pixel gray values of the foreground image.
  • The first calculating module 303 is further configured to calculate the product of the gray-value variance of the foreground pixels and the area of s, where:
  • (u, v) represents the coordinate value on the target image to which a reference point on the template image (here the upper-left corner is taken as an example) corresponds;
  • s is the set of pixel points of the foreground image;
  • t(x-u, y-v) represents the gray value of the pixel at coordinate (x-u, y-v) on the template image, where (x-u, y-v) is the coordinate on the template image in the coordinate system established with the reference point (u, v) on the target image as the origin.
  • the second calculating module 304 includes:
  • a first calculating unit 3041 configured to calculate an average value of gray values of pixel points in the sub image
  • a mask processing unit 3042 configured to perform mask processing on the template image to obtain a foreground mask
  • a second calculating unit 3043 configured to obtain the sum of the pixel gray values of the sub-images in the target image by frequency-domain dot multiplication of the foreground mask and the target image, where:
  • f(x, y) represents the gray value of the pixel at coordinate (x, y), where (x, y) is a coordinate in the coordinate system established with its origin on the target image;
  • (u, v) represents the coordinate value on the target image to which a reference point on the template image (here the upper-left corner is taken as an example) corresponds;
  • (x-u, y-v) represents the coordinate on the template image in the coordinate system established with the reference point (u, v) on the target image as the origin;
  • s is the set of pixel points of the foreground image.
  • the mask processing unit 3042 is further configured to perform mask processing on the template image, set a pixel of the foreground image to 1, and set a pixel of the background image to 0 to obtain a foreground mask.
  • the background image is an image on the template image other than the foreground image.
  • the third calculating module 305 is further configured to calculate the normalized cross-correlation according to the grayscale feature of the target image and the grayscale feature of the template image as follows:
  • ρ(u, v) indicates the normalized cross-correlation of the foreground image with the sub-image when the reference point of the template image is aligned with the coordinate (u, v) on the target image;
  • f(x, y) represents the gray value of the pixel at coordinate (x, y), where (x, y) is a coordinate in the coordinate system established with its origin on the target image;
  • t(x-u, y-v) represents the gray value of the pixel at coordinate (x-u, y-v) on the template image, where (x-u, y-v) is the coordinate on the template image in the coordinate system established with the reference point (u, v) on the target image as the origin;
  • f̄(u, v) represents the mean of the gray values of the pixel points in the sub-image.
  • Different from the prior art, in which the template image and the target image undergo the normalized cross-correlation calculation over all gray values, the template image is divided into the foreground image and the background image, and only the foreground image of arbitrary shape in the template image and the sub-image in the target image undergo the normalized cross-correlation calculation, determining whether the foreground image matches the sub-image of the target image. The background image does not need to be calculated; in cases where the image quality is poor and the target image contains many similar regions, misjudgments are reduced, the matching accuracy is effectively improved, and the big-O complexity is not increased.
  • FIG. 5 is a schematic structural diagram of an image matching device 40 according to an embodiment of the present invention.
  • Image matching device 40 may include input device 410, output device 420, processor 430, and memory 440.
  • the input device in the embodiment of the present invention may be a sensor.
  • the output device can be a display device.
  • Memory 440 can include read only memory and random access memory and provides instructions and data to processor 430. A portion of the memory 440 may also include a non-volatile random access memory (English name: Non-Volatile Random Access Memory, English abbreviation: NVRAM).
  • NVRAM Non-Volatile Random Access Memory
  • Memory 440 stores the following elements, executable modules or data structures, or subsets thereof, or their extended sets:
  • Operation instructions include various operation instructions for implementing various operations.
  • Operating system Includes a variety of system programs for implementing various basic services and handling hardware-based tasks.
  • the template image is acquired by the input device 410;
  • the processor 430 is configured to:
  • the foreground image being a collection of pixel points of an actual object in the template image
  • the processor 430 controls the operation of the image matching device 40.
  • The processor 430 may also be referred to as a central processing unit (English full name: Central Processing Unit, abbreviation: CPU).
  • Memory 440 can include read only memory and random access memory and provides instructions and data to processor 430. A portion of the memory 440 may also include an NVRAM.
  • the components of the image matching device 40 are coupled together by a bus system 450.
  • the bus system 450 may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. However, for clarity of description, various buses are labeled as bus system 450 in the figure.
  • The method disclosed in the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 430. The processor 430 may be an integrated circuit chip with signal processing capability.
  • During implementation, each step of the foregoing method may be completed by an integrated hardware logic circuit in the processor 430 or by instructions in the form of software.
  • The processor 430 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention.
  • The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • The steps of the method disclosed in the embodiments of the present invention may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor.
  • The software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register.
  • The storage medium is located in the memory 440; the processor 430 reads the information in the memory 440 and completes the steps of the above method in combination with its hardware.
  • Optionally, the processor 430 is further configured to:
  • compute the mean $\bar{t}$ of the gray values of the pixels of the foreground image in the template image;
  • compute the product of the gray-value variance of the foreground pixels and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2,$$
  • where (x, y) represents the coordinates of a point on the target image;
  • (μ, ν) represents the coordinate value on the target image of a reference point on the template image (here the upper-left corner is taken as an example);
  • S is the set of pixels of the foreground image; and
  • t(x-μ, y-ν) represents the gray value of the pixel at coordinates (x-μ, y-ν) on the template image.
  • Optionally, the processor 430 is further configured to:
  • compute the mean $\bar{f}_{\mu\nu}$ of the gray values of the pixels in the sub-image;
  • apply mask processing to the template image to obtain a foreground mask;
  • obtain, by a frequency-domain point-wise product of the foreground mask and the target image, the sum $\sum_{(x-\mu,\,y-\nu)\in S} f(x,y)$ of the gray values of the pixels of the sub-image in the target image; and
  • compute the product of the regional variance and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2,$$
  • where f(x, y) represents the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
  • (μ, ν) represents the coordinate value on the target image of a reference point on the template image; and
  • S is the set of pixels of the foreground image.
  • Optionally, the processor 430 is further configured to: apply mask processing to the template image, setting the pixels of the foreground image to 1 and the pixels of the background image to 0, to obtain the foreground mask, the background image being the part of the template image other than the foreground image.
  • Optionally, the processor 430 is further configured to:
  • compute the normalized cross-correlation as follows:
$$\gamma(\mu,\nu)=\frac{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]}{\sqrt{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2\;\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2}},$$
  • where γ(μ, ν) denotes the normalized cross-correlation between the foreground image and the sub-image when the template image takes a coordinate (μ, ν) on the target image as its reference point;
  • f(x, y) represents the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
  • t(x-μ, y-ν) represents the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, where (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image;
  • $\bar{t}$ denotes the mean of the gray values of the pixels of the foreground image; and
  • $\bar{f}_{\mu\nu}$ denotes the mean of the gray values of the pixels in the sub-image.
  • In this embodiment, when the normalized cross-correlation between the template image and the target image is computed from gray values, the template image is divided into a foreground image and a background image, and only the arbitrarily-shaped foreground image of the template image is cross-correlated with the sub-image of the target image to determine whether the foreground image matches the sub-image.
  • During the normalized cross-correlation computation, the pixels of the background image need not be computed while the big-O complexity does not increase, which avoids false matches and effectively improves the accuracy of image matching.
  • The related description of FIG. 4 can be understood with reference to the related description and effects of the method part of FIG. 1, and is not repeated here.
  • In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other manners.
  • For example, the device embodiments described above are merely illustrative.
  • The division into units is only a division by logical function; in actual implementation there may be other divisions: for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • The software product includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

A method and device for image matching, used to improve the accuracy of matching the foreground image of a template image against a sub-image of a target image. The method includes: acquiring a template image (101); determining a foreground image in the template image (102), the foreground image being the set of pixels of the actual object in the template image; when the template image covers the target image, computing gray-scale features of the foreground image and of the sub-image (103), the sub-image being the image on the target image corresponding to the foreground image when the template image covers the target image; computing the normalized cross-correlation between the foreground image and the sub-image from the gray-scale features of the foreground image and of the target image (104); and, when the normalized cross-correlation is greater than a preset value, determining that the foreground image matches the sub-image (105). The above method improves the accuracy of matching the foreground image of the template image with the sub-image of the target image.

Description

Method and Device for Image Matching
Technical Field
The present invention belongs to the technical fields of image processing and computer technology, and in particular relates to a method and device for image matching.
Background Art
People use image acquisition devices to capture images of objects of interest and store the useful information in a computer; the computer then extracts the information from the captured image or image sequence and finally processes, recognizes, and understands this image information. This process uses a computer in place of the human visual organ, and has given rise to a new discipline called computer vision.
Pattern matching is one of the main research topics in computer (machine) vision and in graphics and image processing. When a computer (machine) recognizes an object, it is often necessary to search a target image for a sub-image similar to the image information (template) of the object acquired by a sensor. To find the position of the sub-image similar to the template image in the target image, we can compute the similarity between the template image and sub-images of the searched image. During matching, if the similarity between the template image and a sub-image is high, the match succeeds; otherwise it fails. Pattern matching is currently widely applied in industry, mainly for detection, recognition, and segmentation, such as automatic monitoring of industrial production lines and the dicing of semiconductor wafers.
Gray-value pattern matching is the earliest proposed and most widely used pattern matching algorithm. It uses the gray values of images to measure the similarity between two images and applies a similarity measure to determine the correspondence between them; among such measures, normalized cross-correlation is used by most machine vision software.
In the prior art, gray-value pattern matching supports only rectangular template images as input, and matches all pixels of the rectangular template image against similar sub-images of the target image. When the template image is captured, the rectangular template contains not only the foreground image of the main object but also the background image outside the main object. Because the background image also participates in the matching, when image quality is poor and the target image contains many similar regions, the background portion may cause misjudgments in the similarity measurement, greatly affecting the final matching precision and lowering the accuracy of pattern matching.
Summary of the Invention
The present invention provides a method and device for image matching, which determine whether the foreground image of a template image matches a sub-image of a target image by computing the normalized cross-correlation only between the foreground image of the template image and the sub-image of the target image, thereby improving the accuracy of image matching.
In view of this, a first aspect of the present invention provides a method for image matching, including:
acquiring a template image;
determining a foreground image in the template image, the foreground image being the set of pixels of the actual object in the template image;
when the template image covers a position on the target image, computing gray-scale features of the foreground image and of a sub-image, the sub-image being the image on the target image corresponding to the foreground image when the template image covers the target image;
computing the normalized cross-correlation between the foreground image and the sub-image from the gray-scale features of the foreground image and of the sub-image; and
when the normalized cross-correlation is greater than a preset value, determining that the foreground image matches the sub-image.
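The steps above can be sketched in code. The following is a minimal illustrative sketch, not the patent's implementation: the function name `masked_ncc`, the toy array sizes, and the use of NumPy are assumptions for demonstration. It evaluates the normalized cross-correlation only over the foreground pixel set S, at a single template position (μ, ν):

```python
import numpy as np

def masked_ncc(target, template, mask, mu, nu):
    """Normalized cross-correlation restricted to the foreground set S.

    target:   M x M gray-scale image f
    template: N x N gray-scale image t
    mask:     N x N boolean array, True for foreground pixels (the set S)
    (mu, nu): row/column of the template's upper-left corner on the target
    """
    n = template.shape[0]
    sub = target[mu:mu + n, nu:nu + n]       # sub-image under the template
    f = sub[mask].astype(float)              # sub-image pixels restricted to S
    t = template[mask].astype(float)         # foreground pixels of the template
    fz, tz = f - f.mean(), t - t.mean()      # subtract the means of f and t over S
    denom = np.sqrt((fz ** 2).sum() * (tz ** 2).sum())
    return float((fz * tz).sum() / denom)    # gamma(mu, nu)

# Toy example: a small foreground of varying gray values pasted into a target
# image as a linearly related copy, so gamma should be exactly 1 at (2, 3).
template = np.zeros((4, 4))
template[1, 1], template[2, 1], template[2, 2] = 150.0, 200.0, 120.0
mask = template > 0                          # foreground = actual object pixels
target = np.full((8, 8), 10.0)
target[2:6, 3:7] = template * 0.5 + 5.0      # linear gray-level change

gamma = masked_ncc(target, template, mask, 2, 3)
print(round(gamma, 3))                       # 1.0: foreground matches
```

Note that the background pixels of the template never enter the computation, which is the point of restricting the sums to S.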
Further, computing the gray-value features of the pixels of the foreground image includes:
computing the mean of the gray values of the pixels of the foreground image in the template image,
$$\bar{t}=\frac{1}{N_S}\sum_{(x-\mu,\,y-\nu)\in S} t(x-\mu,\,y-\nu);$$
computing the product of the gray-value variance of the pixels of the foreground image and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2,$$
where (x, y) are the coordinates of a point on the target image;
(μ, ν) represents a reference point on the template image (which may be the upper-left corner) expressed as coordinates on the target image;
S is the set of pixels of the foreground image; and
t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, where (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image. Further, computing the gray-value features of the pixels of the target image includes:
applying mask processing to the template image to obtain a foreground mask;
obtaining, by a frequency-domain point-wise product of the foreground mask and the target image, the sum of the gray values of the pixels of the sub-image in the target image for an arbitrary position (μ, ν),
$$\sum_{(x-\mu,\,y-\nu)\in S} f(x,y);$$
for an arbitrary position (μ, ν), computing the product of the regional variance and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2,$$
where f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
(μ, ν) represents a reference point on the template image (which may be the upper-left corner) expressed as coordinates on the target image; and
S is the set of pixels of the foreground image.
Further, applying mask processing to the template image to obtain the foreground mask includes:
applying mask processing to the template image, setting the pixels of the foreground image to 1 and the pixels of the background image to 0, to obtain the foreground mask, the background image being the part of the template image other than the foreground image.
Further, for an arbitrary position (μ, ν), computing the normalized cross-correlation between the foreground image and the sub-image from the gray values of the foreground image and of the sub-image includes computing
$$\gamma(\mu,\nu)=\frac{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]}{\sqrt{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2\;\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2}},$$
where γ(μ, ν) denotes the normalized cross-correlation between the foreground image and the corresponding sub-image of the target image when the reference point of the template image is aligned with the coordinate (μ, ν) on the target image;
f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, where (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image;
$\bar{t}$ denotes the mean of the gray values of the pixels of the foreground image; and
$\bar{f}_{\mu\nu}$ denotes the mean of the gray values of the pixels of the sub-image of the target image corresponding to the coordinate (μ, ν).
A second aspect of the present invention provides a device for image matching, including:
an acquisition module, configured to acquire a template image;
a first determination module, configured to determine a foreground image in the template image, the foreground image being the set of pixels of the actual object in the template image;
a first computation module, configured to compute gray-scale features of the foreground image when the template image covers a position on the target image;
a second computation module, configured to compute gray-scale features of a sub-image when the template image covers a position on the target image, the sub-image being the image on the target image corresponding to the foreground image when the template image covers the target image;
a third computation module, configured to compute the normalized cross-correlation between the foreground image and the sub-image from the gray values of the template image and of the target image; and
a second determination module, configured to determine that the foreground image matches the sub-image when the normalized cross-correlation is greater than a preset value.
Further, the first computation module is also configured to compute the mean $\bar{t}$ of the gray values of the pixels of the foreground image, and to compute the product of the pixel variance of the foreground image and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2,$$
where (x, y) are the coordinates of a point on the target image;
(μ, ν) represents the coordinate value on the target image of a reference point on the template image;
S is the set of pixels of the foreground image; and
t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, where (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image.
Further, the second computation module includes:
a first computation unit, configured to compute the mean $\bar{f}_{\mu\nu}$ of the gray values of the pixels in the sub-image;
a mask processing unit, configured to apply mask processing to the template image to obtain a foreground mask; and
a second computation unit, configured to obtain, by a frequency-domain point-wise product of the foreground mask and the target image, the sum of the gray values of the pixels of the sub-image in the target image for an arbitrary position (μ, ν),
$$\sum_{(x-\mu,\,y-\nu)\in S} f(x,y),$$
and to compute the product of the regional variance and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2,$$
where f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
(x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image; and
S is the set of pixels of the foreground image. Further, the mask processing unit is also configured to apply mask processing to the template image, setting the pixels of the foreground image to 1 and the pixels of the background image to 0, to obtain the foreground mask, the background image being the part of the template image other than the foreground image. Further, the third computation module is also configured to compute the normalized cross-correlation as follows:
$$\gamma(\mu,\nu)=\frac{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]}{\sqrt{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2\;\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2}},$$
where γ(μ, ν) denotes the normalized cross-correlation between the foreground image and the corresponding sub-image of the target image when the reference point of the template image is aligned with the coordinate (μ, ν) on the target image;
f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, where (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image;
$\bar{t}$ denotes the mean of the gray values of the pixels of the foreground image; and
$\bar{f}_{\mu\nu}$ denotes the mean of the gray values of the pixels of the sub-image of the target image corresponding to the coordinate (μ, ν).
A third aspect of the present invention provides a device for image matching, including:
a processor and a memory;
the memory being configured to store a program; and
the processor being configured to execute the program in the memory, so that the image matching device performs the image matching method of the first aspect of the present invention.
A fourth aspect of the present invention provides a storage medium storing one or more programs, the one or more programs including instructions that, when executed by the image matching device including one or more processors, cause the image matching device to perform the image matching method of the first aspect of the present invention.
It can be seen from the above technical solutions that the embodiments of the present invention have the following advantage:
in this embodiment, when the normalized cross-correlation between the template image and the target image is computed from gray values, the template image is divided into a foreground image and a background image, and only the arbitrarily-shaped foreground image of the template image is cross-correlated with the sub-image of the target image to determine whether the foreground image matches the sub-image. During the normalized cross-correlation computation, the pixels of the background image need not be computed while the big-O complexity is guaranteed not to increase, which avoids misjudgments and effectively improves the accuracy of image matching.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an embodiment of an image matching method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of matching a foreground image with a sub-image according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an embodiment of an image matching device according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another embodiment of the image matching device according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of another embodiment of the image matching device according to an embodiment of the present invention.
Detailed Description of the Embodiments
The embodiments of the present invention provide a method and device for image matching, used to improve the accuracy of image matching.
To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth", and the like (if any) in the specification, claims, and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described here can be implemented in orders other than those illustrated or described here. Furthermore, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion: for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units not clearly listed or inherent to the process, method, product, or device.
Specific descriptions follow through embodiments. Referring to FIG. 1, an embodiment of the image matching method of the present invention includes:
101. Acquire a template image.
The template image is acquired by a sensor. The template image is rectangular and includes a foreground image and a background image: the foreground image is the set of pixels of the actual object in the template image, and the background image is the part of the template image other than the foreground image. The foreground image may include the image of at least one actual object; in practical applications the actual object may be a product, a logo, a digit, a letter, and so on, and its shape varies. This method does not limit the shape of the foreground image.
102. Determine the foreground image in the template image.
The foreground image in the template image may be determined according to a selection instruction input by the user; the foreground image is the set of pixels of the actual object in the template image, and this set of pixels is denoted by S.
103. When the template image covers a position on the target image, the image on the target image corresponding to the foreground image is the sub-image; compute gray-scale features of the sub-image and of the foreground image.
The template image and the target image may be rectangular or square; in this embodiment both are described as squares, the target image being of size M×M and the template image of size N×N, with M ≥ N.
The template image is placed on the target image, and a coordinate system may be established with the upper-left vertex of the target image as the origin. The reference point of the template image corresponds to the coordinates (μ, ν) on the target image; the reference point may be the lower-left corner, the upper-left corner, or the center point of the template image, and in this embodiment the upper-left corner is taken as an example. As the values of μ and ν vary, the template image covers different positions on the target image.
Computing the gray-scale features of the template image and of the target image may include:
computing the mean of the gray values of the pixels of the foreground image in the template image,
$$\bar{t}=\frac{1}{N_S}\sum_{(x-\mu,\,y-\nu)\in S} t(x-\mu,\,y-\nu),$$
and computing the mean of the gray values of the pixels in the sub-image,
$$\bar{f}_{\mu\nu}=\frac{1}{N_S}\sum_{(x-\mu,\,y-\nu)\in S} f(x,y),$$
where f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, where (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image, with (x-μ, y-ν) ∈ S;
here x ∈ [μ, μ+N-1] and y ∈ [ν, ν+N-1].
104. Compute the normalized cross-correlation between the foreground image and the sub-image from the gray-scale features of the foreground image and of the sub-image.
The normalized cross-correlation is computed as follows:
$$\gamma(\mu,\nu)=\frac{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]}{\sqrt{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2\;\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2}},$$
where γ(μ, ν) denotes the normalized cross-correlation between the foreground image and the sub-image when the upper-left corner of the template image corresponds to the coordinate (μ, ν) on the target image;
f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, where (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image;
$\bar{t}$ denotes the mean of the gray values of the pixels of the foreground image; and
$\bar{f}_{\mu\nu}$ denotes the mean of the gray values of the pixels in the sub-image.
It should be noted that in steps 103 and 104 the computation is carried out directly through the normalized cross-correlation formula from the gray-scale features of the template image and of the target image. Optionally, in practical applications the normalized cross-correlation may also be computed in separate steps. For example, in step 103, computing the gray-scale features of the template image may include:
computing the mean $\bar{t}$ of the gray values of the pixels of the foreground image in the template image;
computing the product of the gray-value variance of the pixels of the foreground image and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2,$$
where (x, y) are the coordinates on the target image when the template image covers the target image, with x ∈ [μ, μ+N-1] and y ∈ [ν, ν+N-1];
(μ, ν) represents the coordinate value on the target image of the reference point of the template image; the reference point may be the lower-left corner, the upper-left corner, or the center point of the template image, and in this embodiment the upper-left corner is taken as an example. If the coordinates of the reference point of the template relative to the upper-left corner of the template are (m, n), then correspondingly x ∈ [μ-m, μ-m+N-1] and y ∈ [ν-n, ν-n+N-1];
S denotes the set of pixels of the foreground image; and
t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, where (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image.
Further, in step 103, computing the gray-scale features of the target image includes:
computing the mean $\bar{f}_{\mu\nu}$ of the gray values of the pixels in the sub-image;
applying mask processing to the template image, setting the pixels of the foreground image to 1 and the pixels of the background image to 0, to obtain a foreground mask;
obtaining, by a frequency-domain point-wise product of the foreground mask and the target image, the sum of the gray values of the pixels of the sub-image in the target image,
$$\sum_{(x-\mu,\,y-\nu)\in S} f(x,y);$$
computing the product of the regional variance and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2,$$
where f(x, y) is the gray value of the pixel on the target image at coordinates (x, y).
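The mask step can be illustrated concretely. The following sketch is an assumption-laden demonstration (the helper name `corr_via_fft`, the toy sizes, and the use of NumPy's FFT are all illustrative choices, not the patent's code): cross-correlating the 0/1 foreground mask with the target image via the FFT yields, for every offset (μ, ν) at once, the sum of the target gray values falling under the foreground set S.

```python
import numpy as np

def corr_via_fft(f, k):
    """2-D cross-correlation of image f with kernel k via the FFT,
    keeping only the 'valid' offsets where k lies fully inside f."""
    s = (f.shape[0] + k.shape[0] - 1, f.shape[1] + k.shape[1] - 1)
    # flipping the kernel turns FFT convolution into correlation
    full = np.fft.irfft2(np.fft.rfft2(f, s) * np.fft.rfft2(k[::-1, ::-1], s), s)
    return full[k.shape[0] - 1:f.shape[0], k.shape[1] - 1:f.shape[1]]

# Target image f and a 3x3 template whose foreground is a diagonal stroke.
target = np.arange(36, dtype=float).reshape(6, 6)
template = np.zeros((3, 3))
template[[0, 1, 2], [0, 1, 2]] = 50.0
mask = (template > 0).astype(float)   # foreground pixels -> 1, background -> 0

# fg_sums[mu, nu] = sum of f over S when the template sits at (mu, nu)
fg_sums = corr_via_fft(target, mask)

# Check one offset against a direct spatial computation.
mu, nu = 1, 2
direct = (target[mu:mu + 3, nu:nu + 3] * mask).sum()
print(np.allclose(fg_sums[mu, nu], direct))  # True
```

Replacing the mask by its element-wise square of the target (i.e. correlating the mask with f²) gives the per-offset sums of squared gray values in the same way, which is what the variance term below needs.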
The normalized cross-correlation is then computed as follows:
$$\gamma(\mu,\nu)=\frac{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]}{\sqrt{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2\;\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2}},$$
where γ(μ, ν) denotes the normalized cross-correlation between the foreground image and the sub-image when the upper-left corner of the template image corresponds to the coordinate (μ, ν) on the target image;
f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, where (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image;
$\bar{t}$ denotes the mean of the gray values of the pixels of the foreground image; and
$\bar{f}_{\mu\nu}$ denotes the mean of the gray values of the pixels in the sub-image.
It can be understood that in this embodiment the normalized cross-correlation may be computed directly through the above formula from the gray-scale features of the target image and of the template image, or some quantities may be pre-computed in separate steps; the specific method is not limited.
For easier understanding of the stepwise computation, refer to the following explanation.
Expanding the numerator of the normalized cross-correlation formula gives
$$\sum_{S} f(x,y)\,t(x-\mu,\,y-\nu)\;-\;\bar{t}\sum_{S} f(x,y)\;-\;\bar{f}_{\mu\nu}\left(\sum_{S} t(x-\mu,\,y-\nu)-N_S\,\bar{t}\,\right),$$
where the third term is 0, since $\sum_{S} t = N_S\,\bar{t}$. If the background pixels of the template image are all set to 0, the first term becomes
$$\sum_{x,y} f(x,y)\,t(x-\mu,\,y-\nu),$$
i.e. a sum over all pixels of the template image (foreground and background together); therefore it suffices to compute this cross-correlation term directly via the Fourier transform.
The quantity $\sum_{S} f(x,y)$ in the second term has already been explained in the stepwise description of step 103, in the step of computing the gray-scale features of the target image, and is not repeated here.
The second term of the denominator of the normalized cross-correlation formula has already been explained in the stepwise description of step 103, in the step of computing the product of the gray-value variance of the foreground pixels and the area of S, and is not repeated here. Expanding the first term of the denominator gives
$$\sum_{S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2=\sum_{S} f(x,y)^2-N_S\,\bar{f}_{\mu\nu}^{\,2},$$
where $N_S$ denotes the number of points in the set S. The first term $\sum_{S} f(x,y)^2$ is computed in the same way as $\sum_{S} f(x,y)$: applying mask processing, it is obtained by the frequency-domain point-wise product of the foreground mask and the squared target image. The second term is computed as $N_S\,\bar{f}_{\mu\nu}^{\,2}$.
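The stepwise scheme just derived can be assembled end to end. The sketch below is illustrative only, under stated assumptions: the function names, the random test sizes, and the use of NumPy's FFT in place of whatever FFT library an implementation would use are all choices of this example. It zeroes the template background so the numerator's cross-correlation term runs over the whole template rectangle, obtains the per-offset sub-image sums by correlating the mask (and the squared target) with the target, and assembles γ(μ, ν) for all offsets at once.

```python
import numpy as np

def _xcorr_valid(f, k):
    # 2-D cross-correlation via FFT, 'valid' offsets only
    s = (f.shape[0] + k.shape[0] - 1, f.shape[1] + k.shape[1] - 1)
    full = np.fft.irfft2(np.fft.rfft2(f, s) * np.fft.rfft2(k[::-1, ::-1], s), s)
    return full[k.shape[0] - 1:f.shape[0], k.shape[1] - 1:f.shape[1]]

def gamma_map(target, template, mask):
    """Masked NCC gamma(mu, nu) for all valid offsets, per the stepwise scheme."""
    t = np.where(mask, template, 0.0).astype(float)    # background pixels -> 0
    m = mask.astype(float)
    ns = m.sum()                                       # N_S, number of points in S
    t_mean = t.sum() / ns                              # mean over S only
    t_var_area = ((template - t_mean)[mask] ** 2).sum()

    f = target.astype(float)
    f_sum = _xcorr_valid(f, m)        # sum of f over S, for every offset
    f2_sum = _xcorr_valid(f ** 2, m)  # sum of f^2 over S, for every offset
    ft_sum = _xcorr_valid(f, t)       # cross-correlation term (whole rectangle)
    f_mean = f_sum / ns
    num = ft_sum - t_mean * f_sum                      # third expansion term is 0
    f_var_area = f2_sum - ns * f_mean ** 2
    return num / np.sqrt(f_var_area * t_var_area)

# Compare against a direct spatial evaluation at one offset.
rng = np.random.default_rng(0)
target = rng.uniform(0, 255, (12, 12))
template = rng.uniform(0, 255, (5, 5))
mask = rng.uniform(size=(5, 5)) > 0.5
mu, nu = 3, 4
sub, fg = target[mu:mu + 5, nu:nu + 5][mask], template[mask]
sz, tz = sub - sub.mean(), fg - fg.mean()
direct = (sz * tz).sum() / np.sqrt((sz ** 2).sum() * (tz ** 2).sum())
print(np.allclose(gamma_map(target, template, mask)[mu, nu], direct))  # True
```

Because the FFT size is fixed by the target image, the masking changes nothing about the transform sizes, consistent with the big-O remark below.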
It should be noted that in this embodiment of the present invention the optimal size of the Fourier transform is determined by the target image, so applying mask processing to the template image does not change the size of the Fourier transform. The big-O complexity is computed with the side length of the target image as the parameter and is expressed as O(M² log₂ M), where M is the side length of the target image; since M is unchanged, the big-O complexity of this algorithm is the same as that of normalized cross-correlation over the rectangular region of the template image.
It can be understood that if the image is a gray-scale image, the gray values in the normalized cross-correlation of this embodiment can be used directly, with gray values ranging from 0 to 255. If the image is a color image, for example represented in the three-channel RGB (Red Green Blue) format, with the color of a pixel being, say, (123, 104, 238), gray-value conversion may be performed by the floating-point method, the integer method, the shift method, the average method, and so on, replacing the original R, G, B values with a single gray value; alternatively, the R, G, B values may each be substituted into the above normalized cross-correlation formula as gray values, yielding three values γ₁, γ₂, γ₃, whose average is then taken. CMYK (Cyan Magenta Yellow Black) images, represented in the four colors cyan, magenta, yellow, and black, are handled in the same way as RGB; the specific method is not limited here.
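Two of the gray-value conversions mentioned above can be sketched as follows. This is a hedged illustration: the toy 2×2 image and the specific luma weights (0.299, 0.587, 0.114) are common assumptions for the "floating-point method", not values fixed by the text.

```python
import numpy as np

# A 2x2 RGB image; each pixel is (R, G, B).
rgb = np.array([[[123, 104, 238], [255, 0, 0]],
                [[0, 255, 0], [0, 0, 255]]], dtype=float)

# Average method: gray = (R + G + B) / 3.
gray_avg = rgb.mean(axis=2)

# Floating-point (luma-weighted) method, one common choice of weights:
# gray = 0.299 R + 0.587 G + 0.114 B.
gray_luma = rgb @ np.array([0.299, 0.587, 0.114])

print(round(gray_avg[0, 0], 2))   # (123 + 104 + 238) / 3 = 155.0
print(round(gray_luma[0, 1], 3))  # 0.299 * 255 = 76.245
```

Either single-channel result can then be fed into the normalized cross-correlation formula exactly as a gray-scale image would be.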
Step 105. When the normalized cross-correlation is greater than a preset value, determine that the foreground image matches the sub-image.
When the template image covers a position on the target image and the normalized cross-correlation between the foreground image and the sub-image, computed from their gray-scale features, is greater than the preset value, it is determined that the foreground image matches the sub-image.
To determine whether the foreground image of the template image matches the target image, the foreground image needs to be compared with different local regions of the target image; that is, the position at which the template image covers the target image changes, i.e. (μ, ν) changes, and a comparison is made at each position. After (μ, ν) changes, steps 103 to 105 are repeated, and whether the foreground image matches the sub-image at the current position is judged by whether the normalized cross-correlation exceeds the preset value.
The image matching method provided by this embodiment of the present invention is applied to an image matching device and can be implemented in the following scenario; refer to the schematic diagram of matching the foreground image with the sub-image in FIG. 2. For example, in industrial production-line inspection, the device acquires an image of a product on the line through a sensor; the product may have an irregular shape. The acquired image is the template image 20, which is square; the image of the actual product in the template image 20 is the foreground image 21, and the rest is the background image 22. The acquired template image 20 is placed over the target image 10, which may be an image pre-stored by the device, and the template image 20 is moved over the target image 10. The upper-left corner of the template image 20 corresponds to the coordinate (μ, ν) on the target image; the normalized cross-correlation at position (μ, ν) is that between the foreground image 21 of the template image 20 and the sub-image 11. If this normalized cross-correlation is greater than the preset value, it is determined that the foreground image 21 matches the sub-image 11, and the next process step can then proceed.
In this embodiment, when the normalized cross-correlation between the template image and the target image is computed from gray values, the template image is divided into a foreground image and a background image, and only the arbitrarily-shaped foreground image of the template image is cross-correlated with the sub-image of the target image to determine whether the foreground image matches the sub-image. The background image need not be computed during the normalized cross-correlation computation; when image quality is poor and the target image contains many similar regions, this reduces misjudgments, effectively improves matching precision, and guarantees that the big-O complexity does not increase.
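The scanning-and-thresholding scenario above can be sketched as a brute-force loop. This is a demonstration under stated assumptions: the function names, the plus-shaped toy foreground, and the preset value 0.95 are illustrative choices, not the patent's parameters; a real implementation would use the FFT acceleration described earlier instead of the spatial scan.

```python
import numpy as np

def masked_ncc(target, template, mask, mu, nu):
    # NCC over the foreground set S only, at one offset (mu, nu)
    n = template.shape[0]
    f = target[mu:mu + n, nu:nu + n][mask].astype(float)
    t = template[mask].astype(float)
    fz, tz = f - f.mean(), t - t.mean()
    d = np.sqrt((fz ** 2).sum() * (tz ** 2).sum())
    return 0.0 if d == 0 else float((fz * tz).sum() / d)

def find_matches(target, template, mask, preset=0.95):
    """Scan every position (mu, nu); return offsets where the foreground
    matches the sub-image (gamma greater than the preset value)."""
    n = template.shape[0]
    hits = []
    for mu in range(target.shape[0] - n + 1):
        for nu in range(target.shape[1] - n + 1):
            if masked_ncc(target, template, mask, mu, nu) > preset:
                hits.append((mu, nu))
    return hits

template = np.array([[0., 90., 0.],
                     [60., 90., 60.],
                     [0., 120., 0.]])
mask = template > 0                       # plus-shaped foreground
target = np.full((7, 7), 20.0)
target[2:5, 3:6][mask] = template[mask]   # paste only the foreground at (2, 3)

hits = find_matches(target, template, mask)
print(hits)                               # [(2, 3)]
```

Only the offset where the pasted foreground lines up exceeds the preset value; the constant background never contributes to γ.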
The image matching method has been described above; the method is applied to an image matching device, which is described below. Referring to FIG. 3, an embodiment of the image matching device provided by the present invention includes:
an acquisition module 301, configured to acquire a template image;
a first determination module 302, configured to determine a foreground image in the template image acquired by the acquisition module 301, the foreground image being the set of pixels of the actual object in the template image;
a first computation module 303, configured to compute gray-scale features of the pixels of the foreground image;
a second computation module 304, configured to compute gray-scale features of the pixels of a sub-image, the sub-image being the image on the target image corresponding to the foreground image when the template image covers a position on the target image;
a third computation module 305, configured to compute the normalized cross-correlation between the foreground image and the sub-image from the gray-scale features of the template image and of the target image; and
a second determination module 306, configured to determine that the foreground image matches the sub-image when the normalized cross-correlation is greater than a preset value.
Referring to FIG. 4, on the basis of the above embodiment, another embodiment of the image matching device provided by the present invention includes the following.
Optionally, the first computation module 303 is also configured to compute the mean $\bar{t}$ of the gray values of the pixels of the foreground image, and to compute the product of the pixel variance of the foreground image and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2,$$
where (x, y) are the coordinates of a point on the target image;
(μ, ν) represents the coordinate value on the target image of a reference point on the template image (here the upper-left corner is taken as an example);
S is the set of pixels of the foreground image; and
t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, where (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image.
Optionally, the second computation module 304 includes:
a first computation unit 3041, configured to compute the mean $\bar{f}_{\mu\nu}$ of the gray values of the pixels in the sub-image;
a mask processing unit 3042, configured to apply mask processing to the template image to obtain a foreground mask; and
a second computation unit 3043, configured to obtain, by a frequency-domain point-wise product of the foreground mask and the target image, the sum of the gray values of the pixels of the sub-image in the target image,
$$\sum_{(x-\mu,\,y-\nu)\in S} f(x,y),$$
and to compute the product of the regional variance and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2,$$
where f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
(μ, ν) represents the coordinate value on the target image of a reference point on the template image (here the upper-left corner is taken as an example); (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image; and
S is the set of pixels of the foreground image.
Optionally, the mask processing unit 3042 is also configured to apply mask processing to the template image, setting the pixels of the foreground image to 1 and the pixels of the background image to 0, to obtain the foreground mask, the background image being the part of the template image other than the foreground image.
Optionally, the third computation module 305 is also configured to compute the normalized cross-correlation from the gray-scale features of the target image and of the template image as follows:
$$\gamma(\mu,\nu)=\frac{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]}{\sqrt{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2\;\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2}},$$
where γ(μ, ν) denotes the normalized cross-correlation between the foreground image and the sub-image when the template image takes a coordinate (μ, ν) on the target image as its reference point;
f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, where (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image;
$\bar{t}$ denotes the mean of the gray values of the pixels of the foreground image; and
$\bar{f}_{\mu\nu}$ denotes the mean of the gray values of the pixels in the sub-image.
In this embodiment, the specific working process of the image matching device may refer to the specific description in the method embodiment and is not repeated here.
In this embodiment, when the template image and the target image are normalized cross-correlated by gray value, the template image is divided into a foreground image and a background image, and only the arbitrarily-shaped foreground image of the template image is cross-correlated with the sub-image of the target image to determine whether the foreground image matches the sub-image. The background image need not be computed during the normalized cross-correlation computation; when image quality is poor and the target image contains many similar regions, this reduces misjudgments, effectively improves matching precision, and guarantees that the big-O complexity does not increase.
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of an image matching device 40 according to an embodiment of the present invention. The image matching device 40 may include an input device 410, an output device 420, a processor 430, and a memory 440. In this embodiment of the present invention, the input device may be a sensor and the output device may be a display device.
The memory 440 may include read-only memory and random access memory, and provides instructions and data to the processor 430. A portion of the memory 440 may also include non-volatile random access memory (NVRAM).
The memory 440 stores the following elements, executable modules or data structures, or subsets or extended sets thereof:
operation instructions: including various operation instructions used to implement various operations;
operating system: including various system programs used to implement basic services and to handle hardware-based tasks.
In this embodiment of the present invention, the template image is acquired by the input device 410, and the processor 430 is configured to:
determine a foreground image in the template image, the foreground image being the set of pixels of the actual object in the template image;
when the template image covers a position on the target image, compute gray-scale features of the foreground image and of a sub-image, the sub-image being the image on the target image corresponding to the foreground image when the template image covers the target image;
compute the normalized cross-correlation between the foreground image and the sub-image from the gray-scale features of the two; and, when the normalized cross-correlation is greater than a preset value, determine that the foreground image matches the sub-image.
The processor 430 controls the operation of the image matching device 40 and may also be referred to as a central processing unit (CPU). The memory 440 may include read-only memory and random access memory, and provides instructions and data to the processor 430; a portion of the memory 440 may also include NVRAM. In a specific application, the components of the image matching device 40 are coupled together by a bus system 450; in addition to the data bus, the bus system 450 may include a power bus, a control bus, a status signal bus, and the like, but for clarity of description the various buses are all labeled as bus system 450 in the figure.
The method disclosed in the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 430. The processor 430 may be an integrated circuit chip with signal processing capability. During implementation, each step of the foregoing method may be completed by an integrated hardware logic circuit in the processor 430 or by instructions in the form of software. The processor 430 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or any conventional processor, or the like. The steps of the method disclosed in the embodiments of the present invention may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory 440; the processor 430 reads the information in the memory 440 and completes the steps of the above method in combination with its hardware.
Optionally, the processor 430 is further configured to:
compute the mean $\bar{t}$ of the gray values of the pixels of the foreground image in the template image;
compute the product of the gray-value variance of the pixels of the foreground image and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2,$$
where (x, y) are the coordinates of a point on the target image;
(μ, ν) represents the coordinate value on the target image of a reference point on the template image (here the upper-left corner is taken as an example);
S is the set of pixels of the foreground image; and
t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image.
Optionally, the processor 430 is further configured to:
compute the mean $\bar{f}_{\mu\nu}$ of the gray values of the pixels in the sub-image;
apply mask processing to the template image to obtain a foreground mask;
obtain, by a frequency-domain point-wise product of the foreground mask and the target image, the sum of the gray values of the pixels of the sub-image in the target image,
$$\sum_{(x-\mu,\,y-\nu)\in S} f(x,y);$$
compute the product of the regional variance and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2,$$
where f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
(μ, ν) represents the coordinate value on the target image of a reference point on the template image; and
S is the set of pixels of the foreground image.
Optionally, the processor 430 is further configured to:
apply mask processing to the template image, setting the pixels of the foreground image to 1 and the pixels of the background image to 0, to obtain the foreground mask, the background image being the part of the template image other than the foreground image.
Optionally, the processor 430 is further configured to:
compute the normalized cross-correlation as follows:
$$\gamma(\mu,\nu)=\frac{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]}{\sqrt{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2\;\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2}},$$
where γ(μ, ν) denotes the normalized cross-correlation between the foreground image and the sub-image when the template image takes a coordinate (μ, ν) on the target image as its reference point;
f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, where (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image;
$\bar{t}$ denotes the mean of the gray values of the pixels of the foreground image; and
$\bar{f}_{\mu\nu}$ denotes the mean of the gray values of the pixels in the sub-image.
In this embodiment, when the template image and the target image are normalized cross-correlated by gray value, the template image is divided into a foreground image and a background image, and only the arbitrarily-shaped foreground image of the template image is cross-correlated with the sub-image of the target image to determine whether the foreground image matches the sub-image. During the normalized cross-correlation computation, the pixels of the background image need not be computed while the big-O complexity is guaranteed not to increase, which avoids misjudgments and effectively improves the accuracy of image matching.
The related description of FIG. 4 can be understood with reference to the related description and effects of the method part of FIG. 1, and is not repeated here.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative; the division into units is only a division by logical function, and in actual implementation there may be other divisions: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The image matching method provided by the present invention has been described in detail above. Specific examples have been used in this specification to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those skilled in the art, there will be changes in the specific implementation and scope of application according to the idea of the embodiments of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (12)

  1. A method for image matching, comprising:
    acquiring a template image;
    determining a foreground image in the template image, the foreground image being the set of pixels of the actual object in the template image;
    when the template image covers a position on a target image, computing gray-scale features of the foreground image and of a sub-image, the sub-image being the image on the target image corresponding to the foreground image when the template image covers the target image;
    computing a normalized cross-correlation between the foreground image and the sub-image from the gray-scale features of the foreground image and of the sub-image; and
    when the normalized cross-correlation is greater than a preset value, determining that the foreground image matches the sub-image.
  2. The method according to claim 1, wherein computing the gray-scale features of the pixels of the foreground image comprises:
    computing the mean $\bar{t}$ of the gray values of the pixels of the foreground image;
    computing the product of the gray-value variance of the pixels of the foreground image and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2,$$
    wherein (x, y) represents coordinates on the target image;
    (μ, ν) represents the coordinate value on the target image of a reference point on the template image;
    S is the set of pixels of the foreground image; and
    t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, wherein (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image.
  3. The method according to claim 2, wherein computing the gray-scale features of the pixels of the sub-image comprises:
    computing the mean $\bar{f}_{\mu\nu}$ of the gray values of the pixels in the sub-image;
    applying mask processing to the template image to obtain a foreground mask;
    obtaining, by a frequency-domain point-wise product of the foreground mask and the target image, the sum of the gray values of the pixels of the sub-image in the target image,
$$\sum_{(x-\mu,\,y-\nu)\in S} f(x,y);$$
    computing the product of the regional variance and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2,$$
    wherein f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
    (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image; and
    S is the set of pixels of the foreground image.
  4. The method according to claim 3, wherein applying mask processing to the template image to obtain the foreground mask comprises:
    applying mask processing to the template image, setting the pixels of the foreground image to 1 and the pixels of the background image to 0, to obtain the foreground mask, the background image being the part of the template image other than the foreground image.
  5. The method according to any one of claims 1 to 4, wherein computing the normalized cross-correlation between the foreground image and the sub-image from the gray values of the foreground image and of the sub-image comprises:
    computing the normalized cross-correlation as follows:
$$\gamma(\mu,\nu)=\frac{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]}{\sqrt{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2\;\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2}},$$
    wherein γ(μ, ν) denotes the normalized cross-correlation between the foreground image and the sub-image when the template image takes a coordinate (μ, ν) on the target image as its reference point;
    f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
    t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, wherein (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image;
    $\bar{t}$ denotes the mean of the gray values of the pixels of the foreground image; and
    $\bar{f}_{\mu\nu}$ denotes the mean of the gray values of the pixels in the sub-image.
  6. A device for image matching, comprising:
    an acquisition module, configured to acquire a template image;
    a first determination module, configured to determine a foreground image in the template image, the foreground image being the set of pixels of the actual object in the template image;
    a first computation module, configured to compute gray-scale features of the pixels of the foreground image;
    a second computation module, configured to compute gray-scale features of the pixels of a sub-image, the sub-image being the image on the target image corresponding to the foreground image when the template image covers a position on the target image;
    a third computation module, configured to compute a normalized cross-correlation between the foreground image and the sub-image from the gray-scale features of the template image and of the target image; and
    a second determination module, configured to determine that the foreground image matches the sub-image when the normalized cross-correlation is greater than a preset value.
  7. The device according to claim 6, wherein
    the first computation module is further configured to compute the mean $\bar{t}$ of the gray values of the pixels of the foreground image; and
    the first computation module is further configured to compute the product of the pixel variance of the foreground image and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2,$$
    wherein (x, y) represents coordinates on the target image;
    (μ, ν) represents the coordinate value on the target image of a reference point on the template image;
    S is the set of pixels of the foreground image; and
    t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, wherein (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image.
  8. The device according to claim 7, wherein the second computation module comprises:
    a first computation unit, configured to compute the mean $\bar{f}_{\mu\nu}$ of the gray values of the pixels in the sub-image;
    a mask processing unit, configured to apply mask processing to the template image to obtain a foreground mask; and
    a second computation unit, configured to obtain, by a frequency-domain point-wise product of the foreground mask and the target image, the sum of the gray values of the pixels of the sub-image in the target image,
$$\sum_{(x-\mu,\,y-\nu)\in S} f(x,y),$$
    and to compute the product of the regional variance and the area of S as
$$\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2,$$
    wherein f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
    (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image; and
    S is the set of pixels of the foreground image.
  9. The device according to claim 8, wherein
    the mask processing unit is further configured to apply mask processing to the template image, setting the pixels of the foreground image to 1 and the pixels of the background image to 0, to obtain the foreground mask, the background image being the part of the template image other than the foreground image.
  10. The device according to any one of claims 6 to 9, wherein
    the third computation module is further configured to compute the normalized cross-correlation as follows:
$$\gamma(\mu,\nu)=\frac{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]}{\sqrt{\sum_{(x-\mu,\,y-\nu)\in S}\left[f(x,y)-\bar{f}_{\mu\nu}\right]^2\;\sum_{(x-\mu,\,y-\nu)\in S}\left[t(x-\mu,\,y-\nu)-\bar{t}\,\right]^2}},$$
    wherein γ(μ, ν) denotes the normalized cross-correlation between the foreground image and the sub-image when the template image takes a coordinate (μ, ν) on the target image as its reference point;
    f(x, y) is the gray value of the pixel at coordinates (x, y), the coordinates (x, y) being in the coordinate system whose origin is on the target image;
    t(x-μ, y-ν) is the gray value of the pixel at coordinates (x-μ, y-ν) on the template image, wherein (x-μ, y-ν) are coordinates on the template image in the coordinate system whose origin is the reference point (μ, ν) on the target image;
    $\bar{t}$ denotes the mean of the gray values of the pixels of the foreground image; and
    $\bar{f}_{\mu\nu}$ denotes the mean of the gray values of the pixels in the sub-image.
  11. A device for image matching, comprising:
    a processor and a memory;
    the memory being configured to store a program; and
    the processor being configured to execute the program in the memory, so that the image matching device performs the image matching method according to any one of claims 1 to 5.
  12. A storage medium storing one or more programs, wherein the one or more programs comprise instructions that, when executed by the image matching device comprising one or more processors, cause the image matching device to perform the image matching method according to any one of claims 1 to 5.
PCT/CN2016/102129 2016-10-14 2016-10-14 Method and device for image matching WO2018068304A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680039124.8A CN109348731B (zh) 2016-10-14 2016-10-14 Method and device for image matching
PCT/CN2016/102129 WO2018068304A1 (zh) 2016-10-14 2016-10-14 Method and device for image matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/102129 WO2018068304A1 (zh) 2016-10-14 2016-10-14 Method and device for image matching

Publications (1)

Publication Number Publication Date
WO2018068304A1 true WO2018068304A1 (zh) 2018-04-19

Family

ID=61906106

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/102129 WO2018068304A1 (zh) 2016-10-14 2016-10-14 Method and device for image matching

Country Status (2)

Country Link
CN (1) CN109348731B (zh)
WO (1) WO2018068304A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105447A (zh) * 2019-12-31 2020-05-05 西安科技大学 Screen image transfer method based on local processing
CN111340795A (zh) * 2020-03-09 2020-06-26 珠海格力智能装备有限公司 Method and device for determining article quality
CN111369599A (zh) * 2018-12-25 2020-07-03 阿里巴巴集团控股有限公司 Image matching method, device, apparatus, and storage medium
CN111507995A (zh) * 2020-04-30 2020-08-07 柳州智视科技有限公司 Image segmentation method based on color image pyramid and color channel classification
CN112164032A (zh) * 2020-09-14 2021-01-01 浙江华睿科技有限公司 Glue dispensing method and device, electronic device, and storage medium
CN114494265A (zh) * 2022-04-19 2022-05-13 南通宝田包装科技有限公司 Method and artificial intelligence system for recognizing packaging printing quality in cosmetics production

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210565B (zh) * 2019-06-05 2021-04-30 中科新松有限公司 Implementation method of normalized cross-correlation image template matching
CN110288034A (zh) * 2019-06-28 2019-09-27 广州虎牙科技有限公司 Image matching method and device, electronic device, and readable storage medium
CN113066121A (zh) * 2019-12-31 2021-07-02 深圳迈瑞生物医疗电子股份有限公司 Image analysis system and method for identifying repeated cells
CN114140700A (zh) * 2021-12-01 2022-03-04 西安电子科技大学 Stepwise heterogeneous image template matching method based on cascade network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101770583A (zh) * 2010-01-15 2010-07-07 华中科技大学 Template matching method based on global features of scene
US20140099046A1 (en) * 2012-10-04 2014-04-10 Olympus Corporation Image processing apparatus
CN104318568A (zh) * 2014-10-24 2015-01-28 武汉华目信息技术有限责任公司 Image registration method and system
CN104915940A (zh) * 2015-06-03 2015-09-16 厦门美图之家科技有限公司 Image denoising method and system based on image alignment
CN105678778A (zh) * 2016-01-13 2016-06-15 北京大学深圳研究生院 Image matching method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4696856B2 (ja) * 2005-11-02 2011-06-08 オムロン株式会社 Image processing device, image processing method, program therefor, and computer-readable recording medium recording the program
CN101639858A (zh) * 2009-08-21 2010-02-03 深圳创维数字技术股份有限公司 Image retrieval method based on target-region matching
CN103177458B (zh) * 2013-04-17 2015-11-25 北京师范大学 Region-of-interest detection method for visible-light remote sensing images based on frequency-domain analysis
CN103593838B (zh) * 2013-08-01 2016-04-13 华中科技大学 Fast cross-correlation gray-scale image matching method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101770583A (zh) * 2010-01-15 2010-07-07 华中科技大学 Template matching method based on global features of scene
US20140099046A1 (en) * 2012-10-04 2014-04-10 Olympus Corporation Image processing apparatus
CN104318568A (zh) * 2014-10-24 2015-01-28 武汉华目信息技术有限责任公司 Image registration method and system
CN104915940A (zh) * 2015-06-03 2015-09-16 厦门美图之家科技有限公司 Image denoising method and system based on image alignment
CN105678778A (zh) * 2016-01-13 2016-06-15 北京大学深圳研究生院 Image matching method and device

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111369599A (zh) * 2018-12-25 2020-07-03 阿里巴巴集团控股有限公司 Image matching method, device, apparatus, and storage medium
CN111369599B (zh) * 2018-12-25 2024-04-16 阿里巴巴集团控股有限公司 Image matching method, device, apparatus, and storage medium
CN111105447A (zh) * 2019-12-31 2020-05-05 西安科技大学 Screen image transfer method based on local processing
CN111105447B (zh) * 2019-12-31 2023-02-28 西安科技大学 Screen image transfer method based on local processing
CN111340795A (zh) * 2020-03-09 2020-06-26 珠海格力智能装备有限公司 Method and device for determining article quality
CN111340795B (zh) * 2020-03-09 2023-11-10 珠海格力智能装备有限公司 Method and device for determining article quality
CN111507995A (zh) * 2020-04-30 2020-08-07 柳州智视科技有限公司 Image segmentation method based on color image pyramid and color channel classification
CN111507995B (zh) * 2020-04-30 2023-05-23 柳州智视科技有限公司 Image segmentation method based on color image pyramid and color channel classification
CN112164032A (zh) * 2020-09-14 2021-01-01 浙江华睿科技有限公司 Glue dispensing method and device, electronic device, and storage medium
CN112164032B (zh) * 2020-09-14 2023-12-29 浙江华睿科技股份有限公司 Glue dispensing method and device, electronic device, and storage medium
CN114494265A (zh) * 2022-04-19 2022-05-13 南通宝田包装科技有限公司 Method and artificial intelligence system for recognizing packaging printing quality in cosmetics production
CN114494265B (zh) * 2022-04-19 2022-06-17 南通宝田包装科技有限公司 Method and artificial intelligence system for recognizing packaging printing quality in cosmetics production

Also Published As

Publication number Publication date
CN109348731A (zh) 2019-02-15
CN109348731B (zh) 2022-05-17

Similar Documents

Publication Publication Date Title
WO2018068304A1 (zh) Method and device for image matching
WO2019169772A1 (zh) Picture processing method, electronic device, and storage medium
US20210366124A1 (en) Graphical fiducial marker identification
CN110544258B (zh) Image segmentation method and device, electronic device, and storage medium
US9754164B2 (en) Systems and methods for classifying objects in digital images captured using mobile devices
US9418283B1 (en) Image processing using multiple aspect ratios
EP3454250A1 (en) Facial image processing method and apparatus and storage medium
US9355312B2 (en) Systems and methods for classifying objects in digital images captured using mobile devices
CN112381775B (zh) Image tampering detection method, terminal device, and storage medium
US9412164B2 (en) Apparatus and methods for imaging system calibration
EP2879080B1 (en) Image processing device and method, and computer readable medium
TWI240067B (en) Rapid color recognition method
US11886492B2 (en) Method of matching image and apparatus thereof, device, medium and program product
WO2020082731A1 (zh) Electronic device, certificate recognition method, and storage medium
CN105590319A (zh) Deep-learning-based image saliency region detection method
US20180253852A1 (en) Method and device for locating image edge in natural background
CN110728722B (zh) Image color migration method and device, computer device, and storage medium
WO2018082308A1 (zh) Image processing method and terminal
Vanetti et al. Gas meter reading from real world images using a multi-net system
CN112396050B (zh) Image processing method, device, and storage medium
CN112651953A (zh) Picture similarity computation method and device, computer device, and storage medium
CN113469092A (zh) Character recognition model generation method and device, computer device, and storage medium
Mu et al. Finding autofocus region in low contrast surveillance images using CNN-based saliency algorithm
WO2021051580A1 (zh) Group-batch-based picture detection method and device, and storage medium
EP3435281B1 (en) Skin undertone determining method and an electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16918781

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16918781

Country of ref document: EP

Kind code of ref document: A1