CN113077508A - Image similarity matching method and device - Google Patents


Info

Publication number
CN113077508A
CN113077508A
Authority
CN
China
Prior art keywords
image
matching
template
corner
matched
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110354716.1A
Other languages
Chinese (zh)
Inventor
陈宇龙
陈晓春
邱华东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mohist Security Technology Co ltd
Original Assignee
Shenzhen Mohist Security Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mohist Security Technology Co ltd filed Critical Shenzhen Mohist Security Technology Co ltd
Priority to CN202110354716.1A priority Critical patent/CN113077508A/en
Publication of CN113077508A publication Critical patent/CN113077508A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20164Salient point detection; Corner detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image similarity matching method and device. The method comprises the following steps: a corner extraction step, in which corner features are extracted from the template image and the image to be detected by an image corner extraction method; a corner feature matching step, in which the corner features of the two images are compared to obtain the mutually matched corner features between them, the rectangular regions containing the mutually matched corners in the two images are extracted by setting anchor points, and the rectangular regions are respectively scaled into a template matching image and an image to be matched having the same resolution; a secondary corner extraction step, in which corner features are extracted from and matched between the template matching image and the image to be matched; and a similarity calculation step, in which the similarity is calculated from the template matching image and the image to be matched. The method has wider application scenarios and better universality, and matches accurately even when the image to be detected is a sub-image or part of the template image.

Description

Image similarity matching method and device
Technical Field
The invention relates to the field of digital image processing, in particular to an image similarity matching method and device.
Background
The similarity comparison of two images is widely applied in industry, agriculture, intellectual property and other fields. As the internet develops, digital images are becoming digital assets, and copyright protection and detection of digital assets will make great use of this technology.
Some currently used technologies have significant disadvantages. For example, image similarity comparison is often limited to comparing two whole images: the technology disclosed in Chinese patent CN202011059920.2 requires that the two commodity images to be compared have the same size, and divides each commodity image into at least two sub-images to compare similarity. The technology disclosed in Chinese patent CN202011239115.8 matches aluminum template images by building a fingerprint code library of aluminum template images. The technology disclosed in Chinese patent CN202011071088.8 calculates the common visible area of two targets to be matched and measures their similarity with a random matching strategy. These methods are customized for specific application scenarios and lack universality. In particular, when one image is not merely similar to another but is a sub-image or part of it, existing similarity comparison methods are often insufficient.
Disclosure of Invention
The present invention is directed to overcoming the above drawbacks of the prior art by providing an image similarity matching method and device with better universality.
In order to achieve this purpose, the invention adopts the following technical scheme. An image similarity matching method for comparing the similarity between a template image and an image to be detected comprises at least the following steps:
a corner extraction step, in which corner features are extracted from the template image and the image to be detected by an image corner extraction method;
a corner feature matching step, in which the corner features of the two images are compared to obtain the mutually matched corner features between them, the rectangular regions containing the mutually matched corners in the two images are extracted by setting anchor points, and the rectangular regions are respectively scaled into a template matching image and an image to be matched having the same resolution;
a secondary corner extraction step, in which corner features are extracted from and matched between the template matching image and the image to be matched;
and a similarity calculation step, in which the similarity is calculated from the template matching image and the image to be matched.
Further, before the corner extraction step, the method further comprises a preprocessing step: the template image and the image to be detected are first converted to grayscale, and the two grayscale images are then blurred to make the image features more distinct. In the preprocessing step, the blurring adopts one of Gaussian blur, median filtering or sharpening.
Further, in the corner extraction step, the image corner extraction method moves a fixed-size window in small steps from left to right and top to bottom over any region of the image, and compares the degree of gray-level change of the pixels in the window before and after each slide; if there is a large gray-level change, a corner point is considered to exist in the window, and the gray values of the surrounding pixels are recorded to obtain the corner feature.
Further, in the corner extraction step, the coordinate position of each corner point is obtained while its feature is extracted; the coordinate values of a corner point are its horizontal and vertical pixel distances from the image origin, and the image origin is the top-left, bottom-left, top-right or bottom-right vertex of the image.
Further, the corner feature matching step includes:
comparing the corner features of the template image and those of the image to be detected pairwise to obtain the matched corner features, which form a matching information table;
finding four extreme corner points of the template image in the matching information table and using them as anchor points: the top-left vertex t1, the bottom-left vertex t2, the top-right vertex t3 and the bottom-right vertex t4, and finding the correspondingly matched corner points in the image to be detected and using them as the four anchor points of the image to be detected;
the coordinates of the four anchor points of the template image being t1(xt1, yt1), t2(xt2, yt2), t3(xt3, yt3) and t4(xt4, yt4), taking Xmin = min(xt1, xt2, xt3, xt4), Ymin = min(yt1, yt2, yt3, yt4), Xmax = max(xt1, xt2, xt3, xt4) and Ymax = max(yt1, yt2, yt3, yt4), and constructing the template image rectangular matching region with point tmin(Xmin, Ymin) and point tmax(Xmax, Ymax) as two opposite vertices of a rectangle;
the coordinates of the four anchor points of the image to be detected being f1(xf1, yf1), f2(xf2, yf2), f3(xf3, yf3) and f4(xf4, yf4), taking Xmin = min(xf1, xf2, xf3, xf4), Ymin = min(yf1, yf2, yf3, yf4), Xmax = max(xf1, xf2, xf3, xf4) and Ymax = max(yf1, yf2, yf3, yf4), and constructing the rectangular matching region of the image to be detected with point fmin(Xmin, Ymin) and point fmax(Xmax, Ymax) as two opposite vertices of a rectangle;
and scaling the template image rectangular matching region and the rectangular matching region of the image to be detected into a template matching image and an image to be matched having the same resolution, respectively.
Further, in the corner feature matching step, if any one of the four extreme corner points of the template image matches more than one corner point of the image to be detected, the redundant matched corner points are removed using the parallel-line-and-length principle: the line connecting the two upper extreme corner points of the template image is parallel to the line connecting the corresponding extreme corner points of the image to be detected; the line connecting the two lower extreme corner points of the template image is parallel to the line connecting the corresponding extreme corner points of the image to be detected; and the ratio of the lengths of the upper and lower connecting lines of the template image is approximately equal to the corresponding ratio for the image to be detected.
Furthermore, in the corner feature matching step, redundant matches must be eliminated using the parallel-line principle before image scaling: the line from an anchor point of the template image to a given corner point is parallel to the line from the corresponding anchor point of the image to be detected to the corresponding matched corner point; matches that do not satisfy the parallel-line principle are redundant matches.
Further, in the secondary corner extraction step, corner features are extracted from the template matching image and the image to be matched by the image corner extraction method, the corner points of the two images are matched with each other after extraction, and the redundant matches are finally eliminated as follows: one of the top-left, bottom-left, top-right and bottom-right vertices of the corner set of the template matching image is selected as an anchor point; the line from this anchor point to a given corner of the template matching image must be parallel to the line from the corresponding anchor point of the image to be matched to the corresponding matched corner point; matches that do not satisfy this are redundant matches.
Further, the similarity calculation step includes:
first judging whether the ratio of the number of corner points of the image to be matched to that of the template matching image is less than 0.1, and if so, judging that the two images do not match and exiting;
calculating ratio one, the ratio of the number of matched corner points between the template matching image and the image to be matched to the number of corner points of the image to be matched;
calculating ratio two, the ratio of the area of the template matching image to the area of the template image;
and calculating the weighted sum of ratio one and ratio two to obtain the similarity between the template image and the image to be detected.
The invention also discloses an electronic device, comprising: a processor and a memory having computer readable instructions stored thereon which, when executed by the processor, implement the above method.
The invention also discloses a computer-readable storage medium on which a computer program is stored, which computer program, when being executed by a processor, carries out the above method.
Compared with the prior art, the invention has the following beneficial effects: the mutually matched corner features between two images are found by corner extraction and matching, the regions containing the matched corners are cropped according to anchor points to form images of the same resolution, corner extraction and matching are performed once more, and the similarity of the two images is finally calculated from the cropped images.
Drawings
FIG. 1 is a flowchart of an image similarity matching method according to the present invention.
It should be noted that the products shown in the above views are appropriately reduced or enlarged to fit the drawing sheet and keep the view clear; the views do not limit the size of the products.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the embodiments of the disclosure can be practiced without one or more of the specific details, or with other methods, components, materials, devices, steps, and so forth. In other instances, well-known structures, methods, devices, implementations, materials, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in the form of software, in one or more modules combining software and hardware, or in different networks and/or processor devices and/or microcontroller devices.
This embodiment is an image similarity matching method for comparing the similarity between a template image and an image to be detected. As shown in fig. 1, the image similarity matching method includes: a preprocessing step S1, a corner extraction step S2, a corner feature matching step S3, a secondary corner extraction step S4 and a similarity calculation step S5.
The image similarity matching method of this embodiment finds the mutually matched corner features between the two images by corner extraction and matching, crops the regions containing the matched corners according to anchor points to form images of the same resolution, performs corner extraction and matching once more, and finally calculates the similarity of the two images from the cropped images. Its application scenarios are wider and its universality better; in particular, when the image to be detected is a sub-image or part of the template image, it can match more accurately.
Each step is specifically described below.
The purpose of the preprocessing step S1 is to let subsequent steps compute image features more accurately. In the preprocessing step S1, the template image template_img and the image to be detected format_img are read in, and both are converted to grayscale to eliminate the influence of color. If an image has four RGBA color channels, the A channel (the transparency channel, common in png format pictures) is deleted before graying. The two grayscale images are then blurred to make the image features more distinct. In the preprocessing step S1, the blurring is one of Gaussian blur, median filtering and sharpening. The preprocessing step S1 also obtains the template image width and height (template_width, template_height) and the width and height of the image to be detected (format_width, format_height), which represent the resolutions of the template image and the image to be detected, respectively.
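As an illustrative sketch of the preprocessing step S1, assuming images are nested lists of pixels; the function names (to_gray, box_blur, preprocess), the luminance weights and the use of a mean filter are assumptions for the sketch, since the patent only requires graying plus one of Gaussian blur, median filtering or sharpening:

```python
def to_gray(img):
    """img: H x W x C nested list. Drops the A channel of RGBA pixels
    (px[3] is simply ignored) and grays with standard luminance weights."""
    return [[0.299 * px[0] + 0.587 * px[1] + 0.114 * px[2] for px in row]
            for row in img]

def box_blur(gray, k=3):
    """k x k mean filter, a simple stand-in for Gaussian blur or median
    filtering; borders are handled by clamping coordinates."""
    h, w = len(gray), len(gray[0])
    pad = k // 2

    def px(y, x):  # clamp at the image border
        return gray[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    return [[sum(px(y + dy, x + dx)
                 for dy in range(-pad, pad + 1)
                 for dx in range(-pad, pad + 1)) / (k * k)
             for x in range(w)] for y in range(h)]

def preprocess(img):
    """Gray, blur, and report (width, height) as the image resolution."""
    gray = to_gray(img)
    h, w = len(gray), len(gray[0])
    return box_blur(gray), (w, h)
```

In practice this would typically be done with a library (e.g. OpenCV); the sketch only fixes the data flow of step S1.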
The corner extraction step S2 extracts the corner features in the template image and the image to be detected by the image corner extraction method. In the corner extraction step S2, the image corner extraction method uses a fixed-size window (e.g. a rectangular window with a resolution of 5 × 5) that moves in small steps from left to right and top to bottom over any region of the image, comparing the gray-level change of the pixels in the window before and after each slide. If there is a large gray-level change, a corner point is considered to exist in the window; the gray values of the surrounding pixels are recorded, yielding a vector, i.e. the corner feature. All corner features obtained after traversing the whole image with the image corner extraction method form a (K × N) matrix, where K is the number of corners and N is the number of features; this matrix is a numerical description of the corner features of the picture. Further, in the corner extraction step S2, the coordinate position of each corner point is acquired while its feature is extracted. The coordinate values of a corner point are its horizontal and vertical pixel distances from the image origin. The image origin is the top-left, bottom-left, top-right or bottom-right vertex of the image. For example, in this embodiment, the origin of the template image is its top-left vertex with coordinates (0, 0), the coordinates of its bottom-right vertex are (template_width, template_height), and the coordinates of any point in the template image are (horizontal pixel distance x, vertical pixel distance y). Thus, after the corner extraction step S2, the corner features and corner positions of both the template image and the image to be detected are obtained.
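The window-sliding test described above is essentially a Moravec-style corner detector: shift the window a small amount in several directions and require a large gray-level change in all of them. A minimal sketch under that reading; the function name, the shift set and the threshold are assumptions, not from the patent:

```python
def moravec_corners(gray, win=2, thresh=100.0):
    """gray: nested list of gray values. For each pixel, compare the
    (2*win+1)^2 window with the window shifted by one pixel in four
    directions; if the minimum sum-of-squared-differences over all shifts
    is large, record a corner at (x, y) with the surrounding gray values
    as its feature vector."""
    h, w = len(gray), len(gray[0])
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]
    corners = []
    for y in range(win + 1, h - win - 1):
        for x in range(win + 1, w - win - 1):
            changes = []
            for dy, dx in shifts:
                c = 0.0
                for i in range(-win, win + 1):
                    for j in range(-win, win + 1):
                        d = gray[y + i][x + j] - gray[y + i + dy][x + j + dx]
                        c += d * d
                changes.append(c)
            if min(changes) > thresh:  # large change in every direction
                feat = [gray[y + i][x + j]
                        for i in range(-win, win + 1)
                        for j in range(-win, win + 1)]
                corners.append(((x, y), feat))
    return corners
```

Stacking the feature vectors of all K detected corners gives exactly the (K × N) matrix described in the text, here with N = 25 for a 5 × 5 window.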
In the corner feature matching step S3 of this embodiment, the corner features of the two images are compared to obtain the mutually matched corner features, and the rectangular regions containing the mutually matched corners in the two images are then extracted by setting anchor points and scaled into a template matching image and an image to be matched having the same resolution. Specifically, in the corner feature matching step S3, the corner features of the template image and those of the image to be detected are compared by traversal to obtain the matched corner features, which form a matching information table. Four extreme corner points of the template image are found in the matching information table and used as anchor points: the top-left vertex t1, the bottom-left vertex t2, the top-right vertex t3 and the bottom-right vertex t4; the correspondingly matched corner points in the image to be detected are found and used as the four anchor points of the image to be detected. Let the coordinates of the four anchor points of the template image be t1(xt1, yt1), t2(xt2, yt2), t3(xt3, yt3) and t4(xt4, yt4), and the coordinates of the four anchor points of the image to be detected be f1(xf1, yf1), f2(xf2, yf2), f3(xf3, yf3) and f4(xf4, yf4).
In implementation, if any one of the four extreme corner points of the template image matches more than one corner point of the image to be detected, the redundant matched corner points are removed using the parallel-line-and-length principle: the line connecting the two upper extreme corner points of the template image is parallel to the line connecting the corresponding extreme corner points of the image to be detected (i.e. t1t3 // f1f3); the line connecting the two lower extreme corner points of the template image is parallel to the corresponding line of the image to be detected (i.e. t2t4 // f2f4); and the ratio of the lengths of the upper and lower connecting lines of the template image is approximately equal to the corresponding ratio for the image to be detected (i.e. |t1t3| : |t2t4| ≈ |f1f3| : |f2f4|). Matches that do not satisfy these conditions are redundant matches and are removed.
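A possible check of the parallel-line-and-length principle uses the 2-D cross product for parallelism; the tolerances are assumptions of the sketch, since the patent only requires "parallel" and "approximately equal":

```python
import math

def parallel_length_ok(t1, t3, t2, t4, f1, f3, f2, f4,
                       angle_tol=1e-6, ratio_tol=0.05):
    """Check t1t3 // f1f3, t2t4 // f2f4, and
    |t1t3| : |t2t4| approximately equal to |f1f3| : |f2f4|.
    Points are (x, y) tuples."""
    def vec(a, b):
        return (b[0] - a[0], b[1] - a[1])

    def norm(u):
        return math.hypot(u[0], u[1])

    def parallel(u, v):
        # cross product near zero (relative to magnitudes) => parallel
        return abs(u[0] * v[1] - u[1] * v[0]) <= angle_tol * norm(u) * norm(v)

    top_t, top_f = vec(t1, t3), vec(f1, f3)
    bot_t, bot_f = vec(t2, t4), vec(f2, f4)
    if not (parallel(top_t, top_f) and parallel(bot_t, bot_f)):
        return False
    rt = norm(top_t) / norm(bot_t)   # template length ratio
    rf = norm(top_f) / norm(bot_f)   # detected-image length ratio
    return abs(rt - rf) <= ratio_tol * max(rt, rf)
```

A candidate set of four detected anchors that fails this test corresponds to a redundant match and would be discarded.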
Then the maximum and minimum values are taken over the four anchor points of the template image: Xmax = max(xt1, xt2, xt3, xt4), Ymax = max(yt1, yt2, yt3, yt4), Xmin = min(xt1, xt2, xt3, xt4) and Ymin = min(yt1, yt2, yt3, yt4). The template image rectangular matching region is constructed with point tmin(Xmin, Ymin) and point tmax(Xmax, Ymax) as two opposite vertices of a rectangle. In this embodiment, since the coordinates of the top-left vertex of the template image are (0, 0), point tmin(Xmin, Ymin) serves as the top-left vertex of the rectangle and point tmax(Xmax, Ymax) as its bottom-right vertex, constructing the template image rectangular matching region template_img2.
The maximum and minimum values are likewise taken over the four anchor points of the image to be detected: Xmax = max(xf1, xf2, xf3, xf4), Ymax = max(yf1, yf2, yf3, yf4), Xmin = min(xf1, xf2, xf3, xf4) and Ymin = min(yf1, yf2, yf3, yf4). The rectangular matching region of the image to be detected is constructed with point fmin(Xmin, Ymin) and point fmax(Xmax, Ymax) as two opposite vertices of a rectangle. In this embodiment, the coordinates of the top-left vertex of the image to be detected are also (0, 0), so point fmin(Xmin, Ymin) serves as the top-left vertex of the rectangle and point fmax(Xmax, Ymax) as its bottom-right vertex, constructing the rectangular matching region format_img2 of the image to be detected.
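The min/max construction of both rectangular matching regions reduces to an axis-aligned bounding box over the four anchors; a trivial sketch (the function name is illustrative):

```python
def matching_region(anchors):
    """anchors: four (x, y) anchor points. Returns ((Xmin, Ymin),
    (Xmax, Ymax)), the two opposite vertices of the rectangular
    matching region."""
    xs = [p[0] for p in anchors]
    ys = [p[1] for p in anchors]
    return (min(xs), min(ys)), (max(xs), max(ys))
```

Applied once to the template anchors and once to the detected-image anchors, this yields template_img2 and format_img2 respectively.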
Finally, the template image rectangular matching region template_img2 and the rectangular matching region format_img2 of the image to be detected are scaled into a template matching image template_img3 and an image to be matched format_img3 having the same resolution. The images should be scaled from large to small; scaling from small to large should be avoided as much as possible. Before scaling the two images, redundancy may remain in the matches, and the redundant matches are eliminated using the parallel-line principle: the line from an anchor point of the template image (e.g. t1) to a given corner point (say ti) is parallel to the line from the corresponding anchor point of the image to be detected (e.g. f1) to the corresponding matched corner point (say fi), i.e. t1ti // f1fi. Since ti has only one true matching point, only one parallel line can satisfy this condition; any match that does not satisfy the parallel-line principle is a redundant match and is removed.
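The scaling to a common resolution could use any standard resampler (e.g. cv2.resize in practice); a dependency-free nearest-neighbour sketch for illustration only:

```python
def resize_nn(img, out_w, out_h):
    """img: nested list of pixel rows. Nearest-neighbour resampling to an
    out_w x out_h image; applied to both rectangular matching regions so
    that template_img3 and format_img3 share one resolution."""
    h, w = len(img), len(img[0])
    return [[img[y * h // out_h][x * w // out_w] for x in range(out_w)]
            for y in range(out_h)]
```

Down-scaling the larger region toward the smaller one, as the text recommends, avoids inventing pixels that were never observed.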
The secondary corner extraction step S4 performs corner feature extraction and matching on the template matching image template_img3 and the image to be matched format_img3. Specifically, in the secondary corner extraction step S4, corner features are extracted from the template matching image and the image to be matched by the image corner extraction method, which is the same as in the corner extraction step S2. After the corner features are extracted, the corner points of the two images are matched with each other, in the same way as in the corner feature matching step S3. The resulting matches contain redundancy, so the redundant matches are finally removed by a parallel-line principle similar to that of the corner feature matching step S3: one of the top-left, bottom-left, top-right and bottom-right vertices of the corner set of the template matching image is selected as an anchor point; the line from this anchor point to a given corner of the template matching image must be parallel to the line from the corresponding anchor point of the image to be matched to the corresponding matched corner; matches that do not satisfy this are redundant and are eliminated.
The similarity calculation step S5 calculates the similarity between the template matching image template_img3 and the image to be matched format_img3. In the similarity calculation step S5, it is first judged whether the ratio of the number of corner points of the image to be matched to that of the template matching image is less than 0.1. If so, the template matching image template_img3 and the image to be matched format_img3 are unrelated or only weakly related; the two images are judged not to match, and the whole procedure exits.
In the secondary corner extraction step S4, corner matching between the template matching image and the image to be matched has already been performed, giving the number of successfully matched corners match_count and the number of corner points corner_num2 of the image to be matched itself. Ratio one, sp, is set as:
sp = match_count / corner_num2.
the length and the width are obtained by subtracting the upper left corner from the lower right corner of the template matching image template _ img3, and the area of the template matching image is obtained by multiplying the length and the width. The template image width height (template _ width, template _ height) is obtained in the preprocessing step, and the template image area is obtained by multiplying the template image width height and the template image height. The setting ratio of the second sa is as follows:
sa=(|x_max-x_min|*|y_max-y_min|)/(template_width*template_height)。
and finally, calculating the sum of the proportion I and the proportion II by adopting a weighting mode to obtain the Similarity between the template image and the image to be detected. Family ═ w1 sa + (1-w1) sp, where w1 and (1-w1) are the weights of sa and sp, respectively, and w1 ranges: 0< w1< 1. The weights may be adjusted according to certain scenes, mainly for design-like images and photographic-like images. The design class image will have more errors in corner matching and w1 will be set lower, thereby reducing the error rate. And the matching of the angular points of the photographic images is more accurate, and w1 is set to be higher.
In addition, in the embodiment of the invention, the electronic equipment capable of realizing the image similarity matching method is also provided.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module" or "system."
The electronic device takes the form of a general-purpose computing device. Its components may include, but are not limited to: at least one processing unit, at least one storage unit, a bus connecting different system components (including the storage unit and the processing unit), and a display unit.
Wherein the storage unit stores program code which is executable by the processing unit to cause the processing unit to perform steps according to various exemplary embodiments of the present invention as described in the above section "exemplary methods" of the present description. For example, the processing unit may perform all the steps of the image similarity matching method of the present invention.
The storage unit may include a readable medium in the form of a volatile storage unit, such as a random access memory (RAM) unit and/or a cache unit, and may further include a read-only memory (ROM) unit.
The storage unit may also include a program/utility having a set (at least one) of program modules including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The bus may be one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, and a processing unit or local bus using any of a variety of bus architectures.
The electronic device may also communicate with one or more external devices (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface. Also, the electronic device may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via a network adapter. As shown, the network adapter communicates with other modules of the electronic device over a bus. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash disk, or a removable hard disk) or on a network, and includes several instructions enabling a computing device (such as a personal computer, a server, a terminal device, or a network device) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the image similarity matching method described above in the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above-mentioned "exemplary methods" section of the description, when the program product is run on the terminal device.
A program product implementing the above method may take the form of a portable compact disc read-only memory (CD-ROM) containing program code, operable on a terminal device such as a personal computer. However, the program product of the present invention is not limited in this regard; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++, and conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the following claims.

Claims (10)

1. An image similarity matching method is used for comparing the similarity between a template image and an image to be detected, and is characterized by at least comprising the following steps:
a corner extraction step: extracting corner features from the template image and the image to be detected by an image corner extraction method;
a corner feature matching step: comparing the corner features of the two images to obtain the mutually matched corner features between them; extracting, by setting anchor points, the rectangular areas in the two images where the mutually matched corners lie; and scaling the rectangular areas respectively into a template matching image and an image to be matched, the template matching image and the image to be matched having the same resolution;
a secondary corner extraction step: extracting and matching corner features of the template matching image and the image to be matched respectively; and
a similarity calculation step: calculating the similarity according to the template matching image and the image to be matched.
2. The image similarity matching method according to claim 1, wherein the corner feature matching step includes:
comparing the corner features of the template image with the corner features of the image to be detected pairwise to obtain matched corner features, which form a matching information table,
and finding, in the matching information table, the four extreme corner points of the template image to serve as anchor points: the top-left vertex t1, bottom-left vertex t2, top-right vertex t3 and bottom-right vertex t4; and finding the correspondingly matched corner points in the image to be detected to serve as the four anchor points of the image to be detected;
the coordinates of the four anchor points of the template image are t1(xt1, yt1), t2(xt2, yt2), t3(xt3, yt3) and t4(xt4, yt4); taking Xmin = min(xt1, xt2, xt3, xt4), Ymin = min(yt1, yt2, yt3, yt4), Xmax = max(xt1, xt2, xt3, xt4) and Ymax = max(yt1, yt2, yt3, yt4), a template image rectangular matching area is constructed with the point tmin(Xmin, Ymin) and the point tmax(Xmax, Ymax) as two opposite vertices of the rectangle;
the coordinates of the four anchor points of the image to be detected are f1(xf1, yf1), f2(xf2, yf2), f3(xf3, yf3) and f4(xf4, yf4); taking Xmin = min(xf1, xf2, xf3, xf4), Ymin = min(yf1, yf2, yf3, yf4), Xmax = max(xf1, xf2, xf3, xf4) and Ymax = max(yf1, yf2, yf3, yf4), a rectangular matching area of the image to be detected is constructed with the point fmin(Xmin, Ymin) and the point fmax(Xmax, Ymax) as two opposite vertices of the rectangle;
and scaling the template image rectangular matching area and the rectangular matching area of the image to be detected respectively into a template matching image and an image to be matched having the same resolution.
3. The image similarity matching method according to claim 2, wherein, in the corner feature matching step, if any one of the four extreme corner points of the template image is found to match more than one corner point in the image to be detected, the redundant matching corner points are removed using the parallel-line and length principle.
4. The image similarity matching method according to claim 3, wherein the parallel-line and length principle is: the line connecting the two upper extreme corner points of the template image is parallel to the line connecting the corresponding two extreme corner points of the image to be detected; the line connecting the two lower extreme corner points of the template image is parallel to the line connecting the corresponding two extreme corner points of the image to be detected; and the ratio of the lengths of the upper and lower connecting lines of the template image is approximately equal to the corresponding ratio for the image to be detected.
5. The image similarity matching method according to claim 2, wherein, in the corner feature matching step, redundant matching needs to be eliminated using the parallel-line principle before image scaling; the parallel-line principle is: the line connecting one anchor point of the template image to a given corner point is parallel to the line connecting the corresponding anchor point of the image to be detected to the corresponding matched corner point; matches that do not satisfy the parallel-line principle are redundant matches.
6. The image similarity matching method according to claim 1, wherein in the secondary corner extraction step, the template matching image and the image to be matched are respectively subjected to corner feature extraction by an image corner extraction method, and after extraction, the corners of the two images are matched with each other.
7. The image similarity matching method according to claim 6, wherein, in the secondary corner extraction step, redundant matching is finally eliminated as follows: one of the top-left, bottom-left, top-right and bottom-right vertices in the corner set of the template matching image is selected as an anchor point; the line connecting this anchor point to a given corner of the template matching image must be parallel to the line connecting the corresponding anchor point of the image to be matched to the corresponding matched corner; matches that do not satisfy this condition are redundant matches.
8. The image similarity matching method according to claim 1, wherein the similarity calculating step includes:
firstly, judging whether the ratio of the number of corner points of the image to be matched to that of the template matching image is less than 0.1; if so, judging that the two images do not match and exiting;
calculating ratio one: the ratio of the number of matched corners between the template matching image and the image to be matched to the number of corners of the image to be matched;
calculating ratio two: the ratio of the area of the template matching image to the area of the template image;
and calculating the weighted sum of ratio one and ratio two to obtain the similarity between the template image and the image to be detected.
9. An electronic device, comprising:
a processor; and
a memory having computer readable instructions stored thereon which, when executed by the processor, implement the method of any of claims 1 to 8.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
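The anchor-rectangle construction and common-resolution scaling of claim 2 can be sketched as follows (NumPy only, with a naive nearest-neighbour resize standing in for a library scaler; the function names and the target size are illustrative, not from the patent):

```python
import numpy as np

def matching_region(anchors):
    """Bounding rectangle (Xmin, Ymin, Xmax, Ymax) of the four anchor
    corners: component-wise minima and maxima of their (x, y) coordinates."""
    xs = [x for x, _ in anchors]
    ys = [y for _, y in anchors]
    return min(xs), min(ys), max(xs), max(ys)

def crop_and_scale(img, anchors, size=(64, 64)):
    """Crop the rectangular matching area and rescale it so that the
    template matching image and the image to be matched share a resolution."""
    x0, y0, x1, y1 = matching_region(anchors)
    crop = img[y0:y1, x0:x1]
    h, w = crop.shape[:2]
    th, tw = size
    rows = np.arange(th) * h // th   # nearest-neighbour row indices
    cols = np.arange(tw) * w // tw   # nearest-neighbour column indices
    return crop[rows][:, cols]
```

In practice an interpolating scaler would replace the nearest-neighbour step, but the min/max construction of the rectangle follows the claim directly.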
CN202110354716.1A 2021-03-30 2021-03-30 Image similarity matching method and device Pending CN113077508A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110354716.1A CN113077508A (en) 2021-03-30 2021-03-30 Image similarity matching method and device


Publications (1)

Publication Number Publication Date
CN113077508A true CN113077508A (en) 2021-07-06

Family

ID=76614429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110354716.1A Pending CN113077508A (en) 2021-03-30 2021-03-30 Image similarity matching method and device

Country Status (1)

Country Link
CN (1) CN113077508A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101071505A (en) * 2007-06-18 2007-11-14 华中科技大学 Multi likeness measure image registration method
CN101097601A (en) * 2006-06-26 2008-01-02 北京航空航天大学 Image rapid edge matching method based on angle point guiding
CN104008542A (en) * 2014-05-07 2014-08-27 华南理工大学 Fast angle point matching method for specific plane figure
CN105701766A (en) * 2016-02-24 2016-06-22 网易(杭州)网络有限公司 Image matching method and device
CN106104575A (en) * 2016-06-13 2016-11-09 北京小米移动软件有限公司 Fingerprint template generates method and device
CN110148162A (en) * 2019-04-29 2019-08-20 河海大学 A kind of heterologous image matching method based on composition operators
KR20200083202A (en) * 2018-12-31 2020-07-08 경희대학교 산학협력단 Image similarity evaluation algorithm based on similarity condition of triangles



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination