CN108364024B - Image matching method and device, computer equipment and storage medium - Google Patents
- Publication number
- CN108364024B (application CN201810143256.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- shooting
- scanning
- password
- matching distance
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07D—HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
- G07D7/00—Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
- G07D7/20—Testing patterns thereon
- G07D7/2016—Testing patterns thereon using feature extraction, e.g. segmentation, edge detection or Hough-transformation
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07D—HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
- G07D7/00—Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
- G07D7/20—Testing patterns thereon
- G07D7/202—Testing patterns thereon using pattern matching
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The application relates to an image matching method and apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring a plurality of shot images and a plurality of scanned images; detecting each shot image and each scanned image to obtain the corresponding password area images; performing feature extraction on the password area images to obtain a plurality of feature points in the password area of each shot image and a plurality of feature points in the password area of each scanned image; and matching the password area feature points of each shot image against those of each scanned image to obtain the scanned image corresponding to each shot image. By adopting the method, the matching accuracy between the plurality of shot images and the plurality of scanned images can be improved.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image matching method and apparatus, a computer device, and a storage medium.
Background
With the development of computer technology, image matching techniques have emerged. They fall into two main categories: gray-level-based matching, which matches two images by their gray-level differences, and feature-based matching, which matches by extracting stable, common structural features from the two images. Enterprises currently apply image matching to invoice reimbursement, but the traditional approach generally uses only one of the two methods, which easily leads to low matching accuracy between multiple photographed invoice images and multiple scanned invoice images.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide an image matching method, apparatus, computer device, and storage medium capable of improving image matching accuracy.
An image matching method, the method comprising:
acquiring a plurality of input shot images and a plurality of input scanned images;
detecting each shot image and each scanned image to obtain a shooting password area image in each shot image and a scanning password area image in each scanned image;
performing feature extraction on each shooting password area image and each scanning password area image to obtain shooting feature points in each shooting password area image and scanning feature points in each scanning password area image;
calculating, according to each shooting feature point and each scanning feature point, the matching distance between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point;
selecting a target matching distance from the matching distances between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point; and
determining the shot image and the scanned image corresponding to the target matching distance as matching images.
In one embodiment, calculating the matching distance between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point includes: acquiring a current shooting feature point and combining it pairwise with each scanning feature point to obtain a plurality of feature point combinations; calculating, from the feature points in each combination, the matching distance between the shot image corresponding to the current shooting feature point and the scanned image corresponding to each scanning feature point; and acquiring the next shooting feature point as the current shooting feature point and returning to the pairing step, until the matching distance between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point is obtained.
In one embodiment, the step of calculating, from the feature points in each feature point combination, the matching distance between the shot image corresponding to the current shooting feature point and the scanned image corresponding to each scanning feature point includes: selecting a specific number of feature point combinations from the plurality of feature point combinations according to a preset rule; calculating the matching distance corresponding to each selected combination from its feature points; and calculating, from these matching distances and the specific number, the matching distance between the shot image corresponding to the current shooting feature point and the scanned image corresponding to each scanning feature point.
In one embodiment, the step of selecting a target matching distance from the matching distances between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point includes: forming a matching distance matrix from these matching distances; performing a first-dimension search on the matrix, and determining the minimum matching distance in each first dimension as a first matching distance; and performing a second-dimension search on the matrix, and determining a first matching distance as the target matching distance if it is also the minimum matching distance in its second dimension.
In one embodiment, before the step of determining the shot image and the scanned image corresponding to the target matching distance as matching images, the method further comprises: detecting whether the target matching distance is smaller than a preset matching distance; and, when the target matching distance is detected to be smaller than the preset matching distance, determining the shot image and the scanned image corresponding to the target matching distance as matching images.
In one embodiment, the step of detecting each shot image and each scanned image to obtain the shooting password area image in each shot image and the scanning password area image in each scanned image comprises: performing coarse detection on each shot image and each scanned image to obtain a shooting password coarse detection area in each shot image and a scanning password coarse detection area in each scanned image; processing each coarse detection area to obtain the edge image of each shooting password area and each scanning password area; performing contour tracking on each edge image to obtain each shooting password contour area and each scanning password contour area; and calculating, from the contour areas, the shooting password area image in each shot image and the scanning password area image in each scanned image.
In one embodiment, the method further comprises: denoising each shooting password coarse detection area and each scanning password coarse detection area to obtain the denoised coarse detection areas.
In one embodiment, the matching distance is calculated from the similarity between the shooting feature points in each shooting password area image and the scanning feature points in each scanning password area image.
In one embodiment, the shot image is a value-added tax invoice shot image and the scanned image is a value-added tax invoice scanned image; when the value-added tax invoice shot image and scanned image corresponding to the target matching distance are determined to match, the reimbursement is deemed successful.
An image matching apparatus, the apparatus comprising:
the image acquisition module is used for acquiring a plurality of input shot images and a plurality of input scanned images;
the image detection module is used for detecting each shot image and each scanned image to obtain a shooting password area image in each shot image and a scanning password area image in each scanned image;
the feature extraction module is used for performing feature extraction on each shooting password area image and each scanning password area image to obtain shooting feature points in each shooting password area image and scanning feature points in each scanning password area image;
the matching distance calculation module is used for calculating, according to each shooting feature point and each scanning feature point, the matching distance between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point;
the matching distance selection module is used for selecting a target matching distance from the matching distances between the shot images corresponding to the shooting feature points and the scanned images corresponding to the scanning feature points; and
the matching image detection module is used for determining the shot image and the scanned image corresponding to the target matching distance as matching images.
A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to implement the following steps:
acquiring a plurality of input shot images and a plurality of input scanned images;
detecting each shot image and each scanned image to obtain a shooting password area image in each shot image and a scanning password area image in each scanned image;
performing feature extraction on each shooting password area image and each scanning password area image to obtain shooting feature points in each shooting password area image and scanning feature points in each scanning password area image;
calculating, according to each shooting feature point and each scanning feature point, the matching distance between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point;
selecting a target matching distance from the matching distances between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point; and
determining the shot image and the scanned image corresponding to the target matching distance as matching images.
A computer-readable storage medium, on which is stored a computer program that, when executed by a processor, implements the following steps:
acquiring a plurality of input shot images and a plurality of input scanned images;
detecting each shot image and each scanned image to obtain a shooting password area image in each shot image and a scanning password area image in each scanned image;
performing feature extraction on each shooting password area image and each scanning password area image to obtain shooting feature points in each shooting password area image and scanning feature points in each scanning password area image;
calculating, according to each shooting feature point and each scanning feature point, the matching distance between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point;
selecting a target matching distance from the matching distances between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point; and
determining the shot image and the scanned image corresponding to the target matching distance as matching images.
According to the image matching method and apparatus, computer device, and storage medium above, the terminal acquires a plurality of shot images and a plurality of scanned images, detects each shot image and each scanned image to obtain the corresponding password area images, performs feature extraction on the password area images to obtain the feature points in the password area of each shot image and each scanned image, and matches the password area feature points of each shot image against those of each scanned image to obtain the scanned image corresponding to each shot image. Since the password area is the most distinctive region of the image, adding password area detection improves the matching accuracy between multiple photographed invoice images and multiple scanned invoice images.
Drawings
FIG. 1 is a diagram of an exemplary environment in which an image matching method may be implemented;
FIG. 2 is a flow diagram that illustrates a method for image matching, according to one embodiment;
FIG. 2a is a schematic flowchart of the step of generating the matching distances between the shot images corresponding to the shooting feature points and the scanned images corresponding to the scanning feature points in one embodiment;
FIG. 3 is a schematic flowchart of the step of calculating, according to the feature points in each feature point combination, the matching distance between the shot image corresponding to the current shooting feature point and the scanned image corresponding to each scanning feature point in one embodiment;
FIG. 4 is a schematic flowchart of the step of selecting a target matching distance from the matching distances between the shot images corresponding to the shooting feature points and the scanned images corresponding to the scanning feature points in one embodiment;
FIG. 5 is a schematic flowchart of the step of detecting each shot image and each scanned image to obtain the shooting password area image in each shot image and the scanning password area image in each scanned image in one embodiment;
FIG. 6 is a diagram of a value added tax invoice image in one embodiment;
FIG. 7 is a schematic diagram of an embodiment of an image matching method;
FIG. 8 is a block diagram showing the structure of an image matching apparatus according to an embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The image matching method provided by the application can be applied to the environment shown in FIG. 1, in which the terminal 102 communicates with the server 104 via a network. The server acquires a plurality of shot images and a plurality of corresponding scanned images sent by the terminal; detects each shot image and each scanned image to obtain a shooting password area image in each shot image and a scanning password area image in each scanned image; performs feature extraction on each shooting password area image and each scanning password area image to obtain shooting feature points in each shooting password area image and scanning feature points in each scanning password area image; calculates, according to each shooting feature point and each scanning feature point, the matching distance between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point; selects a target matching distance from these matching distances; and determines the shot image and the scanned image corresponding to the target matching distance as matching images. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smart phone, tablet computer, or portable wearable device, and the server 104 may be implemented as an independent server or as a cluster of servers.
In one embodiment, as shown in FIG. 2, an image matching method is provided, described by taking its application to the server or the terminal in FIG. 1 as an example, and comprising the following steps:

Step 202, acquiring a plurality of input shot images and a plurality of input scanned images.
A scanned image and a shot image may be images of the same document, and a plurality of shot images and a plurality of corresponding scanned images may be uploaded simultaneously; the shot images include, but are not limited to, value-added tax invoice images. Specifically, the plurality of shot images and the corresponding plurality of scanned images may be uploaded through a related application program on the terminal, which may be, but is not limited to, a financial application, video application, social network application, or any other application capable of uploading images.
Step 204, detecting each shot image and each scanned image to obtain a shooting password area image in each shot image and a scanning password area image in each scanned image.
The password area image is the most distinctive and characteristic area of a shot image. It may be, but is not limited to, the password area of a value-added tax invoice image, which is generated by the anti-counterfeiting tax-control invoicing system encrypting the main information on the invoice. The main information includes, but is not limited to, the invoice code, invoice number, invoicing date, purchaser tax number, seller tax number, amount, and tax amount. Specifically, after the plurality of shot images and the corresponding plurality of scanned images are acquired, they are detected to obtain the shooting password area image in each shot image and the scanning password area image in each scanned image.
Step 206, performing feature extraction on each shooting password area image and each scanning password area image to obtain shooting feature points in each shooting password area image and scanning feature points in each scanning password area image.
The password area image is the most distinctive and characteristic region of a shot image, and the main information in the password area differs from image to image, so the feature points extracted from a password area image are more discriminative than feature points from other regions of the image. Specifically, after the shooting password area image in each shot image and the scanning password area image in each scanned image are obtained, feature extraction is performed on each shooting password area image to obtain a plurality of discriminative shooting feature points; similarly, feature extraction is performed on each scanning password area image to obtain a plurality of discriminative scanning feature points.
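The text does not tie feature extraction to a particular algorithm. As one illustration only, a minimal sketch using OpenCV's ORB detector as a stand-in extractor might look as follows; ORB, the grayscale conversion, and the feature count are all assumptions:

```python
import cv2

def extract_feature_points(password_region):
    """Detect discriminative keypoints in a password area image and compute
    their descriptors (step 206, sketched with ORB as an assumed extractor)."""
    gray = (cv2.cvtColor(password_region, cv2.COLOR_BGR2GRAY)
            if password_region.ndim == 3 else password_region)
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors
```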
Step 208, calculating, according to each shooting feature point and each scanning feature point, the matching distance between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point.
Because the shooting feature points of each shooting password area image and the scanning feature points of each scanning password area image are obtained by feature extraction, each shooting feature point must be matched against each scanning feature point to calculate the matching distance between the corresponding shot image and scanned image. The matching distance is the similarity between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point, obtained by similarity calculation over the feature points. Specifically, after each shooting feature point and each scanning feature point are acquired, similarity calculation is performed on them to obtain a plurality of results, that is, the matching distances.
Step 210, selecting a target matching distance from the matching distances between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point.

Step 212, determining the shot image and the scanned image corresponding to the target matching distance as matching images.
The target matching distance is the matching distance, selected from the plurality of matching distances, that meets a preset requirement. Specifically, after the matching distances between the shot images corresponding to the shooting feature points and the scanned images corresponding to the scanning feature points are obtained, the target matching distance meeting the preset requirement is selected from them. Furthermore, since the target matching distance is calculated from a shooting feature point and a scanning feature point, and each shooting feature point has a corresponding shot image and each scanning feature point a corresponding scanned image, the shot image and the scanned image corresponding to the target matching distance are determined to be mutually matching images.
In the image matching method, the terminal acquires a plurality of shot images and a plurality of scanned images, detects each shot image and each scanned image to obtain the corresponding password area images, performs feature extraction on the password area images to obtain the feature points in the password area of each shot image and each scanned image, and matches the password area feature points of each shot image against those of each scanned image to obtain the scanned image corresponding to each shot image. Since the password area is the most distinctive region of the image, adding password area detection improves the matching accuracy between multiple photographed invoice images and multiple scanned invoice images.
As shown in FIG. 2a, in one embodiment, calculating, according to each shooting feature point and each scanning feature point, the matching distance between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point includes:
and 208a, acquiring current shooting feature points, and combining the current shooting feature points with each scanning feature point in pairs respectively to obtain a plurality of feature point combinations.
Feature extraction on each shooting password area image and each scanning password area image yields the shooting feature points and the scanning feature points, which then need to be matched against each other. The current shooting feature point is one randomly selected from the plurality of shooting feature points. Specifically, a shooting feature point is randomly selected as the current shooting feature point and combined pairwise with each scanning feature point to obtain the feature point combinations corresponding to it. For example, if the shooting feature points are A, B, and C and the scanning feature points are a, b, and c, then taking shooting feature point A as the current shooting feature point and combining it pairwise with each scanning feature point gives the combinations Aa, Ab, and Ac. Each of the shooting feature points A, B, and C comprises a plurality of feature points.
Step 208b, calculating, from the feature points in each feature point combination, the matching distance between the shot image corresponding to the current shooting feature point and the scanned image corresponding to each scanning feature point.
The matching distance is the result of calculating the similarity between feature points. Because each feature point combination comprises a shooting feature point and a scanning feature point, the similarity between the shot image corresponding to the current shooting feature point and the scanned image corresponding to each scanning feature point, that is, the matching distance, is calculated from the feature points in each combination. The smaller the matching distance, the greater the similarity between the shot image corresponding to the shooting feature point and the scanned image corresponding to the scanning feature point.
Step 208c, acquiring the next shooting feature point as the current shooting feature point, and returning to the step of combining the current shooting feature point pairwise with each scanning feature point, until the matching distance between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point is obtained.
Feature extraction on the password area images yields a plurality of shooting feature points and a plurality of scanning feature points, each of which needs to be matched. Therefore, one shooting feature point is randomly selected as the current shooting feature point and combined pairwise with each scanning feature point to calculate the matching distances for those combinations; then the next shooting feature point is selected as the current shooting feature point and likewise combined pairwise with each scanning feature point, and so on, until the matching distances between the shot images corresponding to all shooting feature points and the scanned images corresponding to all scanning feature points have been calculated.
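A minimal sketch of this loop (steps 208a to 208c), treating each shot image's descriptor set as the current shooting feature point; the pair_distance function is an assumption here and is sketched after step 306 below:

```python
import numpy as np

def matching_distance_matrix(shot_desc_sets, scan_desc_sets, pair_distance):
    """Pair each shot image's feature set with every scanned image's feature
    set and fill an m x n matrix of matching distances (steps 208a-208c)."""
    distances = np.empty((len(shot_desc_sets), len(scan_desc_sets)))
    for i, shot in enumerate(shot_desc_sets):      # current shooting feature point
        for j, scan in enumerate(scan_desc_sets):  # pairwise combination
            distances[i, j] = pair_distance(shot, scan)
    return distances
```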
In one embodiment, as shown in FIG. 3, the step of calculating, from the feature points in each feature point combination, the matching distance between the shot image corresponding to the current shooting feature point and the scanned image corresponding to each scanning feature point includes:
Step 302, selecting a specific number of feature point combinations from the plurality of feature point combinations according to a preset rule.

The feature points are the discriminative or representative points in the password area image of a shot or scanned image, but some extracted feature points are only weakly discriminative, so a specific number of strongly discriminative feature point combinations are selected from the plurality of combinations. For example, if the specific number is 10, the 10 most discriminative feature point combinations are selected.
Step 304, calculating the matching distance corresponding to each selected feature point combination from its feature points.
Step 306, calculating, from these matching distances and the specific number, the matching distance between the shot image corresponding to the current shooting feature point and the scanned image corresponding to each scanning feature point.
Specifically, after the specific number of feature point combinations are selected, similarity calculation is performed on the shooting feature point and the scanning feature point in each selected combination to obtain its matching distance. Since the average reflects the central tendency of a set of matching distances, the matching distance between the shot image corresponding to the shooting feature point and the scanned image corresponding to the scanning feature point is obtained by averaging the calculated matching distances over the specific number.
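A minimal sketch of steps 302 to 306 under the assumption of binary descriptors (such as ORB's) matched by brute-force Hamming distance; the preset rule is approximated by keeping the smallest-distance matches, and specific_number=10 mirrors the example in the text:

```python
import cv2
import numpy as np

def pair_distance(shot_descriptors, scan_descriptors, specific_number=10):
    """Match two descriptor sets, keep the specific number of most
    discriminative (smallest-distance) matches, and average their
    distances (steps 302-306, sketched)."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.match(shot_descriptors, scan_descriptors)
    best = sorted(matches, key=lambda m: m.distance)[:specific_number]
    if not best:
        return float("inf")  # no matches at all: treat as maximally distant
    return float(np.mean([m.distance for m in best]))
```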
In one embodiment, as shown in FIG. 4, the step of selecting a target matching distance from the matching distances between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point includes:
Step 402, forming a matching distance matrix from the matching distances between the shot images corresponding to the shooting feature points and the scanned images corresponding to the scanning feature points.
The matching distance matrix is the set of matching distances arranged in a rectangular array. Specifically, after the matching distances between the shot images corresponding to the shooting feature points and the scanned images corresponding to the scanning feature points are obtained, they are arranged into a rectangular matching distance matrix according to a certain rule, for example, placing the matching distances of the same shooting feature point in the same column.
Step 404, performing a first-dimension search on the matching distance matrix, and determining the minimum matching distance in each first dimension of the matrix as a first matching distance.

The first dimension may be, but is not limited to, the rows or the columns of the matching distance matrix. Specifically, since the matching distance matrix is a set of matching distances arranged in a rectangular array, it can be written as an m x n matrix, where m and n are positive integers. A first-dimension search then means searching the elements of each row of the matrix and determining the minimum matching distance in each row as a first matching distance.
Step 406, performing a second-dimension search on the matching distance matrix, and determining a first matching distance as the target matching distance if it is also the minimum matching distance in its second dimension.
Specifically, after the first-dimension search has produced a first matching distance, a second-dimension search is performed on the matching distance matrix, that is, the elements of the corresponding column are searched to detect whether the first matching distance is also the minimum in that column. If so, the first matching distance is determined to be a target matching distance meeting the requirement; otherwise, the shot image corresponding to the first matching distance is considered to have no matching scanned image.
It should be noted that the first dimension in step 404 may also be the columns of the matching distance matrix, in which case the second dimension in step 406 is the rows.
In one embodiment, before the step of determining the shot image and the scanned image corresponding to the target matching distance as matching images, the method further comprises: detecting whether the target matching distance is smaller than a preset matching distance; and, when the target matching distance is detected to be smaller than the preset matching distance, determining the shot image and the scanned image corresponding to the target matching distance as matching images.
That is, before the shot image and the scanned image corresponding to the target matching distance are determined as matching images, the selected target matching distance is checked against a preset matching distance. If the target matching distance is smaller than the preset matching distance, the shot image and the scanned image corresponding to it are determined to be mutually matching images; if it is greater than or equal to the preset matching distance, they are not. This second round of screening further improves the matching accuracy between shot images and scanned images.
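A minimal NumPy sketch combining the matrix search of steps 402 to 406 with this threshold check; the row/column orientation and the preset_distance parameter are illustrative:

```python
import numpy as np

def select_matches(distances, preset_distance):
    """First-dimension (row) search, second-dimension (column) check, and
    threshold screening of the target matching distance."""
    matches = []
    for i in range(distances.shape[0]):
        j = int(np.argmin(distances[i]))               # row minimum: first matching distance
        mutual = int(np.argmin(distances[:, j])) == i  # column check
        if mutual and distances[i, j] < preset_distance:
            matches.append((i, j))                     # shot image i matches scanned image j
    return matches
```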
In one embodiment, as shown in FIG. 5, the step of detecting each shot image and each scanned image to obtain the shooting password area image in each shot image and the scanning password area image in each scanned image comprises:
Step 502, performing coarse detection on each shot image and each scanned image to obtain a shooting password coarse detection area in each shot image and a scanning password coarse detection area in each scanned image.

Since the most distinctive region of a value-added tax invoice image is the password area, each shot image and each scanned image are coarsely detected first. Specifically, as shown in FIG. 6, a schematic diagram of a value-added tax invoice image in one embodiment, coarse detection is performed on each shot image and each scanned image by combining seal ellipse detection with the horizontal lines of the invoice, which yields the approximate password region of each image, that is, the shooting and scanning password coarse detection areas within the dashed box in FIG. 6.
Step 504, processing each shooting password coarse detection area and each scanning password coarse detection area to obtain the edge image of each shooting password area and each scanning password area.
The edge images are obtained by edge extraction on the shooting password coarse detection area of each value-added tax invoice shot image and the scanning password coarse detection area of each value-added tax invoice scanned image. Specifically, after the coarse detection areas are found, in order to locate the password areas more precisely, binary segmentation with Otsu's method (OTSU) is performed on each coarse detection area to obtain the edge image of each shooting password area and each scanning password area. OTSU is an algorithm for determining the binarization threshold of the coarse detection areas.
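A minimal OpenCV sketch of step 504, assuming 8-bit grayscale input; the Canny edge step is an assumption, since the text specifies only Otsu segmentation:

```python
import cv2

def password_area_edges(coarse_region_gray):
    """Binarize a coarse detection area with Otsu's threshold, then extract
    edges (step 504, sketched)."""
    _, binary = cv2.threshold(coarse_region_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.Canny(binary, 50, 150)
```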
Step 506, performing contour tracking on the edge image of each shooting password area and each scanning password area to obtain each shooting password contour area and each scanning password contour area.
Contour tracking traces a boundary by sequentially finding the edge points in an edge image. Specifically, after the edge images are computed, morphological closing and opening operations are applied to each of them in turn, and the boundaries of the password areas are then tracked by sequentially finding edge points in each edge image, yielding the shooting password contour area in each shot image and the scanning password contour area in each scanned image.
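A minimal sketch of step 506 with OpenCV; the 5x5 structuring element is an illustrative choice:

```python
import cv2

def password_contour_areas(edge_image):
    """Apply morphological closing then opening to connect the password
    region, then trace its contours (step 506, sketched)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    closed = cv2.morphologyEx(edge_image, cv2.MORPH_CLOSE, kernel)
    opened = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours
```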
Step 508, calculating, from each shooting password contour area and each scanning password contour area, the shooting password area image in each shot image and the scanning password area image in each scanned image.
Specifically, after the shooting password contour area in each shot image and the scanning password contour area in each scanned image are obtained, the minimum bounding rectangle of each contour area is computed in order to further isolate the password area. Each minimum bounding rectangle then undergoes a perspective transformation, a correction commonly applied to such contour areas, to obtain the corrected shooting password area image in each shot image and scanning password area image in each scanned image.
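A minimal sketch of step 508, taking OpenCV's minAreaRect as the minimum bounding rectangle; the output size and corner ordering are illustrative and may need normalization in practice:

```python
import cv2
import numpy as np

def rectify_password_area(image, contour, out_w=440, out_h=110):
    """Fit the minimum bounding rectangle to a password contour and warp it
    upright by perspective transformation (step 508, sketched)."""
    rect = cv2.minAreaRect(contour)
    corners = cv2.boxPoints(rect).astype(np.float32)
    target = np.float32([[0, out_h], [0, 0], [out_w, 0], [out_w, out_h]])
    matrix = cv2.getPerspectiveTransform(corners, target)
    return cv2.warpPerspective(image, matrix, (out_w, out_h))
```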
In one embodiment, the image matching method further comprises denoising each shooting password coarse detection area and each scanning password coarse detection area to obtain the denoised coarse detection areas.
In this embodiment, after the coarse detection areas are obtained, and because shot and scanned images are, in practice, affected by noise from the imaging equipment and the external environment during digitization and transmission, image denoising is applied to each detected shooting password coarse detection area and each scanning password coarse detection area. This yields the corresponding denoised coarse detection areas and allows the password areas to be located more accurately. Image denoising is the process of reducing noise in a digital image.
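The text does not fix a particular filter; as a sketch, Gaussian smoothing is one common choice for this denoising step:

```python
import cv2

def denoise_coarse_area(coarse_region_gray):
    """Suppress imaging and transmission noise in a coarse detection area
    before further processing (a Gaussian filter is assumed here)."""
    return cv2.GaussianBlur(coarse_region_gray, (3, 3), 0)
```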
In one embodiment, the matching distance is calculated from the similarity of the shooting feature points in each shooting password area image to the scanning feature points in each scanning password area image.
In this embodiment, the matching distance may be, but is not limited to being, characterized by similarity, which is obtained by similarity calculation between each shooting password area image and each scanning password area image. Specifically, after the shooting password area image in each shot image and the scanning password area image in each scanned image are obtained, feature points are extracted from each of them and matched, yielding the shooting feature points of each shooting password area image and the scanning feature points of each scanning password area image. The corresponding similarity, that is, the matching distance, can then be calculated for each shooting feature point and each scanning feature point using, but not limited to, a Gaussian distance formula.
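The "Gaussian distance formula" is not defined in the text; one plausible reading, sketched here purely as an assumption, is a Gaussian kernel over the Euclidean distance between two descriptors, mapped so that 0 means identical:

```python
import numpy as np

def gaussian_distance(feature_a, feature_b, sigma=1.0):
    """A hypothetical Gaussian distance between two feature descriptors;
    sigma is an illustrative parameter."""
    a = np.asarray(feature_a, dtype=float)
    b = np.asarray(feature_b, dtype=float)
    d2 = float(np.sum((a - b) ** 2))
    return 1.0 - float(np.exp(-d2 / (2.0 * sigma ** 2)))
```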
In one embodiment, the shot image is a value-added tax invoice shot image and the scanned image is a value-added tax invoice scanned image; when the value-added tax invoice shot image and scanned image corresponding to the target matching distance are determined to match, the reimbursement is deemed successful. In a specific application, as shown in FIG. 7, a value-added tax invoice shot image and scanned image determined to match each other indicate a successful reimbursement.
In one embodiment, an image matching method is provided, described by taking its application to the server or the terminal in FIG. 1 as an example, and comprising the following steps:
step 702, a plurality of input shot images and a plurality of input scanned images are acquired.
FIG. 7 shows a schematic diagram of image matching in one embodiment. After obtaining the paper value-added tax invoices, the reimbursing employee photographs them and uploads the corresponding value-added tax invoice shot images through a related application program, which identifies and checks them. If the check passes, the employee submits the paper invoices to the financial staff, who scan them to obtain the scanned images and upload the scanned image corresponding to each value-added tax invoice through a related application program.
Step 704, performing coarse detection on each shot image and each scanned image to obtain a shooting password coarse detection area in each shot image and a scanning password coarse detection area in each scanned image.
Step 706, processing each shooting password coarse detection area and each scanning password coarse detection area to obtain the edge image of each shooting password area and each scanning password area.
As shown in FIG. 7, after the shot image and the scanned image of each value-added tax invoice are acquired, they need to be matched automatically. Specifically, coarse detection is performed on each shot image and each scanned image by combining seal ellipse detection with the horizontal lines of the value-added tax invoice, yielding the approximate password region of each image. The edge image of each approximate region can then be computed using, but not limited to, OTSU.
Step 708, performing contour tracking on the edge image of each shooting password area and each scanning password area to obtain each shooting password contour area and each scanning password contour area.
Step 710, calculating, from each shooting password contour area and each scanning password contour area, the shooting password area image in each shot image and the scanning password area image in each scanned image.
Specifically, in order to locate the password areas accurately, morphological closing and opening operations are applied to the edge image of each shot image and each scanned image so that the password region of each edge image forms a connected area. Contour tracking is then performed on each edge image, and the shooting password contour area in each shot image and the scanning password contour area in each scanned image are screened out. The minimum bounding rectangle of each contour area is computed, and each minimum bounding rectangle undergoes a perspective transformation to obtain the corrected shooting password area image in each shot image and scanning password area image in each scanned image.
Step 712, performing feature extraction on each shooting password area image and each scanning password area image to obtain the shooting feature points in each shooting password area image and the scanning feature points in each scanning password area image.
Specifically, after the shooting password area image in each shot image and the scanning password area image in each scanned image are acquired, feature extraction is performed on each shooting password area image to obtain its discriminative shooting feature points; similarly, feature extraction is performed on each scanning password area image to obtain its discriminative scanning feature points.
Step 714, acquiring a current shooting feature point, and combining it pairwise with each scanning feature point to obtain a plurality of feature point combinations.
Feature extraction on the password area images yields the shooting feature points and the scanning feature points, which need to be matched. Specifically, a shooting feature point is randomly selected as the current shooting feature point and combined pairwise with each scanning feature point to obtain the combinations corresponding to it. For example, if the shooting feature points are A, B, and C and the scanning feature points are a, b, c, and d, then taking A as the current shooting feature point and combining it pairwise with each scanning feature point gives the combinations Aa, Ab, Ac, and Ad.
Step 716, selecting a specific number of feature point combinations from the plurality of feature point combinations according to a preset rule.
Specifically, feature points are the discriminative or representative points in the password area image of a shot or scanned image, but some extracted feature points are only weakly discriminative, so a specific number of strongly discriminative combinations are selected from the plurality of feature point combinations. For example, if the current shooting feature point and the scanning feature points form 20 combinations in total, the 10 most discriminative combinations are selected and taken as the feature point combinations corresponding to the current shooting feature point.
Step 718, calculating the matching distance corresponding to each selected feature point combination from its feature points.
Step 720, calculating, from these matching distances and the specific number, the matching distance between the shot image corresponding to the current shooting feature point and the scanned image corresponding to each scanning feature point.
Specifically, after the specific number of combinations are selected from all combinations of the current shooting feature point with each scanning feature point, similarity calculation is performed on the shooting and scanning feature points in each selected combination to obtain its matching distance. Since the average reflects the central tendency of these matching distances, the matching distance between the shot image and the scanned image is obtained by averaging them over the specific number. For example, if the current shooting feature point and the scanning feature points form 20 combinations in total, the 10 most discriminative combinations are selected, their matching distances are summed and divided by 10, and the result is the matching distance between the current shooting feature point and the corresponding scanning feature points.
Step 722, acquiring the next shooting feature point as the current shooting feature point, and returning to step 714 until the matching distance between the shot image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point is obtained.
Specifically, feature extraction on the password area images yields a plurality of shooting feature points and a plurality of scanning feature points, each of which needs to be matched. One shooting feature point is therefore selected as the current shooting feature point and combined pairwise with each scanning feature point to calculate the matching distances for those combinations; the next shooting feature point is then selected and processed in the same way, and so on, until all matching distances between the shot images and the scanned images have been calculated.
Step 724, forming a matching distance matrix from the matching distances between the shot images corresponding to the shooting feature points and the scanned images corresponding to the scanning feature points.
Step 726: perform a first-dimension search on the matching distance matrix, and determine the minimum matching distance along each first dimension of the matrix as a first matching distance.
Step 728: perform a second-dimension search on the matching distance matrix, and if the first matching distance is also the minimum matching distance along the second dimension, determine the first matching distance as the target matching distance.
Specifically, after the matching distances between the captured images corresponding to the respective shooting feature points and the scanned images corresponding to the respective scanning feature points are obtained, the matching distances are arranged into a rectangular matching distance matrix according to a certain rule, as shown in Table 1:
TABLE 1 Bi-directional matching matrix

Image | S1 | S2 | S3 | S4 | S5
---|---|---|---|---|---
P1 | 22 | 13 | 22 | 28 | 26
P2 | 11 | 26 | 24 | 23 | 21
P3 | 20 | 22 | 15 | 21 | 23
P4 | 29 | 23 | 27 | 18 | 25
P5 | 23 | 22 | 25 | 7 | 26
In the table above, S1–S5 represent scanned invoice images and P1–P5 represent photographed invoice images. The matching rule is that a photographed invoice image and a scanned invoice image match when the matching distance between them is the minimum in both directions. The matching process is illustrated as follows:
(a) For P1, first search its row for the minimum distance: the matching distance between P1 and S2, 13, is the smallest. Then search the S2 column: the matching distance between S2 and P1 is also the smallest in that column, so 13 is determined as a target matching distance.
(b) For P2, search its row: the matching distance between P2 and S1 is 11, the smallest. Searching the S1 column shows that the matching distance between S1 and P2 is also the smallest, so 11 is determined as a target matching distance.
(c) For P3, search its row: the matching distance between P3 and S3 is 15, the smallest. Searching the S3 column shows that the matching distance between S3 and P3 is also the smallest, so 15 is determined as a target matching distance.
(d) For P4, search its row: the matching distance between P4 and S4 is 18, the smallest. However, searching the S4 column shows that the smallest matching distance in that column is 7, between S4 and P5. The match for P4 is therefore deemed to have failed.
(e) For P5, search its row: the matching distance between P5 and S4 is 7, the smallest. Searching the S4 column confirms that the matching distance between S4 and P5 is also the smallest, so 7 is determined as a target matching distance.
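The bidirectional search above can be reproduced in a few lines of Python; the matrix values are taken from Table 1, and the preset distance threshold of step 730 appears here as a hypothetical max_dist parameter:

```python
import numpy as np

# Matching distance matrix from Table 1: rows are P1..P5, columns S1..S5.
D = np.array([
    [22, 13, 22, 28, 26],  # P1
    [11, 26, 24, 23, 21],  # P2
    [20, 22, 15, 21, 23],  # P3
    [29, 23, 27, 18, 25],  # P4
    [23, 22, 25,  7, 26],  # P5
])

def bidirectional_match(D, max_dist=30):
    # max_dist plays the role of the preset matching distance of step 730
    # (the value 30 is an illustrative assumption).
    matches = {}
    for p in range(D.shape[0]):
        s = int(np.argmin(D[p]))               # first-dimension (row) search
        mutual = int(np.argmin(D[:, s])) == p  # second-dimension (column) check
        if mutual and D[p, s] < max_dist:
            matches[f"P{p + 1}"] = (f"S{s + 1}", int(D[p, s]))
    return matches

print(bidirectional_match(D))
# {'P1': ('S2', 13), 'P2': ('S1', 11), 'P3': ('S3', 15), 'P5': ('S4', 7)}
# P4 is unmatched: its row minimum is S4, but the S4 column prefers P5.
```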
Step 730: detect whether the target matching distance is smaller than a preset matching distance; if so, proceed to step 732.
Specifically, in order to find the mutually matched images among the multiple captured images and multiple scanned images more accurately, it is necessary to detect whether the target matching distance is smaller than a preset matching distance. If it is, the process proceeds to step 732; otherwise, the match between the captured image and the scanned image corresponding to the target matching distance is deemed to have failed.
Step 732: determine the captured image and the scanned image corresponding to the target matching distance as matching images.
Specifically, after the matching distances between the captured images corresponding to the shooting feature points and the scanned images corresponding to the scanning feature points are obtained, the target matching distance that satisfies the preset requirement is selected from among them. Because the selected target matching distance is calculated from a shooting feature point and a scanning feature point, and the shooting feature point has a corresponding captured image while the scanning feature point has a corresponding scanned image, the captured image and the scanned image corresponding to the target matching distance are determined to be mutually matching images. For example, the 1st captured image and the 2nd scanned image corresponding to the target matching distance are determined as matching images. As shown in fig. 7, when the value-added tax invoice captured image and the value-added tax invoice scanned image are determined to match each other, the reimbursement is considered successful.
In this embodiment, by performing password region detection and a bidirectional matching strategy on the captured images and the scanned images, the matching accuracy between the captured images and the scanned images is improved; moreover, automatically matching multiple captured and scanned images reduces the workload of financial staff and effectively improves the efficiency of reimbursing value-added tax invoices.
It should be understood that, although the steps in the above flowcharts are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided an image matching apparatus 900 including: an image acquisition module 902, an image detection module 904, a feature extraction module 906, a matching distance calculation module 908, a matching distance selection module 910, and a matching image detection module 912, wherein:
An image acquisition module 902, configured to acquire a plurality of input captured images and a plurality of input scanned images.
An image detection module 904, configured to detect each captured image and each scanned image to obtain a captured password region image in each captured image and a scanned password region image in each scanned image.
A feature extraction module 906, configured to perform feature extraction on each captured password region image and each scanned password region image to obtain the shooting feature points in each captured password region image and the scanning feature points in each scanned password region image.
A matching distance calculation module 908, configured to calculate, according to each shooting feature point and each scanning feature point, the matching distance between the captured image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point.
A matching distance selection module 910, configured to select a target matching distance from the matching distances between the captured image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point.
A matching image detection module 912, configured to determine the captured image and the scanned image corresponding to the target matching distance as matching images.
In one embodiment, the matching distance calculation module 908 comprises:
A current shooting feature point obtaining unit (not shown in the figure), configured to obtain a current shooting feature point and combine the current shooting feature point with each scanning feature point in pairs to obtain a plurality of feature point combinations.
A matching distance calculation unit (not shown in the figure), configured to calculate, according to the feature points in each feature point combination, the matching distance between the captured image corresponding to the current shooting feature point and the scanned image corresponding to each scanning feature point.
The current shooting feature point obtaining unit is further configured to obtain the next shooting feature point as the current shooting feature point, and the matching distance calculation unit is further configured to combine the current shooting feature point with each scanning feature point in pairs to obtain a plurality of feature point combinations, until the matching distance between the captured image corresponding to each shooting feature point and the scanned image corresponding to each scanning feature point is obtained.
In one embodiment, the matching distance calculation module 908 is further configured to select a specific number of feature point combinations from the plurality of feature point combinations according to a preset rule; calculate the matching distance corresponding to each selected feature point combination according to the feature points in that combination; and calculate, according to these matching distances and the specific number, the matching distance between the captured image corresponding to the current shooting feature point and the scanned image corresponding to each scanning feature point.
In one embodiment, the matching distance selection module 910 includes:
A matching distance matrix generating unit (not shown in the figure), configured to form a matching distance matrix from the matching distances between the captured images corresponding to the shooting feature points and the scanned images corresponding to the scanning feature points.
A first searching unit (not shown in the figure), configured to perform a first-dimension search on the matching distance matrix and determine the minimum matching distance along each first dimension of the matrix as a first matching distance.
A second searching unit (not shown in the figure), configured to perform a second-dimension search on the matching distance matrix and, if the first matching distance is also the minimum matching distance along the second dimension, determine the first matching distance as the target matching distance.
In one embodiment, the image matching apparatus 900 further includes a target matching distance detection module (not shown in the figure), wherein:
The target matching distance detection module is configured to detect whether the target matching distance is smaller than a preset matching distance. When the target matching distance is smaller than the preset matching distance, processing proceeds to the matching image detection module 912.
In one embodiment, the image detection module 904 comprises a coarse detection unit (not shown), a calculation unit (not shown), a tracking unit (not shown), and a password region image acquisition unit (not shown), wherein:
The coarse detection unit is configured to perform coarse detection on each captured image and each scanned image to obtain a captured password coarse detection area in each captured image and a scanned password coarse detection area in each scanned image.
The calculation unit is configured to perform calculation on each captured password coarse detection area and each scanned password coarse detection area to obtain each captured password region edge image and each scanned password region edge image.
The tracking unit is configured to perform contour tracking on each captured password region edge image and each scanned password region edge image to obtain each captured password contour region and each scanned password contour region.
The password region image acquisition unit is configured to calculate, according to each captured password contour region and each scanned password contour region, the captured password region image in each captured image and the scanned password region image in each scanned image.
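A minimal sketch of such a detection pipeline is shown below, assuming OpenCV. The coarse detector is stubbed out as a given bounding box, and the Canny operator for the edge image and findContours for contour tracking are illustrative assumptions; the embodiment does not name specific operators:

```python
import cv2

def extract_password_region(image_bgr, coarse_box):
    # coarse_box = (x, y, w, h): the coarsely detected password area, located
    # e.g. relative to the invoice seal and horizontal lines; how the coarse
    # detector itself works is not fixed by this sketch.
    x, y, w, h = coarse_box
    roi = image_bgr[y:y + h, x:x + w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    gray = cv2.fastNlMeansDenoising(gray)  # optional denoising of the coarse area
    edges = cv2.Canny(gray, 50, 150)       # password region edge image
    # Contour tracking over the edge image; keep the largest contour as the
    # password contour region and crop its bounding box.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return roi
    cx, cy, cw, ch = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return roi[cy:cy + ch, cx:cx + cw]     # final password region image
```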
In one embodiment, the image matching apparatus 900 is further configured to perform denoising on each captured password coarse detection area and each scanned password coarse detection area, respectively, to obtain denoised captured password coarse detection areas and denoised scanned password coarse detection areas.
In one embodiment, the matching distance is calculated from the similarity between the shooting feature points in each captured password region image and the scanning feature points in each scanned password region image.
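As an illustration of such a similarity-based distance, the sketch below uses ORB binary descriptors and Hamming distance via OpenCV. The descriptor type is an assumption, since this embodiment does not specify one, and the file names are hypothetical password region images produced by the detection step:

```python
import cv2

# Hypothetical file names for the two password region images.
photo_region = cv2.imread("photo_password_region.png", cv2.IMREAD_GRAYSCALE)
scan_region = cv2.imread("scan_password_region.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp_photo, desc_photo = orb.detectAndCompute(photo_region, None)  # shooting feature points
kp_scan, desc_scan = orb.detectAndCompute(scan_region, None)     # scanning feature points

# Hamming distance between binary descriptors is the per-combination
# matching distance: the smaller the distance, the higher the similarity.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = sorted(matcher.match(desc_photo, desc_scan), key=lambda m: m.distance)
```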
In one embodiment, the photographed image is a value added tax invoice photographed image, the scanned image is a value added tax invoice scanned image, and successful reimbursement is indicated when the value added tax invoice photographed image and the value added tax invoice scanned image corresponding to the target matching distance are determined to be matched.
For specific limitations of the image matching apparatus, reference may be made to the above limitations of the image matching method, which are not repeated here. Each module in the above image matching apparatus may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored in software in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements an image matching method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse. Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of part of the structure related to the present application and does not limit the computer devices to which the present application may be applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program: acquiring a plurality of input shot images and a plurality of input scanning images; detecting each shot image and each scanned image to obtain a shot password area image in each shot image and a scanned password area image in each scanned image; performing feature extraction on each shooting password area image and each scanning password area image to obtain shooting feature points in each shooting password area image and scanning feature points in each scanning password area image; calculating to obtain the matching distance between the shot image corresponding to each shooting characteristic point and the scanned image corresponding to each scanning characteristic point according to each shooting characteristic point and each scanning characteristic point; selecting a target matching distance from matching distances between the shot image corresponding to each shooting characteristic point and the scanned image corresponding to each scanning characteristic point; and determining the shot image and the scanned image corresponding to the target matching distance as matching images.
In one embodiment, calculating a matching distance between the captured image corresponding to each capturing feature point and the scanned image corresponding to each scanning feature point according to each capturing feature point and each scanning feature point includes: acquiring current shooting feature points, and combining the current shooting feature points with each scanning feature point in pairs to obtain a plurality of feature point combinations; calculating to obtain the matching distance between the shot image corresponding to the current shooting characteristic point and the scanned image corresponding to each scanning characteristic point according to the characteristic points in each characteristic point combination; and obtaining the next shooting feature point as the current shooting feature point, returning to the step of combining the current shooting feature point with each scanning feature point in pairs respectively to obtain a plurality of feature point combinations until obtaining the matching distance between the shooting image corresponding to each shooting feature point and the scanning image corresponding to each scanning feature point.
In one embodiment, the step of calculating the matching distance between the captured image corresponding to the current capturing feature point and the scanned image corresponding to each scanning feature point according to the feature points in each feature point combination includes: selecting a specific number of feature point combinations from the plurality of feature point combinations according to a preset rule; calculating the matching distance corresponding to each selected feature point combination according to the feature points in that combination; and calculating, according to these matching distances and the specific number, the matching distance between the captured image corresponding to the current shooting feature point and the scanned image corresponding to each scanning feature point.
In one embodiment, the step of selecting a target matching distance from matching distances between the captured image corresponding to each capturing feature point and the scanned image corresponding to each scanning feature point includes: matching distances between the shot images corresponding to the shooting characteristic points and the scanned images corresponding to the scanning characteristic points form a matching distance matrix; performing first-dimension searching on the matching distance matrix, and determining the minimum matching distance in each first-dimension matching distance in the searched matching distance matrix as a first matching distance; and searching a second dimension of the matching distance matrix, and determining the first matching distance as a target matching distance if the matching distance of the first matching distance in the second dimension is the minimum.
In one embodiment, the step of determining the captured image and the scanned image corresponding to the target matching distance as the matching image is preceded by: detecting whether the target matching distance is smaller than a preset matching distance; and when the target matching distance is detected to be smaller than the preset matching distance, determining the shot image and the scanned image corresponding to the target matching distance as matching images.
In one embodiment, the step of detecting each captured image and each scanned image to obtain the captured password region image in each captured image and the scanned password region image in each scanned image comprises: carrying out coarse detection on each shot image and each scanned image to obtain a shot password coarse detection area in each shot image and a scanned password coarse detection area in each scanned image; calculating each shooting password coarse detection area and each scanning password coarse detection area to obtain each shooting password area edge image and each scanning password area edge image; carrying out contour tracking on the edge image of each shot password area and the edge image of each scanned password area to obtain each shot password contour area and each scanned password contour area; and calculating to obtain shooting password area images in the shooting images and scanning password area images in the scanning images according to the shooting password outline areas and the scanning password outline areas.
In one embodiment, the method further comprises: and respectively carrying out denoising treatment on each shooting password coarse detection area and each scanning password coarse detection area to obtain each shooting password coarse detection area and each scanning password coarse detection area after denoising.
In one embodiment, the matching distance is calculated from the similarity between the shooting feature points in each captured password region image and the scanning feature points in each scanned password region image.
In one embodiment, the photographed image is a value added tax invoice photographed image, the scanned image is a value added tax invoice scanned image, and successful reimbursement is indicated when the value added tax invoice photographed image and the value added tax invoice scanned image corresponding to the target matching distance are determined to be matched.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: acquiring a plurality of input shot images and a plurality of input scanning images; detecting each shot image and each scanned image to obtain a shot password area image in each shot image and a scanned password area image in each scanned image; performing feature extraction on each shooting password area image and each scanning password area image to obtain shooting feature points in each shooting password area image and scanning feature points in each scanning password area image; calculating to obtain the matching distance between the shot image corresponding to each shooting characteristic point and the scanned image corresponding to each scanning characteristic point according to each shooting characteristic point and each scanning characteristic point; selecting a target matching distance from matching distances between the shot image corresponding to each shooting characteristic point and the scanned image corresponding to each scanning characteristic point; and determining the shot image and the scanned image corresponding to the target matching distance as matching images.
In one embodiment, calculating a matching distance between the captured image corresponding to each capturing feature point and the scanned image corresponding to each scanning feature point according to each capturing feature point and each scanning feature point includes: acquiring current shooting feature points, and combining the current shooting feature points with each scanning feature point in pairs to obtain a plurality of feature point combinations; calculating to obtain the matching distance between the shot image corresponding to the current shooting characteristic point and the scanned image corresponding to each scanning characteristic point according to the characteristic points in each characteristic point combination; and obtaining the next shooting feature point as the current shooting feature point, returning to the step of combining the current shooting feature point with each scanning feature point in pairs respectively to obtain a plurality of feature point combinations until obtaining the matching distance between the shooting image corresponding to each shooting feature point and the scanning image corresponding to each scanning feature point.
In one embodiment, the step of calculating the matching distance between the captured image corresponding to the current capturing feature point and the scanned image corresponding to each scanning feature point according to the feature points in each feature point combination includes: selecting a specific number of feature point combinations from the plurality of feature point combinations according to a preset rule; calculating the matching distance corresponding to each selected feature point combination according to the feature points in that combination; and calculating, according to these matching distances and the specific number, the matching distance between the captured image corresponding to the current shooting feature point and the scanned image corresponding to each scanning feature point.
In one embodiment, the step of selecting a target matching distance from matching distances between the captured image corresponding to each capturing feature point and the scanned image corresponding to each scanning feature point includes: matching distances between the shot images corresponding to the shooting characteristic points and the scanned images corresponding to the scanning characteristic points form a matching distance matrix; performing first-dimension searching on the matching distance matrix, and determining the minimum matching distance in each first-dimension matching distance in the searched matching distance matrix as a first matching distance; and searching a second dimension of the matching distance matrix, and determining the first matching distance as a target matching distance if the matching distance of the first matching distance in the second dimension is the minimum.
In one embodiment, the step of determining the captured image and the scanned image corresponding to the target matching distance as the matching image is preceded by: detecting whether the target matching distance is smaller than a preset matching distance; and when the target matching distance is detected to be smaller than the preset matching distance, determining the shot image and the scanned image corresponding to the target matching distance as matching images.
In one embodiment, the step of detecting each captured image and each scanned image to obtain the captured password region image in each captured image and the scanned password region image in each scanned image comprises: carrying out coarse detection on each shot image and each scanned image to obtain a shot password coarse detection area in each shot image and a scanned password coarse detection area in each scanned image; calculating each shooting password coarse detection area and each scanning password coarse detection area to obtain each shooting password area edge image and each scanning password area edge image; carrying out contour tracking on the edge image of each shot password area and the edge image of each scanned password area to obtain each shot password contour area and each scanned password contour area; and calculating to obtain shooting password area images in the shooting images and scanning password area images in the scanning images according to the shooting password outline areas and the scanning password outline areas.
In one embodiment, the method further comprises: and respectively carrying out denoising treatment on each shooting password coarse detection area and each scanning password coarse detection area to obtain each shooting password coarse detection area and each scanning password coarse detection area after denoising.
In one embodiment, the matching distance is calculated from the similarity between the shooting feature points in each captured password region image and the scanning feature points in each scanned password region image.
In one embodiment, the photographed image is a value added tax invoice photographed image, the scanned image is a value added tax invoice scanned image, and successful reimbursement is indicated when the value added tax invoice photographed image and the value added tax invoice scanned image corresponding to the target matching distance are determined to be matched.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as such combinations contain no contradiction, they should be considered within the scope of this specification.
The above embodiments merely express several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (15)
1. An image matching method, characterized in that the method comprises:
acquiring a plurality of input shot images and a plurality of input scanning images, wherein the shot images are invoice shot images, and the scanning images are invoice scanning images; the invoice shooting image and the invoice scanning image both comprise a seal and a transverse line;
detecting each shot image and each scanned image according to the seal and the transverse line to obtain a shot password area image in each shot image and a scanned password area image in each scanned image;
performing feature extraction on each shooting password area image and each scanning password area image to obtain a shooting feature point in each shooting password area image and a scanning feature point in each scanning password area image;
calculating to obtain a matching distance between a shot image corresponding to each shooting characteristic point and a scanned image corresponding to each scanning characteristic point according to each shooting characteristic point and each scanning characteristic point;
matching distances between the shot images corresponding to the shooting characteristic points and the scanning images corresponding to the scanning characteristic points form a matching distance matrix;
performing first-dimension searching on the matching distance matrix, and determining the minimum matching distance in each first-dimension matching distance in the searched matching distance matrix as a first matching distance;
performing second-dimension searching on the matching distance matrix, and determining the first matching distance as a target matching distance if the matching distance of the first matching distance in the second dimension is the minimum;
and determining the shot image and the scanned image corresponding to the target matching distance as matching images.
2. The method according to claim 1, wherein the calculating a matching distance between the captured image corresponding to each of the capturing feature points and the scanned image corresponding to each of the scanning feature points according to each of the capturing feature points and each of the scanning feature points comprises:
acquiring current shooting feature points, and combining the current shooting feature points with each scanning feature point in pairs to obtain a plurality of feature point combinations;
calculating to obtain a matching distance between the shot image corresponding to the current shooting characteristic point and the scanning image corresponding to each scanning characteristic point according to the characteristic points in each characteristic point combination;
and acquiring the next shooting feature point as the current shooting feature point, and returning to the step of combining the current shooting feature point with each scanning feature point in pairs respectively to acquire a plurality of feature point combinations until acquiring the matching distance between the shooting image corresponding to each shooting feature point and the scanning image corresponding to each scanning feature point.
3. The method according to claim 2, wherein the step of calculating the matching distance between the captured image corresponding to the current capturing feature point and the scanned image corresponding to each scanning feature point according to the feature points in each feature point combination comprises:
selecting a specific number of characteristic point combinations from the plurality of characteristic point combinations according to a preset rule;
calculating to obtain a matching distance corresponding to the feature point combination according to the feature points in the selected feature point combination;
and calculating the matching distance between the shot image corresponding to the current shooting characteristic point and the scanned image corresponding to each scanning characteristic point according to the matching distance and the specific number.
4. The method according to claim 2, wherein the step of obtaining a current shooting feature point, and combining the current shooting feature point with each scanning feature point in pairs to obtain a plurality of feature point combinations comprises:
randomly selecting current shooting feature points from the shooting feature points;
and combining the current shooting characteristic points with each scanning characteristic point in pairs respectively to obtain a plurality of characteristic point combinations.
5. The method according to claim 1, wherein the step of determining the photographed image and the scanned image corresponding to the target matching distance as matching images is preceded by:
detecting whether the target matching distance is smaller than a preset matching distance;
and when the target matching distance is detected to be smaller than a preset matching distance, determining the shot image and the scanned image corresponding to the target matching distance as matching images.
6. The method according to claim 1, wherein said step of detecting each of said captured images and each of said scanned images based on said stamp and said cross-bar to obtain a captured password region image in each of said captured images and a scanned password region image in each of said scanned images comprises:
carrying out coarse detection on each shot image and each scanned image according to the seal and the transverse line to obtain a shot password coarse detection area in each shot image and a scanned password coarse detection area in each scanned image;
calculating each shooting password coarse detection area and each scanning password coarse detection area to obtain each shooting password area edge image and each scanning password area edge image;
carrying out contour tracking on each shooting password area edge image and each scanning password area edge image to obtain each shooting password contour area and each scanning password contour area;
and calculating to obtain shooting password area images in the shooting images and scanning password area images in the scanning images according to the shooting password outline areas and the scanning password outline areas.
7. The method of claim 6, further comprising:
and denoising each shooting password coarse detection area and each scanning password coarse detection area respectively to obtain each shooting password coarse detection area and each scanning password coarse detection area after denoising.
8. The method of claim 1, wherein the matching distance is calculated by similarity of the shooting feature points in each of the captured password region images to the scanning feature points in each of the scanned password region images.
9. The method according to claim 1, wherein the photographed image is a value-added tax invoice photographed image, and the scanned image is a value-added tax invoice scanned image, and when the value-added tax invoice photographed image and the value-added tax invoice scanned image corresponding to the target matching distance are determined to be matched, successful reimbursement is indicated.
10. An image matching apparatus, characterized in that the apparatus comprises:
the system comprises an image acquisition module, a storage module and a display module, wherein the image acquisition module is used for acquiring a plurality of input shooting images and a plurality of scanning images, the shooting images are invoice shooting images, and the scanning images are invoice scanning images; the invoice shooting image and the invoice scanning image both comprise a seal and a transverse line;
the image detection module is used for detecting each shot image and each scanned image according to the seal and the transverse line to obtain a shot password area image in each shot image and a scanned password area image in each scanned image;
the characteristic extraction module is used for extracting the characteristics of each shooting password area image and each scanning password area image to obtain shooting characteristic points in each shooting password area image and scanning characteristic points in each scanning password area image;
the matching distance calculation module is used for calculating the matching distance between the shot image corresponding to each shooting characteristic point and the scanning image corresponding to each scanning characteristic point according to each shooting characteristic point and each scanning characteristic point;
the matching distance selection module is used for forming a matching distance matrix from matching distances between the shot images corresponding to the shooting characteristic points and the scanned images corresponding to the scanning characteristic points;
performing first-dimension searching on the matching distance matrix, and determining the minimum matching distance in each first-dimension matching distance in the searched matching distance matrix as a first matching distance;
performing second-dimension searching on the matching distance matrix, and determining the first matching distance as a target matching distance if the matching distance of the first matching distance in the second dimension is the minimum; and the matching image detection module is used for determining the shot image and the scanned image corresponding to the target matching distance as matching images.
11. The apparatus of claim 10, wherein the matching distance calculation module comprises:
a current shooting feature point obtaining unit, configured to obtain a current shooting feature point, and combine the current shooting feature point with each scanning feature point in pairs to obtain a plurality of feature point combinations;
a matching distance calculation unit, configured to calculate, according to feature points in each feature point combination, a matching distance between the captured image corresponding to the current capturing feature point and the scanned image corresponding to each scanning feature point;
the current shooting feature point obtaining unit is further used for obtaining the next shooting feature point as the current shooting feature point, and the matching distance calculating unit is further used for combining the current shooting feature point with each scanning feature point in pairs to obtain a plurality of feature point combinations until obtaining the matching distance between the shooting image corresponding to each shooting feature point and the scanning image corresponding to each scanning feature point.
12. The apparatus according to claim 11, wherein the current capturing feature point obtaining unit is further configured to:
randomly selecting current shooting feature points from the shooting feature points;
and combining the current shooting characteristic points with each scanning characteristic point in pairs respectively to obtain a plurality of characteristic point combinations.
13. The apparatus of claim 10, wherein the image detection module comprises:
the rough detection unit is used for carrying out rough detection on each shot image and each scanned image to obtain a shot password rough detection area in each shot image and a scanned password rough detection area in each scanned image;
the calculation unit is used for calculating each shooting password coarse detection area and each scanning password coarse detection area to obtain each shooting password area edge image and each scanning password area edge image;
the tracking unit is used for carrying out contour tracking on the edge image of each shooting password area and the edge image of each scanning password area to obtain each shooting password contour area and each scanning password contour area;
and the password area image acquisition unit is used for calculating to obtain shooting password area images in the shooting images and scanning password area images in the scanning images according to the shooting password outline areas and the scanning password outline areas.
14. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 9 when executing the computer program.
15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810143256.6A CN108364024B (en) | 2018-02-11 | 2018-02-11 | Image matching method and device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108364024A CN108364024A (en) | 2018-08-03 |
CN108364024B true CN108364024B (en) | 2021-05-07 |
Family
ID=63006010
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810143256.6A Active CN108364024B (en) | 2018-02-11 | 2018-02-11 | Image matching method and device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108364024B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111860608B (en) * | 2020-06-28 | 2024-06-18 | 浙江大华技术股份有限公司 | Invoice image registration method, invoice image registration equipment and computer storage medium |
CN112257712B (en) * | 2020-10-29 | 2024-02-27 | 湖南星汉数智科技有限公司 | Train ticket image alignment method and device, computer device and computer readable storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3032459A1 (en) * | 2014-12-10 | 2016-06-15 | Ricoh Company, Ltd. | Realogram scene analysis of images: shelf and label finding |
CN106023182A (en) * | 2016-05-13 | 2016-10-12 | 广州视源电子科技股份有限公司 | printed circuit board image matching method and system |
CN107067044A (en) * | 2017-05-31 | 2017-08-18 | 北京空间飞行器总体设计部 | A kind of finance reimbursement unanimous vote is according to intelligent checks system |
CN107358490A (en) * | 2017-06-19 | 2017-11-17 | 北京奇艺世纪科技有限公司 | A kind of image matching method, device and electronic equipment |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
GR01 | Patent grant | | |