CN113111212A - Image matching method, device, equipment and storage medium - Google Patents
- Publication number
- CN113111212A (application number CN202110357278.4A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
Abstract
The invention discloses an image matching method, device, equipment and storage medium. The method comprises the following steps: acquiring a template feature point set of a template image and a target feature point set of an image to be searched, wherein each feature point set comprises salient corner points and edge feature points; traversing each pixel point of the image to be searched according to the template image, and calculating the target similarity measurement of the template feature point set and the target feature point set corresponding to the template image area; and determining the positions of the matched feature points according to the target similarity measurement, and displaying the matched feature points in the image to be searched. This technical scheme performs image matching based on both the corner information and the edge feature information of the image, solves the problem that traditional feature point matching methods achieve low matching accuracy on images of certain shapes with few feature points, and improves the matching accuracy for such images.
Description
Technical Field
Embodiments of the invention relate to the technical field of image processing, and in particular to an image matching method, device, equipment and storage medium.
Background
Image matching is a method for searching for similar image targets through analysis of similarity and consistency, according to the correspondence of image content, features, structure, relations, texture, gray level and the like. At present, image matching methods are widely applied in fields such as target recognition, precise workpiece positioning, and video tracking.
Commonly used image matching methods include gray-scale-based and feature-based methods. The two key steps of a feature-based image matching method are feature extraction and matching. The traditional method adopts the Harris algorithm to extract corner points and uses the corner points as feature points for matching; therefore, for images of certain shapes with little corner information, the matching result is not accurate enough, and feature point redundancy causes a large amount of calculation.
Disclosure of Invention
Embodiments of the invention provide an image matching method, device, equipment and storage medium, which aim to realize image matching based on the corner information and edge feature information of an image, solve the problem that traditional feature point matching methods achieve low matching accuracy on images of certain shapes with few feature points, and improve the matching accuracy for such images.
In a first aspect, an embodiment of the present invention provides an image matching method, including:
acquiring a template feature point set of a template image and a target feature point set of an image to be searched, wherein each feature point set comprises salient corner points and edge feature points;
traversing each pixel point of the image to be searched according to the template image, and calculating the target similarity measurement of the template feature point set and the target feature point set corresponding to the template image area;
and determining the position of the matched feature point according to the target similarity measurement, and displaying the matched feature point in the image to be searched.
In a second aspect, an embodiment of the present invention further provides an image matching apparatus, where the apparatus includes:
the acquisition module is used for acquiring a template feature point set of a template image and a target feature point set of an image to be searched, wherein each feature point set comprises salient corner points and edge feature points;
the calculation module is used for traversing each pixel point of the image to be searched according to the template image and calculating the target similarity measurement of the template feature point set and the target feature point set corresponding to the template image area;
and the determining module is used for determining the position of the matched feature point according to the target similarity measurement and displaying the matched feature point in the image to be searched.
In a third aspect, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the image matching method according to any one of the embodiments of the present invention.
In a fourth aspect, the present invention further provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image matching method according to any one of the embodiments of the present invention.
In the embodiments of the invention, a template feature point set of a template image and a target feature point set of an image to be searched are obtained, wherein each feature point set comprises salient corner points and edge feature points; each pixel point of the image to be searched is traversed according to the template image, and the target similarity measurement of the template feature point set and the target feature point set corresponding to the template image area is calculated; the positions of the matched feature points are determined according to the target similarity measurement, and the matched feature points are displayed in the image to be searched. In this way, image matching can be performed based on both the corner information and the edge feature information of the image, which solves the problem that traditional feature point matching methods achieve low matching accuracy on images of certain shapes with few feature points, and improves the matching accuracy for such images.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from them without inventive effort.
FIG. 1 is a flowchart of an image matching method according to a first embodiment of the present invention;
FIG. 2a is a flowchart of an image matching method according to a second embodiment of the present invention;
FIG. 2b is a schematic diagram of a template image library according to a second embodiment of the present invention;
FIG. 2c is a flowchart of another image matching method according to the second embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an image matching apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer device in the fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Example one
Fig. 1 is a flowchart of an image matching method according to an embodiment of the present invention. The embodiment is applicable to matching an image based on feature points and shape. The method may be executed by the image matching apparatus provided in an embodiment of the present invention, which may be implemented in software and/or hardware. As shown in fig. 1, the method specifically comprises the following steps:
s110, acquiring a template feature point set of a template image and a target feature point set of an image to be searched, wherein the feature point set comprises salient corner points and edge feature points.
The template image is an image sample used for matching with other images, the template feature point set is a set of feature points with key information in the template image, such as edge information, corner information and gray information, and the template feature point set of the template image includes: template salient corner points and template edge feature points.
The images to be searched may be all images in an image library, or images of a specified type in the image library. The target feature point set of the image to be searched comprises: target salient corner points and target edge feature points.
For example, the template feature point set of the template image may be obtained by extracting template candidate corner points of the template image with a corner detection algorithm, removing pseudo feature points from the template candidate corner points to obtain template salient corner points, extracting template edge feature points in the neighborhood of the template salient corner points with an edge detection algorithm, and determining the template feature point set of the template image from the template salient corner points and the template edge feature points. Similarly, the target feature point set of the image to be searched may be obtained by extracting target candidate corner points of the image to be searched with a corner detection algorithm, removing pseudo feature points to obtain target salient corner points, extracting target edge feature points in the neighborhood of the target salient corner points with an edge detection algorithm, and determining the target feature point set of the image to be searched from the target salient corner points and the target edge feature points. The corner detection algorithm may be the Harris corner detection algorithm, and the edge detection algorithm may be an edge detection algorithm based on the Sobel operator; the embodiment of the present invention is not limited in this respect.
Optionally, before obtaining the template feature point set of the template image and the target feature point set of the image to be searched, the method further includes:
and denoising the template image and the image to be searched.
Illustratively, a separable accelerated bilateral filter with adaptive parameter estimation is used to denoise the template image and the image to be searched. This filter is a nonlinear filter designed on the basis of the classical Gaussian filtering algorithm, and has the characteristics of being non-iterative, local, and simple.
The specific process of denoising the template image and the image to be searched may be as follows: first, a locally weighted average bilateral filtering method is used to obtain the pixel values of the denoised image, with the formula:

f′(x, y) = ( Σ_{(i,j)∈S_{x,y}} ω(i, j)·g(i, j) ) / ( Σ_{(i,j)∈S_{x,y}} ω(i, j) );

wherein f′(x, y) is the denoised image, S_{x,y} represents a neighborhood of the pixel (x, y), g(i, j) is each pixel in the neighborhood, and ω(i, j) is a weighting coefficient.
The weighting coefficient ω(i, j) is the product of a spatial proximity factor ω_s(i, j) and a luminance similarity factor ω_r(i, j), i.e. ω(i, j) = ω_s(i, j)·ω_r(i, j). Through the interaction of these two weighting factors, the bilateral filter smoothly filters the image while preserving the image edges.
As a further improvement of the luminance similarity factor, one-dimensional weighting factors in the horizontal and vertical directions are used to replace the weighting factor ω_r over the two-dimensional neighborhood; this effectively reduces the amount of computation without degrading performance.
The luminance similarity factor in the horizontal direction is:

ω_r^H(i, j) = exp( −(g(i, y) − g(x, y))² / (2σ_r²) );

and the luminance similarity factor in the vertical direction is:

ω_r^V(i, j) = exp( −(g(x, j) − g(x, y))² / (2σ_r²) );
wherein σ_r is the filtering parameter, which has a great influence on the filtering effect; it can be obtained by adaptive calculation from the image, approximately as:

σ_r = ( 1 / (H·W) ) · Σ_{x,y} | (I ∗ HD)(x, y) |;

wherein HD = (1, −2, 1) is a high-pass filter related to the Laplacian filter, ∗ denotes the convolution followed by the 1/2 downsampling calculation, and H and W represent the height and width, respectively, of the image concerned, the image comprising: the template image and the image to be searched.
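As an illustrative sketch of the locally weighted average bilateral filtering described above, the following Python code computes the weighted average with ω(i, j) = ω_s(i, j)·ω_r(i, j); the window size and the parameters sigma_s and sigma_r are assumed values, and the full two-dimensional ω_r is used here rather than the patent's separable accelerated variant:

```python
import numpy as np

def bilateral_denoise(img, win=2, sigma_s=2.0, sigma_r=0.1):
    """Locally weighted average bilateral filtering: each output pixel is
    sum(w * g) / sum(w) over the neighborhood S_{x,y}."""
    h, w = img.shape
    out = np.empty((h, w))
    pad = np.pad(img.astype(float), win, mode='edge')
    # spatial proximity factor omega_s: Gaussian in the pixel offsets
    ys, xs = np.mgrid[-win:win + 1, -win:win + 1]
    w_s = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * win + 1, x:x + 2 * win + 1]
            # luminance similarity factor omega_r: Gaussian in the
            # intensity difference from the center pixel g(x, y)
            w_r = np.exp(-(patch - img[y, x]) ** 2 / (2 * sigma_r ** 2))
            weight = w_s * w_r
            out[y, x] = (weight * patch).sum() / weight.sum()
    return out
```

On a constant image the output equals the input, since ω_r is then 1 everywhere and the weighted average reduces to the constant value.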
And S120, traversing each pixel point of the image to be searched according to the template image, and calculating the target similarity measurement of the template feature point set and the target feature point set corresponding to the template image area.
The target similarity measure may be a maximum value of the similarity measure, or may be a similarity measure greater than a preset threshold; the target similarity measure may be set according to actual requirements.
For example, traversing each pixel point of the image to be searched according to the template image and calculating the target similarity measurement of the template feature point set and the target feature point set corresponding to the template image area may be done in two ways. The template image may be traversed over the image to be searched starting from the upper-left corner of the image to be searched, calculating the similarity measurement pixel point by pixel point to obtain the target similarity measurement. Alternatively, the template image and the image to be searched may each be downsampled into pyramid layers to obtain hierarchical template images and hierarchical images to be searched, and the similarity measurement of each hierarchical template image with the image to be searched at the corresponding level is calculated in turn from top to bottom, from coarse to fine, until the target similarity measurement between the bottommost image of the template image pyramid and the bottommost image of the image-to-be-searched pyramid, i.e. between the original template image and the original image to be searched, is obtained.
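The first traversal strategy above, sliding the template over the image to be searched and scoring position by position, can be sketched as follows; normalized cross-correlation is used as an assumed stand-in similarity measurement, since the sketch does not reproduce the patent's feature-point-based measurement:

```python
import numpy as np

def best_match(search, template):
    """Slide the template over the search image and score every position;
    normalized cross-correlation is the similarity measurement here."""
    th, tw = template.shape
    sh, sw = search.shape
    t = template - template.mean()
    t_energy = (t ** 2).sum()
    best_score, best_pos = -np.inf, None
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            win = search[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.sqrt(t_energy * (w ** 2).sum())
            score = (t * w).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

The position with the maximum score plays the role of the target similarity measurement position.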
S130, determining the position of the matched feature point according to the target similarity measurement, and displaying the matched feature point in the image to be searched.
Illustratively, if the similarity measurement calculated between the template feature points of the template image and certain target feature points of the image to be searched is the target similarity measurement, those target feature points are determined to be the matched feature points; the position coordinates of the matched feature points are then acquired, and the matched feature points are displayed in the image to be searched.
According to the technical scheme of this embodiment, a template feature point set of a template image and a target feature point set of an image to be searched are obtained, wherein each feature point set comprises salient corner points and edge feature points; each pixel point of the image to be searched is traversed according to the template image, and the target similarity measurement of the template feature point set and the target feature point set corresponding to the template image area is calculated; the positions of the matched feature points are determined according to the target similarity measurement, and the matched feature points are displayed in the image to be searched. In this way, image matching can be performed based on both the corner information and the edge feature information of the image, which solves the problem that traditional feature point matching methods achieve low matching accuracy on images of certain shapes with few feature points, and improves the matching accuracy for such images.
Example two
Fig. 2a is a flowchart of an image matching method in the second embodiment of the present invention. This embodiment is optimized on the basis of the above embodiment. In this embodiment, acquiring the template feature point set of the template image includes: acquiring the template feature points and the rotation angles of the template image; if the number of template feature points is greater than a first preset number, performing pyramid downsampling layering on the template image to obtain hierarchical template images; and changing the angle of the template feature points of each hierarchical template image according to the rotation angles to obtain the template feature point set of the template image. Acquiring the target feature point set of the image to be searched includes: acquiring the image to be searched and the number of layers of the hierarchical template images corresponding to the template image; performing pyramid downsampling layering on the image to be searched according to this number of layers to obtain hierarchical images to be searched; extracting the target feature points of the image to be searched at each level; and determining the target feature point set of the image to be searched according to the target feature points.
As shown in fig. 2a, the method of this embodiment specifically includes the following steps:
s210, acquiring template characteristic points and rotation angles of the template image.
The template feature points of the template image comprise template salient corner points and template edge feature points.
The rotation angles may cover 360 degrees in steps of a preset unit angle, for example 1, 2, or 4 degrees. The rotation angles are set so that the angle of the template image can be made consistent with the angle of the image to be searched.
Specifically, the manner of obtaining the template feature points of the template image may be to extract template candidate corners of the template image, extract template edge feature points of the template salient corners in the neighborhood, and determine the template feature points of the template image according to the template salient corners and the template edge feature points.
Optionally, obtaining template feature points of the template image includes:
extracting template candidate angular points of the template image;
acquiring the neighborhood of the template candidate corner points;
if the template candidate corner is a maximum value point in the neighborhood, determining the template candidate corner as a template salient corner;
extracting template edge points in the neighborhood of the template salient corner points;
sampling the template edge points to obtain template edge feature points;
and determining the template feature points of the template image according to the template salient corner points and the template edge feature points.
The size of the template image may be (2m+1) × (2m+1), where m = 1, 2, 3, …; for example, if the template size is 3 × 3, the neighborhood is the 8 pixel points around the pixel.
Illustratively, template candidate corner points of the template image are extracted by the Harris corner detection algorithm, and pseudo feature points are removed from the template candidate corner points to obtain the template salient corner points. Template edge points are then extracted in the neighborhood of the template salient corner points by an edge detection algorithm based on the Sobel operator, and the template edge points are sampled to obtain the template edge feature points; the sampling may be random sampling at equal or unequal intervals. The template feature points of the template image are formed from the template salient corner points and the template edge feature points. These template feature points integrate the corner features and edge features of the image and can fully embody its corner information and edge feature information.
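The Sobel-based edge point extraction with equal-interval sampling can be sketched as follows; the gradient threshold and sampling step are assumed values, and for brevity the sketch scans the whole image rather than only the neighborhoods of the salient corner points:

```python
import numpy as np

def sobel_edge_points(img, thresh, step=3):
    """Extract edge points whose Sobel gradient magnitude exceeds thresh,
    then sample them at equal intervals."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    p = np.pad(np.asarray(img, dtype=float), 1, mode='edge')
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dr in range(3):          # correlate with the 3x3 Sobel kernels
        for dc in range(3):
            window = p[dr:dr + h, dc:dc + w]
            gx += kx[dr, dc] * window
            gy += ky[dr, dc] * window
    mag = np.hypot(gx, gy)
    edges = np.argwhere(mag > thresh)   # (row, col) edge candidates
    return edges[::step]                # equal-interval sampling
```

On a vertical step edge, only the two columns adjacent to the step respond, so the sampled edge feature points cluster along the edge.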
Illustratively, the specific steps of extracting the template candidate corner points of the template image by the Harris corner detection algorithm are as follows: calculate the gradient I_x of the template image I(x, y) in the x direction and the gradient I_y in the y direction; the self-similarity of the template image after a translation (Δx, Δy) at the point (x, y) can then be calculated by the autocorrelation function:

c(x, y; Δx, Δy) = Σ_{(u,v)} β(u, v)·( I(u, v) − I(u + Δx, v + Δy) )² ≈ (Δx, Δy)·M(x, y)·(Δx, Δy)^T;

wherein β(u, v) is a window function centered on the point (x, y), generally a Gaussian weighting function, (u, v) ranges over the pixel points of the window in the template image, and M(x, y) is the gradient covariance matrix of the corner point. The matrix M(x, y) and the window function β(u, v) are given by:

M(x, y) = Σ_{(u,v)} β(u, v) · [ I_x²  I_x·I_y ; I_x·I_y  I_y² ];

β(u, v) = exp( −(u² + v²) / (2σ²) );
let λ1And λ2The two eigenvalues of the matrix M (x, y) respectively, and the positions of the plane, the edge and the angular point in the image can be judged according to the magnitude of the eigenvalue. Calculating the responsivity of the characteristic points when the candidate angular points are actually detected, wherein the points with the responsivity larger than a preset responsivity threshold value are the candidate angular points, and the calculation formula is as follows:
H=det M-k·(traceM)2;
det M=λ1λ2=AB-C2;
traceM=λ1+λ2=A+B;
h is the responsivity of the characteristic point, detM is a determinant of the matrix, and traceM is a trace of the matrix; k is a constant weight coefficient, generally 0.04-0.06.
Optionally, if the template candidate corner is a maximum point in a neighborhood, determining the template candidate corner as a template salient corner includes:
acquiring a first gradient of the candidate corner points of the template;
acquiring a second gradient of the template candidate corner in the target gradient direction in the neighborhood;
and if the first gradient is larger than the second gradient, the template candidate corner is a maximum value point in the neighborhood, and the template candidate corner is determined as a template salient corner.
The target gradient direction may include a horizontal gradient direction, a vertical gradient direction, -45 ° gradient direction, and a 45 ° gradient direction in a neighborhood range of the template candidate angle point; or may include other gradient directions.
Exemplarily, for a 3 × 3 template image, a first gradient of the template candidate corner point and second gradients of the neighboring pixel points along the horizontal, vertical, −45°, and 45° gradient directions of the neighborhood are acquired. If the gradient value of the template candidate corner point is greater than the gradient values of the two pixel points along each gradient direction of the neighborhood, the template candidate corner point is a maximum value point within its 8-pixel neighborhood and is determined to be a salient feature point; if the gradient value of the template candidate corner point is less than or equal to the gradient value of a pixel point along some gradient direction of the neighborhood, it is removed from the template candidate corner points.
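The maximum value test described above can be sketched as follows; is_salient is a hypothetical helper name, and mag is assumed to hold the gradient magnitude of each pixel:

```python
import numpy as np

def is_salient(mag, r, c):
    """A candidate corner is kept as a salient corner only if its gradient
    value is strictly greater than both neighboring pixels along each of
    the horizontal, vertical, -45 and 45 degree directions, i.e. it is
    the maximum value point of its 8-pixel neighborhood."""
    v = mag[r, c]
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        if v <= mag[r + dr, c + dc] or v <= mag[r - dr, c - dc]:
            return False
    return True
```

A candidate whose neighborhood contains a larger gradient value along any of the four directions is rejected as a pseudo feature point.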
S220, if the number of the template feature points is larger than a first preset number, pyramid downsampling layering is carried out on the template image, and a hierarchical template image is obtained.
The first preset number may be set according to actual requirements, for example according to the size of the template image; it is not limited in the embodiment of the present invention.
Specifically, if the number of the template feature points is greater than a first preset number, pyramid adaptive downsampling layering is performed on the template image until the number of the template feature points is less than or equal to the first preset number, a hierarchical template image is obtained, and the number of pyramid layers at the moment, namely the number of layers of the template image, is recorded.
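The feature-point-driven pyramid layering can be sketched as follows; 2×2 mean pooling stands in for a proper Gaussian pyramid reduction, and count_points is a caller-supplied feature point counter (both are assumptions of this sketch):

```python
import numpy as np

def build_pyramid(img, max_points, count_points):
    """Halve the resolution until the number of feature points at the
    top level is no longer greater than the first preset number."""
    levels = [np.asarray(img, dtype=float)]
    while count_points(levels[-1]) > max_points:
        a = levels[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
        # 2x2 mean pooling as a stand-in for Gaussian blur + downsample
        down = (a[0:h:2, 0:w:2] + a[1:h:2, 0:w:2]
                + a[0:h:2, 1:w:2] + a[1:h:2, 1:w:2]) / 4.0
        levels.append(down)
    return levels
```

The length of the returned list is the number of pyramid layers that is recorded and later reused for layering the image to be searched.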
And S230, changing the angle of the template feature points of the template image of each level according to the rotation angle to obtain a template feature point set of the template image.
Specifically, the feature points of each hierarchical template image are rotated according to the rotation angles to obtain hierarchical template feature points at different angles, and the hierarchical template feature points at the different angles form the template feature point set. If the number of rotation angles is E and the number of layers of the template image is F, the template feature point set comprises E×F corresponding groups of template image feature points.
For example, the angle of the template feature points of each hierarchical template image may be changed according to the rotation angle as follows:

x = l·cos α,  y = l·sin α;

x′ = l·cos(α + θ) = x·cos θ − y·sin θ;

y′ = l·sin(α + θ) = x·sin θ + y·cos θ;

wherein (x, y) are the pixel point coordinates of the hierarchical template image before the angle change, (x′, y′) are the pixel point coordinates after the angle change, l is the length of the introduced intermediate variable vector, α is the horizontal included angle of the intermediate variable vector, and θ is the rotation angle.
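The angle change of the feature point coordinates, equivalent to the vector form with x = l·cos α and y = l·sin α, can be sketched as (the optional rotation center (cx, cy) is an added convenience of this sketch):

```python
import math

def rotate_points(points, theta_deg, cx=0.0, cy=0.0):
    """Rotate feature point coordinates by theta about the point (cx, cy)."""
    th = math.radians(theta_deg)
    c, s = math.cos(th), math.sin(th)
    # x' = x*cos(theta) - y*sin(theta), y' = x*sin(theta) + y*cos(theta)
    return [(cx + (x - cx) * c - (y - cy) * s,
             cy + (x - cx) * s + (y - cy) * c) for x, y in points]
```

Calling this once per rotation angle and per pyramid layer yields the E×F groups of rotated template feature points.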
It should be noted that the construction of the template image library is completed by storing the information of the template feature point sets in a multi-layer nested, efficient data structure. This data structure is the bridge connecting the constructed template library with the subsequent matching process. It is built by nesting unordered-map containers: the outermost layer is the pyramid layer, the next layer is the rotation angle, and the innermost layer is the feature point set; the overall structure is shown in fig. 2b. This design of the data structure further improves the data access speed during algorithm operation and thus improves the efficiency of the algorithm.
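In Python, the nested container structure (outermost the pyramid layer, next the rotation angle, innermost the feature point set) can be sketched with nested dictionaries, which play the role of the unordered-map containers; the keys and point values below are illustrative:

```python
# Outermost key: pyramid layer; next key: rotation angle (degrees);
# innermost value: the feature point set at that layer and angle.
template_library = {}

def add_template(layer, angle, feature_points):
    template_library.setdefault(layer, {})[angle] = feature_points

# illustrative entries: two rotation angles at pyramid layer 0
add_template(0, 0, [(12, 30), (15, 33)])
add_template(0, 90, [(30, 12), (33, 15)])

# matching looks up a feature point set directly by layer and angle
points_at_90 = template_library[0][90]
```

Each lookup during matching is then an average-case constant-time operation, which is the access-speed benefit the data structure is designed for.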
S240, acquiring a target feature point set of the image to be searched.
Optionally, obtaining a target feature point set of an image to be searched includes:
acquiring an image to be searched and the number of layers of a hierarchical template image corresponding to the template image;
carrying out pyramid downsampling layering on the image to be searched according to the number of layers to obtain a hierarchical image to be searched;
extracting target characteristic points of the image to be searched of each level;
and determining a target characteristic point set of the image to be searched according to the target characteristic points.
Exemplarily, pyramid downsampling layering is performed on the image to be searched according to the number of layers of the hierarchical template images corresponding to the template image, to obtain the hierarchical images to be searched; the target feature points of the image to be searched at each level are acquired, and the target feature point set is formed from the target feature points of the images to be searched at the different levels. The target feature points of the image to be searched at each level are obtained in the same manner as the template feature points of the template image, which is not described in detail here.
Optionally, the extracting the target feature point of the image to be searched at each level includes:
extracting target candidate corner points of the image to be searched in the hierarchy;
if the target candidate corner point is a maximum value point in the neighborhood, determining the target candidate corner point as a target salient corner point;
extracting target edge points of the target salient corner points in the neighborhood;
sampling the target edge points to obtain target edge feature points;
and determining the target characteristic points of the image to be searched according to the target salient corner points and the target edge characteristic points.
Illustratively, a Harris corner detection algorithm is used to extract target candidate corners of the image to be searched, and pseudo feature points are removed from the target candidate corners to obtain target salient corners. Target edge points are then extracted in the neighborhood of the target salient corners by an edge detection algorithm based on the Sobel operator, and the target edge points are sampled to obtain target edge feature points; the sampling may be equal-interval or unequal-interval random sampling. The target feature points of the image to be searched are formed from the target salient corners and the target edge feature points. These target feature points integrate the corner features and the edge features of the image and can fully embody the corner information and edge feature information of the image.
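The salient-corner screening step (keep a candidate only if it is the maximum response point in its neighborhood) can be sketched as follows; the 3x3 window, the toy response map, and the function name are assumptions for illustration:

```python
def salient_corners(response, radius=1):
    """Keep a candidate only if its corner response is the strict maximum
    within its local neighborhood (non-maximum suppression)."""
    h, w = len(response), len(response[0])
    kept = []
    for r in range(h):
        for c in range(w):
            v = response[r][c]
            if v <= 0:  # not a candidate corner at all
                continue
            neighbors = [response[rr][cc]
                         for rr in range(max(0, r - radius), min(h, r + radius + 1))
                         for cc in range(max(0, c - radius), min(w, c + radius + 1))
                         if (rr, cc) != (r, c)]
            if all(v > nb for nb in neighbors):
                kept.append((r, c))
    return kept

# Toy Harris-style response map: the weak responses next to the strong
# peak at (1, 1) are suppressed as pseudo feature points.
resp = [[0, 0, 0, 0],
        [0, 9, 1, 0],
        [0, 1, 0, 5],
        [0, 0, 0, 0]]
corners = salient_corners(resp)
```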
And S250, traversing each pixel point of the image to be searched according to the template image, and calculating the target similarity measurement of the template feature point set and the target feature point set corresponding to the template image area.
The target similarity measurement is the maximum similarity measurement of the template feature point set of the image at the lowest level of the template image and the target feature point set of the image at the lowest level of the image to be searched.
Specifically, traversing each pixel point of the image to be searched according to the template image may mean traversing each pixel point of the corresponding hierarchical image of the image to be searched according to each hierarchical image of the template image, and calculating the target similarity measure of the template feature point set and the target feature point set corresponding to the template image region. This realizes a coarse-to-fine matching process and finally determines the maximum similarity measure of the template feature point set of the lowest-level image of the template image and the target feature point set of the lowest-level image of the image to be searched.
Optionally, traversing each pixel point of the image to be searched according to the template image and calculating the target similarity measure of the template feature point set and the target feature point set corresponding to the template image region includes:
traversing each pixel point of the highest-level image of the image to be searched according to the highest-level image of the template image;
calculating a template feature point set of a highest-level image of the template images and a first similarity measure of the target feature point set corresponding to the highest-level template image region, and determining a matching point position corresponding to the maximum value of the first similarity measure;
sequentially determining the areas to be matched of the next-level images of the images to be searched according to the positions of the matching points until the areas to be matched of the images at the lowest level of the images to be searched are determined;
and calculating a template feature point set of the lowest-level image of the template images and a second similarity measure of the target feature point set corresponding to the region to be matched, and determining the maximum value of the second similarity measure as the target similarity measure.
Illustratively, each pixel point of the highest-level image of the image to be searched is traversed according to the highest-level image of the template image; the first similarity measure of the template feature point set of the highest-level image of the template image and the target feature point set corresponding to the template image region is calculated, and the matching point position corresponding to the maximum value of the first similarity measure is determined. The matching point position is then mapped to the second-highest-level image of the image to be searched according to a preset mapping strategy, giving the region to be matched of the second-highest-level image of the image to be searched. Each pixel point of the second-highest-level image of the image to be searched is traversed according to the second-highest-level image of the template image, the similarity measure of the template feature point set of that level of the template image and the target feature point set corresponding to the template image region is calculated, and the region to be matched at the next level is determined from the maximum value of the similarity measure; this continues until the region to be matched of the lowest-level image of the image to be searched is determined. Finally, the second similarity measure of the template feature point set of the lowest-level image of the template image and the target feature point set corresponding to the region to be matched is calculated, and the maximum value of the second similarity measure is determined as the target similarity measure.
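The form of the similarity measure is not spelled out in this excerpt; a common shape-based choice, consistent with the gradient direction vectors used in the termination conditions of this embodiment but still an assumption here, is the mean dot product of unit gradient directions:

```python
import math

def unit(dx, dy):
    """Normalize a gradient vector; zero vectors stay zero."""
    m = math.hypot(dx, dy) or 1.0
    return dx / m, dy / m

def score(template_pts, grad_x, grad_y, ox, oy):
    """Mean dot product of unit gradient directions at offset (ox, oy).

    template_pts: list of (x, y, dx, dy) template feature points;
    grad_x, grad_y: x/y gradient maps of the image to be searched.
    """
    s = 0.0
    for x, y, dx, dy in template_pts:
        tx, ty = unit(dx, dy)
        gx, gy = unit(grad_x[y + oy][x + ox], grad_y[y + oy][x + ox])
        s += tx * gx + ty * gy
    return s / len(template_pts)

# Toy data: the template's two points align perfectly at offset (1, 1).
pts = [(0, 0, 1.0, 0.0), (1, 0, 0.0, 1.0)]
gx_map = [[0, 0, 0], [0, 2.0, 0], [0, 0, 0]]
gy_map = [[0, 0, 0], [0, 0, 3.0], [0, 0, 0]]
best = score(pts, gx_map, gy_map, 1, 1)  # perfect alignment -> 1.0
```

The score lies in [-1, 1]; the matching point position is the offset where it is maximal.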
For example, the mapping strategy for sequentially determining the region to be matched of the next-level image of the image to be searched according to the matching point position may be:

wherein (x1, y1) is the matching point position of the current level; (x'2, y'2) is the upper-left corner coordinate of the region to be matched of the next-level image of the image to be searched; (x''2, y''2) is the lower-right corner coordinate of the region to be matched of the next-level image of the image to be searched; img_next.cols is the number of column pixels of the lower-level template image; and img_next.rows is the number of row pixels of the lower-level template image.
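The extracted text does not preserve the mapping formula itself; as a hedged sketch, coarse-to-fine strategies of this kind conventionally double the match coordinates to reach the next finer level and pad the point by a small clamped margin (the margin value below is an assumption):

```python
def region_to_match(x1, y1, cols, rows, margin=2):
    """Map a match at (x1, y1) on the current pyramid level to a search
    region (x_min, y_min, x_max, y_max) on the next finer level."""
    cx, cy = 2 * x1, 2 * y1          # coordinates scale by 2 per level
    x_min = max(0, cx - margin)
    y_min = max(0, cy - margin)
    x_max = min(cols - 1, cx + margin)  # clamp to the finer image bounds
    y_max = min(rows - 1, cy + margin)
    return x_min, y_min, x_max, y_max

box = region_to_match(10, 7, cols=64, rows=48)  # (18, 12, 22, 16)
```

Restricting the finer search to this small box is what makes the hierarchical traversal much cheaper than an exhaustive scan at full resolution.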
Optionally, if the similarity measure of the template feature points and the target feature points corresponding to the hierarchical template image region meets a first termination condition, terminating the similarity measure calculation of the current pixel point;
the first termination condition includes:
s_j < s_min - 1 + j/n;

wherein s_j is the partial similarity measure accumulated over the first j target feature points corresponding to the template feature points of the hierarchical template image region; n is the total number of feature point pairs participating in the calculation; d'_i = (t'_i, u'_i) is the direction vector of the i-th target feature point of the hierarchical image to be searched, and d_i = (t_i, u_i) is the direction vector of the template feature point corresponding to the i-th target feature point; s_min is a preset threshold; t'_i is the x-direction gradient of the target feature point of the hierarchical image to be searched; t_i is the x-direction gradient of the corresponding template feature point of the hierarchical template image; u'_i is the y-direction gradient of the target feature point of the hierarchical image to be searched; and u_i is the y-direction gradient of the corresponding template feature point of the hierarchical template image.
In the image matching process, in order to quickly locate the actual matching position between the template image and the image to be searched, the similarity measure at positions that cannot contain the target does not need to be computed in full; if the termination condition is met, the similarity measure calculation for the current pixel point is ended early, which accelerates matching.
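A minimal sketch of this early-abort logic: accumulate the normalized per-point similarity terms and stop as soon as the partial sum s_j can no longer reach the threshold s_min even if every remaining term were perfect, i.e. as soon as s_j < s_min - 1 + j/n (the helper name is hypothetical):

```python
def partial_score_passes(terms, s_min):
    """terms: per-feature-point dot products, each in [-1, 1].

    Returns False as soon as the running score cannot reach s_min,
    otherwise True iff the full score meets the threshold."""
    n = len(terms)
    s = 0.0
    for j, t in enumerate(terms, start=1):
        s += t / n
        # Best achievable final score is s + (n - j)/n; abort if even
        # that falls below s_min, which rearranges to the bound below.
        if s < s_min - 1 + j / n:
            return False
    return s >= s_min
```

For positions far from the true match the first few terms are usually poor, so most candidate pixels are rejected after only a handful of feature points.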
Optionally, if the similarity measure between the first feature point in the hierarchical template image region and the second feature point of the image to be searched corresponding to the hierarchical template image region satisfies a second termination condition, terminating the similarity measure calculation of the current pixel point:
the second termination condition includes:
s_j < min(s_min - 1 + f·j/n, s_min·j/n);

wherein f = (1 - g·s_min)/(1 - s_min); g is a greedy coefficient (when the greedy coefficient g = 1, all points use the strict threshold to judge the termination condition); and s_min is a preset threshold.
The advantage of such an arrangement is as follows: for images to be searched that contain occlusion or hidden regions, termination is judged with different thresholds. By setting the greedy coefficient g in advance, the first n - j terms use a strict threshold to judge the termination condition, while the last j terms use a loose threshold. Preferably, the greedy coefficient g is set to 0.9.
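The greedy variant can be sketched the same way; the function names are hypothetical, and the numeric examples in the usage comments merely illustrate the bound:

```python
def greedy_f(g, s_min):
    """f = (1 - g*s_min) / (1 - s_min); f = 1 when g = 1 (strict everywhere)."""
    return (1 - g * s_min) / (1 - s_min)

def greedy_abort(s_j, j, n, s_min, g=0.9):
    """True if the partial sum s_j after j of n terms already fails the
    combined strict/loose bound, so the current pixel can be abandoned."""
    f = greedy_f(g, s_min)
    return s_j < min(s_min - 1 + f * j / n, s_min * j / n)

# With s_min = 0.8 and g = 0.9, f = 1.4; at j = n the bound is 0.8:
early = greedy_abort(0.7, 10, 10, 0.8)   # 0.7 < 0.8 -> abort
keep = greedy_abort(0.85, 10, 10, 0.8)   # 0.85 >= 0.8 -> continue
```

Choosing g < 1 loosens the bound for the later terms, which is what tolerates partially occluded targets.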
And S260, determining the position of the matched feature point according to the target similarity measurement, and displaying the matched feature point in the image to be searched.
As shown in fig. 2c, the specific steps of the technical solution of this embodiment are as follows. Firstly, the feature points of the template image are extracted. After the template image is filtered and denoised, candidate corners of the template image are extracted by a Harris corner detection algorithm, and the candidate corner with the maximum response is retained as a salient corner by non-maximum suppression over the local region of each candidate corner; edge feature points are extracted in the neighborhood of the salient corners by an edge detection algorithm, the edge feature points are screened with an equal-interval sampling strategy, and the feature point set of the template image is determined from the salient corners and the edge feature points.
And secondly, carrying out scale change on the template image to obtain a feature point set of the template image and form a template library. And layering and angle rotating the corresponding template image by a pyramid self-adaptive layering method to obtain a multi-scale and multi-angle template image feature point set.
And thirdly, extracting a characteristic point set of the image to be searched. And layering the images to be searched according to the number of layers of the template images, acquiring a feature point set of each layer of the images to be searched, calculating the similarity measurement of the template feature point set of each layer of the template images and the feature point set of the images to be searched corresponding to the template image area of the corresponding hierarchy, and if a termination condition is met, finishing the similarity measurement calculation of the current pixel point in advance.
And fourthly, determining the matching position according to the mapping strategy. And gradually determining the matching position points of the bottom template image and the bottom image to be searched according to the mapping strategy from rough matching to fine matching, and displaying the positions of the matching points in the image to be searched.
According to the technical scheme of this embodiment, a template feature point set of a template image and a target feature point set of an image to be searched are obtained, where the feature point sets include salient corner points and edge feature points; each pixel point of the image to be searched is traversed according to the template image, and the target similarity measure of the template feature point set and the target feature point set corresponding to the template image region is calculated; the position of the matching feature points is determined according to the target similarity measure, and the matching feature points are displayed in the image to be searched. Image matching can thus be performed based on both the corner information and the edge feature information of the image, which solves the problem that traditional feature-point matching methods have low matching accuracy for images with few feature points or with certain shapes, improves the matching accuracy for such images, and, by further extracting salient corners from the corner information, also improves matching efficiency.
EXAMPLE III
Fig. 3 is a schematic structural diagram of an image matching apparatus according to a third embodiment of the present invention. The embodiment may be applicable to the case of matching an image based on feature points and shapes, the apparatus may be implemented in software and/or hardware, and the apparatus may be integrated in any device providing the function of image matching, as shown in fig. 3, where the apparatus for image matching specifically includes: an acquisition module 310, a calculation module 320, and a determination module 330.
The acquiring module 310 is configured to acquire a template feature point set of a template image and a target feature point set of an image to be searched, where the feature point set includes salient corner points and edge feature points;
a calculating module 320, configured to traverse each pixel point of the image to be searched according to the template image, and calculate a target similarity metric of the template feature point set and the target feature point set corresponding to the template image region;
a determining module 330, configured to determine, according to the target similarity metric, a position of a matching feature point, and display the matching feature point in the image to be searched.
Optionally, the obtaining module includes:
the first acquisition unit is used for acquiring template characteristic points and a rotation angle of the template image;
the first layering unit is used for conducting pyramid downsampling layering on the template image to obtain a hierarchical template image if the number of the template feature points is larger than a first preset number;
and the rotating unit is used for carrying out angle change on the template feature points of the template image of each level according to the rotating angle to obtain a template feature point set of the template image.
Optionally, the first obtaining unit includes:
the first extraction subunit extracts the template candidate corner points of the template image;
a first determining subunit, configured to determine the template candidate corner as a template salient corner if the template candidate corner is a maximum point in a neighborhood;
the second extraction subunit is used for extracting template edge points of the template salient corner points in the neighborhood;
the first sampling subunit is used for sampling the template edge points to obtain template edge feature points;
and the second determining subunit is used for determining the template feature points of the template image according to the template salient corner points and the template edge feature points.
Optionally, the first determining subunit is specifically configured to:
acquiring a first gradient of the candidate corner points of the template;
acquiring a second gradient of the template candidate corner in the target gradient direction in the neighborhood;
and if the first gradient is larger than the second gradient, the template candidate corner is a maximum value point in the neighborhood, and the template candidate corner is determined as a template salient corner.
Optionally, the obtaining module includes:
the second acquisition subunit is used for acquiring the image to be searched and the layer number of the hierarchical template image corresponding to the template image;
the second layering unit is used for conducting pyramid downsampling layering on the image to be searched according to the layer number to obtain a hierarchical image to be searched;
the extraction unit is used for extracting target characteristic points of the image to be searched of each hierarchy;
and the determining unit is used for determining the target characteristic point set of the image to be searched according to the target characteristic points.
Optionally, the extracting unit is specifically configured to:
extracting target candidate corner points of the image to be searched in the hierarchy;
if the target candidate corner point is a maximum value point in the neighborhood, determining the target candidate corner point as a target salient corner point;
extracting target edge points of the target salient corner points in the neighborhood;
sampling the target edge points to obtain target edge feature points;
and determining the target characteristic points of the image to be searched according to the target salient corner points and the target edge characteristic points.
Optionally, the calculation module includes:
traversing each pixel point of the highest-level image of the image to be searched according to the highest-level image of the template image;
calculating a first similarity measure of a template feature point set of a highest-level image of the template image and the target feature point set corresponding to the template image region, and determining a matching point position corresponding to the maximum value of the first similarity measure;
sequentially determining the areas to be matched of the next-level images of the images to be searched according to the positions of the matching points until the areas to be matched of the images at the lowest level of the images to be searched are determined;
and calculating a template feature point set of the lowest-level image of the template images and a second similarity measure of the target feature point set corresponding to the region to be matched, and determining the maximum value of the second similarity measure as the target similarity measure.
The above product can execute the method provided by any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the executed method.
Example four
Fig. 4 is a schematic structural diagram of a computer device in the fourth embodiment of the present invention. FIG. 4 illustrates a block diagram of an exemplary computer device 12 suitable for use in implementing embodiments of the present invention. The computer device 12 shown in FIG. 4 is only one example and should not bring any limitations to the functionality or scope of use of embodiments of the present invention.
As shown in FIG. 4, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, and commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The processing unit 16 executes various functional applications and data processing by running a program stored in the system memory 28, for example, implementing an image matching method provided by an embodiment of the present invention: acquiring a template feature point set of a template image and a target feature point set of an image to be searched, wherein the feature point set comprises salient corner points and edge feature points; traversing each pixel point of the image to be searched according to the template image, and calculating the target similarity measurement of the template feature point set and the target feature point set corresponding to the template image area; and determining the position of the matched feature point according to the target similarity measurement, and displaying the matched feature point in the image to be searched.
EXAMPLE five
Fifth embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the image matching method provided in all the embodiments of the present invention: acquiring a template feature point set of a template image and a target feature point set of an image to be searched, wherein the feature point set comprises salient corner points and edge feature points; traversing each pixel point of the image to be searched according to the template image, and calculating the target similarity measurement of the template feature point set and the target feature point set corresponding to the template image area; and determining the position of the matched feature point according to the target similarity measurement, and displaying the matched feature point in the image to be searched.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (10)
1. An image matching method, comprising:
acquiring a template feature point set of a template image and a target feature point set of an image to be searched, wherein the feature point set comprises salient corner points and edge feature points;
traversing each pixel point of the image to be searched according to the template image, and calculating the target similarity measurement of the template feature point set and the target feature point set corresponding to the template image area;
and determining the position of the matched feature point according to the target similarity measurement, and displaying the matched feature point in the image to be searched.
2. The method of claim 1, wherein obtaining a template feature point set for a template image comprises:
acquiring template characteristic points and a rotation angle of a template image;
if the number of the template feature points is larger than a first preset number, carrying out pyramid downsampling layering on the template image to obtain a hierarchical template image;
and changing the angle of the template feature points of the template image of each level according to the rotation angle to obtain a template feature point set of the template image.
3. The method of claim 2, wherein the obtaining of the template feature points of the template image comprises:
extracting template candidate angular points of the template image;
if the template candidate corner is a maximum value point in the neighborhood, determining the template candidate corner as a template salient corner;
extracting template edge points of the template salient corner points in the neighborhood;
sampling the template edge points to obtain template edge feature points;
and determining the template feature points of the template image according to the template salient corner points and the template edge feature points.
4. The method according to claim 3, wherein determining the template candidate corner as a template salient corner if the template candidate corner is a maximum point in a neighborhood comprises:
acquiring a first gradient of the candidate corner points of the template;
acquiring a second gradient of the template candidate corner in the target gradient direction in the neighborhood;
and if the first gradient is larger than the second gradient, the template candidate corner is a maximum value point in the neighborhood, and the template candidate corner is determined as a template salient corner.
5. The method of claim 1, wherein obtaining a target feature point set of an image to be searched comprises:
acquiring an image to be searched and the number of layers of a hierarchical template image corresponding to the template image;
carrying out pyramid downsampling layering on the image to be searched according to the number of layers to obtain a hierarchical image to be searched;
extracting target characteristic points of the image to be searched of each level;
and determining a target characteristic point set of the image to be searched according to the target characteristic points.
6. The method according to claim 5, wherein the extracting the target feature point of the image to be searched for in each hierarchy comprises:
extracting target candidate corner points of the image to be searched in the hierarchy;
if the target candidate corner point is a maximum value point in the neighborhood, determining the target candidate corner point as a target salient corner point;
extracting target edge points of the target salient corner points in the neighborhood;
sampling the target edge points to obtain target edge feature points;
and determining the target characteristic points of the image to be searched according to the target salient corner points and the target edge characteristic points.
7. The method of claim 1, wherein traversing each pixel point of the image to be searched according to the template image, and calculating the target similarity measure of the template feature point set and the target feature point set corresponding to the template image region comprises:
traversing each pixel point of the highest-level image of the image to be searched according to the highest-level image of the template image;
calculating a first similarity measure of a template feature point set of a highest-level image of the template image and the target feature point set corresponding to the template image region, and determining a matching point position corresponding to the maximum value of the first similarity measure;
sequentially determining the areas to be matched of the next-level images of the images to be searched according to the positions of the matching points until the areas to be matched of the images at the lowest level of the images to be searched are determined;
and calculating a template feature point set of the lowest-level image of the template images and a second similarity measure of the target feature point set corresponding to the region to be matched, and determining the maximum value of the second similarity measure as the target similarity measure.
8. An image matching apparatus, characterized by comprising:
the acquisition module is used for acquiring a template feature point set of a template image and a target feature point set of an image to be searched, wherein the feature point set comprises salient corner points and edge feature points;
the calculation module is used for traversing each pixel point of the image to be searched according to the template image and calculating the target similarity measurement of the template feature point set and the target feature point set corresponding to the template image area;
and the determining module is used for determining the position of the matched feature point according to the target similarity measurement and displaying the matched feature point in the image to be searched.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the image matching method according to any of claims 1-7 when executing the program.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the image matching method according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110357278.4A CN113111212B (en) | 2021-04-01 | 2021-04-01 | Image matching method, device, equipment and storage medium |
PCT/CN2021/098001 WO2022205611A1 (en) | 2021-04-01 | 2021-06-02 | Image matching method and apparatus, and device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110357278.4A CN113111212B (en) | 2021-04-01 | 2021-04-01 | Image matching method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113111212A true CN113111212A (en) | 2021-07-13 |
CN113111212B CN113111212B (en) | 2024-05-17 |
Family
ID=76713814
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110357278.4A Active CN113111212B (en) | 2021-04-01 | 2021-04-01 | Image matching method, device, equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113111212B (en) |
WO (1) | WO2022205611A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113537351A (en) * | 2021-07-16 | 2021-10-22 | 重庆邮电大学 | Remote sensing image coordinate matching method for mobile equipment shooting |
CN113689397A (en) * | 2021-08-23 | 2021-11-23 | 湖南视比特机器人有限公司 | Workpiece circular hole feature detection method and workpiece circular hole feature detection device |
CN113743423A (en) * | 2021-09-08 | 2021-12-03 | 浙江云电笔智能科技有限公司 | Intelligent temperature monitoring method and system |
CN113744133A (en) * | 2021-09-13 | 2021-12-03 | 烟台艾睿光电科技有限公司 | Image splicing method, device and equipment and computer readable storage medium |
CN116030280A (en) * | 2023-02-22 | 2023-04-28 | 青岛创新奇智科技集团股份有限公司 | Template matching method, device, storage medium and equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104915949A (en) * | 2015-04-08 | 2015-09-16 | 华中科技大学 | Image matching algorithm of bonding point characteristic and line characteristic |
US20160012312A1 (en) * | 2014-07-08 | 2016-01-14 | Canon Kabushiki Kaisha | Image processing method, image processing apparatus, program, and recording medium |
CN111444948A (en) * | 2020-03-21 | 2020-07-24 | 哈尔滨工程大学 | Image feature extraction and matching method |
CN111753119A (en) * | 2020-06-28 | 2020-10-09 | 中国建设银行股份有限公司 | Image searching method and device, electronic equipment and storage medium |
CN112396640A (en) * | 2020-11-11 | 2021-02-23 | 广东拓斯达科技股份有限公司 | Image registration method and device, electronic equipment and storage medium |
CN112508037A (en) * | 2020-11-23 | 2021-03-16 | 北京配天技术有限公司 | Image template matching method, device and storage device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10699156B2 (en) * | 2016-01-13 | 2020-06-30 | Peking University Shenzhen Graduate School | Method and a device for image matching |
CN110197232B (en) * | 2019-06-05 | 2021-09-03 | 中科新松有限公司 | Image matching method based on edge direction and gradient features |
2021
- 2021-04-01 CN CN202110357278.4A patent/CN113111212B/en active Active
- 2021-06-02 WO PCT/CN2021/098001 patent/WO2022205611A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
张志强等: "一种改进的双边滤波算法", 《中国图象图形学报》, vol. 14, no. 03, pages 443 - 447 * |
Also Published As
Publication number | Publication date |
---|---|
WO2022205611A1 (en) | 2022-10-06 |
CN113111212B (en) | 2024-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113111212B (en) | Image matching method, device, equipment and storage medium | |
US11321593B2 (en) | Method and apparatus for detecting object, method and apparatus for training neural network, and electronic device | |
CN108230292B (en) | Object detection method, neural network training method, device and electronic equipment | |
CN110334762B (en) | Feature matching method based on quad tree combined with ORB and SIFT | |
CN112396640B (en) | Image registration method, device, electronic equipment and storage medium | |
CN111539428A (en) | Rotating target detection method based on multi-scale feature integration and attention mechanism | |
CN111767960A (en) | Image matching method and system applied to image three-dimensional reconstruction | |
CN111444807B (en) | Target detection method, device, electronic equipment and computer readable medium | |
Igbinosa | Comparison of edge detection technique in image processing techniques | |
WO2022179002A1 (en) | Image matching method and apparatus, electronic device, and storage medium | |
CN112419372A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
Liu et al. | Microscopic 3D reconstruction based on point cloud data generated using defocused images | |
CN112614167A (en) | Rock slice image alignment method combining single-polarization and orthogonal-polarization images | |
CN110796108A (en) | Method, device and equipment for detecting face quality and storage medium | |
CN113159103B (en) | Image matching method, device, electronic equipment and storage medium | |
Malarvel et al. | Edge and region segmentation in high-resolution aerial images using improved kernel density estimation: a hybrid approach | |
CN112465050B (en) | Image template selection method, device, equipment and storage medium | |
CN111724326B (en) | Image processing method and device, electronic equipment and storage medium | |
US20080267506A1 (en) | Interest point detection | |
CN114463856B (en) | Method, device, equipment and medium for training attitude estimation model and attitude estimation | |
Deng et al. | Texture edge-guided depth recovery for structured light-based depth sensor | |
Bhatia et al. | Accurate corner detection methods using two step approach | |
CN116703958B (en) | Edge contour detection method, system, equipment and storage medium for microscopic image | |
CN113570667B (en) | Visual inertial navigation compensation method and device and storage medium | |
Cogranne et al. | A new edge detector based on parametric surface model: Regression surface descriptor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||