WO2022205611A1 - An image matching method, apparatus, device, and storage medium


Info

Publication number
WO2022205611A1
Authority
WO
WIPO (PCT)
Prior art keywords: template, image, target, feature point, searched
Application number: PCT/CN2021/098001
Other languages: English (en), French (fr)
Inventors: 王月, 张翔, 刘吉刚, 王升, 孙仲旭, 章登极, 吴丰礼
Original Assignee: 广东拓斯达科技股份有限公司
Application filed by 广东拓斯达科技股份有限公司
Publication of WO2022205611A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval of still image data
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583: Retrieval using metadata automatically derived from the content

Definitions

  • the embodiments of the present application relate to the technical field of image processing, for example, to an image matching method, apparatus, device, and storage medium.
  • an image matching method seeks similar image targets by analyzing similarity and consistency through the correspondences between image content, features, structure, relations, texture, and gray level.
  • image matching methods are widely used in target recognition, precise workpiece positioning, video tracking and other fields.
  • image matching methods include gray-based image matching method and feature-based image matching method.
  • the two key steps of the feature-based image matching method are feature extraction and matching.
  • traditionally, the Harris algorithm is used to extract corner points, and the corner points are used as feature points for matching; for images that have a definite shape but contain little corner information, the matching results are not accurate enough, and redundant feature points can lead to a large amount of calculation.
  • embodiments of the present application provide an image matching method, apparatus, device, and storage medium, so as to realize image matching based on both the corner information and the edge feature information of an image, avoiding the low matching accuracy that traditional feature-point matching methods exhibit on images that have few feature points but a definite shape, and improving the matching accuracy for such images.
  • an embodiment of the present application provides an image matching method, including:
  • acquiring the template feature point set of the template image and the target feature point set of the image to be searched, wherein each feature point set includes salient corner points and edge feature points;
  • traversing each pixel of the image to be searched according to the template image, and calculating the target similarity measure between the template feature point set and the target feature point set corresponding to the template image area;
  • determining the positions of the matching feature points according to the target similarity measure, and displaying the matching feature points in the image to be searched.
  • an embodiment of the present application further provides an image matching device, the device comprising:
  • an acquisition module configured to acquire the template feature point set of the template image and the target feature point set of the image to be searched, wherein the feature point set includes significant corner points and edge feature points;
  • a calculation module configured to traverse each pixel of the to-be-searched image according to the template image, and calculate the target similarity measure between the template feature point set and the target feature point set corresponding to the template image area;
  • the determining module is configured to determine the position of the matching feature point according to the target similarity measure, and display the matching feature point in the image to be searched.
  • an embodiment of the present application further provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the image matching method described in any embodiment of the present application.
  • an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the image matching method described in any one of the embodiments of the present application.
  • FIG. 2a is a flowchart of an image matching method in another embodiment of the present application.
  • FIG. 2b is a schematic structural diagram of a template image library in another embodiment of the present application.
  • FIG. 2c is a flowchart of an image matching method in another embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an image matching apparatus in another embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a computer device in an embodiment of the present application.
  • FIG. 1 is a flowchart of an image matching method provided by an embodiment of the present application. This embodiment is applicable to the case of matching images based on feature points and shapes.
  • the method can be performed by the image matching apparatus in the embodiment of the present application.
  • the device can be implemented in software and/or hardware. As shown in Figure 1, the method includes the following steps:
  • S110 Obtain a template feature point set of the template image and a target feature point set of the image to be searched, where the feature point set includes significant corner points and edge feature points.
  • the template image is an image sample used for matching with other images
  • the template feature point set is a collection of feature points carrying key information of the template image, such as edge information, corner information, and grayscale information; the template feature point set of the template image includes template salient corner points and template edge feature points.
  • the images to be searched may be all images in the image library, or may be images of a specified type in the image library.
  • the target feature point set of the image to be searched includes: target salient corner points and target edge feature points.
  • the template feature point set of the template image may be obtained by extracting template candidate corner points of the template image through a corner detection algorithm, removing pseudo feature points from the template candidate corner points to obtain template salient corner points, extracting template edge feature points in the neighborhood of the template salient corner points through an edge detection algorithm, and determining the template feature point set of the template image according to the template salient corner points and the template edge feature points.
  • the target feature point set of the image to be searched may be obtained by extracting target corner points of the image to be searched through a corner detection algorithm, removing pseudo feature points from the target corner points to obtain target salient corner points, extracting target edge feature points in the neighborhood of the target salient corner points through an edge detection algorithm, and determining the target feature point set of the image to be searched according to the target salient corner points and the target edge feature points.
  • the corner detection algorithm may be a Harris corner detection algorithm, and the edge detection algorithm may be an edge detection algorithm based on a Sobel operator, which is not limited in this embodiment of the present application.
  • the template image and the image to be searched are denoised by using a separable accelerated bilateral filter with adaptive parameter estimation.
  • the filter is a nonlinear filter designed on the basis of the classical Gaussian filtering algorithm, and has the characteristics of non-iterative, local and simple.
  • the process of denoising the template image and the image to be searched may be as follows: first, the pixel value of the denoised image is obtained by the locally weighted average bilateral filtering method, with the formula f(x, y) = Σ_{(i,j)∈S_{x,y}} ω(i, j)·g(i, j) / Σ_{(i,j)∈S_{x,y}} ω(i, j);
  • S_{x,y} represents the neighborhood of the pixel (x, y), g(i, j) is each pixel in the neighborhood, and ω(i, j) is the weighting coefficient.
  • the luminance similarity factors in the horizontal and vertical directions are Gaussian functions of the luminance difference between the neighborhood pixel and the center pixel, computed along the rows and along the columns respectively.
  • σ_r is a filtering parameter, which has a great influence on the filtering effect and can be calculated adaptively according to the image size.
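The denoising step above can be sketched in code. This is a minimal NumPy sketch of a (non-separable) bilateral filter; the adaptive σ_r estimate from the image's gray-level spread, the window radius, and the spatial σ are illustrative assumptions, not the patent's exact formulas.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=None):
    """Locally weighted average bilateral filter (sketch).

    sigma_r is estimated from the image when not given; this adaptive
    rule is an illustrative assumption."""
    img = img.astype(np.float64)
    if sigma_r is None:
        # assumed adaptive estimate: scale with the gray-level spread
        sigma_r = max(img.std(), 1.0)
    H, W = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # spatial (domain) weight, shared by every pixel
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for y in range(H):
        for x in range(W):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # range weight: luminance similarity factor
            rng = np.exp(-(win - img[y, x])**2 / (2 * sigma_r**2))
            w = spatial * rng
            out[y, x] = (w * win).sum() / w.sum()
    return out
```

Because the range weight collapses for pixels on the other side of a strong luminance step, the filter smooths noise while keeping edges sharp, which is why it suits feature extraction better than plain Gaussian smoothing.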
  • the target similarity measure may be the maximum value of the similarity measure, or may be a similarity measure greater than a preset threshold; the target similarity measure may be set according to actual requirements.
  • the traversal may start from the upper left corner of the image to be searched, sliding the template image over the image to be searched and calculating the target similarity measure pixel by pixel; alternatively, the template image and the image to be searched may each be downsampled into pyramid layers, and the similarity measure may be calculated layer by layer, from top to bottom and from coarse to fine, between each layer of the hierarchical template image and the corresponding layer of the hierarchical image to be searched, until the target similarity measure of the lowest-level template image and the lowest-level image to be searched, that is, of the original template image and the original image to be searched, is obtained.
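The pixel-by-pixel traversal described above can be sketched as follows, using the average cosine between template gradient directions and image gradient directions as the similarity measure; that measure is a common choice suggested by the gradient terms t′_i, u′_i defined later in the text, not necessarily the patent's exact formula.

```python
import math

def similarity(template_pts, search_grad, offset):
    """Average gradient-direction cosine at one candidate offset (sketch)."""
    ox, oy = offset
    s = 0.0
    for (x, y, tx, ty) in template_pts:      # point coords + template gradient
        gx, gy = search_grad[y + oy][x + ox]
        denom = math.hypot(tx, ty) * math.hypot(gx, gy)
        if denom > 0.0:
            s += (tx * gx + ty * gy) / denom
    return s / len(template_pts)

def best_match(template_pts, search_grad, tmpl_w, tmpl_h):
    """Slide the template over every pixel, starting from the upper-left
    corner, and keep the offset with the largest similarity measure."""
    rows, cols = len(search_grad), len(search_grad[0])
    best_s, best_pos = -1e30, (0, 0)
    for oy in range(rows - tmpl_h + 1):
        for ox in range(cols - tmpl_w + 1):
            s = similarity(template_pts, search_grad, (ox, oy))
            if s > best_s:
                best_s, best_pos = s, (ox, oy)
    return best_pos, best_s
```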
  • S130 Determine the position of the matching feature point according to the target similarity measure, and display the matching feature point in the image to be searched.
  • if the similarity measure calculated between the template feature points of the template image and the target feature points of the image to be searched equals the target similarity measure, the target feature points are determined to be matching feature points; the position coordinates of the matching feature points are then obtained, and the matching feature points are displayed in the image to be searched.
  • in the technical solution of this embodiment, the template feature point set of the template image and the target feature point set of the image to be searched are obtained, where each feature point set includes salient corner points and edge feature points; each pixel of the image to be searched is traversed according to the template image, and the target similarity measure between the template feature point set and the target feature point set corresponding to the template image area is calculated; the positions of the matching feature points are determined according to the target similarity measure, and the matching feature points are displayed in the image to be searched. Image matching can thus be performed based on both the corner information and the edge feature information of the image, avoiding the low matching accuracy of traditional feature-point matching methods on images that have few feature points but a definite shape, and improving the matching accuracy for such images.
  • acquiring the template feature point set of the template image includes: obtaining the template feature points and the rotation angle of the template image; if the number of template feature points is greater than a preset number, performing pyramid downsampling and layering on the template image to obtain a hierarchical template image; and performing an angle change on the template feature points of each hierarchical template image according to the rotation angle to obtain the template feature point set of the template image.
  • obtaining the target feature point set of the image to be searched includes: obtaining the image to be searched and the number of layers corresponding to the template image; performing pyramid downsampling and layering on the image to be searched according to the number of layers to obtain the hierarchical image to be searched; extracting the target feature points of the image to be searched at each level; and determining the target feature point set of the image to be searched according to the target feature points.
  • the method of this embodiment includes the following steps:
  • the template feature points of the template image include template salient corner points and template edge feature points.
  • the rotation angle may cover 360 degrees in units of a preset angle step; for example, the rotation angles may be 1 degree, 2 degrees, ..., 360 degrees, or 2 degrees, 4 degrees, ..., 360 degrees. The rotation angle and the preset angle step may be set according to actual requirements, which is not limited in this embodiment of the present application.
  • the purpose of setting the rotation angle is to make the angle of the template image consistent with the angle of the image to be searched.
  • the template feature points of the template image may be obtained by extracting the template candidate corner points of the template image, extracting the template edge feature points in the neighborhood of the template salient corner points, and determining the template feature points of the template image according to the template salient corner points and the template edge feature points.
  • obtaining the template feature points of the template image includes:
  • if a template candidate corner point is the maximum value point in its neighborhood, the template candidate corner point is determined as a template salient corner point, and the template feature points of the template image are determined accordingly.
  • the template candidate corner points of the template image are extracted by the Harris corner detection algorithm, and pseudo feature points are removed from the template candidate corner points to obtain the template salient corner points;
  • template edge points are extracted in the neighborhood of the template salient corner points by the edge detection algorithm based on the Sobel operator, and the template edge points are sampled to obtain the template edge feature points;
  • the sampling method can be sampling at equal intervals or random sampling at unequal intervals;
  • the template salient corner points and the template edge feature points together constitute the template feature points of the template image.
  • the template feature point of the template image integrates the corner point feature and edge feature of the image, and can fully reflect the corner point information and edge feature information of the image.
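The edge-feature extraction and equal-interval sampling can be sketched like this. The 3×3 Sobel kernels are standard, while the neighbourhood radius, the gradient-magnitude threshold, and the sampling step are illustrative assumptions.

```python
import numpy as np

def sobel_gradients(img):
    """3x3 Sobel gradients (interior pixels only; border left at zero)."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[1:-1, 1:-1] = (img[:-2, 2:] + 2*img[1:-1, 2:] + img[2:, 2:]
                      - img[:-2, :-2] - 2*img[1:-1, :-2] - img[2:, :-2])
    gy[1:-1, 1:-1] = (img[2:, :-2] + 2*img[2:, 1:-1] + img[2:, 2:]
                      - img[:-2, :-2] - 2*img[:-2, 1:-1] - img[:-2, 2:])
    return gx, gy

def edge_points_near(corner, gx, gy, radius=3, thresh=50.0, step=2):
    """Edge points in the corner's neighbourhood, thinned by equal-interval
    sampling; radius, thresh and step are illustrative choices."""
    cy, cx = corner
    pts = []
    for y in range(max(cy - radius, 0), min(cy + radius + 1, gx.shape[0])):
        for x in range(max(cx - radius, 0), min(cx + radius + 1, gx.shape[1])):
            if np.hypot(gx[y, x], gy[y, x]) >= thresh:
                pts.append((y, x))
    return pts[::step]  # equal-interval sampling
```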
  • the steps of extracting the template candidate corner points of the template image by the Harris corner detection algorithm are: calculating the gradient I_x of the template image I(x, y) in the x direction and the gradient I_y in the y direction;
  • the self-similarity after a translation (Δx, Δy) at point (x, y) can be calculated by the autocorrelation function, where ω(u, v) is a window function centered on point (u, v), generally a Gaussian weighting function, W(x, y) is the window of pixel points of the template image, and M(x, y) is the autocorrelation matrix;
  • the matrix M(x, y) accumulates the gradient products over the window: M(x, y) = Σ_{(u,v)∈W} ω(u, v) [I_x², I_x·I_y; I_x·I_y, I_y²].
  • the flat, edge, and corner positions in the image can be determined according to the magnitudes of the two eigenvalues of M(x, y).
  • the responsivity of each feature point is calculated, and points whose responsivity is greater than a preset responsivity threshold are taken as candidate corner points; the calculation formula is H = det M − k·(trace M)², where H is the responsivity of the feature point, det M is the determinant of the matrix M(x, y), trace M is the trace of the matrix M(x, y), and k is a constant weight coefficient, generally taking a value from 0.04 to 0.06.
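The Harris response computation can be sketched as follows. Central-difference gradients and a 5-tap separable Gaussian window keep the sketch dependency-free; the window size and σ are assumptions, but the response H = det(M) − k·trace(M)² follows the formula above.

```python
import numpy as np

def harris_response(img, k=0.04, sigma=1.0):
    """Harris corner response map H = det(M) - k * trace(M)^2 (sketch)."""
    img = img.astype(np.float64)
    # central-difference gradients I_x, I_y
    Ix = np.zeros_like(img); Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def gauss_smooth(a):
        # separable 5-tap Gaussian window w(u, v)
        t = np.arange(-2, 3)
        g = np.exp(-t**2 / (2 * sigma**2)); g /= g.sum()
        a = np.apply_along_axis(lambda r: np.convolve(r, g, mode='same'), 1, a)
        return np.apply_along_axis(lambda c: np.convolve(c, g, mode='same'), 0, a)

    Sxx = gauss_smooth(Ix * Ix)   # M = [[Sxx, Sxy], [Sxy, Syy]]
    Syy = gauss_smooth(Iy * Iy)
    Sxy = gauss_smooth(Ix * Iy)
    det = Sxx * Syy - Sxy**2
    trace = Sxx + Syy
    return det - k * trace**2
```

Candidate corners are then the pixels where this response exceeds the preset responsivity threshold.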
  • if the template candidate corner point is the maximum value point in its neighborhood, the template candidate corner point is determined as a template salient corner point, as follows:
  • the target gradient direction may include a horizontal gradient direction, a vertical gradient direction, a -45° gradient direction, and a 45° gradient direction within the neighborhood of the template candidate corner point; or may include other gradient directions.
  • if the gradient value of the template candidate corner point is greater than the gradient values of the two adjacent pixels along each gradient direction in the neighborhood, the template candidate corner point is the maximum value point in its 9-pixel neighborhood, and the template candidate corner point is determined to be a salient feature point; if the gradient value of the template candidate corner point is less than or equal to the gradient value of either of the two pixels along a gradient direction in the neighborhood, the template candidate corner point is eliminated from the template candidate corner points.
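The pseudo-corner removal by local maximum checking can be sketched like this. Checking the two neighbours along each of the four listed directions amounts to a strict maximum test over the 3×3 (9-pixel) neighbourhood; the function name and the response-map input are illustrative.

```python
def salient_corners(response, candidates):
    """Keep a candidate only if its response is a strict maximum over the two
    neighbours in each of the four directions (horizontal, vertical, +45 and
    -45 degrees) of its 3x3 neighbourhood (sketch of the pruning step)."""
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    rows, cols = len(response), len(response[0])
    kept = []
    for y, x in candidates:
        ok = True
        for dy, dx in dirs:
            for s in (1, -1):           # both neighbours along the direction
                ny, nx = y + s * dy, x + s * dx
                if 0 <= ny < rows and 0 <= nx < cols:
                    if response[ny][nx] >= response[y][x]:
                        ok = False      # not a strict local maximum
        if ok:
            kept.append((y, x))
    return kept
```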
  • the preset number of points may be set according to actual needs, which is not limited in this embodiment of the present application, for example, it may be determined according to the size of the template image.
  • if the number of template feature points is greater than the preset number, pyramid adaptive downsampling and layering are performed on the template image until the number of template feature points is less than or equal to the preset number; a hierarchical template image is obtained, and the current number of pyramid layers, that is, the number of layers of the template image, is recorded.
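The adaptive layering loop can be sketched as follows. The 2×2-mean downsampling kernel and the level cap are illustrative assumptions, and count_features stands in for the real feature-extraction step whose point count drives the stopping rule.

```python
import numpy as np

def build_pyramid(img, count_features, max_points, max_levels=5):
    """Halve the image until the feature count drops to the budget (sketch).

    count_features: callable returning the number of feature points of a level.
    Returns the list of levels; levels[0] is the original (bottom) image."""
    levels = [img.astype(np.float64)]
    while count_features(levels[-1]) > max_points and len(levels) < max_levels:
        a = levels[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        a = a[:h, :w]
        # 2x2 mean downsampling (assumed kernel)
        levels.append((a[0::2, 0::2] + a[0::2, 1::2]
                       + a[1::2, 0::2] + a[1::2, 1::2]) / 4.0)
    return levels
```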
  • S230 Perform an angle change on the template feature points of the template image at each level according to the rotation angle, to obtain a template feature point set of the template image.
  • the feature points of each layer image in the hierarchical template image are rotated according to the rotation angles to obtain hierarchical template feature points at different angles, and the hierarchical template feature points at different angles constitute the template feature point set. If the number of rotation angles is E and the number of layers of the template image is F, the template feature point set includes E × F corresponding sets of template feature points.
  • the angle change on the template feature points of each layer image in the hierarchical template image may be performed as follows: write the point as a vector of length l making a horizontal included angle φ with the x axis, so that x = l·cos φ and y = l·sin φ; after rotating by the rotation angle θ, the new coordinates are x′ = l·cos(φ + θ) and y′ = l·sin(φ + θ);
  • (x, y) are the pixel coordinates in the hierarchical template image before the angle change;
  • (x′, y′) are the pixel coordinates in the hierarchical template image after the angle change;
  • l is the length of the introduced intermediate variable vector;
  • φ is the horizontal included angle of the intermediate variable vector;
  • θ is the rotation angle.
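The polar-form rotation above transcribes directly into code; only the function name is illustrative.

```python
import math

def rotate_point(x, y, theta_deg):
    """Rotate a feature point about the origin using the l / phi formulation."""
    l = math.hypot(x, y)        # length of the intermediate vector
    phi = math.atan2(y, x)      # horizontal included angle
    t = math.radians(theta_deg)
    return l * math.cos(phi + t), l * math.sin(phi + t)
```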
  • the template image library can be constructed by storing the information of these template feature point sets in multiple nested, efficient data structures; this data structure is a bridge between the constructed template images and the subsequent matching process.
  • the data structure is built by nesting unordered-map containers: the outermost layer is the pyramid layer number, the middle layer is the different angles, and the innermost layer is the feature point set; the overall structure is shown in FIG. 2b.
  • the hierarchical template image is transformed at the angle level and at the scale level, respectively: the first layer (the bottom layer) is at the initial scale, and the template image at the initial scale and initial angle is rotated by the initial angle plus the first rotation angle, the initial angle plus the second rotation angle, and so on, where the first rotation angle can be the preset angle step and the second rotation angle can be twice the preset angle step; the second layer is scaled down from the initial scale, and the template image at that scale is rotated in the same way; this continues up to the top-level image, finally yielding the hierarchical template image library.
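The nested layer → angle → point-set structure can be sketched with ordinary dictionaries (the Python analogue of nested unordered-map containers); extract_points is a stand-in for the real per-level, per-angle feature extraction.

```python
def build_template_library(num_levels, angle_step, extract_points):
    """Nested mapping: pyramid level -> rotation angle -> feature point list.

    Mirrors the 'outermost pyramid layer, middle angle, innermost point set'
    structure; extract_points(level, angle) is a placeholder callable."""
    library = {}
    for level in range(num_levels):
        library[level] = {}
        for angle in range(0, 360, angle_step):
            library[level][angle] = extract_points(level, angle)
    return library
```

A lookup during matching is then two dictionary accesses, library[level][angle], which keeps the per-candidate cost constant.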
  • obtaining the target feature point set of the image to be searched includes: performing pyramid downsampling and layering on the image to be searched according to the number of layers of the corresponding template image to obtain the hierarchical image to be searched; acquiring the target feature points of the image to be searched at each level; and forming the target feature point set from the target feature points of the images to be searched at the different levels.
  • the manner of acquiring the target feature points of the images to be searched at each level is the same as the manner of acquiring the template feature points of the template image, and details are not described herein again.
  • the extraction of the target feature points of the images to be searched at each level includes: if a target candidate corner point is the maximum value point in its neighborhood, determining the target candidate corner point as a target salient corner point;
  • the target candidate corner points of the image to be searched are extracted by the Harris corner detection algorithm, and pseudo feature points are removed from the target candidate corner points to obtain the target salient corner points;
  • target edge points are extracted in the neighborhood of the target salient corner points by the edge detection algorithm based on the Sobel operator, and the target edge points are sampled to obtain the target edge feature points;
  • the sampling method can be sampling at equal intervals or random sampling at non-equal intervals;
  • the target salient corner points and the target edge feature points together constitute the target feature points of the image to be searched.
  • the target feature points of the image to be searched integrate the corner features and edge features of the image, and can fully reflect the corner information and edge feature information of the image.
  • the target similarity measure is the maximum similarity measure of the template feature point set of the lowest-level image of the template image and the target feature point set of the lowest-level image of the image to be searched.
  • traversing each pixel of the image to be searched according to the template image may be performed by traversing each pixel of the corresponding layer of the image to be searched according to each layer of the template image and calculating the similarity measure between the template feature point set and the target feature point set corresponding to the template image area, realizing a process from rough matching to fine matching, and finally determining the maximum similarity measure between the template feature point set of the lowest-level template image and the target feature point set of the lowest-level image to be searched.
  • each pixel of the highest-level image of the image to be searched is traversed, and the first similarity measure between the template feature point set of the highest-level template image and the target feature point set corresponding to the template image area is calculated; the matching point position corresponding to the maximum value of the first similarity measure is determined, and the matching point position is mapped to the next-highest-level image of the image to be searched according to a preset mapping strategy to obtain the to-be-matched area of that level;
  • each pixel of the next-highest-level image of the image to be searched is then traversed, the similarity measure between the template feature point set of the next-highest-level template image and the corresponding target feature point set is calculated, and the to-be-matched area of the next layer is determined according to the maximum value of that similarity measure; this proceeds layer by layer until the to-be-matched area of the lowest-level image of the image to be searched is determined.
  • the second similarity measure between the template feature point set of the lowest-level template image and the target feature point set corresponding to the to-be-matched area is calculated, and the maximum value of the second similarity measure is determined as the target similarity measure.
  • the mapping strategy for sequentially determining the to-be-matched area of the next-layer image of the image to be searched according to the matching point position may be described in terms of the following quantities:
  • (x1, y1) is the matching point position at the current level;
  • (x'2, y'2) is the upper-left corner coordinate of the to-be-matched area in the next-layer image of the image to be searched;
  • (x''2, y''2) is the lower-right corner coordinate of the to-be-matched area in the next-layer image of the image to be searched;
  • img_next.cols is the number of pixel columns of the next-layer template image;
  • img_next.rows is the number of pixel rows of the next-layer template image.
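A sketch of such a mapping for a factor-2 pyramid follows. The ×2 scaling of the matching point is standard for image pyramids, while the margin and the clamping to the image bounds are illustrative assumptions rather than the patent's exact mapping formula.

```python
def map_to_next_level(x1, y1, tmpl_cols, tmpl_rows, img_w, img_h, margin=2):
    """Map a match at the current (coarser) level to a search region on the
    next (finer) level; returns the region's top-left and bottom-right corners."""
    cx, cy = 2 * x1, 2 * y1                      # coarse point -> fine level
    x_tl = max(cx - margin, 0)                   # (x'2, y'2): top-left corner
    y_tl = max(cy - margin, 0)
    x_br = min(cx + tmpl_cols + margin, img_w)   # (x''2, y''2): bottom-right
    y_br = min(cy + tmpl_rows + margin, img_h)
    return (x_tl, y_tl), (x_br, y_br)
```

Restricting the finer level's search to this small region is what turns the full traversal into a coarse-to-fine refinement.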
  • the first termination condition includes:
  • s_j is the similarity measure accumulated over the first j template feature points and the corresponding target feature points in the hierarchical image area;
  • n is the total number of template feature points and target feature points involved in the calculation;
  • s_min is the preset threshold;
  • t'_i is the x-direction gradient of the target feature point of the image to be searched at the level;
  • u'_i is the y-direction gradient of the target feature point of the image to be searched at the level.
  • the similarity measure at an impossible target position does not need to be calculated completely: if the termination condition is met, the similarity measure calculation for the current pixel is terminated in advance, which speeds up matching.
  • the second termination condition includes:
  • by presetting the greediness coefficient g, the first n − j terms use a strict threshold to judge the termination condition, and the last j terms use a loose threshold to judge the termination condition;
  • the greediness coefficient g may be set to 0.9, for example.
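The early-termination idea can be sketched as follows, using the bounds that are standard in shape-based matching: a strict bound (s_min − 1 + j/n, which assumes every remaining point scores a perfect 1) blended with a looser greedy bound (s_min·j/n) controlled by g. These particular thresholds illustrate the idea and are not necessarily the patent's exact conditions.

```python
def accumulate_with_termination(contribs, s_min, g=0.9):
    """Accumulate per-point similarity contributions (each in [0, 1]) and stop
    early once the partial sum can no longer reach s_min (sketch)."""
    n = len(contribs)
    s = 0.0
    for j, c in enumerate(contribs, start=1):
        s += c
        # strict bound, softened by g; greedy bound assumes average scoring
        bound = min(s_min - 1.0 + g * j / n, s_min * j / n)
        if s < bound:
            return None, j      # terminated early after j of n points
    return s / n, n             # full score and number of points used
```

With g = 1 the criterion is safe (it never rejects a true match); g < 1 terminates earlier at the cost of a small chance of missing a match, which is the speed/robustness trade-off the greediness coefficient controls.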
  • S260 Determine the position of the matching feature point according to the target similarity measure, and display the matching feature point in the image to be searched.
  • the steps of the technical solution of this embodiment are as follows. The first step is to extract the feature points of the template image: after filtering and denoising the template image, the candidate corner points of the template image are extracted by the Harris corner detection algorithm; by non-maximum suppression in the local area of the candidate corner points, the candidate corner points with the maximum responsivity are retained as salient corner points; edge feature points are extracted in the neighborhood of the salient corner points by the edge detection algorithm, an equal-interval sampling strategy is used to filter the edge feature points, and the feature point set of the template image is determined according to the salient corner points and the edge feature points.
  • the second step is to scale and rotate the template image to obtain the feature point sets of the template image and form the template library: the template image is layered and angularly rotated, and a multi-scale, multi-angle template image feature point set is obtained.
  • the third step is to extract the feature point set of the image to be searched.
  • the image to be searched is layered, the feature point set of each layer of the image to be searched is obtained, and the similarity measure between the template feature point set of each layer of the template image and the feature point set of the corresponding level of the image to be searched is calculated; if the termination condition is met, the similarity measure calculation for the current pixel is terminated in advance.
  • the fourth step is to determine the matching position according to the mapping strategy.
  • according to the mapping strategy, the matching positions of the lowest-level template image and the lowest-level image to be searched are gradually determined, from rough matching to precise matching, and the positions of the matching points are displayed in the image to be searched.
  • the similarity measure between the feature points of the image to be searched and the points in the template library is calculated and the results are accumulated; when the accumulated sum reaches the minimum similarity measure S_min, the calculation is stopped, and the matched position and angle serve as the top-level mapping center.
  • the mapping and result display module determines the mapping strategy, obtains the image matching angle and position coordinates of the L−1 layer, outputs the final matching angle and position coordinates, and displays the matching position of the template image in the image to be searched.
  • in the technical solution of this embodiment, the template feature point set of the template image and the target feature point set of the image to be searched are obtained, where each feature point set includes salient corner points and edge feature points; each pixel of the image to be searched is traversed according to the template image, and the target similarity measure between the template feature point set and the target feature point set corresponding to the template image area is calculated; the positions of the matching feature points are determined according to the target similarity measure, and the matching feature points are displayed in the image to be searched. Image matching can thus be performed based on both the corner information and the edge feature information of the image, avoiding the low matching accuracy of traditional feature-point matching methods on images that have few feature points but a definite shape and improving the matching accuracy for such images; furthermore, extracting salient corner points from the corner information can improve the matching efficiency.
  • FIG. 3 is a schematic structural diagram of an image matching apparatus according to another embodiment of the present application. This embodiment can be applied to the case of matching images based on feature points and shapes, the apparatus can be implemented in software and/or hardware, and the apparatus can be integrated in any device that provides the function of image matching, as shown in FIG. 3 ,
  • the image matching apparatus includes: an acquisition module 310 , a calculation module 320 and a determination module 330 .
  • the acquisition module 310 is configured to acquire the template feature point set of the template image and the target feature point set of the image to be searched, wherein the feature point set includes significant corner points and edge feature points;
  • the calculation module 320 is configured to traverse each pixel of the to-be-searched image according to the template image, and calculate the target similarity of the template feature point set and the target feature point set corresponding to the template image area measure;
  • the determining module 330 is configured to determine the position of the matching feature point according to the target similarity measure, and display the matching feature point in the to-be-searched image.
  • the acquisition module includes:
  • the first acquisition unit is set to acquire the template feature points and the rotation angle of the template image;
  • the first layering unit is configured to perform pyramid downsampling and layering on the template image if the number of the template feature points is greater than the preset number of points to obtain a hierarchical template image;
  • the rotation unit is configured to perform an angle change on the template feature points of each layer image in the hierarchical template image according to the rotation angle, so as to obtain a template feature point set of the template image.
  • the first obtaining unit includes:
  • a first extraction subunit, configured to extract template candidate corners of the template image;
  • the first determination subunit is set to determine the template candidate corner point as a template significant corner point if the template candidate corner point is a maximum value point in the neighborhood;
  • a second extraction subunit, configured to extract the template edge points in the neighborhood of the template significant corner points;
  • a first sampling subunit, configured to sample the template edge points to obtain template edge feature points;
  • the second determination subunit is configured to determine the template feature points of the template image according to the template salient corner points and the template edge feature points.
  • the first determining subunit is set to:
  • the template candidate corner point is a maximum value point in the neighborhood, and the template candidate corner point is determined as a template significant corner point.
  • the acquisition module includes:
  • a second acquisition subunit, configured to acquire the image to be searched and the number of layers of the hierarchical template image corresponding to the template image;
  • a second layering unit configured to perform pyramid downsampling and layering on the to-be-searched image according to the layer number to obtain a hierarchical to-be-searched image
  • the extraction unit is set to extract the target feature points of the images to be searched at each level;
  • the determining unit is configured to determine the target feature point set of the to-be-searched image according to the target feature points.
  • the extraction unit is set to:
  • extract the target candidate corner points of each hierarchical image to be searched; if a target candidate corner point is a maximum value point in the neighborhood, determine the target candidate corner point as a target significant corner point; extract the target edge points in the neighborhood of the target significant corner points; sample the target edge points to obtain target edge feature points; and determine the target feature points of the image to be searched according to the target significant corner points and the target edge feature points;
  • the computing module is set to:
  • traverse each pixel of the highest-level image of the image to be searched according to the highest-level image of the template image; calculate the first similarity measure between the template feature point set of the highest-level image of the template image and the target feature point set corresponding to the template image region, and determine the matching point position corresponding to the maximum of the first similarity measure; determine, from the matching point position, the to-be-matched area of the next layer of the image to be searched, successively, until the to-be-matched area of the lowest-level image of the image to be searched is determined; and calculate the second similarity measure between the template feature point set of the lowest-level image of the template image and the target feature point set corresponding to the to-be-matched area, and determine the maximum of the second similarity measure as the target similarity measure;
  • the above product can execute the method provided by any embodiment of the present application, and has functional modules and beneficial effects corresponding to the execution method.
  • FIG. 4 is a schematic structural diagram of a computer device in another embodiment of the present application.
  • FIG. 4 shows a block diagram of an exemplary computer device 12 suitable for use in implementing embodiments of the present application.
  • the computer device 12 shown in FIG. 4 is only an example, and should not impose any limitations on the functions and scope of use of the embodiments of the present application.
  • computer device 12 takes the form of a general-purpose computing device.
  • Components of computer device 12 may include, but are not limited to, one or more processors or processing units 16 , system memory 28 , and a bus 18 connecting various system components including system memory 28 and processing unit 16 .
  • Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures.
  • these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MAC) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
  • Computer device 12 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by computer device 12, including both volatile and nonvolatile media, removable and non-removable media.
  • System memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32 .
  • Computer device 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media.
  • storage system 34 may be configured to read and write to non-removable, non-volatile magnetic media (not shown in FIG. 4, commonly referred to as a "hard drive”).
  • a disk drive may be provided for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive may be provided for reading from and writing to a removable non-volatile optical disk (e.g., CD-ROM, DVD-ROM, or other optical media).
  • each drive may be connected to bus 18 through one or more data media interfaces.
  • Memory 28 may include at least one program product having a set (eg, at least one) of program modules configured to perform the functions of various embodiments of the present application.
  • a program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28; such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each or some combination of these examples may include an implementation of a network environment.
  • Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
  • Computer device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), may also communicate with one or more devices that enable a user to interact with computer device 12, and/or may communicate with any device (e.g., a network card, a modem, etc.) that enables the computer device 12 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 22.
  • in this embodiment, the display 24 does not exist as an independent entity but is embedded in a mirror surface; when the display surface of the display 24 is not displaying, the display surface of the display 24 and the mirror surface are visually integrated.
  • the computer device 12 may communicate with one or more networks (eg, a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 20 .
  • network adapter 20 communicates with other modules of computer device 12 via bus 18 .
  • it should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
  • the processing unit 16 executes various functional applications and data processing by running the programs stored in the system memory 28, for example implementing the image matching method provided by the embodiments of the present application: obtaining the template feature point set of the template image and the target feature point set of the image to be searched, wherein the feature point sets include significant corner points and edge feature points; traversing each pixel of the image to be searched according to the template image, and calculating the target similarity measure between the template feature point set and the target feature point set corresponding to the template image region; determining the positions of the matching feature points according to the target similarity measure, and displaying the matching feature points in the image to be searched.
  • the embodiments of the present application provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the image matching method provided by the embodiments of the present application: obtaining a template feature point set of a template image and the target feature point set of the image to be searched, wherein the feature point sets include significant corner points and edge feature points; traversing each pixel of the image to be searched according to the template image, and calculating the target similarity measure between the template feature point set and the target feature point set corresponding to the template image region; determining the positions of the matching feature points according to the target similarity measure, and displaying the matching feature points in the image to be searched.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: electrical connections having one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a propagated data signal in baseband or as part of a carrier wave, with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out the operations of the present application may be written in one or more programming languages, including object-oriented programming languages, such as Java, Smalltalk, and C++, as well as conventional procedural programming languages, such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).

Landscapes

  • Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses an image matching method, apparatus, device, and storage medium. The method includes: obtaining a template feature point set of a template image and a target feature point set of an image to be searched, where the feature point sets include significant corner points and edge feature points, the feature point sets including the template feature point set and the target feature point set; traversing every pixel of the image to be searched according to the template image, and calculating a target similarity measure between the template feature point set and the target feature point set corresponding to the template image region; determining the positions of matching feature points according to the target similarity measure, and displaying the matching feature points in the image to be searched.

Description

Image matching method, apparatus, device, and storage medium
This application claims priority to Chinese Patent Application No. 202110357278.4, filed with the China National Intellectual Property Administration on April 1, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the technical field of image processing, and relate, for example, to an image matching method, apparatus, device, and storage medium.
Background
An image matching method seeks similar image targets by analyzing similarity and consistency through the correspondence of image content, features, structure, relationships, texture, and gray levels. Image matching methods are currently widely used in fields such as target recognition, precise workpiece positioning, and video tracking.
Commonly used image matching methods include gray-level-based image matching methods and feature-based image matching methods. The two key steps of feature-based image matching are feature extraction and matching. The traditional approach extracts corner points with the Harris algorithm and matches the corner points as feature points; for images that have little corner information but a definite shape, the matching result is not accurate enough, and redundant feature points lead to a heavy computational load.
Summary
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of the claims.
The embodiments of the present application provide an image matching method, apparatus, device, and storage medium, so that image matching can be performed based on the corner information and edge feature information of an image, avoiding the low matching accuracy of traditional feature-point matching methods on images that have few feature points but a definite shape, and improving the matching accuracy for images with a definite shape.
In a first aspect, an embodiment of the present application provides an image matching method, including:
obtaining a template feature point set of a template image and a target feature point set of an image to be searched, where the feature point sets include significant corner points and edge feature points;
traversing every pixel of the image to be searched according to the template image, and calculating a target similarity measure between the template feature point set and the target feature point set corresponding to the template image region;
determining the positions of matching feature points according to the target similarity measure, and displaying the matching feature points in the image to be searched.
In a second aspect, an embodiment of the present application further provides an image matching apparatus, including:
an acquisition module, configured to obtain a template feature point set of a template image and a target feature point set of an image to be searched, where the feature point sets include significant corner points and edge feature points;
a calculation module, configured to traverse every pixel of the image to be searched according to the template image, and calculate a target similarity measure between the template feature point set and the target feature point set corresponding to the template image region;
a determination module, configured to determine the positions of matching feature points according to the target similarity measure, and display the matching feature points in the image to be searched.
In a third aspect, an embodiment of the present application further provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the image matching method according to any embodiment of the present application.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the image matching method according to any embodiment of the present application.
Brief Description of the Drawings
FIG. 1 is a flowchart of an image matching method in an embodiment of the present application;
FIG. 2a is a flowchart of an image matching method in another embodiment of the present application;
FIG. 2b is a schematic structural diagram of a template image library in another embodiment of the present application;
FIG. 2c is a flowchart of an image matching method in another embodiment of the present application;
FIG. 3 is a schematic structural diagram of an image matching apparatus in another embodiment of the present application;
FIG. 4 is a schematic structural diagram of a computer device in an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and embodiments. It can be understood that the example embodiments described here are only used to explain the present application and are not a limitation of it. It should also be noted that, for ease of description, the drawings show only the parts related to the present application rather than the entire structure.
It should be noted that similar reference numbers and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined and explained in subsequent drawings. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the descriptions and cannot be understood as indicating or implying relative importance.
FIG. 1 is a flowchart of an image matching method provided by an embodiment of the present application. This embodiment is applicable to matching images based on feature points and shape; the method may be executed by the image matching apparatus in the embodiments of the present application, which may be implemented in software and/or hardware. As shown in FIG. 1, the method includes the following steps:
S110: obtain a template feature point set of a template image and a target feature point set of an image to be searched, where the feature point sets include significant corner points and edge feature points.
Here, the template image is an image sample used for matching against other images. The template feature point set is the set of feature points of the template image that carry key information, such as edge information, corner information, and gray-level information; the template feature point set of the template image includes template significant corner points and template edge feature points.
The image to be searched may be all the images in an image library, or the images of a specified type in the image library. The target feature point set of the image to be searched includes target significant corner points and target edge feature points.
Exemplarily, the template feature point set of the template image may be obtained by extracting template candidate corner points of the template image with a corner detection algorithm, removing pseudo feature points from the template candidate corner points to obtain template significant corner points, extracting template edge feature points within the neighborhood of the template significant corner points with an edge detection algorithm, and determining the template feature point set of the template image according to the template significant corner points and the template edge feature points. Similarly, the target feature point set of the image to be searched may be obtained by extracting target corner points of the image to be searched with a corner detection algorithm, removing pseudo feature points from the target corner points to obtain target significant corner points, extracting target edge feature points within the neighborhood of the target significant corner points with an edge detection algorithm, and determining the target feature point set of the image to be searched according to the target significant corner points and the target edge feature points. The corner detection algorithm may be the Harris corner detection algorithm, and the edge detection algorithm may be an edge detection algorithm based on the Sobel operator; the embodiments of the present application impose no limitation on this.
Before obtaining the template feature point set of the template image and the target feature point set of the image to be searched, the method further includes:
denoising the template image and the image to be searched.
Exemplarily, a separated accelerated bilateral filter with adaptive parameter estimation is used to denoise the template image and the image to be searched. This filter is a non-linear filter designed on the basis of the classical Gaussian filtering algorithm, and it is non-iterative, local, and simple.
The denoising of the template image and the image to be searched may proceed as follows. First, the pixel values of the denoised image are obtained by the locally weighted-average bilateral filtering method:
ĝ(x, y) = Σ_{(i,j)∈S_{x,y}} ω(i, j)·g(i, j) / Σ_{(i,j)∈S_{x,y}} ω(i, j);
where ĝ(x, y) is the denoised image, S_{x,y} denotes the neighborhood of the pixel (x, y), g(i, j) is each pixel within that neighborhood, and ω(i, j) is the weighting coefficient.
The weighting coefficient ω(i, j) is the product of a spatial proximity factor ω_s(i, j) and a brightness similarity factor ω_r(i, j), that is, ω(i, j) = ω_s(i, j)·ω_r(i, j). Through the interaction of these two weighting factors, the bilateral filter both smooths the image and preserves the image edges.
It should be noted that the brightness similarity factor ω_r is further improved: replacing the weighting factor ω_r over the two-dimensional neighborhood with one-dimensional weighting factors in the horizontal and vertical directions effectively reduces the amount of computation without degrading performance.
The brightness similarity factor in the horizontal direction is:
ω_r^h = exp(−[g(i, y) − g(x, y)]² / (2σ_r²));
and the brightness similarity factor in the vertical direction is:
ω_r^v = exp(−[g(x, j) − g(x, y)]² / (2σ_r²));
where σ_r is the filtering parameter, which has a large influence on the filtering result and can be computed adaptively from the image size by convolving the image with the high-pass filter HD = (1, −2, 1), which is related to the Laplacian filter, followed by downsampling, with H and W denoting the height and width of the image; the image includes the template image and the image to be searched.
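As a rough illustration of the separable bilateral filtering described above, the following Python sketch applies a horizontal one-dimensional bilateral pass followed by a vertical one. The kernel radius and the fixed σ_s and σ_r values are assumptions made for illustration; the adaptive estimation of σ_r from the image is omitted here:

```python
import numpy as np

def bilateral_1d(img, sigma_s=2.0, sigma_r=25.0, radius=3, axis=1):
    """One-dimensional bilateral pass along the given axis (axis=1: horizontal)."""
    img = img.astype(np.float64)
    out = np.zeros_like(img)
    weight = np.zeros_like(img)
    for d in range(-radius, radius + 1):
        shifted = np.roll(img, d, axis=axis)
        ws = np.exp(-(d * d) / (2 * sigma_s ** 2))                 # spatial proximity factor
        wr = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))  # brightness similarity factor
        w = ws * wr
        out += w * shifted
        weight += w
    return out / weight   # locally weighted average

def separable_bilateral(img, **kw):
    """Horizontal pass then vertical pass, approximating the 2-D bilateral filter."""
    return bilateral_1d(bilateral_1d(img, axis=1, **kw), axis=0, **kw)
```

On a noisy image the two 1-D passes reduce the noise while the brightness term keeps strong edges from being averaged across, at a fraction of the cost of the full 2-D kernel.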
S120: traverse every pixel of the image to be searched according to the template image, and calculate the target similarity measure between the template feature point set and the target feature point set corresponding to the template image region.
Here, the target similarity measure may be the maximum of the similarity measures, or a similarity measure greater than a preset threshold; the target similarity measure can be set according to actual needs.
Exemplarily, the traversal and calculation may start from the upper-left corner of the image to be searched, sliding the template image over the image to be searched and computing the target similarity measure pixel by pixel. Alternatively, the template image and the image to be searched may each be downsampled into layers to obtain a hierarchical template image and a hierarchical image to be searched; the similarity measure between each layer of the hierarchical template image and the corresponding layer of the hierarchical image to be searched is then computed in turn from top to bottom, from coarse to fine, until the target similarity measure between the bottom-level image of the template image and the bottom-level image of the image to be searched is obtained, that is, the target similarity measure between the original template image and the original image to be searched.
S130: determine the positions of the matching feature points according to the target similarity measure, and display the matching feature points in the image to be searched.
Exemplarily, if the similarity measure computed from the template feature points of the template image and the target feature points of the image to be searched is the target similarity measure, the target feature points are determined to be the matching feature points; the position coordinates of the matching feature points are obtained, and the matching feature points are displayed in the image to be searched.
In the technical solution of this embodiment, the template feature point set of the template image and the target feature point set of the image to be searched are obtained, where the feature point sets include significant corner points and edge feature points; every pixel of the image to be searched is traversed according to the template image, and the target similarity measure between the template feature point set and the target feature point set corresponding to the template image region is calculated; the positions of the matching feature points are determined according to the target similarity measure, and the matching feature points are displayed in the image to be searched. Image matching can thus be performed based on the corner information and edge feature information of the image, avoiding the low matching accuracy of traditional feature-point matching methods on images that have few feature points but a definite shape, and improving the matching accuracy for images with a definite shape.
FIG. 2a is a flowchart of an image matching method in another embodiment of the present application, refined on the basis of the above embodiment. In this embodiment, obtaining the template feature point set of the template image includes: obtaining the template feature points and rotation angles of the template image; if the number of the template feature points is greater than a preset number, performing pyramid downsampling and layering on the template image to obtain a hierarchical template image; and changing the angle of the template feature points of each layer of the hierarchical template image according to the rotation angles, to obtain the template feature point set of the template image. Obtaining the target feature point set of the image to be searched includes: obtaining the image to be searched and the number of layers of the hierarchical template image corresponding to the template image; performing pyramid downsampling and layering on the image to be searched according to the number of layers to obtain a hierarchical image to be searched; extracting the target feature points of each layer of the hierarchical image to be searched; and determining the target feature point set of the image to be searched according to the target feature points.
As shown in FIG. 2a, the method of this embodiment includes the following steps:
S210: obtain the template feature points and rotation angles of the template image.
Here, the template feature points of the template image include template significant corner points and template edge feature points.
The rotation angles may cover 360 degrees in units of a preset angle step; for example, the rotation angles may be 1 degree, 2 degrees, ..., 360 degrees, or 2 degrees, 4 degrees, ..., 360 degrees. The rotation angles and the preset angle step can be set according to actual needs; the embodiments of the present application impose no limitation on this. The purpose of setting the rotation angles is to make the angle of the template image consistent with that of the image to be searched.
For example, the template feature points of the template image may be obtained by extracting template candidate corner points of the template image, extracting the template edge feature points within the neighborhood of the template significant corner points, and determining the template feature points of the template image according to the template significant corner points and the template edge feature points.
Obtaining the template feature points of the template image includes:
extracting template candidate corner points of the template image;
obtaining the neighborhood of the template candidate corner points;
if a template candidate corner point is a maximum value point within the neighborhood, determining the template candidate corner point as a template significant corner point;
extracting the template edge points of the template significant corner points within the neighborhood;
sampling the template edge points to obtain template edge feature points;
determining the template feature points of the template image according to the template significant corner points and the template edge feature points.
Here, the neighborhood size of a template image candidate corner point may be (2m+1)×(2m+1), m = 1, 2, 3, .... If the neighborhood size is 3×3, the neighborhood includes the pixel of the candidate corner point and the 8 surrounding pixels centered on it, that is, 9 pixels in total.
Exemplarily, the template candidate corner points of the template image are extracted by the Harris corner detection algorithm, pseudo feature points are removed from the template candidate corner points to obtain template significant corner points, the template edge points are extracted within the neighborhood of the template significant corner points by an edge detection algorithm based on the Sobel operator, and the template edge points are sampled to obtain template edge feature points; the sampling may be equal-interval random sampling or non-equal-interval random sampling. The template significant corner points and the template edge feature points constitute the template feature points of the template image. The template feature points of the template image combine the corner features and edge features of the image and can fully reflect the corner information and edge feature information of the image.
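The collection of edge points around a significant corner and their equal-interval sampling described above can be sketched as follows; the neighborhood radius, gradient-magnitude threshold, and sampling step used here are illustrative assumptions:

```python
import numpy as np

def edge_points_near_corner(grad_mag, corner, radius=5, thresh=50.0, step=3):
    """Collect edge points (gradient magnitude above `thresh`) inside the
    (2*radius+1)^2 neighbourhood of `corner`, then keep every `step`-th one,
    i.e. equal-interval sampling to thin redundant edge features."""
    y, x = corner
    h, w = grad_mag.shape
    pts = []
    for i in range(max(0, y - radius), min(h, y + radius + 1)):
        for j in range(max(0, x - radius), min(w, x + radius + 1)):
            if grad_mag[i, j] > thresh:
                pts.append((i, j))
    return pts[::step]
```

The sampled edge points are then merged with the significant corners to form the template feature points.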
Exemplarily, the steps of extracting the template candidate corner points of the template image by the Harris corner detection algorithm are as follows: compute the gradient I_x of the template image I(x, y) in the x direction and the gradient I_y in the y direction; the self-similarity of the template image after translating the point (x, y) by (Δx, Δy) can be computed through the autocorrelation function:
c(x, y; Δx, Δy) = Σ_{(u,v)∈W(x,y)} β(u, v)·[I(u, v) − I(u+Δx, v+Δy)]² ≈ (Δx, Δy)·M(x, y)·(Δx, Δy)^T;
where β(u, v) is a window function centered at the point (u, v), generally taken as a Gaussian weighting function; W(x, y) are the pixels of the template image; and M(x, y) is the gradient covariance matrix of the corner point. The matrix M(x, y) is:
M(x, y) = Σ_{(u,v)} β(u, v)·[ I_x²  I_xI_y ; I_xI_y  I_y² ] = [ A  C ; C  B ].
Let λ_1 and λ_2 be the two eigenvalues of the matrix M(x, y); the plane, edge, and corner positions in the image can then be judged from the magnitudes of the eigenvalues. When actually detecting candidate corner points, the responsivity of the feature points is computed, and the points whose responsivity is greater than a preset responsivity threshold are the candidate corner points; the computation is:
H = detM − k·(traceM)²;
detM = λ_1·λ_2 = AB − C²;
traceM = λ_1 + λ_2 = A + B;
where H is the responsivity of the feature point, detM is the determinant of the matrix, traceM is the trace of the matrix, and k is a constant weight coefficient, generally taken as 0.04 to 0.06.
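The Harris responsivity H = detM − k·(traceM)² can be computed per pixel as in the sketch below. For brevity a box window replaces the Gaussian weighting β(u, v), and k = 0.04 is taken from the suggested range; both choices are assumptions of this illustration:

```python
import numpy as np

def harris_response(img, k=0.04, radius=1):
    """Harris response H = det(M) - k*trace(M)^2 at every pixel.
    A, B, C are the entries of the gradient covariance matrix M, summed
    over a (2*radius+1)^2 box window instead of a Gaussian window."""
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(img)              # central-difference gradients
    A, B, C = Ix * Ix, Iy * Iy, Ix * Iy
    def box(m):                            # window sum via shifted copies
        out = np.zeros_like(m)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += np.roll(np.roll(m, dy, 0), dx, 1)
        return out
    A, B, C = box(A), box(B), box(C)
    detM = A * B - C * C                   # = lambda1 * lambda2
    traceM = A + B                         # = lambda1 + lambda2
    return detM - k * traceM ** 2
```

Pixels with H above a threshold T become candidate corner points; H is strongly positive at corners, near zero on flat regions, and negative along straight edges.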
If a template candidate corner point is a maximum value point within the neighborhood, determining the template candidate corner point as a template significant corner point includes:
obtaining the first gradient of the template candidate corner point;
obtaining the second gradients of the template candidate corner point along the target gradient directions within the neighborhood;
if the first gradient is greater than the second gradients, the template candidate corner point is a maximum value point within the neighborhood, and the template candidate corner point is determined as a template significant corner point.
Here, the target gradient directions may include the horizontal, vertical, −45° and 45° gradient directions within the neighborhood of the template candidate corner point, or may include other gradient directions.
Exemplarily, for a 3×3 neighborhood of the template image, the first gradient of the template candidate corner point and the second gradients along the horizontal, vertical, −45° and 45° gradient directions within the neighborhood are obtained. If the gradient value of the template candidate corner point is greater than the gradient values of the two pixels along each gradient direction within the neighborhood, the template candidate corner point is the maximum value point within the 9-pixel neighborhood, and it is determined as a significant feature point; if the gradient value of the template candidate corner point is less than or equal to the gradient value of either of the two pixels along a gradient direction within the neighborhood, the template candidate corner point is removed from the template candidate corner points.
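The directional non-maximum suppression used above to keep only significant corner points can be sketched as follows. Here the candidate's Harris response is compared against both neighbours along the horizontal, vertical, 45° and −45° directions of its 3×3 neighbourhood; using the response map as the compared quantity is an assumption of this sketch:

```python
import numpy as np

# horizontal, vertical, +45 and -45 degree directions
DIRS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def significant_corners(response, candidates):
    """Keep a candidate only if its value strictly exceeds both neighbours
    along every one of the four directions (local non-maximum suppression)."""
    kept = []
    h, w = response.shape
    for y, x in candidates:
        neigh = [response[y + s * dy, x + s * dx]
                 for dy, dx in DIRS for s in (1, -1)
                 if 0 <= y + s * dy < h and 0 <= x + s * dx < w]
        if all(response[y, x] > v for v in neigh):
            kept.append((y, x))
    return kept
```

Candidates that fail the comparison along any direction are pruned, which is what removes the redundant pseudo feature points around a true corner.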
S220: if the number of the template feature points is greater than a preset number, perform pyramid downsampling and layering on the template image to obtain a hierarchical template image.
The preset number can be set according to actual needs, for example determined according to the template image size; the embodiments of the present application impose no limitation on this.
For example, if the number of the template feature points is greater than the preset number, pyramid adaptive downsampling and layering are performed on the template image until the number of template feature points is less than or equal to the preset number, obtaining a hierarchical template image; the pyramid layer count at that point, that is, the number of layers of the template image, is recorded.
S230: change the angle of the template feature points of each layer of the hierarchical template image according to the rotation angles, to obtain the template feature point set of the template image.
For example, the feature points of each layer of the hierarchical template image are rotated according to the rotation angles, obtaining hierarchical template feature points at different angles; the hierarchical template feature points at the different angles constitute the template feature point set. If the number of rotation angles is E and the number of layers of the template image is F, the template image feature point set includes the feature points corresponding to E×F template images.
Exemplarily, the angle change of the template feature points of each layer of the hierarchical template image according to the rotation angle may be performed as:
l = √(x² + y²), α = arctan(y/x);
x′ = l·cos(α + θ), y′ = l·sin(α + θ);
where (x, y) are the pixel coordinates of the hierarchical template image before the angle change, (x′, y′) are the pixel coordinates of the hierarchical template image after the angle change, l is the length of the introduced intermediate variable vector, α is the horizontal included angle of the intermediate variable vector, and θ is the rotation angle.
It should be noted that storing the information of these template feature point sets in a multiply nested, efficient data structure completes the construction of the template image library. This data structure is the bridge connecting the constructed template images with the subsequent matching process; it is built with nested unordered-map containers, with the number of pyramid layers as the outermost level, the different angles as the next level, and the feature point sets as the innermost level. The overall structure is shown in FIG. 2b; designing this data structure further improves the data access speed during the algorithm's computation and raises the algorithm's efficiency.
In FIG. 2b, the hierarchical template images are transformed at both the angle level and the scale level. The first (bottom) layer is the initial scale; the template image at the initial scale and initial angle is rotated by the initial angle plus a first rotation angle and by the initial angle plus a second rotation angle, where the first rotation angle may be the preset angle step and the second rotation angle may be twice the preset angle step. The second layer reduces the scale on the basis of the initial scale and then rotates the template image at this scale; and so on up to the top-layer image, finally obtaining the hierarchical template image library.
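A minimal sketch of the nested template-library structure and the l/α rotation described above, with plain Python dicts standing in for the nested unordered-map containers (outer key: pyramid layer; inner key: angle; value: rotated point set). The 10° angle step is an illustrative assumption:

```python
import math

def rotate_points(points, theta_deg):
    """Rotate feature points about the origin using the intermediate vector:
    l = sqrt(x^2 + y^2), alpha = atan2(y, x),
    x' = l*cos(alpha + theta), y' = l*sin(alpha + theta)."""
    t = math.radians(theta_deg)
    out = []
    for x, y in points:
        l = math.hypot(x, y)          # length of the intermediate vector
        a = math.atan2(y, x)          # its horizontal included angle
        out.append((l * math.cos(a + t), l * math.sin(a + t)))
    return out

def build_template_library(level_points, angle_step=10):
    """level_points[k] is the feature point set of pyramid layer k."""
    return {level: {ang: rotate_points(pts, ang)
                    for ang in range(0, 360, angle_step)}
            for level, pts in enumerate(level_points)}
```

A lookup such as `lib[level][angle]` then returns the precomputed point set for one scale and one rotation, which is what the matching stage traverses.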
S240: obtain the target feature point set of the image to be searched.
Obtaining the target feature point set of the image to be searched includes:
obtaining the image to be searched and the number of layers of the hierarchical template image corresponding to the template image;
performing pyramid downsampling and layering on the image to be searched according to the number of layers, to obtain a hierarchical image to be searched;
extracting the target feature points of each layer of the hierarchical image to be searched;
determining the target feature point set of the image to be searched according to the target feature points.
Exemplarily, pyramid downsampling and layering are performed on the image to be searched according to the number of layers of the hierarchical template image corresponding to the template image, to obtain a hierarchical image to be searched, and the target feature points of each layer of the hierarchical image to be searched are obtained; the target feature points of the different layers of the image to be searched constitute the target feature point set. The target feature points of each layer of the hierarchical image to be searched are obtained in the same way as the template feature points of the template image, which is not repeated here.
Extracting the target feature points of each layer of the hierarchical image to be searched includes:
extracting the target candidate corner points of the hierarchical image to be searched;
if a target candidate corner point is a maximum value point within the neighborhood, determining the target candidate corner point as a target significant corner point;
extracting the target edge points of the target significant corner points within the neighborhood;
sampling the target edge points to obtain target edge feature points;
determining the target feature points of the image to be searched according to the target significant corner points and the target edge feature points.
Exemplarily, the target candidate corner points of the image to be searched are extracted by the Harris corner detection algorithm, pseudo feature points are removed from the target candidate corner points to obtain target significant corner points, the target edge points are extracted within the neighborhood of the target significant corner points by an edge detection algorithm based on the Sobel operator, and the target edge points are sampled to obtain target edge feature points; the sampling may be equal-interval random sampling or non-equal-interval random sampling. The target significant corner points and the target edge feature points constitute the target feature points of the image to be searched. The target feature points of the image to be searched combine the corner features and edge features of the image and can fully reflect the corner information and edge feature information of the image.
S250: traverse every pixel of the image to be searched according to the template image, and calculate the target similarity measure between the template feature point set and the target feature point set corresponding to the template image region.
Here, the target similarity measure is the maximum similarity measure between the template feature point set of the bottom-level image of the template image and the target feature point set of the bottom-level image of the image to be searched.
For example, traversing every pixel of the image to be searched according to the template image may mean traversing every pixel of the corresponding layer of the image to be searched according to each layer of the template image, and calculating the target similarity measure between the template feature point set and the target feature point set corresponding to the template image region, realizing a process from coarse matching to fine matching and finally determining the maximum similarity measure between the template feature point set of the bottom-level image of the template image and the target feature point set of the bottom-level image of the image to be searched.
Traversing every pixel of the image to be searched according to the template image and calculating the target similarity measure between the template feature point set and the target feature point set corresponding to the template image region includes:
traversing every pixel of the highest-level image of the image to be searched according to the highest-level image of the template image;
calculating the first similarity measure between the template feature point set of the highest-level image of the template image and the target feature point set corresponding to the highest-level template image region, and determining the matching point position corresponding to the maximum of the first similarity measure;
determining, from the matching point position, the to-be-matched area of the next-layer image of the image to be searched, successively, until the to-be-matched area of the lowest-level image of the image to be searched is determined;
calculating the second similarity measure between the template feature point set of the lowest-level image of the template image and the target feature point set corresponding to the to-be-matched area, and determining the maximum of the second similarity measure as the target similarity measure.
Exemplarily, every pixel of the highest-level image of the image to be searched is traversed according to the highest-level image of the template image; the first similarity measure between the template feature point set of the highest-level image of the template image and the target feature point set corresponding to the template image region is calculated, and the matching point position corresponding to the maximum of the first similarity measure is determined; according to a preset mapping strategy, the matching point position is mapped to the second-highest-level image of the image to be searched, obtaining the to-be-matched area of the second-highest-level image of the image to be searched. Every pixel of the second-highest-level image of the image to be searched is then traversed according to the second-highest-level image of the template image, the similarity measure between the template feature point set of the second-highest-level image of the template image and the target feature point set corresponding to the template image region is calculated, and the to-be-matched area of the next layer is determined from the maximum of the similarity measure; and so on, until the to-be-matched area of the lowest-level image of the image to be searched is determined. The second similarity measure between the template feature point set of the lowest-level image of the template image and the target feature point set corresponding to the to-be-matched area is calculated, and the maximum of the second similarity measure is determined as the target similarity measure.
Exemplarily, the mapping strategy for successively determining the to-be-matched area of the next-layer image of the image to be searched from the matching point position determines, from the current-layer matching point position (x_1, y_1), the upper-left corner (x′_2, y′_2) and the lower-right corner (x″_2, y″_2) of the to-be-matched area in the next-layer image of the image to be searched, where img_next.cols is the number of pixel columns of the lower-layer template image and img_next.rows is the number of pixel rows of the lower-layer template image.
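The coarse-to-fine mapping of a match point into a next-layer search region can be sketched as follows. The filing gives the corner formulas only as an image, so doubling the coordinates between pyramid layers and padding by half the lower-layer template size is an assumed, conventional choice rather than the patent's exact mapping:

```python
def next_layer_roi(match, template_shape, search_shape):
    """Map a match point (x1, y1) found at pyramid layer k to a rectangular
    to-be-matched area at layer k-1 (one layer finer).

    template_shape: (rows, cols) of the lower-layer template image
    search_shape:   (rows, cols) of the lower-layer image to be searched
    Returns the upper-left and lower-right corners, clamped to the image."""
    x1, y1 = match
    th, tw = template_shape
    H, W = search_shape
    x_tl = max(0, 2 * x1 - tw // 2)        # upper-left corner (x'2, y'2)
    y_tl = max(0, 2 * y1 - th // 2)
    x_br = min(W - 1, 2 * x1 + tw // 2)    # lower-right corner (x''2, y''2)
    y_br = min(H - 1, 2 * y1 + th // 2)
    return (x_tl, y_tl), (x_br, y_br)
```

Restricting the next layer's traversal to this small rectangle, instead of the full image, is what makes the coarse-to-fine scheme fast.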
If the similarity measure between the template feature points and the target feature points corresponding to the hierarchical template image region satisfies a first termination condition, the similarity measure computation for the current pixel is terminated.
The first termination condition includes:
s_j = (1/n)·Σ_{i=1..j} (d′_i · d_i)/(|d′_i|·|d_i|) = (1/n)·Σ_{i=1..j} (t′_i·t_i + u′_i·u_i)/(√(t′_i² + u′_i²)·√(t_i² + u_i²));
s_j < s_min − 1 + j/n;
where s_j is the similarity measure accumulated over the first j of the template feature points and the target feature points corresponding to the hierarchical template image region, n is the total number of template feature points and target feature points participating in the computation, d′_i is the direction vector of the i-th target feature point of the hierarchical image to be searched, d_i is the direction vector of the template feature point corresponding to the i-th target feature point, s_min is a preset threshold, t′_i is the x-direction gradient of the target feature point of the hierarchical image to be searched, t_i is the x-direction gradient of the template feature point of the hierarchical template image, u′_i is the y-direction gradient of the target feature point of the hierarchical image to be searched, and u_i is the y-direction gradient of the template feature point of the hierarchical template image.
During image matching, in order to quickly locate the actual matching position of the template image in the image to be searched, the similarity measure value need not be fully computed at non-candidate target positions; ending the similarity computation of the current pixel early once the cutoff condition is met speeds up matching.
If the similarity measure between the first feature points within the hierarchical template image region and the second feature points of the image to be searched corresponding to the hierarchical template image region satisfies a second termination condition, the similarity measure computation for the current pixel is terminated.
The second termination condition includes:
s_j < min(s_min − 1 + f·j/n, s_min·j/n);
where f = (1 − g·s_min)/(1 − s_min); when the greediness coefficient g = 1, all pixels use the strict threshold termination condition; s_min is a preset threshold.
In this way, for images to be searched that contain occlusion or hidden parts, termination is judged with different thresholds: by presetting the greediness coefficient g, the first n − j terms use the strict threshold termination condition and the last j terms use a relaxed threshold termination condition. For example, the greediness coefficient g is set to 0.9.
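The accumulated similarity measure with the greedy early-termination bound above can be sketched as follows. The per-point gradient pairs, s_min = 0.8 and g = 0.9 are illustrative values; the combined bound min(s_min − 1 + f·j/n, s_min·j/n) follows the second termination condition, applied here at every partial sum:

```python
import math

def similarity_with_early_stop(tmpl_grads, targ_grads, s_min=0.8, g=0.9):
    """Accumulate the normalised gradient dot products (each contributes at
    most 1/n) and abandon the current pixel as soon as the partial sum s_j
    falls below the termination bound, i.e. s_min can no longer be reached.
    Returns the final similarity, or None if the pixel was abandoned."""
    n = len(tmpl_grads)
    f = (1 - g * s_min) / (1 - s_min)      # greediness factor
    s = 0.0
    for j, ((t, u), (tp, up)) in enumerate(zip(tmpl_grads, targ_grads), start=1):
        denom = math.hypot(t, u) * math.hypot(tp, up)
        if denom > 0:
            s += (t * tp + u * up) / (denom * n)
        bound = min(s_min - 1 + f * j / n, s_min * j / n)
        if s < bound:
            return None                     # early cutoff for this pixel
    return s
```

At a true match every term is close to 1/n, so the bound is never violated; at a mismatched pixel the sum drops below the bound after only a few terms and the loop exits early.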
S260: determine the positions of the matching feature points according to the target similarity measure, and display the matching feature points in the image to be searched.
As shown in FIG. 2c, the steps of the technical solution of this embodiment are as follows. First, extract the feature points of the template image: after filtering and denoising the template image, extract its candidate corner points with the Harris corner detection algorithm and, by non-maximum suppression over the local region of the candidate corner points, keep the candidate corner points of maximum responsivity as significant corner points; extract edge feature points within the neighborhood of the significant corner points with an edge detection algorithm, screen the edge feature points with an equal-interval sampling strategy, and determine the feature point set according to the significant corner points and the edge feature points.
Second, perform scale changes on the template image to obtain the feature point sets of the template image and form the template library: layer the template image and rotate its angles with the pyramid adaptive layering method, obtaining multi-scale, multi-angle template image feature point sets.
Third, extract the feature point set of the image to be searched: layer the image to be searched according to the number of layers of the template image and obtain the feature point set of each layer of the image to be searched; calculate the similarity measure between the template feature point set of each layer of the template image and the feature point set of the image to be searched corresponding to the template image region of that layer, and end the similarity measure computation of the current pixel early if the termination condition is met.
Fourth, determine the matching position according to the mapping strategy: from coarse matching to fine matching, determine step by step the matching position of the bottom-level template image in the bottom-level image to be searched, and display the position of the matching points in the image to be searched.
The flow shown in FIG. 2c is as follows:
Input the template image; denoise it with the separated accelerated bilateral filtering algorithm with adaptive parameter estimation. The feature point extraction module computes the x- and y-direction gradients I_x and I_y of the image and applies Gaussian weighting to obtain the gradient covariance matrix M(x, y); the feature point extraction module sets a threshold T and computes H = detM − k·(traceM)²; the feature point extraction module selects the points with H > T as candidate feature points; the feature point extraction module applies non-maximum suppression over the local neighborhood of the candidate feature points and keeps the points of maximum responsivity as significant corner points; the points in the neighborhood of the significant points are screened with an equal-interval random sampling strategy to obtain the edge feature points. The template library construction module sets a minimum feature point number N to guide pyramid adaptive layering and obtain the layer count L; the template library construction module rotates the point set of each layer to obtain multi-scale, multi-angle image feature point sets; the template library construction module designs a multi-layer nested efficient data structure to store the feature point sets of the images at each angle, producing the template library.
Input the image to be searched; denoise it with the separated accelerated bilateral filtering algorithm with adaptive parameter estimation. The matching and early-cutoff module downsamples the image to be searched according to L to obtain the pyramid structure of the image to be searched; the matching and early-cutoff module extracts the feature points of the layer-L image to be searched, computes the similarity measure values against the points in the template library, and accumulates the results; when the accumulated sum reaches the minimum similarity measure value S_min, the computation stops, and the position and angle at that moment are the top-level mapping center. The mapping and result display module determines the mapping strategy, obtains the matching angle and position coordinates of the layer L−1 image, outputs the final matching angle and position coordinates, and, according to the matching position, displays the matching position of the template image in the image to be searched.
In the technical solution of this embodiment, the template feature point set of the template image and the target feature point set of the image to be searched are obtained, where the feature point sets include significant corner points and edge feature points; every pixel of the image to be searched is traversed according to the template image, and the target similarity measure between the template feature point set and the target feature point set corresponding to the template image region is calculated; the positions of the matching feature points are determined according to the target similarity measure, and the matching feature points are displayed in the image to be searched. Image matching can thus be performed based on the corner information and edge feature information of the image, avoiding the low matching accuracy of traditional feature-point matching methods on images that have few feature points but a definite shape, improving the matching accuracy for images with a definite shape; further extracting significant corner points from the corner information improves matching efficiency.
FIG. 3 is a schematic structural diagram of an image matching apparatus provided by another embodiment of the present application. This embodiment is applicable to matching images based on feature points and shape; the apparatus may be implemented in software and/or hardware and may be integrated in any device providing the function of image matching. As shown in FIG. 3, the image matching apparatus includes an acquisition module 310, a calculation module 320, and a determination module 330.
The acquisition module 310 is configured to obtain a template feature point set of a template image and a target feature point set of an image to be searched, where the feature point sets include significant corner points and edge feature points;
the calculation module 320 is configured to traverse every pixel of the image to be searched according to the template image, and calculate a target similarity measure between the template feature point set and the target feature point set corresponding to the template image region;
the determination module 330 is configured to determine the positions of matching feature points according to the target similarity measure, and display the matching feature points in the image to be searched.
The acquisition module includes:
a first acquisition unit, configured to obtain the template feature points and rotation angles of the template image;
a first layering unit, configured to perform pyramid downsampling and layering on the template image to obtain a hierarchical template image if the number of the template feature points is greater than a preset number;
a rotation unit, configured to change the angle of the template feature points of each layer of the hierarchical template image according to the rotation angles, to obtain the template feature point set of the template image.
The first acquisition unit includes:
a first extraction subunit, configured to extract template candidate corner points of the template image;
a first determination subunit, configured to determine a template candidate corner point as a template significant corner point if the template candidate corner point is a maximum value point within the neighborhood;
a second extraction subunit, configured to extract the template edge points within the neighborhood of the template significant corner points;
a first sampling subunit, configured to sample the template edge points to obtain template edge feature points;
a second determination subunit, configured to determine the template feature points of the template image according to the template significant corner points and the template edge feature points.
The first determination subunit is configured to:
obtain the first gradient of the template candidate corner point;
obtain the second gradients of the template candidate corner point along the target gradient directions within the neighborhood;
if the first gradient is greater than the second gradients, the template candidate corner point is a maximum value point within the neighborhood, and determine the template candidate corner point as a template significant corner point.
The acquisition module includes:
a second acquisition subunit, configured to obtain the image to be searched and the number of layers of the hierarchical template image corresponding to the template image;
a second layering unit, configured to perform pyramid downsampling and layering on the image to be searched according to the number of layers, to obtain a hierarchical image to be searched;
an extraction unit, configured to extract the target feature points of each layer of the hierarchical image to be searched;
a determination unit, configured to determine the target feature point set of the image to be searched according to the target feature points.
The extraction unit is configured to:
extract the target candidate corner points of each layer of the hierarchical image to be searched;
if a target candidate corner point is a maximum value point within the neighborhood, determine the target candidate corner point as a target significant corner point;
extract the target edge points within the neighborhood of the target significant corner points;
sample the target edge points to obtain target edge feature points;
determine the target feature points of the image to be searched according to the target significant corner points and the target edge feature points.
The calculation module is configured to:
traverse every pixel of the highest-level image of the image to be searched according to the highest-level image of the template image;
calculate the first similarity measure between the template feature point set of the highest-level image of the template image and the target feature point set corresponding to the template image region, and determine the matching point position corresponding to the maximum of the first similarity measure;
determine, from the matching point position, the to-be-matched area of the next-layer image of the image to be searched, successively, until the to-be-matched area of the lowest-level image of the image to be searched is determined;
calculate the second similarity measure between the template feature point set of the lowest-level image of the template image and the target feature point set corresponding to the to-be-matched area, and determine the maximum of the second similarity measure as the target similarity measure.
The above product can execute the method provided by any embodiment of the present application, and has the functional modules and beneficial effects corresponding to the executed method.
FIG. 4 is a schematic structural diagram of a computer device in another embodiment of the present application. FIG. 4 shows a block diagram of an exemplary computer device 12 suitable for implementing the embodiments of the present application. The computer device 12 shown in FIG. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in FIG. 4, the computer device 12 takes the form of a general-purpose computing device. The components of the computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting the various system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures. For example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MAC) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer device 12 typically includes a variety of computer-system-readable media. These media can be any available media that can be accessed by the computer device 12, including volatile and non-volatile media, removable and non-removable media.
The system memory 28 may include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 30 and/or a cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. Merely as an example, the storage system 34 may be configured to read from and write to a non-removable, non-volatile magnetic medium (not shown in FIG. 4, commonly referred to as a "hard drive"). Although not shown in FIG. 4, a disk drive may be provided for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive may be provided for reading from and writing to a removable non-volatile optical disk (e.g., CD-ROM, DVD-ROM, or other optical media). In these cases, each drive may be connected to the bus 18 through one or more data media interfaces. The memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to perform the functions of the embodiments of the present application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in the memory 28; such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each or some combination of these examples may include an implementation of a network environment. The program modules 42 generally perform the functions and/or methods of the embodiments described in the present application.
The computer device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer device 12, and/or with any device (e.g., a network card, a modem, etc.) that enables the computer device 12 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 22. In addition, in this embodiment the display 24 of the computer device 12 does not exist as an independent entity but is embedded in a mirror surface; when the display surface of the display 24 is not displaying, the display surface of the display 24 and the mirror surface are visually integrated. Moreover, the computer device 12 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 20. As shown, the network adapter 20 communicates with the other modules of the computer device 12 through the bus 18. It should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with the computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The processing unit 16 executes various functional applications and data processing by running the programs stored in the system memory 28, for example implementing the image matching method provided by the embodiments of the present application: obtaining a template feature point set of a template image and a target feature point set of an image to be searched, where the feature point sets include significant corner points and edge feature points; traversing every pixel of the image to be searched according to the template image, and calculating a target similarity measure between the template feature point set and the target feature point set corresponding to the template image region; determining the positions of matching feature points according to the target similarity measure, and displaying the matching feature points in the image to be searched.
The embodiments of the present application provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the image matching method provided by the embodiments of the present application: obtaining a template feature point set of a template image and a target feature point set of an image to be searched, where the feature point sets include significant corner points and edge feature points; traversing every pixel of the image to be searched according to the template image, and calculating a target similarity measure between the template feature point set and the target feature point set corresponding to the template image region; determining the positions of matching feature points according to the target similarity measure, and displaying the matching feature points in the image to be searched.
Any combination of one or more computer-readable media may be used. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
The program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
Computer program code for performing the operations of the present application may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above are only example embodiments of the present application and the technical principles applied. Those skilled in the art will understand that the present application is not limited to the specific embodiments described here; various obvious changes, readjustments, and substitutions can be made by those skilled in the art without departing from the protection scope of the present application. Therefore, although the present application has been described in some detail through the above embodiments, it is not limited to the above embodiments and may include more other equivalent embodiments without departing from the concept of the present application, the scope of which is determined by the scope of the appended claims.

Claims (10)

  1. An image matching method, comprising:
    obtaining a template feature point set of a template image and a target feature point set of an image to be searched, wherein the feature point sets comprise significant corner points and edge feature points, the feature point sets comprising the template feature point set and the target feature point set;
    traversing all pixels of the image to be searched according to the template image, and calculating a target similarity measure between the template feature point set and the target feature point set corresponding to the template image region;
    determining positions of matching feature points according to the target similarity measure, and displaying the matching feature points in the image to be searched.
  2. The method according to claim 1, wherein obtaining the template feature point set of the template image comprises:
    obtaining template feature points and rotation angles of the template image;
    in response to determining that the number of the template feature points is greater than a preset number, performing pyramid downsampling and layering on the template image to obtain a hierarchical template image, the hierarchical template image comprising multiple layers of images;
    changing the angle of the template feature points of each layer of images in the hierarchical template image according to the rotation angles, to obtain the template feature point set of the template image.
  3. The method according to claim 2, wherein obtaining the template feature points of the template image comprises:
    extracting template candidate corner points of the template image;
    in response to determining that a template candidate corner point is a maximum value point within a neighborhood, determining the template candidate corner point as a template significant corner point;
    extracting template edge points of the template significant corner point within the neighborhood;
    sampling the template edge points to obtain template edge feature points;
    determining the template feature points of the template image according to the template significant corner points and the template edge feature points.
  4. The method according to claim 3, wherein, in response to determining that the template candidate corner point is a maximum value point within the neighborhood, determining the template candidate corner point as a template significant corner point comprises:
    obtaining a first gradient of the template candidate corner point;
    obtaining second gradients of the template candidate corner point along target gradient directions within the neighborhood;
    in response to determining that the first gradient is greater than the second gradients, determining that the template candidate corner point is a maximum value point within the neighborhood, and determining the template candidate corner point as a template significant corner point.
  5. The method according to claim 1, wherein obtaining the target feature point set of the image to be searched comprises:
    obtaining the image to be searched and the number of layers of the hierarchical template image corresponding to the template image;
    performing pyramid downsampling and layering on the image to be searched according to the number of layers to obtain a hierarchical image to be searched, the hierarchical image to be searched comprising multiple layers of images;
    extracting target feature points of each layer of images in the hierarchical image to be searched;
    determining the target feature point set of the image to be searched according to the target feature points.
  6. The method according to claim 5, wherein extracting the target feature points of each layer of images in the hierarchical image to be searched comprises:
    extracting target candidate corner points of each layer of images in the hierarchical image to be searched;
    in response to determining that a target candidate corner point is a maximum value point within a neighborhood, determining the target candidate corner point as a target significant corner point;
    extracting target edge points of the target significant corner point within the neighborhood;
    sampling the target edge points to obtain target edge feature points;
    determining the target feature points of the image to be searched according to the target significant corner points and the target edge feature points.
  7. The method according to claim 1, wherein traversing all pixels of the image to be searched according to the template image and calculating the target similarity measure between the template feature point set and the target feature point set corresponding to the template image region comprises:
    traversing all pixels of the highest-layer image of the image to be searched according to the highest-layer image of the template image;
    calculating a first similarity measure between the template feature point set of the highest-layer image of the template image and the target feature point set corresponding to the template image region, and determining a matching point position corresponding to the maximum of the first similarity measure;
    determining, from the matching point position, the to-be-matched area of the next-layer image of the image to be searched, successively, until the to-be-matched area of the lowest-layer image of the image to be searched is determined;
    calculating a second similarity measure between the template feature point set of the lowest-layer image of the template image and the target feature point set corresponding to the to-be-matched area, and determining the maximum of the second similarity measure as the target similarity measure.
  8. An image matching apparatus, comprising:
    an acquisition module, configured to obtain a template feature point set of a template image and a target feature point set of an image to be searched, wherein the feature point sets comprise significant corner points and edge feature points, the feature point sets comprising the template feature point set and the target feature point set;
    a calculation module, configured to traverse every pixel of the image to be searched according to the template image, and calculate a target similarity measure between the template feature point set and the target feature point set corresponding to the template image region;
    a determination module, configured to determine positions of matching feature points according to the target similarity measure, and display the matching feature points in the image to be searched.
  9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the image matching method according to any one of claims 1-7.
  10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image matching method according to any one of claims 1-7.
PCT/CN2021/098001 2021-04-01 2021-06-02 Image matching method, apparatus, device and storage medium WO2022205611A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110357278.4A CN113111212B (zh) 2021-04-01 2021-04-01 Image matching method, apparatus, device and storage medium
CN202110357278.4 2021-04-01

Publications (1)

Publication Number Publication Date
WO2022205611A1 true WO2022205611A1 (zh) 2022-10-06

Family

ID=76713814

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/098001 WO2022205611A1 (zh) 2021-04-01 2021-06-02 一种图像匹配方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN113111212B (zh)
WO (1) WO2022205611A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537351B (zh) * 2021-07-16 2022-06-24 重庆邮电大学 面向移动设备拍摄的遥感图像坐标匹配方法
CN113689397A (zh) * 2021-08-23 2021-11-23 湖南视比特机器人有限公司 工件圆孔特征检测方法和工件圆孔特征检测装置
CN113743423A (zh) * 2021-09-08 2021-12-03 浙江云电笔智能科技有限公司 一种温度智能监测方法及系统
CN113744133A (zh) * 2021-09-13 2021-12-03 烟台艾睿光电科技有限公司 一种图像拼接方法、装置、设备及计算机可读存储介质
CN116030280A (zh) * 2023-02-22 2023-04-28 青岛创新奇智科技集团股份有限公司 一种模板匹配方法、装置、存储介质及设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915949A (zh) * 2015-04-08 2015-09-16 Huazhong University of Science and Technology Image matching algorithm combining point features and line features
US20180307940A1 (en) * 2016-01-13 2018-10-25 Peking University Shenzhen Graduate School A method and a device for image matching
CN110197232A (zh) * 2019-06-05 2019-09-03 Zhongke Xinsong Co., Ltd. Image matching method based on edge direction and gradient features
CN112396640A (zh) * 2020-11-11 2021-02-23 Guangdong Topstar Technology Co., Ltd. Image registration method and apparatus, electronic device, and storage medium
CN112508037A (zh) * 2020-11-23 2021-03-16 Beijing Peitian Technology Co., Ltd. Image template matching method and apparatus, and storage device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6422250B2 (ja) * 2014-07-08 2018-11-14 Canon Inc. Image processing method, image processing apparatus, program, and recording medium
CN111444948B (zh) * 2020-03-21 2022-11-18 Harbin Engineering University Image feature extraction and matching method
CN111753119A (zh) * 2020-06-28 2020-10-09 China Construction Bank Corporation Image search method and apparatus, electronic device, and storage medium


Also Published As

Publication number Publication date
CN113111212B (zh) 2024-05-17
CN113111212A (zh) 2021-07-13

Similar Documents

Publication Publication Date Title
WO2022205611A1 (zh) Image matching method and apparatus, device, and storage medium
US11321593B2 (en) Method and apparatus for detecting object, method and apparatus for training neural network, and electronic device
US10984556B2 (en) Method and apparatus for calibrating relative parameters of collector, device and storage medium
WO2022007431A1 (zh) Positioning method for Micro QR two-dimensional codes
CN108895981B (zh) Three-dimensional measurement method and apparatus, server, and storage medium
CN108230292B (zh) Object detection method, neural network training method and apparatus, and electronic device
WO2022100065A1 (zh) Image registration method and apparatus, electronic device, and storage medium
CN110222703B (zh) Image contour recognition method, apparatus, device, and medium
WO2017206099A1 (zh) Image pattern matching method and apparatus
CN111767960A (zh) Image matching method and system applied to three-dimensional image reconstruction
WO2022100068A1 (zh) Image processing method and apparatus, electronic device, and storage medium
CN112419372B (zh) Image processing method and apparatus, electronic device, and storage medium
WO2019029098A1 (zh) Method for computing image object size and rotation estimates based on log-polar space
CN111914756A (zh) Video data processing method and apparatus
CN110956131B (zh) Single-target tracking method, apparatus, and system
CN110796108A (zh) Face quality detection method and apparatus, device, and storage medium
CN116958145A (zh) Image processing method and apparatus, visual inspection system, and electronic device
WO2022062853A1 (zh) Remote sensing image registration method, apparatus, device, storage medium, and system
CN113537026B (zh) Primitive detection method and apparatus, device, and medium for building floor plans
Malarvel et al. Edge and region segmentation in high-resolution aerial images using improved kernel density estimation: a hybrid approach
US8126275B2 (en) Interest point detection
CN113159103B (zh) Image matching method and apparatus, electronic device, and storage medium
CN113284237A (zh) Three-dimensional reconstruction method and system, electronic device, and storage medium
CN113793370A (zh) Three-dimensional point cloud registration method and apparatus, electronic device, and readable medium
US20220311910A1 (en) Corner detection method and corner detection device

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21934260

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 21934260

Country of ref document: EP

Kind code of ref document: A1